diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md index cdec577f5..fb3847188 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/_index.md @@ -1,17 +1,13 @@ --- -title: CircleCI Arm Native Workflows on AWS Graviton2 (EC2) +title: Deploy CircleCI Arm Native Workflows on AWS EC2 Graviton2 minutes_to_complete: 45 -draft: true -cascade: - draft: true - -who_is_this_for: This learning path is intended for software developers and DevOps engineers looking to set up and run CircleCI Arm native workflows on Linux Arm64 VMs, specifically on AWS EC2 Graviton2 instances (Neoverse N1), using self-hosted runners. +who_is_this_for: This is an introductory topic for developers and DevOps engineers who want to set up and run CircleCI Arm native workflows on Linux Arm64 virtual machines. You'll use AWS EC2 Graviton2 instances (Neoverse N1) and self-hosted runners. learning_objectives: - Provision an AWS EC2 Graviton2 Arm64 virtual machine - - Install and configure CircleCI self-hosted machine runners on Arm64 + - Install and configure a CircleCI self-hosted machine runners on Arm64 - Verify the runner by running a simple workflow and test computation - Define and execute CircleCI job using a machine executor - Check CPU architecture and execute a basic script to confirm if the runner is operational diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md index e4836e896..a731b786f 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/background.md @@ -1,27 +1,28 @@ --- -title: Getting Started with CircleCI on AWS Graviton2 (Arm Neoverse-N1) +title: Get Started with CircleCI on AWS Graviton2 weight: 2 layout: "learningpathall" --- +## AWS Graviton2 Arm instances on Amazon EC2 -## AWS Graviton2 Arm Instances on Amazon EC2 - -**AWS Graviton2** is a family of Arm-based processors designed by AWS and built on **Arm Neoverse-N1 cores**. These instances deliver exceptional price-to-performance efficiency, making them ideal for compute-intensive workloads such as CI/CD pipelines, microservices, containerized applications, and data processing tasks. +AWS Graviton2 is a family of Arm-based processors designed by AWS and built on Arm Neoverse-N1 cores. These instances deliver exceptional price-to-performance efficiency, making them ideal for compute-intensive workloads such as CI/CD pipelines, microservices, containerized applications, and data processing tasks. Graviton2-powered EC2 instances provide high performance and energy efficiency compared to traditional x86-based instances while maintaining compatibility with popular Linux distributions and open-source software stacks. -To learn more about AWS Graviton processors, refer to the [AWS Graviton2 Processor Overview](https://aws.amazon.com/ec2/graviton/). +To learn more about AWS Graviton processors, see the [AWS Graviton2 Processor Overview](https://aws.amazon.com/ec2/graviton/). ## CircleCI -**CircleCI** is a leading cloud-based **Continuous Integration and Continuous Delivery (CI/CD)** platform that automates the **building, testing, and deployment** of software projects. 
+CircleCI is a leading cloud-based Continuous Integration and Continuous Delivery (CI/CD) platform that automates the building, testing, and deployment of software projects. + +It seamlessly integrates with popular version control systems such as GitHub, Bitbucket, and GitLab, allowing developers to define automation workflows through a `.circleci/config.yml` file written in YAML syntax. + +CircleCI supports multiple execution environments, including Docker, Linux, macOS, and Windows, while providing advanced capabilities like parallel job execution, build caching, and matrix builds for optimized performance. -It seamlessly integrates with popular version control systems such as **GitHub**, **Bitbucket**, and **GitLab**, allowing developers to define automation workflows through a `.circleci/config.yml` file written in **YAML syntax**. +It is widely adopted by development teams to accelerate build cycles, enforce code quality, automate testing, and streamline application delivery. -CircleCI supports multiple execution environments, including **Docker**, **Linux**, **macOS**, and **Windows**, while providing advanced capabilities like **parallel job execution**, **build caching**, and **matrix builds** for optimized performance. +To learn more, visit the [CircleCI website](https://circleci.com/) and the [CircleCI documentation](https://circleci.com/docs/). -It is widely adopted by development teams to **accelerate build cycles, enforce code quality, automate testing, and streamline application delivery**. -To learn more, visit the [official CircleCI website](https://circleci.com/) and explore its [documentation](https://circleci.com/docs/). diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md index ad04b6273..a84e0dc5a 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circleci-runner-installation.md @@ -1,20 +1,18 @@ --- -title: Install CircleCI Machine Runner on AWS Graviton2 +title: Install CircleCI machine runner on AWS Graviton2 weight: 6 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Install CircleCI Machine Runner on AWS Graviton2 +## Install CircleCI machine runner on AWS Graviton2 -This guide provides step-by-step instructions to install and configure the **CircleCI Machine Runner** on an **AWS Graviton2 (Neoverse N1) instance**. -With this setup, your self-hosted **Arm64 environment** can efficiently execute CircleCI jobs directly on the Graviton2 architecture, enabling faster builds and improved performance for ARM-based workloads. +This Learning Path shows you how to install and configure the CircleCI Machine Runner on an AWS Graviton2 (Neoverse N1) instance. With this setup, your self-hosted Arm64 environment can efficiently execute CircleCI jobs directly on the Graviton2 architecture, enabling faster builds and improved performance for Arm-based workloads. -### Add CircleCI Package Repository -For **Debian/Ubuntu-based systems** running on **AWS Graviton2 (Arm64)**, first add the official CircleCI repository. -This ensures you can install the CircleCI Runner package directly using `apt`. +## Add the CircleCI package repository +For Debian/Ubuntu-based systems running on AWS Graviton2 (Arm64), first add the official CircleCI repository. 
This ensures you can install the CircleCI Runner package directly using `apt`: ```console curl -s https://packagecloud.io/install/repositories/circleci/runner/script.deb.sh?any=true | sudo bash @@ -24,8 +22,8 @@ curl -s https://packagecloud.io/install/repositories/circleci/runner/script.deb. - It configures the repository on your system, allowing `apt` to fetch and install the CircleCI runner package. - After successful execution, the CircleCI repository will be added under `/etc/apt/sources.list.d/`. -### Configure the Runner Token -- Each self-hosted runner requires a unique authentication token generated from your Resource Class in the CircleCI Dashboard. +## Configure the runner token +- Each self-hosted runner requires a unique authentication token generated from your resource class in the CircleCI dashboard. - Copy the token from the CircleCI web interface. - Export the token as an environment variable and update the runner configuration file as shown: @@ -34,25 +32,24 @@ export RUNNER_AUTH_TOKEN="YOUR_AUTH_TOKEN" sudo sed -i "s/<< AUTH_TOKEN >>/$RUNNER_AUTH_TOKEN/g" /etc/circleci-runner/circleci-runner-config.yaml ``` -### Install the CircleCI Runner -Install the pre-built CircleCI runner package: +## Install the CircleCI runner +To install the CircleCI runner, use the following command: ```console sudo apt-get install -y circleci-runner ``` - -- Installs the latest CircleCI Machine Runner compatible with your Arm64 instance. -- Runner binary and configuration files are located in `/usr/bin/` and `/etc/circleci-runner/`. -### Configure the Runner Authentication Token -Update the CircleCI runner configuration with your authentication token. This token is generated from the Resource Class you created in the CircleCI Dashboard. +This command installs the latest CircleCI Machine Runner for your Arm64 system. The runner program is placed in `/usr/bin/`, and its configuration files are stored in `/etc/circleci-runner/`. + +## Configure the runner authentication token +Update the CircleCI runner configuration with your authentication token. This token is generated from the resource class you created in the CircleCI Dashboard. ```console export RUNNER_AUTH_TOKEN="YOUR_AUTH_TOKEN" sudo sed -i "s/<< AUTH_TOKEN >>/$RUNNER_AUTH_TOKEN/g" /etc/circleci-runner/circleci-runner-config.yaml ``` -### Enable and Start the CircleCI Runner +## Enable and start the CircleCI runner Set the CircleCI runner service to start automatically and verify it is running: ```console @@ -88,6 +85,6 @@ Oct 17 06:19:13 ip-172-31-34-224 circleci-runner[2226]: 06:19:13 c34c1 22.514ms This confirms that the CircleCI Runner is actively connected to your CircleCI account and ready to accept jobs. -Also, you can verify it from the dashboard: +You can also verify it from the dashboard: -![Self-Hosted Runners alt-text#center](images/runner.png "Figure 1: Self-Hosted Runners ") +![Diagram showing the CircleCI self-hosted runner architecture. The main subject is an AWS Graviton2 server labeled as a self-hosted runner, connected to the CircleCI cloud platform. Arrows indicate job requests flowing from CircleCI to the runner and job results returning to CircleCI. The environment includes icons for cloud infrastructure and developer workstations. The tone is technical and informative. Any visible text in the image is transcribed as: Self-Hosted Runners. 
alt-text#center](images/runner.png "Self-Hosted Runners ")
diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md
index 2e1234c54..755ed4e4a 100644
--- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md
+++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/circlecli-installation.md
@@ -6,29 +6,29 @@ weight: 4
 layout: learningpathall
 ---
 
-## Install CircleCI CLI on AWS Graviton2 (Neoverse N1) Instance
-This guide explains installing the **CircleCI Command Line Interface (CLI)** on an **AWS Graviton2 (Neoverse N1) Arm64 EC2 instance**.
-The CLI enables you to interact with CircleCI directly from your terminal — for validating configuration files, managing pipelines, and operating self-hosted runners on your EC2 instance.
+## Install CircleCI CLI on AWS Graviton2 (Neoverse N1) instance
+This section walks you through how to install the CircleCI command line interface (CLI) on an AWS Graviton2 (Neoverse N1) Arm64 EC2 instance.
+With the CLI, you can work with CircleCI from your terminal to check configuration files, manage pipelines, and run self-hosted runners on your EC2 instance.
 
-### Install Required Packages
-Before installing the CircleCI CLI, ensure your system has the necessary tools for downloading and extracting files.
+## Install the required packages
+Before installing the CircleCI CLI, ensure your system has the necessary tools for downloading and extracting files:
 
 ```console
 sudo apt update && sudo apt install -y curl tar gzip coreutils gpg git
 ```
 
-### Download and Extract the CircleCI CLI
+## Download and extract the CircleCI CLI
 
-Next, download the CircleCI CLI binary for **Linux arm64** and extract it.
+Next, download the CircleCI CLI binary for Linux arm64 and extract it:
 
 ```console
 curl -fLSs https://github.com/CircleCI-Public/circleci-cli/releases/download/v0.1.33494/circleci-cli_0.1.33494_linux_arm64.tar.gz | tar xz
 sudo mv circleci-cli_0.1.33494_linux_arm64/circleci /usr/local/bin/
 ```
 
-- The `curl` command fetches the official **CircleCI CLI archive** from GitHub.
+- The `curl` command fetches the official CircleCI CLI archive from GitHub.
 - The `| tar xz` command extracts the compressed binary in a single step.
 - After extraction, a new folder named **`circleci-cli_0.1.33494_linux_arm64`** appears in your current directory.
 
-### Verify the Installation
+## Verify the installation
 
 To ensure that the CLI is installed successfully, check its version:
 
diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md
index cf6bf669d..81d2dc18d 100644
--- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md
+++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/instance.md
@@ -8,31 +8,30 @@ layout: learningpathall
 
 ## Overview
 
-In this section, you will learn how to provision an **AWS Graviton2 Arm64 EC2 instance** on **Amazon Web Services (AWS)** using the **m6g.xlarge** instance type (2 vCPUs, 8 GB memory) in the **AWS Management Console**.
+In this section, you'll learn how to provision an AWS Graviton2 Arm64 EC2 instance on Amazon Web Services (AWS) using the m6g.xlarge instance type (4 vCPUs, 16 GB memory) in the AWS Management Console.
{{% notice Note %}}
For support on AWS setup, see the Learning Path [Getting started with AWS](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/aws/).
{{% /notice %}}
 
-## Provision an AWS EC2 Arm64 Graviton2 Instance in the AWS Management Console
+## Provision the instance in the AWS Management Console
 
-To create a virtual machine based on the AWS Graviton2 Instance type:
+To create a virtual machine based on the AWS Graviton2 Instance type, follow these steps:
 - Navigate to the [AWS Management Console](https://aws.amazon.com/console/).
 - Go to **EC2 > Instances** and select **Launch Instance**.
 - Under **Instance configuration**:
-  - Enter an appropriate **Instance name**.
-  - Choose an **Amazon Machine Image (AMI)** such as **Ubuntu 24.04 ARM64**.
+  - Enter an appropriate **Instance name**.
+  - Choose an **Amazon Machine Image (AMI)** such as **Ubuntu 24.04 ARM64**.
 
-  ![AWS Management Console alt-text#center](images/aws1.png "Figure 1: Amazon Machine Image (AMI)")
-
+  ![AWS Management Console showing the Amazon Machine Image selection screen with Ubuntu 24.04 ARM64 highlighted. The interface displays a list of available AMIs, each with details such as name, architecture, and description. The wider environment includes navigation menus on the left and a search bar at the top. The mood is neutral and instructional, focused on guiding users through selecting an appropriate AMI. Visible text includes Amazon Machine Image, Ubuntu 24.04 ARM64, and related AMI details. alt-text#center](images/aws1.png "Amazon Machine Image (AMI)")
 
 - Under **Instance type**, select a Graviton2-based type `m6g.xlarge`.
 
-  ![AWS Management Console alt-text#center](images/aws2.png "Figure 2: Instance type")
+  ![AWS Management Console displaying the instance type selection screen with m6g.xlarge highlighted. The primary subject is the list of available EC2 instance types, each showing details such as name, vCPUs, memory, and architecture. The m6g.xlarge row is selected, indicating 4 vCPUs and 16 GB memory, with Arm64 architecture. The wider environment includes navigation menus on the left and a search bar at the top. Visible text includes Instance type, m6g.xlarge, vCPUs, Memory, and Arm64. The tone is neutral and instructional, guiding users to select the correct instance type. alt-text#center](images/aws2.png "Instance type")
 
 - Configure your **Key pair (login)** by either creating a new key pair or selecting an existing one to securely access your instance.
 - In **Network settings**, ensure that **Allow HTTP traffic from the internet** and **Allow HTTPS traffic from the internet** are checked.
 
-  ![AWS Management Console alt-text#center](images/aws3.png "Figure 3: Network settings")
+  ![AWS Management Console showing the Network settings configuration screen for launching an EC2 instance. The primary subject is the Network settings panel, where the options Allow HTTP traffic from the internet and Allow HTTPS traffic from the internet are both checked. The wider environment includes navigation menus on the left and a summary of instance configuration steps at the top. Visible text includes Network settings, Allow HTTP traffic from the internet, and Allow HTTPS traffic from the internet. The tone is neutral and instructional, guiding users to enable the correct network access for their instance. alt-text#center](images/aws3.png "Network settings")
 
-  - Adjust **Storage** settings as needed — for most setups, 30 GB of gp3 (SSD) storage is sufficient.
- - Click **Launch Instance** to create your EC2 virtual machine. + - Adjust the Storage settings. For most use cases, 30 GB of gp3 (SSD) storage is enough. + - Select **Launch Instance** to create your EC2 virtual machine. diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md index 3e02ea50f..ecd3df14e 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/resource-class.md @@ -1,40 +1,31 @@ --- -title: Create Resource Class in CircleCI +title: Create a resource class in CircleCI weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Create a Resource Class for Self-Hosted Runner in CircleCI -This guide describes creating a **Resource Class** in the **CircleCI Web Dashboard** for a **self-hosted runner**. -A Resource Class uniquely identifies the runner and links it to your CircleCI namespace, enabling jobs to run on your custom machine environment. +## Overview -### Steps +This section describes creating a resource class in the CircleCI Web Dashboard for a self-hosted runner. A resource class uniquely identifies the runner and links it to your CircleCI namespace, enabling jobs to run on your custom machine environment. -1. **Go to the CircleCI Web Dashboard** - - From the left sidebar, navigate to **Self-Hosted Runners**. - - You’ll see a screen asking you to accept the **terms of use**. - - **Check the box** that says **“Yes, I agree to the terms”** to enable runners. - - Then click **Self-Hosted Runners** to continue setup. +## Register a resource class for your CircleCI self-hosted runner -![Self-Hosted Runners alt-text#center](images/shrunner0.png "Figure 1: Self-Hosted Runners ") +To register a resource class for your CircleCI self-hosted runner, start by navigating to **Self-Hosted Runners** in the left sidebar of the CircleCI dashboard. You’ll be prompted to accept the terms of use; check the box labeled “Yes, I agree to the terms” to enable runners. Once you’ve agreed, select **Self-Hosted Runners** to continue with the setup process. -2. **Create a New Resource Class** - - Click **Create Resource Class**. +![CircleCI dashboard showing the Self-Hosted Runners section. The main subject is the Self-Hosted Runners setup screen with a checkbox labeled Yes I agree to the terms and a button to enable runners. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Self-Hosted Runners, Yes I agree to the terms. The emotional tone is neutral and instructional. alt-text#center](images/shrunner0.png "Self-Hosted Runners section") -![Self-Hosted Runners alt-text#center](images/shrunner1.png "Figure 2: Create Resource Class ") +To create a new resource class, select **Create Resource Class**. -3. **Fill in the Details** - - **Namespace:** Your CircleCI username or organization (e.g., `circleci`) - - **Resource Class Name:** A descriptive name for your runner, such as `arm64` +![CircleCI dashboard showing the Create Resource Class button. The main subject is the Self-Hosted Runners setup screen with a prominent button labeled Create Resource Class. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Create Resource Class. The emotional tone is neutral and instructional. 
alt-text#center](images/shrunner1.png "Create Resource Class") -![Self-Hosted Runners alt-text#center](images/shrunner2.png "Figure 3: Details Resource Class & Namespace") +Fill in the details for your new resource class by entering your CircleCI username or organization in the **Namespace** field (for example, `circleci`). In the **Resource Class Name** field, provide a descriptive name for your runner, such as `arm64`, to clearly identify its purpose or architecture. -4. **Save and Copy the Token** - - Once created, CircleCI will generate a **Resource Class Token**. - - Copy this token and store it securely — you will need it to register your runner on the AWS Arm VM. +![CircleCI dashboard showing the form to create a resource class. The main subject is the Details section with fields for Namespace and resource class Name. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. Visible text: Namespace, resource class Name, Create resource class. The emotional tone is neutral and instructional. alt-text#center](images/shrunner2.png "Create a resource class") -![Self-Hosted Runners alt-text#center](images/shrunner3.png "Figure 4: Resource Class Token") - -With your Resource Class and token ready, proceed to the next section to set up the CircleCI self-hosted runner. +After creation, CircleCI generates a **Resource Class Token**. Copy this token and store it securely - you need it to register your runner on the AWS Arm VM. + +![CircleCI dashboard showing resource Class Token field and copy button. The main subject is the resource Class Token displayed in a text box, with a button labeled Copy next to it. The wider environment includes the CircleCI dashboard interface with navigation sidebar and setup instructions. The emotional tone is neutral and instructional. Visible text: resource class Token, Copy. alt-text#center](images/shrunner3.png "Resource Class Token field and copy button") + +With your resource class and token ready, proceed to the next section to set up the CircleCI self-hosted runner. diff --git a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md index 385df18f9..ed4a4d321 100644 --- a/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md +++ b/content/learning-paths/servers-and-cloud-computing/circleci-on-aws/validation.md @@ -1,5 +1,5 @@ --- -title: Verify CircleCI Arm64 Self-Hosted Runner +title: Verify CircleCI Arm64 Self-Hosted runner weight: 7 ### FIXED, DO NOT MODIFY @@ -8,9 +8,9 @@ layout: learningpathall ## Verify CircleCI Arm64 Self-Hosted Runner -This guide demonstrates validating your **self-hosted CircleCI runner** on an **Arm64 machine** by executing a simple workflow and a test computation. This ensures your runner is correctly configured and ready to process jobs. +This section walks you through validating your self-hosted CircleCI runner on an Arm64 machine by executing a simple workflow and a test computation. This ensures your runner is correctly configured and ready to process jobs. -### Create a Test Repository +## Create a test repository Start by creating a GitHub repository dedicated to verifying your Arm64 runner: ```console @@ -19,16 +19,23 @@ cd aws-circleci ``` This repository serves as a sandbox to confirm that your CircleCI runner can pick up and run jobs for Arm64 workflows. 
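+
+Later in this section you add a `.circleci/config.yml` file to this repository. Because you installed the CircleCI CLI earlier in this Learning Path, you can optionally lint that file locally before pushing. This is a suggested extra check rather than a required step, and recent CLI versions may ask you to run `circleci setup` with a personal API token first:
+
+```console
+# Run from the repository root after you have created .circleci/config.yml
+circleci config validate
+```
+
+If the configuration is valid, the CLI reports that the file is valid; otherwise it lists the errors to fix before you push.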
-### Add a Sample Script
-Create a minimal shell script that will be used to confirm the runner executes commands correctly:
+## Add a sample script
+Create a minimal shell script to confirm your runner can execute commands:
 
 ```console
 echo 'echo "Hello from CircleCI Arm64 Runner!"' > hello.sh
 chmod +x hello.sh
 ```
+
+This script prints a message when run, helping you verify that your self-hosted runner is working as expected.
 
-### Define the CircleCI Configuration
-Create a `.circleci/config.yml` file to define the workflow that will run on your Arm64 runner:
+## Define the CircleCI configuration
+Now create a `.circleci/config.yml` file to define the workflow that runs on your Arm64 runner:
 
 ```yaml
 version: 2.1
@@ -59,13 +66,17 @@ workflows:
     jobs:
       - test-Arm64
 ```
-- Defines a single job `test-Arm64` using a machine executor on a self-hosted Arm64 runner.
-- Checks CPU architecture with `uname -m` and `lscpu` to verify the runner.
-- Executes a simple script `hello.sh` to confirm the runner can run commands.
-- Runs a sample computation step to display CPU info and print.
+This configuration does the following:
+
+- Defines a single job called `test-Arm64` that uses a machine executor on your self-hosted Arm64 runner
+- Verifies the runner's architecture by running `uname -m` and checking the output of `lscpu`
+- Runs the `hello.sh` script to confirm the runner can execute commands
+- Performs a sample computation step that displays CPU information and prints a success message
 
-### Commit and Push to GitHub
-Once all files you created (`hello.sh`, `.circleci/config.yml`) are ready, push your project to GitHub so CircleCI can build and verify the Arm64 runner automatically.
+Each step helps you confirm that your CircleCI Arm64 runner is set up correctly and ready to process jobs.
+
+## Commit and push to GitHub
+After you create `hello.sh` and `.circleci/config.yml`, push your project to GitHub so CircleCI can build and verify your Arm64 runner:
 
 ```console
 git add .
@@ -74,36 +85,38 @@ git branch -M main
 git push -u origin main
 ```
 
-- **Add Changes**: Stage all modified and new files using `git add .`.
-- **Commit Changes**: Commit the staged files with a descriptive message.
-- **Set Main Branch**: Rename the current branch to `main`.
-- **Add Remote Repository**: Link your local repository to GitHub.
-- **Push Changes**: Push the committed changes to the `main` branch on GitHub.
+Here's what each command does:
+
+- `git add .` stages all your modified and new files for commit
+- `git commit -m` records the staged changes with a descriptive message
+- `git branch -M main` renames the current branch to `main` (if it isn't already)
+- `git push -u origin main` pushes your commits to the `main` branch on GitHub
+
+Once your code is on GitHub, CircleCI can start running your workflow automatically.
+
+## Start the CircleCI runner and run your job
 
-### Start CircleCI Runner and Execute Job
-Ensure that your CircleCI runner is enabled and started. This will allow your self-hosted runner to pick up jobs from CircleCI.
+Before you test your workflow, make sure your CircleCI runner is enabled and running. This lets your self-hosted runner pick up jobs from CircleCI:
 
 ```console
 sudo systemctl enable circleci-runner
 sudo systemctl start circleci-runner
 sudo systemctl status circleci-runner
 ```
-- **Enable CircleCI Runner**: Ensure the CircleCI runner is set to start automatically on boot.
-- **Start and Check Status**: Start the CircleCI runner and verify it is running.
+- Enable the runner so it starts automatically when your machine boots +- Start the runner and check its status to confirm it is running -After pushing your code to GitHub, open your **CircleCI Dashboard → Projects**, and confirm that your **test-Arm64 workflow** starts running using your **self-hosted runner**. +After you push your code to GitHub, go to your CircleCI Dashboard and select Projects. Look for your test-Arm64 workflow and check that it is running on your self-hosted runner. -If the setup is correct, you’ll see your job running under the resource class you created. +If everything is set up correctly, you’ll see your job running under the resource class you created. -### Output -Once the job starts running, CircleCI will: +## Output +Once the job starts running, CircleCI does the following: -- Verify Arm64 Runner: +- It verifies the Arm64 Runner: - ![Self-Hosted Runners alt-text#center](images/runnerv1.png "Figure 1: Self-Hosted Runners ") + ![CircleCI self-hosted runner dashboard showing a successful Arm64 job execution. The main panel displays job status as successful with green check marks. The sidebar lists workflow steps including checkout, verify Arm64 runner, and run sample computation. The environment is a web interface with a clean, professional layout. The overall tone is positive and confirms successful validation of the self-hosted runner. alt-text#center](images/runnerv1.png "Self-Hosted Runners ") -- Run sample computation: +- It runs a sample computation: - ![Self-Hosted Runners alt-text#center](images/computation.png "Figure 1: Self-Hosted Runners ") + ![CircleCI dashboard displaying the results of a sample computation job on a self-hosted Arm64 runner. The main panel shows the job status as successful with green check marks. Workflow steps listed in the sidebar include checkout, verify Arm64 runner, and run sample computation. The environment is a modern web interface with a clean, organized layout. On-screen text includes Success and CPU Info. The overall tone is positive, confirming the successful execution of the computation step on the Arm64 runner. alt-text#center](images/computation.png "Self-Hosted Runners ") All CircleCI jobs have run successfully, the sample computation completed, and all outputs are visible in the CircleCI Dashboard. diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md index c509ef77f..dc5171540 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/_index.md @@ -1,26 +1,21 @@ --- -title: Deploy Puppet on Google Cloud C4A (Arm-based Axion VMs) - -draft: true -cascade: - draft: true +title: Deploy Puppet on Google Cloud C4A minutes_to_complete: 30 -who_is_this_for: This is an introductory topic for software developers deploying and optimizing Puppet workloads on Arm Linux environments, specifically using Google Cloud C4A virtual machines powered by Axion processors. +who_is_this_for: This is an introductory topic for developers deploying and optimizing Puppet workloads on Arm Linux environments, specifically using Google Cloud C4A virtual machines powered by Axion processors. 
learning_objectives: - - Provision an Arm-based SUSE SLES virtual machine on Google Cloud (C4A with Axion processors) - - Install Puppet on a SUSE Arm64 (C4A) instance + - Provision an Arm-based SUSE SLES (SUSE Linux Enterprise Server) virtual machine (VM) on Google Cloud C4A with Axion processors + - Install Puppet on a SUSE Arm64 C4A instance - Verify Puppet by applying a test manifest and confirming successful resource creation on Arm64 - Benchmark Puppet by measuring catalog compile time, apply speed, and resource usage on Arm64 +author: Pareena Verma prerequisites: - A [Google Cloud Platform (GCP)](https://cloud.google.com/free) account with billing enabled - Basic familiarity with [Puppet](https://www.puppet.com/) -author: Pareena Verma - ##### Tags skilllevels: Introductory subjects: Performance and Architecture diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md index a83fa26d6..878ec7f70 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/background.md @@ -1,27 +1,30 @@ --- -title: Getting started with Puppet on Google Axion C4A (Arm Neoverse-V2) +title: Get started with Arm-based Google Axion and Puppet weight: 2 layout: "learningpathall" --- -## Google Axion C4A Arm instances in Google Cloud +## Automate and optimize cloud deployments on Arm + +Modern cloud workloads demand scalable, efficient, and automated infrastructure management. By combining Arm-based Google Axion C4A instances with Puppet, you can take advantage of high-performance, energy-efficient virtual machines and powerful configuration management tools. This section introduces the key technologies you'll use to automate and optimize your cloud deployments on Arm in Google Cloud. + +## Explore Google Axion C4A Arm instances on Google Cloud Google Axion C4A is a family of Arm-based virtual machines built on Google’s custom Axion CPU, which is based on Arm Neoverse-V2 cores. Designed for high-performance and energy-efficient computing, these virtual machines offer strong performance for modern cloud workloads such as CI/CD pipelines, microservices, media processing, and general-purpose applications. The C4A series provides a cost-effective alternative to x86 virtual machines while leveraging the scalability and performance benefits of the Arm architecture in Google Cloud. -To learn more about Google Axion, refer to the [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu) blog. - -## Puppet +To learn more about Google Axion, see the Google blog [Introducing Google Axion Processors, our new Arm-based CPUs](https://cloud.google.com/blog/products/compute/introducing-googles-new-arm-based-cpu). -[Puppet](https://puppet.com/) is an **open-source configuration management and automation tool** designed to help system administrators and DevOps teams **manage infrastructure as code**. Developed by [Puppet Labs](https://puppet.com/company/), it automates the provisioning, configuration, and management of servers and services across large-scale environments. +## Explore Puppet +[Puppet](https://puppet.com/) is an open-source configuration management and automation tool designed to help system administrators and DevOps teams manage infrastructure as code. 
Developed by [Puppet Labs](https://puppet.com/company/), it automates the provisioning, configuration, and management of servers and services across large-scale environments. -Puppet uses a **declarative language** to define system configurations, ensuring that every machine’s state matches the desired setup described in its manifests. It supports both **agent-based** and **agentless** architectures, making it flexible for diverse deployment needs. +Puppet uses a declarative language to define system configurations, ensuring that every machine’s state matches the desired setup described in its manifests. It supports both agent-based and agentless architectures, making it flexible for diverse deployment needs. -Known for its **scalability**, **reliability**, and **idempotent behavior**, Puppet continuously enforces configurations, reducing manual effort and configuration drift. It integrates well with major platforms like **Linux**, **Windows**, **macOS**, and cloud providers such as **AWS**, **Azure**, and **GCP**. +Known for its scalability, reliability, and idempotent behavior, Puppet continuously enforces configurations, reducing manual effort and configuration drift. It integrates well with major platforms like Linux, Windows, macOS, and cloud providers such as AWS, Azure, and GCP. -Common use cases include **automating server configuration**, **applying security policies**, **software installation**, and **infrastructure auditing**. Puppet is widely used in enterprises for managing **hybrid and multi-cloud environments** efficiently. +Common use cases include automating server configuration, applying security policies, software installation, and infrastructure auditing. Puppet is widely used in enterprises for managing hybrid and multi-cloud environments efficiently. To learn more, visit the [official Puppet website](https://puppet.com/). diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md index 8d2b43a6a..c2fcc9ec6 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/baseline.md @@ -1,16 +1,15 @@ --- -title: Puppet Baseline Testing on Google Axion C4A Arm Virtual Machine +title: Perform Puppet baseline testing weight: 5 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Puppet baseline testing on GCP SUSE VMs +## Overview +You can perform baseline testing of Puppet on a GCP SUSE Arm64 VM to make sure your installation works as expected. In this Learning Path, you'll verify Puppet and Facter versions, run basic Puppet commands, apply a simple manifest, and confirm that system facts are collected correctly. These steps help you validate your setup before moving on to advanced Puppet tasks. -You can perform baseline testing of Puppet on a GCP SUSE Arm64 VM to verify that the installation works correctly. You will check Puppet and Facter versions, run basic Puppet commands, apply a simple manifest, and confirm that system facts are collected accurately. - -### Verify the Puppet installation +## Verify the Puppet installation Verify that Puppet and Facter are correctly installed and respond to version checks. 
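+
+If you prefer a single combined check before looking at each tool individually, the following is a small sketch that prints the version of each component in turn. It assumes `puppet`, `facter`, and `ruby` are already on your `PATH` from the earlier installation steps:
+
+```console
+# Print the version of each tool used in this Learning Path
+for tool in puppet facter ruby; do
+  echo "--- $tool ---"
+  "$tool" --version
+done
+```
+
+Each command should print a version number rather than a "command not found" error. The individual checks below show the expected output in more detail.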
@@ -49,7 +48,7 @@ The output confirms the Ruby version and architecture: ruby 3.1.4p223 (2023-03-30 revision 957bb7cb81) [aarch64-linux] ``` -### Run a simple Puppet command +## Run a simple Puppet command Check that Puppet responds to commands by running `puppet help`. If the help menu appears, Puppet is working correctly. @@ -96,9 +95,9 @@ See 'puppet help ' for help on a specific subcommand. Puppet v8.10.0 ``` -### Test a Simple Puppet Manifest +## Test a simple Puppet manifest -Create a basic Puppet script to make sure Puppet can apply configurations. If it successfully creates the test file, your Puppet agent functions as expected. +Create a basic Puppet script to make sure Puppet can apply configurations. The purpose of this script is to verify that Puppet can successfully apply configurations on your system. If the script runs and creates the specified test file, it confirms that your Puppet agent is functioning correctly: ```bash cd ~ @@ -124,7 +123,7 @@ Notice: /Stage[main]/Main/File[/tmp/puppet_test.txt]/ensure: defined content as Notice: Applied catalog in 0.01 seconds ``` -Open the file created by Puppet to confirm the content matches your script. This step validates that Puppet executed your manifest correctly. +Open the file created by Puppet to confirm the content matches your script. This step validates that Puppet executed your manifest correctly: ```console cat /tmp/puppet_test.txt @@ -135,7 +134,7 @@ Output: Hello from Puppet on SUSE ARM64! ``` -### Check Facter integration +## Check Facter integration Run `facter` commands to verify that it collects accurate system details, such as the OS and CPU type. This ensures Puppet can gather the facts it needs for automation decisions. @@ -207,4 +206,13 @@ The output is similar to the following: } ``` -With these checks complete, proceed to the Puppet benchmarking section to run workload-focused tests on the GCP SUSE VMs. +## What you've accomplished and what's next + +You've completed the essential baseline checks for Puppet on your GCP SUSE Arm64 VM. At this point, you've: + +- Verified that Puppet, Facter, and Ruby are installed and working +- Confirmed Puppet responds to commands and applies manifests +- Validated that Facter collects accurate system facts + +This progress means your environment is ready for more advanced testing. Next, you'll move on to Puppet benchmarking, where you'll run workload-focused tests to measure performance on your GCP SUSE VM. + diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md index 134d987da..c06156d3d 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/benchmarking.md @@ -1,5 +1,5 @@ --- -title: Puppet Benchmarking +title: Benchmark Puppet weight: 6 ### FIXED, DO NOT MODIFY @@ -7,13 +7,12 @@ layout: learningpathall --- -## Puppet Benchmark on GCP SUSE Arm64 VM +## Benchmark Puppet on a GCP SUSE Arm64 VM -This guide explains how to perform a **Puppet standalone benchmark** on a **Google Cloud Platform (GCP) SUSE Linux Arm64 VM**. -It measures Puppet’s local execution performance without requiring a Puppet Master. +This section walks you through how to perform a Puppet standalone benchmark on a Google Cloud Platform (GCP) SUSE Linux Arm64 VM. It measures Puppet’s local execution performance without requiring a Puppet Master. 
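+
+As a quick illustration of what standalone mode means, `puppet apply` evaluates a manifest locally with no Puppet server involved. You can even pass a one-line manifest inline; the notify resource below is only a throwaway example:
+
+```console
+# Apply an inline manifest directly from the command line (no Puppet server required)
+puppet apply -e "notify { 'Standalone apply works': }"
+```
+
+If this prints a notice followed by an "Applied catalog" line, your VM is ready for the benchmark steps that follow.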
-### Prerequisites +## Prerequisites Ensure that Puppet is installed and functioning correctly: ```console @@ -24,7 +23,7 @@ Output: 8.10.0 ``` -### Create a Benchmark Manifest +## Create a benchmark manifest Create a directory and a simple manifest file: ```console @@ -41,11 +40,16 @@ notify { 'Benchmark Test': } ``` -- **notify** is a built-in Puppet resource type that displays a message during catalog application (like a print or log message). -- **'Benchmark Test'** is the title of the resource — a unique identifier for this notify action. -- **message => 'Running Puppet standalone benchmark.'** specifies the text message Puppet will print when applying the manifest. +## Explore the code -### Run the Benchmark Command +Here is a breakdown of the key elements in the `benchmark.pp` manifest to help you understand how Puppet processes and displays information during the benchmark: + +- `notify` is a built-in Puppet resource type that displays a message during catalog application (like a print or log message). +- `Benchmark Test` is the title of the resource. It's a unique identifier for this notify action. +- `message => 'Running Puppet standalone benchmark.'` specifies the text message Puppet prints when applying the manifest. + + +## Run the benchmark command This step runs Puppet in standalone mode using the `apply` command to execute the benchmark manifest locally while measuring execution time and performance statistics. ```console @@ -67,30 +71,40 @@ user 0m0.676s sys 0m0.367s ``` -### Benchmark Metrics Explanation - -- **Compiled catalog** → Puppet compiled your manifest into an execution plan. -- **Applied catalog** → Puppet executed the plan on your system. -- **real** → Total elapsed wall time (includes CPU + I/O). -- **user** → CPU time spent in user-space. -- **sys** → CPU time spent in system calls. - -### Benchmark results -The above results were executed on a `c4a-standard-4` (4 vCPU, 16 GB memory) Axiom Arm64 VM in GCP running SuSE: - -| **Metric / Log** | **Output** | -|-------------------|------------| -| Compiled catalog | 0.01 seconds | -| Environment | production | -| Applied catalog | 0.01 seconds | -| real | 0m1.054s | -| user | 0m0.676s | -| sys | 0m0.367s | - -### Puppet benchmarking summary - -- **Catalog compilation:** Completed in just **0.01 seconds**, showing excellent processing speed on **Arm64**. -- **Environment:** Executed smoothly under the **production** environment. -- **Configuration version:** Recorded as **1763407825**, confirming successful version tracking. -- **Catalog application:** Finished in **0.01 seconds**, demonstrating very low execution latency. -- **Real time:** Total runtime of **1.054 seconds**, reflecting efficient end. +## Interpret the benchmark metrics +Here is a breakdown of the key benchmark metrics you will see in the output: + +- `Compiled catalog`: Puppet parsed your manifest and generated a catalog, which is an execution plan describing the desired system state. This metric shows how quickly Puppet can process and prepare your configuration for application. Fast compilation times indicate efficient manifest design and good platform performance. +- `Applied catalog`: Puppet applied the compiled catalog to your VM, making the necessary changes to reach the desired state. This value reflects how quickly Puppet can enforce configuration changes on your Arm64 system. Low application times suggest minimal system overhead and effective resource management. 
+- `real`: This is the total elapsed wall-clock time from start to finish of the `puppet apply` command. It includes all time spent running the process, waiting for I/O, and any other delays. Lower real times mean faster end-to-end execution, which is important for automation and scaling.
+- `user`: This measures the amount of CPU time spent executing user-space code (Puppet and Ruby processes) during the benchmark. High user time relative to real time can indicate CPU-bound workloads, while lower values suggest efficient code execution.
+- `sys`: This is the CPU time spent in system (kernel) calls, such as file operations or network access. Lower sys times are typical for lightweight manifests, while higher values may indicate more intensive system interactions or I/O operations.
+
+## Benchmark results
+
+The following table summarizes the benchmark metrics collected from running Puppet on a `c4a-standard-4` (4 vCPU, 16 GB memory) Axion Arm64 VM in Google Cloud Platform (GCP) with SUSE Linux. These results provide a baseline for evaluating Puppet’s performance on Arm64 infrastructure. Use this data to compare against other VM types or architectures, and to identify areas for further optimization.
+
+| Metric / Log | Output |
+|--------------------|--------------|
+| Compiled catalog | 0.01 seconds |
+| Environment | production |
+| Applied catalog | 0.01 seconds |
+| real | 0m1.054s |
+| user | 0m0.676s |
+| sys | 0m0.367s |
+
+These metrics reflect efficient catalog compilation and application times, as well as low system overhead, demonstrating the strong performance of Puppet on Arm64-based GCP VMs.
+
+## Review Puppet benchmarking results
+
+Confirm that your benchmark output matches the expected metrics for catalog compilation, application, and system resource usage. If your results differ significantly, investigate VM resource allocation, manifest complexity, or system load. Use these metrics to validate Puppet performance on Arm64 and identify opportunities for further optimization.
+
+These benchmark results demonstrate that catalog compilation completed in only 0.01 seconds, highlighting the processing speed of the Arm64 platform. The benchmark ran smoothly in the production environment, and the configuration version was successfully recorded as 1763407825. Catalog application also finished in 0.01 seconds, indicating very low execution latency. The total runtime was 1.054 seconds, which reflects efficient overall performance for Puppet on an Arm64 SUSE VM in Google Cloud Platform.
+
+This benchmarking method is useful for validating Puppet performance after migration to Arm64, or when optimizing infrastructure for cost and speed. For more advanced benchmarking, consider automating multiple runs, collecting metrics over time, and comparing results with x86-based VMs to quantify the benefits of Arm64 on GCP. A short sketch of such a loop appears at the end of this page.
+
+## Summary and next steps
+
+You’ve successfully benchmarked Puppet on an Arm64-based SUSE VM in Google Cloud Platform. You created and applied a simple manifest, measured key performance metrics, and interpreted the results to validate Puppet’s efficiency on Arm infrastructure. These steps help ensure your configuration management setup is optimized for speed and reliability on modern cloud platforms.
+
+Well done - completing this benchmark gives you a solid foundation for further automation and optimization with Puppet on Arm.
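+
+As mentioned above, averaging several runs gives more stable numbers than a single `time` measurement. The following is a minimal sketch that repeats the apply step; it assumes the `benchmark.pp` manifest created earlier in `~/puppet-benchmark` and uses the shell's built-in `time` (add `sudo` if you needed it for the single run):
+
+```console
+# Repeat the apply step a few times to smooth out run-to-run variation
+cd ~/puppet-benchmark
+for i in 1 2 3; do
+  echo "--- run $i ---"
+  time puppet apply benchmark.pp
+done
+```
+
+Record the `real` value from each run and compare the average with the single-run baseline shown in the table earlier on this page.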
\ No newline at end of file diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md index 5e5d65cfc..2909d9835 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/installation.md @@ -1,30 +1,28 @@ --- -title: Install Puppet +title: Install Puppet on a GCP VM weight: 4 ### FIXED, DO NOT MODIFY layout: learningpathall --- -## Install Puppet on GCP VM -This guide walks you through installing Puppet on a **Google Cloud Platform (GCP) SUSE Linux Arm64 VM**, including all dependencies, Ruby setup, and environment preparation. +## Set up your environment and install Puppet +This section walks you through installing Puppet on a Google Cloud Platform (GCP) SUSE Linux arm64 VM. You'll set up all required dependencies, build Ruby from source, and prepare the environment for Puppet automation. -### Install build dependencies and Ruby from source -Installs all required tools and builds Ruby 3.1.4 from source to ensure compatibility with Puppet. +## Install dependencies and Ruby +To get started, you'll install the required development tools and libraries, then build Ruby 3.1.4 from source. This approach prepares your environment for Puppet and helps prevent compatibility problems. + +To install the necessary packages for Ruby use this command: -First we install the prerequisites for ruby: ```console sudo zypper install git curl gcc make patch libyaml-devel libffi-devel libopenssl-devel readline-devel zlib-devel gdbm-devel bzip2 bzip2-devel ``` -NOTE: -```note -Due to changing version dependencies, you may receive a message in the "zypper" -command above that ncurses-devel is not the correct version. If so, please select the -option that permits downgrading of the installed ncurses-devel package to the required -version (normally "Solution 1"), followed by confirmation with "y". -``` -Then, we will install ruby itself: + +{{% notice Note %}}If you see a version conflict for `ncurses-devel` during the `zypper` install, choose the option that allows downgrading `ncurses-devel` to the required version (usually "Solution 1"). Confirm the downgrade by entering "y" when prompted. This step can be confusing at first, but it's a common requirement for building Ruby from source on SUSE Linux.{{% /notice %}} + +Next, install ruby: + ```console cd ~ sudo wget https://cache.ruby-lang.org/pub/ruby/3.1/ruby-3.1.4.tar.gz @@ -34,45 +32,57 @@ sudo ./configure sudo make && sudo make install ``` -### Verify Ruby -Checks that Ruby is correctly installed and available in your system path. +## Verify Ruby +Check that Ruby is correctly installed and available in your system PATH: ```console ruby -v which ruby ``` +The expected output is: ```output ruby 3.1.4p223 (2023-03-30 revision 957bb7cb81) [aarch64-linux] /usr/local/bin/ruby ``` -### Install Puppet dependencies -Installs essential Puppet libraries (`semantic_puppet, facter, hiera`) needed for automation tasks. +## Install Puppet dependencies +Install the core Puppet libraries to enable automation and configuration management on your Arm-based GCP VM. -- **semantic_puppet** – Provides tools for handling Puppet-specific versioning, modules, and dependency constraints. -- **facter** – Collects system information (facts) such as OS, IP, and hardware details for Puppet to use in configuration decisions. 
-- **hiera** – Key-value lookup tool that manages configuration data outside of Puppet manifests for flexible data separation.
+First, download and extract the Puppet source code:
 
 ```console
 cd ~
 sudo wget https://github.com/puppetlabs/puppet/archive/refs/tags/8.10.0.tar.gz
 sudo tar -xvf 8.10.0.tar.gz
 cd ~/puppet-8.10.0
+```
+
+Next, install the required Ruby gems for Puppet:
+
+```console
 sudo /usr/local/bin/gem install semantic_puppet -v "~> 1.0"
 sudo gem install facter -v "~> 4.0"
 sudo gem install hiera
 ```
 
-{{% notice Note %}}
-Puppet 8.8.1 version expands official support for Arm and AArch64, with new agent compatibility for AlmaLinux 9 (AARCH64), Rocky Linux 9 (AARCH64), and Ubuntu 24.04 (ARM). The release ensures compatibility with Ruby 3.3 and resolves multiple agent and catalog-related issues. Security is enhanced with an OpenSSL 3.0.14 upgrade, addressing CVE-2024-4603 and CVE-2024-2511 vulnerabilities.
-You can view [this release note](https://help.puppet.com/osp/current/Content/PuppetCore/PuppetReleaseNotes/release_notes_puppet_x-8-8-1.htm)
+- `semantic_puppet` manages Puppet-specific versioning and module dependencies
+- `facter` collects system information, such as operating system, IP address, and hardware details, for Puppet to use
+- `hiera` separates configuration data from Puppet manifests, making your automation setup more flexible
+
+These libraries are required for Puppet to run correctly and manage your Arm-based SUSE Linux VM on GCP.
+
+{{% notice Note %}}Puppet version 8.8.1 introduces expanded support for Arm and AArch64 platforms. This release adds agent compatibility for AlmaLinux 9 (AARCH64), Rocky Linux 9 (AARCH64), and Ubuntu 24.04 (ARM). It works with Ruby 3.3 and fixes several agent and catalog issues. Security is improved with OpenSSL 3.0.14, addressing recent vulnerabilities (CVE-2024-4603 and CVE-2024-2511).
-The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Puppet version 8.8.1, the minimum recommended on the Arm platforms.
-{{% /notice %}}
+For more information, see the [official Puppet release notes](https://help.puppet.com/osp/current/Content/PuppetCore/PuppetReleaseNotes/release_notes_puppet_x-8-8-1.htm).
 
-### Build and install the Puppet gem
-The **Puppet gem** provides the core Puppet framework, including its CLI, manifest parser, and resource management engine.
+The [Arm Ecosystem Dashboard](https://developer.arm.com/ecosystem-dashboard/) recommends Puppet 8.8.1 as the minimum version for Arm platforms.
+{{% /notice %}}
+
+## Build and install the Puppet gem
+The Puppet gem provides the core Puppet framework, including its CLI, manifest parser, and resource management engine.
 
 Build and install the Puppet 8.10.0 package from source into your Ruby environment.
 
@@ -81,7 +91,7 @@ sudo gem build puppet.gemspec
 sudo /usr/local/bin/gem install puppet-8.10.0.gem
 ```
 
-### Verification
+## Verify Puppet installation
 
 Confirm Puppet is successfully installed and ready to use on the system.
```console diff --git a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md index a8a819d24..d31b1fccd 100644 --- a/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md +++ b/content/learning-paths/servers-and-cloud-computing/puppet-on-gcp/instance.md @@ -11,33 +11,41 @@ layout: learningpathall In this section, you will learn how to provision a Google Axion C4A Arm virtual machine on Google Cloud Platform (GCP) using the `c4a-standard-4` (4 vCPUs, 16 GB memory) machine type in the Google Cloud Console. {{% notice Note %}} -For support on GCP setup, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/). +If you need help on setting up GCP, see the Learning Path [Getting started with Google Cloud Platform](https://learn.arm.com/learning-paths/servers-and-cloud-computing/csp/google/). {{% /notice %}} -## Provision a Google Axion C4A Arm VM in Google Cloud Console +## Provision a Google Axion C4A Arm VM -To create a virtual machine based on the C4A instance type: -- Navigate to the [Google Cloud Console](https://console.cloud.google.com/). -- Go to **Compute Engine > VM Instances** and select **Create Instance**. -- Under **Machine configuration**: - - Populate fields such as **Instance name**, **Region**, and **Zone**. - - Set **Series** to `C4A`. - - Select `c4a-standard-4` for machine type. +To create a virtual machine based on the C4A instance type, start by navigating to the [Google Cloud Console](https://console.cloud.google.com/), and follow these steps: +- In the Google Cloud Console, go to **Compute Engine > VM Instances** and select **Create Instance**. +- Under **Machine configuration**, enter the following details: + - **Instance name**: Choose a unique name for your VM. + - **Region** and **Zone**: Select the location closest to your users or workloads. + - **Series**: Set to `C4A` to use Arm-based Axion processors. + - **Machine type**: Select `c4a-standard-4` (4 vCPUs, 16 GB memory). - ![Create a Google Axion C4A Arm virtual machine in the Google Cloud Console with c4a-standard-4 selected alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") + ![Creating a Google Axion C4A Arm virtual machine in Google Cloud Console with c4a-standard-4 selected. The screenshot shows the VM creation form with the C4A series and c4a-standard-4 machine type highlighted. alt-text#center](images/gcp-vm.png "Creating a Google Axion C4A Arm virtual machine in Google Cloud Console") -- Under **OS and Storage**, select **Change**, then choose an Arm64-based OS image. For this Learning Path, use **SUSE Linux Enterprise Server**. -- If using use **SUSE Linux Enterprise Server**. Select "Pay As You Go" for the license type. -- Once appropriately selected, please Click **Select**. -- Under **Networking**, enable **Allow HTTP traffic**. -- Click **Create** to launch the instance. -- Once created, you should see a "SSH" option to the right in your list of VM instances. Click on this to launch a SSH shell into your VM instance: +- Under **OS and storage**, select **Change**, then choose an Arm64-based operating system image. For this Learning Path, select **SUSE Linux Enterprise Server**. +- For **SUSE Linux Enterprise Server**, select *Pay As You Go* as the license type. 
+- After selecting the image and license, select **Select** to confirm your choice. +- Under **Networking**, enable **Allow HTTP traffic** to permit web access. +- Select **Create** to launch your VM instance. +- When the instance is ready, you'll see an **SSH** option next to your VM in the list. Select **SSH** to open a shell session in your browser. -![Invoke a SSH session via your browser alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance") +![Browser window showing the Google Cloud Console with the SSH button highlighted next to a running VM instance. The interface displays the VM name, status, and available actions. The environment is a web-based dashboard with navigation menus on the left. The emotional tone is neutral and instructional. Visible text includes VM instance details and the SSH button label. alt-text#center](images/gcp-ssh.png "Invoke a SSH session into your running VM instance") -- A window from your browser should come up and you should now see a shell into your VM instance: +When you select **SSH**, a new browser window opens with a shell prompt for your VM instance. You now have direct command-line access to your Arm-based VM, ready to run commands and manage your environment. -![Terminal Shell in your VM instance alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance") +![Terminal window displaying a shell prompt inside a Google Axion C4A Arm VM instance. The interface shows a command line ready for input, with the username and hostname visible at the prompt. The wider environment is a browser-based SSH session within the Google Cloud Console. The emotional tone is neutral and instructional. Visible text includes the shell prompt and any default welcome messages shown in the terminal. alt-text#center](images/gcp-shell.png "Terminal shell in your VM instance") -Next, let's install puppet! \ No newline at end of file + +## What you've accomplished and what's next + +You have successfully provisioned a Google Axion C4A Arm virtual machine on Google Cloud Platform using the Console. You selected the Arm64-based SUSE Linux Enterprise Server image, configured networking, and launched your VM. You also connected to your instance using the built-in SSH feature. You now have a running Arm VM on GCP and access to its shell environment. + +Next, you'll install Puppet on your new instance to automate configuration and management tasks. + + + \ No newline at end of file
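+
+As an optional sanity check before moving on, you can confirm from the SSH session that the VM is Arm-based. The expected values here are an assumption based on the Axion (Neoverse-V2) CPU used by C4A instances:
+
+```console
+# Confirm the CPU architecture of the new VM; a C4A Axion instance should report aarch64
+uname -m
+lscpu | grep -i -E 'architecture|vendor'
+```
+
+If `uname -m` prints `aarch64`, you're running on the Arm-based VM and can continue to the Puppet installation.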