# RHCE Practice Lab

This repo contains all the files needed to deploy an RHCE practice lab. The target infrastructure is OpenShift Virtualization, and network services (e.g. DNS) are handled by OPNsense. Once deployed, the lab consists of 7 VMs:

- controller
- utility
- node1
- node2
- node3
- node4
- node5

The lab uses the domain name `lab.example.com`. All of the files needed to complete the tasks on the exam are hosted on the utility server, e.g. [http://utility.lab.example.com/files](http://utility.lab.example.com/files)

You will perform all tasks as the `ansible` user on the `controller` node from the directory `/home/ansible/ansible`. The `ansible` user's password is `ansible` (really original, I know). Unless otherwise specified, the password for any vaulted files is `redhat`.

The lab is easily deployed with the following command:

`ansible-playbook create-lab.yml -e @vault.yml --vault-password-file vault-password`

The lab can be torn down by running the command:

`ansible-playbook destroy-lab.yml`

**Helpful hints:**

`ansible localhost -m setup` to print system facts. You may want to pipe that out to a text file to avoid having to run the command repeatedly and save yourself some time.

`ansible-config init --disabled > ansible.cfg` to generate a config file with all options commented.

You can use `ansible.builtin.debug` to print out things like facts to make sure your syntax is correct, e.g.

```
# printfacts.yml
- name: Print facts
  hosts: jump01.lab.cudanet.org
  gather_facts: true
  remote_user: root
  tasks:
    - name: print facts
      ansible.builtin.debug:
        msg: "The default IPv4 address for {{ inventory_hostname }} is {{ ansible_default_ipv4.address }}"
```

## Task 1. **install and configure ansible:**

i) Install podman, ansible-core and ansible-navigator. `/etc/yum.repos.d/rhce.repo` should already be configured to pull packages from utility.lab.example.com.
solution

`dnf -y install podman ansible-core ansible-navigator`
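A quick sanity check (nothing lab-specific, just the standard version flags) confirms everything installed:

```
# bash
ansible --version
ansible-navigator --version
podman --version
```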
ii) configure ansible.cfg to install collections by default to `~/ansible/mycollections` and roles to `~/ansible/roles`
solution

```
# ansible.cfg
[defaults]
inventory = /home/ansible/ansible/inventory
remote_user = ansible
roles_path = /home/ansible/ansible/roles
collections_path = /home/ansible/ansible/mycollections
```
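To confirm the config file is being picked up, you can have `ansible-config` dump only the values that differ from the defaults:

```
# bash
ansible-config dump --only-changed
```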
iii) configure `inventory` as follows: node1 is in the dev group. node2 is in the test group. nodes 3 and 4 are in the prod group. node5 is in the balancers group. the prod group is in the webservers group.
solution

```
# inventory
[dev]
node1

[test]
node2

[prod]
node3
node4

[balancers]
node5

[webservers:children]
prod
```
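`ansible-inventory` can render the group hierarchy, which makes it easy to spot a node in the wrong group:

```
# bash
ansible-inventory --graph
```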
iv) Configure `ansible-navigator.yml` to pull the EE image from the utility server if it is missing. The registry is located at utility.lab.example.com:5000
solution

```
# ansible-navigator.yml
---
ansible-navigator:
  execution-environment:
    image: utility.lab.example.com:5000/ee-supported-rhel9:latest
    pull:
      policy: missing
  playbook-artifact:
    enable: false
```
*NOTE: You're basically going to have to memorize the contents of this file, because unlike `ansible.cfg` there is no way to generate an ansible-navigator.yml file with dummy values.*

## Task 2. **manage repositories:**

Write a playbook called `repos.yml` to add the BaseOS and AppStream repos to all managed hosts with GPG check enabled. The mirror is located at http://utility.lab.example.com/rhel9/
solution

```
---
# repos.yml
- name: Add BaseOS and AppStream repos to all hosts
  hosts: all
  become: true
  vars:
    repos:
      - BaseOS
      - AppStream
    baseurl: http://utility.lab.example.com/rhel9
    gpgkey_url: http://utility.lab.example.com/rhel9/RPM-GPG-KEY-redhat-release
    repo_file: /etc/yum.repos.d/rhce
  tasks:
    - name: Add {{ item }} repository
      ansible.builtin.yum_repository:
        name: "EX294_{{ item }}"
        description: "EX294 {{ item }} Repository"
        baseurl: "{{ baseurl }}/{{ item }}"
        enabled: true
        gpgcheck: true
        gpgkey: "{{ gpgkey_url }}"
        file: "{{ repo_file }}"
      loop: "{{ repos }}"
```
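Assuming the playbook ran cleanly, the two new EX294 repos should show up on every managed host:

```
# bash
ansible-navigator run -m stdout repos.yml
ansible all -m command -a "dnf repolist"
```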
## Task 3. **install roles and collections:**

i) Install collections for `ansible.posix`, `community.general` and `redhat.rhel_system_roles` to `~/ansible/mycollections/`. Collections are hosted at [http://utility.lab.example.com/files/](http://utility.lab.example.com/files/)

ii) install the `balancer` and `phpinfo` roles from [http://utility.lab.example.com/files](http://utility.lab.example.com/files) using a `requirements.yml` file.

*NOTE: although not a requirement, you can specify both roles and collections in your requirements file*
solution

```
# requirements.yml
---
roles:
  - name: phpinfo
    src: http://utility.lab.example.com/files/phpinfo.tar.gz
    path: /home/ansible/ansible/roles
  - name: balancer
    src: http://utility.lab.example.com/files/haproxy.tar.gz
    path: /home/ansible/ansible/roles

collections:
  - name: http://utility.lab.example.com/files/ansible-posix-2.1.0.tar.gz
    type: url
  - name: http://utility.lab.example.com/files/redhat-rhel_system_roles-1.108.6.tar.gz
    type: url
  - name: http://utility.lab.example.com/files/community-general-12.1.0.tar.gz
    type: url
```
```
# bash
mkdir -p /home/ansible/ansible/{roles,mycollections}
ansible-galaxy role install -r requirements.yml
ansible-galaxy collection install -r requirements.yml -p /home/ansible/ansible/mycollections
```
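You can confirm both installs landed in the right paths:

```
# bash
ansible-galaxy role list
ansible-galaxy collection list -p /home/ansible/ansible/mycollections
```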
## Task 4. **install packages and groups:**

Write a playbook called `install.yml` to install `php` and `httpd` on the `test` group, and the `RPM Development Tools` package group on the `dev` group only
solution

```
# install.yml
---
- name: Install Packages and Groups
  hosts: all
  become: true
  tasks:
    - name: Install packages on test group
      ansible.builtin.dnf:
        name:
          - httpd
          - php
        state: latest
      when: inventory_hostname in groups['test']

    - name: Install RPM Development Tools group on dev group
      ansible.builtin.dnf:
        name: "@RPM Development Tools"
        state: latest
      when: inventory_hostname in groups['dev']
```
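A run plus a quick spot check on each group (the `rpm -q` and `dnf group list --installed` commands are just sanity checks, not part of the task):

```
# bash
ansible-navigator run -m stdout install.yml
ansible test -m command -a "rpm -q php httpd"
ansible dev -m command -a "dnf group list --installed"
```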
## Task 5. **create a role:**

i) Create a role called `apache` to install, start and persistently enable `httpd` and `firewalld`.
solution

```
# defaults/main.yml
---
apache_packages:
  - httpd
  - firewalld
```
```
# handlers/main.yml
---
- name: restart httpd
  ansible.builtin.service:
    name: httpd
    state: restarted
```
*NOTE: You can create the basic file structure of the role with `ansible-galaxy role init apache`*
solution

```
apache/
├── defaults/
│   └── main.yml
├── handlers/
│   └── main.yml
├── tasks/
│   └── main.yml
├── templates/
│   └── index.html.j2
└── meta/
    └── main.yml
```
ii) Allow HTTP traffic through the firewall.
solution

```
# tasks/main.yml
---
- name: Install httpd and firewalld
  ansible.builtin.package:
    name: "{{ apache_packages }}"
    state: present

- name: Enable and start firewalld
  ansible.builtin.service:
    name: firewalld
    state: started
    enabled: true

- name: Enable and start httpd
  ansible.builtin.service:
    name: httpd
    state: started
    enabled: true

- name: Allow HTTP service through firewalld
  ansible.posix.firewalld:
    service: http
    permanent: true
    state: enabled
    immediate: true

- name: Deploy index.html with FQDN and IPv4
  ansible.builtin.template:
    src: index.html.j2
    dest: /var/www/html/index.html
    owner: root
    group: root
    mode: '0644'
  notify: restart httpd
```
iii) Populate `index.html` with the FQDN and IPv4 address using a Jinja2 template, pulling those values from Ansible facts.
solution

```
# templates/index.html.j2
<html>
<head><title>Apache Test Page</title></head>
<body>
<h1>Apache is working</h1>
<p>FQDN: {{ ansible_facts.fqdn }}</p>
<p>IPv4 Address: {{ ansible_facts.default_ipv4.address }}</p>
</body>
</html>
```
iv) Finally, run the role against the `dev` group
solution

```
# apache.yml
---
- name: Configure Apache web servers
  hosts: dev
  become: true
  roles:
    - apache
```
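Assuming httpd is now listening on node1 (the only dev host in this inventory), a curl from the controller should return the templated page:

```
# bash
ansible-navigator run -m stdout apache.yml
curl -s http://node1.lab.example.com
```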
## Task 6. **use a role:**

i) Use roles to apply the `balancer` role to the `balancers` group and the `phpinfo` role to the `webservers` group. Servers with the `phpinfo` role applied should report the FQDN and IP address of the web server, and refreshing the web browser should round robin between nodes 3 and 4. You should have already installed these roles in task 3.
solution

```
# roles.yml
---
- name: Configure load balancer
  hosts: balancers
  become: true
  roles:
    - balancer

- name: Configure web servers
  hosts: webservers
  become: true
  roles:
    - phpinfo
```
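To see the round robin without a browser (assuming the balancer listens on port 80 on node5), hit it twice; the reported host should alternate between node3 and node4:

```
# bash
curl -s http://node5.lab.example.com
curl -s http://node5.lab.example.com
```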
## Task 7. **manage SELinux:**

i) Use the `ansible.posix.selinux` module to configure SELinux to be enabled and enforcing on all managed hosts. Don't forget: changes to SELinux state require a reboot to take effect.
solution

```
---
- name: Ensure SELinux is enabled and enforcing
  hosts: all
  become: true
  tasks:
    - name: Set SELinux to enforcing
      ansible.posix.selinux:
        policy: targeted
        state: enforcing
      notify: Reboot if SELinux state changed

  handlers:
    - name: Reboot if SELinux state changed
      ansible.builtin.reboot:
        msg: "Rebooting to apply SELinux changes"
        reboot_timeout: 600
```
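`getenforce` on every host is the quickest way to verify the end state:

```
# bash
ansible all -m command -a "getenforce"
```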
## Task 8. **manage file content:**

i) Populate `/etc/issue` with the name of the lifecycle environment, e.g. "Development" for `dev`, "Testing" for `test` and "Production" for `prod`.
solution

```
# issue.yml
---
- name: Automatically populate /etc/issue with environment name
  hosts:
    - dev
    - test
    - prod
  become: true
  tasks:
    - name: Determine environment name from inventory groups
      ansible.builtin.set_fact:
        env_name: >-
          {%- if 'prod' in group_names -%}
          Production
          {%- elif 'test' in group_names -%}
          Testing
          {%- elif 'dev' in group_names -%}
          Development
          {%- endif -%}

    - name: Populate /etc/issue
      ansible.builtin.copy:
        dest: /etc/issue
        content: |
          {{ env_name }}
        owner: root
        group: root
        mode: '0644'
```
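A one-liner to check the banner landed on the right hosts:

```
# bash
ansible dev:test:prod -m command -a "cat /etc/issue"
```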
## Task 9. **manage storage:**

i) Write a playbook called `partition.yml`. It should create a 1500MiB partition on vdb as ext4 mounted at /devmount, and a 1500MiB partition on vdc as ext4 mounted at /devmount1, unless there isn't enough space on vdc, in which case make it 800MiB and print a message stating such. Check for vde. If there is no vde present, print a message stating there's no such drive.

*NOTE: My exam said to create partitions, but all examples I've seen point to logical volumes. Maybe practice both?*
solution

```
# partition.yml
---
- name: Configure disk partitions and mounts
  hosts: all
  become: true
  gather_facts: true

  tasks:
    ####################################################################
    # /dev/vdb — always create 1500MiB partition mounted at /devmount
    ####################################################################
    - name: Create 1500MiB partition on /dev/vdb
      community.general.parted:
        device: /dev/vdb
        number: 1
        state: present
        part_end: 1500MiB

    - name: Create ext4 filesystem on /dev/vdb1
      ansible.builtin.filesystem:
        fstype: ext4
        dev: /dev/vdb1

    - name: Mount /dev/vdb1 at /devmount
      ansible.builtin.mount:
        path: /devmount
        src: /dev/vdb1
        fstype: ext4
        state: mounted

    ####################################################################
    # /dev/vdc — size-based logic (1500MiB or 800MiB)
    ####################################################################
    - name: Determine partition size for /dev/vdc
      ansible.builtin.set_fact:
        vdc_part_size: >-
          {{ '1500MiB'
             if (ansible_facts.devices.vdc.sectors | int *
                 ansible_facts.devices.vdc.sectorsize | int) >= (1500 * 1024 * 1024)
             else '800MiB' }}
      when: "'vdc' in ansible_facts.devices"

    - name: Print a message if /dev/vdc is too small for 1500MiB
      ansible.builtin.debug:
        msg: "Not enough space on /dev/vdc for a 1500MiB partition; creating an 800MiB partition instead"
      when:
        - "'vdc' in ansible_facts.devices"
        - vdc_part_size == '800MiB'

    - name: Create partition on /dev/vdc
      community.general.parted:
        device: /dev/vdc
        number: 1
        state: present
        part_end: "{{ vdc_part_size }}"
      when: "'vdc' in ansible_facts.devices"

    - name: Create ext4 filesystem on /dev/vdc1
      ansible.builtin.filesystem:
        fstype: ext4
        dev: /dev/vdc1
      when: "'vdc' in ansible_facts.devices"

    - name: Mount /dev/vdc1 at /devmount1
      ansible.builtin.mount:
        path: /devmount1
        src: /dev/vdc1
        fstype: ext4
        state: mounted
      when: "'vdc' in ansible_facts.devices"

    ####################################################################
    # /dev/vde presence check
    ####################################################################
    - name: Warn if /dev/vde is not present
      ansible.builtin.debug:
        msg: "Disk /dev/vde is not present"
      when: "'vde' not in ansible_facts.devices"
```
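After a run, `lsblk` on the managed hosts will show whether the partitions and mounts came out as intended:

```
# bash
ansible all -m command -a "lsblk"
```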
## Task 10. **manage directories and symlinks:**

i) create the directory `/webdev` with `U=RWX,G=RWX,O=RX` permissions. It should be owned by the `webdev` group. It should have the special permission `set group id` (that's 2775 in octal). Symlink from `/webdev` to `/var/www/html/webdev`.

ii) create `/webdev/index.html` to report the hostname and IP address. Allow HTTP traffic through the firewall. It should be browseable by the dev group.
solution

```
# webcontent.yml
---
- name: Configure restricted web content for dev hosts
  hosts: dev
  become: true
  gather_facts: true

  tasks:
    # ---------------- SELinux ----------------
    - name: Ensure SELinux is enforcing
      ansible.posix.selinux:
        policy: targeted
        state: enforcing

    - name: Install SELinux utilities
      ansible.builtin.package:
        name: policycoreutils-python-utils
        state: present

    # ---------------- Groups & Users ----------------
    - name: Ensure webdev group exists
      ansible.builtin.group:
        name: webdev
        state: present

    - name: Add ansible user to webdev group
      ansible.builtin.user:
        name: ansible
        groups: webdev
        append: true

    # ---------------- Web Content ----------------
    - name: Create /webdev directory with setgid permissions
      ansible.builtin.file:
        path: /webdev
        state: directory
        owner: root
        group: webdev
        mode: "2775"

    - name: Create index.html using Ansible facts
      ansible.builtin.copy:
        dest: /webdev/index.html
        owner: root
        group: webdev
        mode: "0644"
        content: |
          <html>
          <head><title>WebDev Host Info</title></head>
          <body>
          <h1>WebDev Page</h1>
          <p>Hostname: {{ ansible_facts['hostname'] }}</p>
          <p>IP Address: {{ ansible_facts['default_ipv4']['address'] }}</p>
          </body>
          </html>

    # ---------------- Apache + Symlink ----------------
    - name: Create symlink from /webdev to /var/www/html/webdev
      ansible.builtin.file:
        src: /webdev
        dest: /var/www/html/webdev
        state: link
        force: true

    # ---------------- SELinux Context ----------------
    - name: Allow Apache to read /webdev via SELinux
      ansible.builtin.command:
        cmd: semanage fcontext -a -t httpd_sys_content_t "/webdev(/.*)?"
      register: semanage_result
      failed_when: semanage_result.rc not in [0, 1]

    - name: Apply SELinux context
      ansible.builtin.command: restorecon -Rv /webdev
      changed_when: false

    # ---------------- Firewall ----------------
    - name: Ensure firewalld is started and enabled
      ansible.builtin.service:
        name: firewalld
        state: started
        enabled: true

    - name: Allow HTTP through firewall
      ansible.posix.firewalld:
        service: http
        permanent: true
        immediate: true
        state: enabled

    # ---------------- Apache Access Control ----------------
    - name: Restrict access to webdev content to node1 only
      ansible.builtin.copy:
        dest: /etc/httpd/conf.d/webdev.conf
        owner: root
        group: root
        mode: "0644"
        content: |
          <Directory "/var/www/html/webdev">
              Options FollowSymLinks
              Require ip 127.0.0.1
              Require ip {{ ansible_facts['default_ipv4']['address'] }}
          </Directory>

    # ---------------- Services ----------------
    - name: Ensure httpd is started and enabled
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true

    - name: Restart httpd to apply configuration
      ansible.builtin.service:
        name: httpd
        state: restarted
```
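Since access is restricted to node1 itself, test from node1 rather than the controller (a minimal check, assuming curl is available on the node):

```
# bash
ansible node1 -m command -a "curl -s http://localhost/webdev/"
```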
## Task 11. **manage file content with templates:**

Populate /etc/myhosts using a hosts.j2 template and hosts.yml. Do not modify hosts.yml at all; all of the looping through the hosts should happen in the template file. Use a for loop in the j2 template to loop through each host.
solution

```
# hosts.j2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1       localhost localhost.localdomain localhost6 localhost6.localdomain6
{% for node in groups['all'] %}
{{ hostvars[node]['ansible_facts']['default_ipv4']['address'] }} {{ hostvars[node]['ansible_facts']['fqdn'] }} {{ hostvars[node]['ansible_facts']['hostname'] }}
{% endfor %}
```
```
# hosts.yml
- name: Hosts config deploy
  hosts: all
  become: true
  tasks:
    - name: Template a file to /etc/myhosts
      when: inventory_hostname in groups['dev']
      ansible.builtin.template:
        src: ./hosts.j2
        dest: /etc/myhosts
```
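To verify the file only landed on the dev host and contains an entry for every node:

```
# bash
ansible dev -m command -a "cat /etc/myhosts"
```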
## Task 12. **modify file contents:**

Download `hwreport.empty` from `utility.lab.example.com` to `/root/hwreport.txt` on all hosts. Replace the key/value pairs for hostname, BIOS version, memory in MiB, and the sizes of vda, vdb and vdc. If a device does not exist, put NONE.
solution

```
# hwreport.yml
---
- name: Generate hardware report
  hosts: all
  become: true
  tasks:
    - name: Download empty hwreport file
      ansible.builtin.get_url:
        url: http://utility.lab.example.com/files/hwreport.empty
        dest: /root/hwreport.txt
        mode: '0644'

    - name: Set hostname
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^HOST='
        line: "HOST={{ ansible_hostname }}"

    - name: Set BIOS version
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^BIOS='
        line: "BIOS={{ ansible_bios_version | default('NONE') }}"

    - name: Set memory size
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^MEMORY='
        line: "MEMORY={{ ansible_memtotal_mb }} MB"

    - name: Set vda disk size
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^VDA='
        line: "VDA={{ ansible_devices.vda.size | default('NONE') }}"

    - name: Set vdb disk size
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^VDB='
        line: "VDB={{ ansible_devices.vdb.size | default('NONE') }}"

    - name: Set vdc disk size
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^VDC='
        line: "VDC={{ ansible_devices.vdc.size | default('NONE') }}"

    - name: Set vdd disk size (NONE if missing)
      ansible.builtin.lineinfile:
        path: /root/hwreport.txt
        regexp: '^VDD='
        line: >-
          VDD={{ ansible_devices.vdd.size
                 if 'vdd' in ansible_devices
                 else 'NONE' }}
```
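Since the report lives under /root, escalate to read it back:

```
# bash
ansible all -m command -a "cat /root/hwreport.txt" -b
```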
## Task 13. **use ansible vault to encrypt a file:**

Create an encrypted variable file called `locker.yml` which should contain two variables and their values:

*pw_developer is value imadev*

*pw_manager is value imamgr*

The `locker.yml` file should be encrypted using the password `whenyouwishuponastar`. Store the password in a file named `secret.txt`, which is used to encrypt the variable file.
solution

```
# bash
echo "whenyouwishuponastar" > secret.txt
chmod 600 secret.txt
```
```
# locker.yml
pw_developer: imadev
pw_manager: imamgr
```
`ansible-vault encrypt locker.yml --vault-password-file secret.txt`
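`ansible-vault view` confirms the file decrypts with the password file:

```
# bash
ansible-vault view locker.yml --vault-password-file secret.txt
```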
## Task 14. **manage users:**

Download the variable file "http://utility.lab.example.com/files/user_list.yml", write a playbook named "users.yml", and then run the playbook on all the nodes using the two variable files user_list.yml and locker.yml.

i)
* Create a group opsdev
* Create users from the users variable whose job is equal to developer; they need to be in the opsdev group
* Assign a password using SHA512 format and run the playbook on the dev and test groups.
* The user password is {{ pw_developer }}

ii)
* Create a group opsmgr
* Create users from the users variable whose job is equal to manager; they need to be in the opsmgr group
* Assign a password using SHA512 format and run the playbook on the prod group.
* The user password is {{ pw_manager }}

iii) Use a when condition for each play
solution

```
# user_list.yml
users:
  - name: Fred
    role: manager
  - name: Wilma
    role: manager
  - name: Barney
    role: developer
  - name: Betty
    role: developer
```
```
# users.yml
---
- name: Download user_list.yml variable file
  hosts: all
  gather_facts: false
  tasks:
    - name: Download user_list.yml
      ansible.builtin.get_url:
        url: http://utility.lab.example.com/files/user_list.yml
        dest: ./user_list.yml
      run_once: true
      delegate_to: localhost

- name: Create developer users on dev and test
  hosts: dev:test
  become: true
  vars_files:
    - user_list.yml
    - locker.yml
  tasks:
    - name: Ensure opsdev group exists
      ansible.builtin.group:
        name: opsdev
        state: present

    - name: Create developer users
      ansible.builtin.user:
        name: "{{ item.name }}"
        groups: opsdev
        append: true
        password: "{{ pw_developer | password_hash('sha512') }}"
        state: present
      loop: "{{ users }}"
      when: item.role == "developer"

- name: Create manager users on prod
  hosts: prod
  become: true
  vars_files:
    - user_list.yml
    - locker.yml
  tasks:
    - name: Ensure opsmgr group exists
      ansible.builtin.group:
        name: opsmgr
        state: present

    - name: Create manager users
      ansible.builtin.user:
        name: "{{ item.name }}"
        groups: opsmgr
        append: true
        password: "{{ pw_manager | password_hash('sha512') }}"
        state: present
      loop: "{{ users }}"
      when: item.role == "manager"
```
`ansible-navigator run -m stdout users.yml --vault-password-file secret.txt`
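Spot-check one user from each play (names taken from the sample user_list.yml above):

```
# bash
ansible dev -m command -a "id Barney"
ansible prod -m command -a "id Fred"
```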
## Task 15. **re-encrypt a vaulted file:**

Rekey the variable file from [http://utility.lab.example.com/files/salaries.yml](http://utility.lab.example.com/files/salaries.yml)

i) Old password: changeme

ii) New password: redhat
solution

```
# salaries.yml
fred: $100000
wilma: $100000
barney: $100000
betty: $100000
```
`wget http://utility.lab.example.com/files/salaries.yml`

`ansible-vault rekey salaries.yml`
```
Vault password: changeme
New Vault password: redhat
Confirm New Vault password: redhat
```
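To confirm the rekey took, view the file and enter the new password when prompted:

```
# bash
ansible-vault view salaries.yml
```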
## Task 16. **manage cron:**

Create a cron job for the user ansible on all nodes. The playbook name is crontab.yml and the job details are below:

i) Every 2 minutes the job will execute `logger "EX294 in progress"`.
solution

```
# crontab.yml
---
- name: Create cron job for user ansible
  hosts: all
  become: true
  tasks:
    - name: Ensure cron job runs every 2 minutes
      ansible.builtin.cron:
        name: "EX294 progress log"
        user: ansible
        minute: "*/2"
        job: 'logger "EX294 in progress"'
        state: present
```
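Listing the ansible user's crontab on the nodes verifies the entry (reading another user's crontab needs root, hence `-b`):

```
# bash
ansible all -m command -a "crontab -l -u ansible" -b
```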
## Task 17. **Use the RHEL timesync system role:**

i) Create a playbook called "timesync.yml" that:

- Runs on all managed nodes
- Uses the timesync role
- Configures the role to use the currently active NTP provider
- Configures the role to use the time server utility.lab.example.com
- Configures the role to enable the iburst parameter
solution

```
# timesync.yml
- name: Configure time synchronization using RHEL timesync role
  hosts: all
  become: true
  roles:
    - role: redhat.rhel_system_roles.timesync
      vars:
        # timesync_ntp_provider is deliberately left unset so the role
        # keeps the currently active NTP provider
        timesync_ntp_servers:
          - hostname: utility.lab.example.com
            iburst: true
```
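Assuming chrony is the active provider on these hosts, `chronyc sources` should now list the utility server:

```
# bash
ansible all -m command -a "chronyc sources"
```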
## Task 18. **configure MOTD:**

Create a playbook called motd.yml.

i) Run the playbook.

ii) Whenever you ssh into any node (node1 here), the message will be as follows:

```
Welcome to node1
OS: RedHat 9.4
Architecture: x86_64
```
solution

```
# motd.yml
---
- name: Configure MOTD for all nodes
  hosts: all
  become: true
  gather_facts: true
  tasks:
    - name: Set MOTD file
      ansible.builtin.copy:
        dest: /etc/motd
        content: |
          Welcome to {{ inventory_hostname }}
          OS: {{ ansible_distribution }} {{ ansible_distribution_version }}
          Architecture: {{ ansible_architecture }}
        owner: root
        group: root
        mode: '0644'
```
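A quick read-back across all nodes, without having to ssh into each one:

```
# bash
ansible all -m command -a "cat /etc/motd"
```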