Administration

 

Virtual Machines

This chapter describes the steps required to install a Linux virtual machine:

  1. Create a blank virtual machine on which to install an operating system.
  2. Add a virtual disk for storage.
  3. Add a network interface to connect the virtual machine to the network.
  4. Install an operating system on the virtual machine. See your operating system's documentation for instructions.
    • Enterprise Linux 6
    • Enterprise Linux 7
    • CentOS Atomic Host 7
  5. Install guest agents and drivers for additional virtual machine functionality.

When all of these steps are complete, the new virtual machine is functional and ready to perform tasks.

 

Creating a Linux Virtual Machine

Create a new virtual machine and configure the required settings.

Creating Linux Virtual Machines

  1. Click Compute → Virtual Machines.
  2. Click New to open the New Virtual Machine window.
  3. Select an Operating System from the drop-down list.
  4. Enter a Name for the virtual machine.
  5. Add storage to the virtual machine. Attach or Create a virtual disk under Instance Images.
    • Click Attach and select an existing virtual disk.
    • Click Create and enter a Size(GB) and Alias for a new virtual disk. You can accept the default settings for all other fields, or change them if required. See Add Virtual Disk dialogue entries for more details on the fields for all disk types.
  6. Connect the virtual machine to the network. Add a network interface by selecting a vNIC profile from the nic1 drop-down list at the bottom of the General tab.
  7. Specify the virtual machine's Memory Size on the System tab.
  8. Choose the First Device that the virtual machine will boot from on the Boot Options tab.
  9. You can accept the default settings for all other fields, or change them if required. For more details on all fields in the New Virtual Machine window, see Explanation of Settings in the New Virtual Machine and Edit Virtual Machine Windows.
  10. Click OK.

The new virtual machine is created and displays in the list of virtual machines with a status of Down.
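If you prefer automation, an equivalent virtual machine can be created through the oVirt REST API. The following is a minimal sketch, not a definitive procedure; the Engine FQDN, credentials, CA file, VM name, and cluster name are all placeholders:

```shell
# Placeholders: engine.example.com, admin@internal/password, ca.pem, Default.
# Creates a blank VM named "linux-vm" (the API equivalent of clicking
# New in Compute > Virtual Machines and filling in the General tab).
curl --cacert ca.pem \
     --user admin@internal:password \
     --header "Content-Type: application/xml" \
     --request POST \
     --data '<vm>
               <name>linux-vm</name>
               <cluster><name>Default</name></cluster>
               <template><name>Blank</name></template>
             </vm>' \
     'https://engine.example.com/ovirt-engine/api/vms'
```

Disks and network interfaces can then be added with similar POST requests to the new virtual machine's diskattachments and nics sub-collections.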

 

Starting a Virtual Machine

 

Starting Virtual Machines

  1. Click Compute → Virtual Machines and select a virtual machine with a status of Down.
  2. Click Run.

The Status of the virtual machine changes to Up, and the operating system installation begins. Open a console to the virtual machine if one does not open automatically.

**Note:** A virtual machine does not start on a host whose CPU is overloaded. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies.

Opening a Console to a Virtual Machine

Use Remote Viewer to connect to a virtual machine.

Connecting to Virtual Machines

  1. Install Remote Viewer if it is not already installed. See Installing Console Components.
  2. Click Compute → Virtual Machines and select a virtual machine.
  3. Click Console. A console.vv file is downloaded.
  4. Click the file, and a console window automatically opens for the virtual machine.
    Note: You can configure the system to automatically connect to a virtual machine. See “Automatically Connecting to a Virtual Machine” below.

Opening a Serial Console to a Virtual Machine

You can access a virtual machine’s serial console from the command line instead of opening a console from the Administration Portal or the VM Portal. The serial console is emulated through VirtIO channels, using SSH and key pairs. The Engine acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. You can add public keys for each user from either the Administration Portal or the VM Portal. You can access serial consoles for only those virtual machines for which you have appropriate permissions.

**Important:** To access the serial console of a virtual machine, the user must have **UserVmManager**, **SuperUser**, or **UserInstanceManager** permission on that virtual machine. These permissions must be explicitly defined for each user. It is not enough to assign these permissions to **Everyone**.

The serial console is accessed through TCP port 2222 on the Engine. This port is opened during engine-setup on new installations. To change the port, see ovirt-vmconsole/README.

The serial console relies on the ovirt-vmconsole package and the ovirt-vmconsole-proxy on the Engine, and the ovirt-vmconsole package and the ovirt-vmconsole-host package on the virtualization hosts. These packages are installed by default on new installations. To install the packages on existing installations, reinstall the host.

Enabling a Virtual Machine’s Serial Console

  1. On the virtual machine whose serial console you are accessing, add the following lines to /etc/default/grub:
     GRUB_CMDLINE_LINUX_DEFAULT="console=tty0 console=ttyS0,115200n8"
     GRUB_TERMINAL="console serial"
     GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"

    Note: GRUB_CMDLINE_LINUX_DEFAULT applies this configuration only to the default menu entry. Use GRUB_CMDLINE_LINUX to apply the configuration to all menu entries.
    If these lines already exist in /etc/default/grub, update them. Do not duplicate them.
  2. Rebuild /boot/grub2/grub.cfg:
    • BIOS-based machines:
        # grub2-mkconfig -o /boot/grub2/grub.cfg
    • UEFI-based machines:
        # grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
  3. On the client machine from which you are accessing the virtual machine serial console, generate an SSH key pair. The Engine supports standard SSH key types, for example, an RSA key:
     # ssh-keygen -t rsa -b 2048 -C "user@domain" -f .ssh/serialconsolekey

    This command generates a public key and a private key.
  4. In the Administration Portal or the VM Portal, click the name of the signed-in user on the header bar and click Options to open the Edit Options window.
  5. In the User’s Public Key text field, paste the public key of the client machine that will be used to access the serial console.
  6. Click Compute → Virtual Machines and select a virtual machine.
  7. Click Edit.
  8. In the Console tab of the Edit Virtual Machine window, select the Enable VirtIO serial console check box.

Connecting to a Virtual Machine’s Serial Console

On the client machine, connect to the virtual machine’s serial console:

  • If a single virtual machine is available, this command connects the user to that virtual machine:
      # ssh -t -p 2222 -i .ssh/serialconsolekey ovirt-vmconsole@Manager_FQDN
      Red Hat Enterprise Linux Server release 6.7 (Santiago)
      Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64
      USER login:
  • If more than one virtual machine is available, this command lists the available virtual machines and their IDs:
      # ssh -t -p 2222 -i .ssh/serialconsolekey ovirt-vmconsole@Manager_FQDN list
      1. vm1 [vmid1]
      2. vm2 [vmid2]
      3. vm3 [vmid3]
      > 2
      Red Hat Enterprise Linux Server release 6.7 (Santiago)
      Kernel 2.6.32-573.3.1.el6.x86_64 on an x86_64
      USER login:


    Enter the number of the machine to which you want to connect, and press Enter.
  • Alternatively, connect directly to a virtual machine using its unique identifier or its name:
      # ssh -t -p 2222 ovirt-vmconsole@Manager_FQDN connect --vm-id vmid1
      # ssh -t -p 2222 ovirt-vmconsole@Manager_FQDN connect --vm-name vm1

Disconnecting from a Virtual Machine’s Serial Console

Press any key followed by ~ . to close a serial console session.

If the serial console session is disconnected abnormally, a TCP timeout occurs. You will be unable to reconnect to the virtual machine’s serial console until the timeout period expires.

Automatically Connecting to a Virtual Machine

Once you have logged in, you can automatically connect to a single running virtual machine. This can be configured in the VM Portal.

Automatically Connecting to a Virtual Machine

  1. In the Virtual Machines page, click the name of the virtual machine to go to the details view.
  1. Click the pencil icon beside Console and set Connect automatically to ON.

The next time you log into the VM Portal, if you have only one running virtual machine, you will automatically connect to that machine.

 

Installing oVirt Guest Agents and Drivers

 

The oVirt guest agents and drivers provide additional information and functionality for Enterprise Linux and Windows virtual machines. Key features include the ability to monitor resource usage and gracefully shut down or reboot virtual machines from the User Portal and Administration Portal. Install the oVirt guest agents and drivers on each virtual machine on which this functionality is to be available.

oVirt Guest Drivers

**virtio-net** (Server and Desktop): Paravirtualized network driver that provides enhanced performance over emulated devices like rtl.

**virtio-block** (Server and Desktop): Paravirtualized HDD driver that offers increased I/O performance over emulated devices like IDE by optimizing the coordination and communication between the guest and the hypervisor. The driver complements the software implementation of the virtio device used by the host to play the role of a hardware device.

**virtio-scsi** (Server and Desktop): Paravirtualized iSCSI HDD driver that offers similar functionality to the virtio-block device, with some additional enhancements. In particular, this driver supports adding hundreds of devices, and names devices using the standard SCSI device naming scheme.

**virtio-serial** (Server and Desktop): Provides support for multiple serial ports. The improved performance is used for fast communication between the guest and the host, avoiding network complications. This fast communication is required for the guest agents and for other features such as clipboard copy-and-paste between the guest and the host, and logging.

**virtio-balloon** (Server and Desktop): Controls the amount of memory a guest actually accesses. It offers improved memory over-commitment. The balloon drivers are installed for future compatibility but are not used by default in oVirt.

**qxl** (Server and Desktop): Paravirtualized display driver that reduces CPU usage on the host and provides better performance through reduced network bandwidth on most workloads.

oVirt Guest Agents and Tools

**ovirt-engine-guest-agent-common** (Server and Desktop): Allows the oVirt Engine to receive guest internal events and information such as the IP address and installed applications. Also allows the Engine to execute specific commands, such as shut down or reboot, on a guest. On Enterprise Linux 6 and higher guests, ovirt-engine-guest-agent-common installs tuned on your virtual machine and configures it to use an optimized, virtualized-guest profile.

**spice-agent** (Server and Desktop): The SPICE agent supports multiple monitors and is responsible for client-mouse-mode support, providing a better user experience and improved responsiveness than the QEMU emulation. Cursor capture is not needed in client-mouse-mode. The SPICE agent reduces bandwidth usage when used over a wide area network by reducing the display level: lowering the color depth, and disabling wallpaper, font smoothing, and animation. The SPICE agent also enables clipboard support, allowing cut-and-paste operations for both text and images between client and guest, and automatic guest display settings according to client-side settings. On Windows guests, the SPICE agent consists of vdservice and vdagent.

**ovirt-sso** (Desktop): An agent that enables users to automatically log in to their virtual machines based on the credentials used to access the oVirt Engine.

**ovirt-usb** (Desktop): A component that contains drivers and services for Legacy USB support (version 3.0 and earlier) on guests. It is needed for accessing a USB device that is plugged into the client machine. The ovirt-usb client component is needed on the client side.

Installing the Guest Agents and Drivers on Enterprise Linux

The oVirt guest agents and drivers are installed on Enterprise Linux virtual machines using the ovirt-engine-guest-agent-common package provided by the oVirt Agent repository.

Installing the Guest Agents and Drivers on Enterprise Linux

  1. Log in to the Enterprise Linux virtual machine.
  2. Enable the oVirt Agent repository.
  3. Install the ovirt-engine-guest-agent-common package and dependencies:
     # yum install ovirt-engine-guest-agent-common
  4. Start and enable the ovirt-guest-agent service:
    • For Enterprise Linux 6:
        # service ovirt-guest-agent start
        # chkconfig ovirt-guest-agent on
    • For Enterprise Linux 7:
        # systemctl start ovirt-guest-agent.service
        # systemctl enable ovirt-guest-agent.service
  5. Start and enable the qemu-ga service:
    • For Enterprise Linux 6:
        # service qemu-ga start
        # chkconfig qemu-ga on
    • For Enterprise Linux 7:
        # systemctl start qemu-guest-agent.service
        # systemctl enable qemu-guest-agent.service

The guest agent now passes usage information to the oVirt Engine. The agent runs as a service called ovirt-guest-agent, which you can configure via the ovirt-guest-agent.conf configuration file in the /etc/ directory.

Installing the Guest Agent on an Atomic Host

The oVirt guest agent should be installed as a system container on Atomic hosts.

Installing the Guest Agent on an Atomic Host

On a CentOS 7 Atomic host, the commands are essentially:

     # atomic pull --storage=ostree ovirtguestagent/centos7-atomic
     # atomic install --system --system-package=no --name=ovirt-guest-agent ovirtguestagent/centos7-atomic
     # systemctl status ovirt-guest-agent
     # systemctl start ovirt-guest-agent

Note: Official documentation and a separate image registry are available for those with a RHV subscription.

 

Virtual Machine Disks

 

Understanding Virtual Machine Storage

oVirt supports three storage types: NFS, iSCSI, and FCP.

In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify the storage domain metadata, and the pool's metadata. All other hosts can only access virtual machine hard disk image data.

By default, in an NFS, local, or POSIX-compliant data center, the SPM creates the virtual disk as a thin provisioned file in a file system.

In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual machine disks. Virtual machine disks on block-based storage are preallocated by default.

If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual disk can be mounted on a Red Hat Enterprise Linux server using kpartx, vgscan, vgchange, or mount to investigate the virtual machine's processes or problems.
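As a sketch of that inspection (all volume group, logical volume, and device names here are illustrative; run these only against a disk that is not in use by a running virtual machine):

```shell
# Illustrative names; find the real ones with lvs/vgscan.
vgscan                                      # rescan for volume groups on the storage
vgchange -ay storage_domain_vg              # activate the storage domain's volume group
kpartx -av /dev/storage_domain_vg/disk_lv   # map the partitions inside the disk's logical volume
mount -o ro /dev/mapper/disk_lv1 /mnt       # mount the first mapped partition read-only
```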

If the virtual disk is thinly provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as the usage nears a threshold the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.
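The watermark-and-extend cycle described above can be sketched as a small simulation. The threshold and numbers are illustrative only; oVirt's actual watermark values are internal:

```python
# Sketch of the thin-provisioning extension protocol: the host watches
# free space in the disk's logical volume and asks the SPM to extend it.
EXTEND_STEP_GB = 1.0       # the SPM grows the logical volume in 1 GB increments
HIGH_WATERMARK_GB = 0.5    # hypothetical free-space threshold

def needs_extension(allocated_gb, used_gb, virtual_gb):
    """Host-side check: request an extension when free space in the
    logical volume drops below the watermark."""
    free = allocated_gb - used_gb
    return free < HIGH_WATERMARK_GB and allocated_gb < virtual_gb

def extend(allocated_gb, virtual_gb):
    """SPM-side action: grow the logical volume by one step, capped at
    the disk's virtual size."""
    return min(allocated_gb + EXTEND_STEP_GB, virtual_gb)

allocated, used = 1.0, 0.8   # thin disks start as a 1 GB logical volume
if needs_extension(allocated, used, virtual_gb=20.0):
    allocated = extend(allocated, virtual_gb=20.0)
print(allocated)  # 2.0
```

If the guest writes faster than this loop can extend the volume, the virtual machine pauses, which matches the behavior described above.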

A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (QCOW2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provision format is suitable for non-I/O intensive virtual machines. The preallocated format is recommended for virtual machines with high I/O writes. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.
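The trade-off can be observed with qemu-img (file names are illustrative):

```shell
# A preallocated RAW image consumes its full 20 GB immediately:
qemu-img create -f raw -o preallocation=full preallocated.img 20G
# A sparse QCOW2 image starts near zero and grows as data is written:
qemu-img create -f qcow2 thin.qcow2 20G
qemu-img info thin.qcow2    # compare "virtual size" with "disk size"
```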

Understanding Virtual Disks

oVirt features Preallocated (thick provisioned) and Sparse (thin provisioned) storage options.

Preallocated

A preallocated virtual disk allocates all the storage required for a virtual machine up front. For example, a 20 GB preallocated logical volume created for the data partition of a virtual machine will take up 20 GB of storage space immediately upon creation.

Sparse

A sparse allocation allows an administrator to define the total storage to be assigned to the virtual machine, but the storage is only allocated when required.

For example, a 20 GB thin provisioned logical volume takes up 0 GB of storage space when first created. When the operating system is installed, it takes up the size of the installed files, and the disk continues to grow as data is added, up to the maximum size of 20 GB.

The size of a disk is listed in the Disks sub-tab for each virtual machine and template. The Virtual Size of a disk is the total amount of disk space that the virtual machine can use; it is the number that you enter in the Size(GB) field when a disk is created or edited. The Actual Size of a disk is the amount of disk space that has been allocated to the virtual machine so far. Preallocated disks show the same value for both fields. Sparse disks may show a different value in the Actual Size field from the value in the Virtual Size field, depending on how much of the disk space has been allocated.

Note: When creating a Cinder virtual disk, the format and type of the disk are handled internally by Cinder and are not managed by oVirt.

The possible combinations of storage types and formats are described in the following table.

Permitted Storage Combinations

| Storage | Format | Type | Note |
| --- | --- | --- | --- |
| NFS or iSCSI/FCP | RAW or QCOW2 | Sparse or Preallocated | |
| NFS | RAW | Preallocated | A file with an initial size that equals the amount of storage defined for the virtual disk, with no formatting. |
| NFS | RAW | Sparse | A file with an initial size that is close to zero, with no formatting. |
| NFS | QCOW2 | Sparse | A file with an initial size that is close to zero, with QCOW2 formatting. Subsequent layers will be QCOW2 formatted. |
| SAN | RAW | Preallocated | A block device with an initial size that equals the amount of storage defined for the virtual disk, with no formatting. |
| SAN | QCOW2 | Sparse | A block device with an initial size that is much smaller than the size defined for the virtual disk (currently 1 GB), with QCOW2 formatting for which space is allocated as needed (currently in 1 GB increments). |

Settings to Wipe Virtual Disks After Deletion

The wipe_after_delete flag, shown in the Administration Portal as the Wipe After Delete check box, replaces used data with zeros when a virtual disk is deleted. If it is set to false, which is the default, deleting the disk opens up those blocks for re-use but does not wipe the data. It is therefore possible for this data to be recovered, because the blocks have not been returned to zero.

The wipe_after_delete flag works only on block storage. On file storage, for example NFS, the option does nothing because the file system ensures that no data exists.

Enabling wipe_after_delete for virtual disks is more secure, and is recommended if the virtual disk has contained any sensitive data. It is also a more intensive operation, and users may experience degradation in performance and prolonged delete times.

Note: The wipe after delete functionality is not the same as secure delete, and cannot guarantee that the data is removed from the storage; it only ensures that new disks created on the same storage will not expose data from old disks.

The wipe_after_delete flag default can be changed to true during the setup process (see "Configuring the oVirt Engine" in the Installation Guide), or by using the engine configuration tool on the oVirt Engine. Restart the engine for the setting change to take effect.

Setting SANWipeAfterDelete to Default to True Using the Engine Configuration Tool

Run the engine configuration tool with the --set action:

 # engine-config --set SANWipeAfterDelete=true

Restart the engine for the change to take effect:

 # systemctl restart ovirt-engine.service

To confirm that a virtual disk was successfully wiped and deleted, check the /var/log/vdsm/vdsm.log file on the host.

For a successful wipe, the log file will contain the entry, storage_domain_id/volume_id was zeroed and will be deleted. For example:

a9cb0625-d5dc-49ab-8ad1-72722e82b0bf/a49351a7-15d8-4932-8d67-512a369f9d61 was zeroed and will be deleted

For a successful deletion, the log file will contain the entry, finished with VG:storage_domain_id LVs: list_of_volume_ids, img: image_id. For example:

finished with VG:a9cb0625-d5dc-49ab-8ad1-72722e82b0bf LVs: {'a49351a7-15d8-4932-8d67-512a369f9d61': ImgsPar(imgs=['11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d'], parent='00000000-0000-0000-0000-000000000000')}, img: 11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d

An unsuccessful wipe will display a log message zeroing storage_domain_id/volume_id failed. Zero and remove this volume manually, and an unsuccessful delete will display Remove failed for some of VG: storage_domain_id zeroed volumes: list_of_volume_ids.
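These entries can be found with a simple search of the log on the host (a sketch):

```shell
# Successful wipes:
grep 'was zeroed and will be deleted' /var/log/vdsm/vdsm.log
# Failed wipes or deletions:
grep -E 'zeroing .* failed|Remove failed' /var/log/vdsm/vdsm.log
```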

 

Shareable Disks in oVirt

Some applications require storage to be shared between servers. oVirt allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way a single virtual disk can be used by multiple cluster-aware guests.

Shared disks are not suitable in every situation. For applications like clustered database servers and other highly available services, shared disks are appropriate. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption, because their reads and writes to the disk are not coordinated.

You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable.

You can mark a disk shareable either when you create it, or by editing the disk later.

 

Read Only Disks in oVirt

Some applications require administrators to share data with read-only rights. You can do this when creating or editing a disk attached to a virtual machine: in the Disks tab of the virtual machine's details pane, select the Read Only check box. That way, a single disk can be read by multiple cluster-aware guests, while an administrator maintains write privileges.

You cannot change the read-only status of a disk while the virtual machine is running.

Important: Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual machine disks that contain such file systems (e.g. EXT3, EXT4, or XFS).

 

Virtual Disk Tasks

Creating Floating Virtual Disks

You can create a virtual disk that does not belong to any virtual machines. You can then attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable.

Image disk creation is managed entirely by the Engine. Direct LUN disks require externally prepared targets that already exist. Cinder disks require access to an instance of OpenStack Volume that has been added to the oVirt environment using the External Providers window; see Adding an OpenStack Volume Cinder Instance for Storage Management for more information.

Creating Floating Virtual Disks

  1. Select the Disks resource tab.
  2. Click New.
  3. Use the radio buttons to specify whether the virtual disk will be an Image, Direct LUN, or Cinder disk.
  4. Select the options required for your virtual disk. The options change based on the disk type selected. See Add Virtual Disk dialogue entries for more details on each option for each disk type.
  5. Click OK.

Explanation of Settings in the New Virtual Disk Window

New Virtual Disk Settings: Image

**Size(GB)**: The size of the new virtual disk in GB.

**Alias**: The name of the virtual disk, limited to 40 characters.

**Description**: A description of the virtual disk. This field is recommended but not mandatory.

**Interface**: The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.

**Data Center**: The data center in which the virtual disk will be available.

**Storage Domain**: The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain.

**Allocation Policy**: The provisioning policy for the new virtual disk.

Preallocated allocates the entire size of the disk on the storage domain at the time the virtual disk is created. The virtual size and the actual size of a preallocated disk are the same. Preallocated virtual disks take more time to create than thinly provisioned virtual disks, but have better read and write performance. Preallocated virtual disks are recommended for servers and other I/O intensive virtual machines. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.

Thin Provision allocates 1 GB at the time the virtual disk is created and sets a maximum limit on the size to which the disk can grow. The virtual size of the disk is the maximum limit; the actual size of the disk is the space that has been allocated so far. Thinly provisioned disks are faster to create than preallocated disks and allow for storage over-commitment. Thinly provisioned virtual disks are recommended for desktops.

**Disk Profile**: The disk profile assigned to the virtual disk. Disk profiles define the maximum amount of throughput and the maximum level of input and output operations for a virtual disk in a storage domain. Disk profiles are defined on the storage domain level based on storage quality of service entries created for data centers.

**Wipe After Delete**: Allows you to enable enhanced security for deletion of sensitive material when the virtual disk is deleted.

**Bootable**: Allows you to enable the bootable flag on the virtual disk.

**Shareable**: Allows you to attach the virtual disk to more than one virtual machine at a time.

The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets. Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs.

New Virtual Disk Settings: Direct LUN

**Alias**: The name of the virtual disk, limited to 40 characters.

**Description**: A description of the virtual disk. This field is recommended but not mandatory. By default the last 4 characters of the LUN ID are inserted into the field. The default behavior can be configured by setting the PopulateDirectLUNDiskDescriptionWithLUNId configuration key to the appropriate value using the engine-config command. Set the configuration key to -1 to use the full LUN ID, or to 0 to ignore this feature; a positive integer populates the description with the corresponding number of characters of the LUN ID. See Syntax for the engine-config Command for more information.

**Interface**: The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.

**Data Center**: The data center in which the virtual disk will be available.

**Use Host**: The host on which the LUN will be mounted. You can select any host in the data center.

**Storage Type**: The type of external LUN to add. You can select either iSCSI or Fibre Channel.

**Discover Targets**: This section can be expanded when you are using iSCSI external LUNs and Targets > LUNs is selected.
  • Address: The host name or IP address of the target server.
  • Port: The port by which to attempt a connection to the target server. The default port is 3260.
  • User Authentication: The iSCSI server requires User Authentication. The User Authentication field is visible when you are using iSCSI external LUNs.
  • CHAP user name: The user name of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.
  • CHAP password: The password of a user with permission to log in to LUNs. This field is accessible when the User Authentication check box is selected.

**Bootable**: Allows you to enable the bootable flag on the virtual disk.

**Shareable**: Allows you to attach the virtual disk to more than one virtual machine at a time.

**Enable SCSI Pass-Through**: Available when the Interface is set to VirtIO-SCSI. Selecting this check box enables passthrough of a physical SCSI device to the virtual disk. A VirtIO-SCSI interface with SCSI passthrough enabled automatically includes SCSI discard support. When this check box is not selected, the virtual disk uses an emulated SCSI device.

**Allow Privileged SCSI I/O**: Available when the Enable SCSI Pass-Through check box is selected. Selecting this check box enables unfiltered SCSI Generic I/O (SG_IO) access, allowing privileged SG_IO commands on the disk. This is required for persistent reservations.
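For example, to populate the Description field with the full LUN ID, set the PopulateDirectLUNDiskDescriptionWithLUNId key to -1 on the Engine machine (a sketch of the commands; restart the engine for the change to take effect):

```shell
# -1 uses the full LUN ID; 0 disables the feature; a positive integer
# uses that many characters of the LUN ID.
engine-config --set PopulateDirectLUNDiskDescriptionWithLUNId=-1
systemctl restart ovirt-engine.service
```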

Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add.

Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data.

The following considerations must be made when using a direct LUN as a virtual machine hard disk image:

  • Live storage migration of direct LUN hard disk images is not supported.
  • Direct LUN disks are not included in virtual machine exports.
  • Direct LUN disks are not included in virtual machine snapshots.

The Cinder settings form will be disabled if there are no available OpenStack Volume storage domains on which you have permissions to create a disk in the relevant Data Center. Cinder disks require access to an instance of OpenStack Volume that has been added to the oVirt environment using the External Providers window; see Adding an OpenStack Volume Cinder Instance for Storage Management for more information.

New Virtual Disk Settings: Cinder

**Size(GB)**: The size of the new virtual disk in GB.

**Alias**: The name of the virtual disk, limited to 40 characters.

**Description**: A description of the virtual disk. This field is recommended but not mandatory.

**Interface**: The virtual interface the disk presents to virtual machines. VirtIO is faster, but requires drivers. Red Hat Enterprise Linux 5 and higher include these drivers. Windows does not include these drivers, but they can be installed from the guest tools ISO or virtual floppy disk. IDE devices do not require special drivers.

**Data Center**: The data center in which the virtual disk will be available.

**Storage Domain**: The storage domain in which the virtual disk will be stored. The drop-down list shows all storage domains available in the given data center, and also shows the total space and currently available space in the storage domain.

**Volume Type**: The volume type of the virtual disk. The drop-down list shows all available volume types. The volume type is managed and configured on OpenStack Cinder.

**Bootable**: Allows you to enable the bootable flag on the virtual disk.

**Shareable**: Allows you to attach the virtual disk to more than one virtual machine at a time.

Overview of Live Storage Migration

Virtual machine disks can be migrated from one storage domain to another while the virtual machine to which they are attached is running. This is referred to as live storage migration. When a disk attached to a running virtual machine is migrated, a snapshot of that disk's image chain is created in the source storage domain, and the entire image chain is replicated in the destination storage domain. As such, ensure that you have sufficient storage space in both the source storage domain and the destination storage domain to host both the disk image chain and the snapshot. A new snapshot is created on each live storage migration attempt, even when the migration fails.
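As a rough illustration of that space requirement, the check can be sketched as follows (illustrative arithmetic only, with assumed sizes; not an oVirt API call):

```python
def live_migration_space_needed(chain_gb, snapshot_gb):
    """Illustrative model of the documented space requirement.

    During live storage migration the source domain must hold the
    existing image chain plus the newly created snapshot, and the
    destination domain receives a full replica of that chain
    including the snapshot.
    """
    source_needed = chain_gb + snapshot_gb
    destination_needed = chain_gb + snapshot_gb
    return source_needed, destination_needed

# For a 40 GB image chain and an assumed 5 GB snapshot, both domains
# need roughly 45 GB free before the migration is attempted.
print(live_migration_space_needed(40, 5))
```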

Consider the following when using live storage migration:

You can live migrate multiple disks at one time.

Multiple disks for the same virtual machine can reside across more than one storage domain, but the image chain for each disk must reside on a single storage domain.

You can live migrate disks between any two storage domains in the same data center.

You cannot live migrate direct LUN hard disk images or disks marked as shareable.
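The restrictions above can be modelled as a simple pre-flight check (a sketch of the documented rules, not the Engine's actual validation code; the `disk` dictionary and its keys are hypothetical):

```python
def can_live_migrate(disk, source_data_center, target_data_center):
    """Check the documented live storage migration restrictions.

    Illustrative model only: `disk` is a hypothetical dictionary
    with optional "direct_lun" and "shareable" flags.
    """
    if disk.get("direct_lun"):
        return False, "direct LUN disks cannot be live migrated"
    if disk.get("shareable"):
        return False, "shareable disks cannot be live migrated"
    if source_data_center != target_data_center:
        return False, "storage domains must be in the same data center"
    return True, "ok"
```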

Moving a Virtual Disk

Move a virtual disk that is attached to a virtual machine or acts as a floating virtual disk from one storage domain to another. You can move a virtual disk that is attached to a running virtual machine; this is referred to as live storage migration. Alternatively, shut down the virtual machine before continuing.

Consider the following when moving a disk:

You can move multiple disks at the same time.

You can move disks between any two storage domains in the same data center.

If the virtual disk is attached to a virtual machine that was created based on a template and used the thin provisioning storage allocation option, you must copy the disks for the template on which the virtual machine was based to the same storage domain as the virtual disk.

Moving a Virtual Disk

Select the Disks tab.

Select one or more virtual disks to move.

Click Move to open the Move Disk(s) window.

From the Target list, select the storage domain to which the virtual disk(s) will be moved.

From the Disk Profile list, select a profile for the disk(s), if applicable.

Click OK.

The virtual disks are moved to the target storage domain, and have a status of Locked while being moved.

Copying a Virtual Disk

Summary

You can copy a virtual disk from one storage domain to another. The copied disk can be attached to virtual machines.

Copying a Virtual Disk

Select the Disks tab.

Select the virtual disks to copy.

Click the Copy button to open the Copy Disk(s) window.

Optionally, enter an alias in the Alias text field.

Use the Target drop-down menus to select the storage domain to which the virtual disk will be copied.

Click OK.

Result

The virtual disks are copied to the target storage domain, and have a status of Locked while being copied.

Uploading a Disk Image to a Storage Domain

QEMU-compatible virtual machine disk images can be uploaded from your local machine to an oVirt storage domain and attached to virtual machines.

Virtual machine disk image types must be either QCOW2 or Raw. Disks created from a QCOW2 disk image cannot be shareable, and the QCOW2 disk image file must not have a backing file.
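These constraints can be checked locally before uploading. The following sketch inspects the QCOW2 header (the magic bytes, and the big-endian backing-file offset at byte 8) defined by the QCOW2 format; it is an illustration, not the Engine's own validation:

```python
import struct

QCOW2_MAGIC = b"QFI\xfb"  # first four bytes of every QCOW2 file

def check_upload_image(header: bytes):
    """Classify an image header as QCOW2 or raw, rejecting QCOW2
    images that have a backing file, per the upload restrictions.

    Illustrative sketch of the documented rules; the real
    validation is performed server-side.
    """
    if header[:4] == QCOW2_MAGIC:
        # backing_file_offset is a big-endian u64 at byte offset 8;
        # zero means no backing file.
        backing_offset, = struct.unpack(">Q", header[8:16])
        if backing_offset != 0:
            raise ValueError("QCOW2 image must not have a backing file")
        return "qcow2"
    return "raw"
```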

Prerequisites:

You must configure the Image I/O Proxy when running engine-setup. See "Configuring the oVirt Engine" in the Installation Guide for more information.

You must import the required certificate authority into the web browser used to access the Administration Portal.

Internet Explorer 10, Firefox 35, or Chrome 13 or greater is required to perform this upload procedure. Previous browser versions do not support the required HTML5 APIs.

Note: To import the certificate authority, browse to https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA and select all the trust settings.

Uploading a Disk Image to a Storage Domain

Open the Upload Image screen.

From the Disks tab, select Start from the Upload drop-down.

Alternatively, from the Storage tab select the storage domain, then select the Disks sub-tab, and select Start from the Upload drop-down.

In the Upload Image screen, click Browse and select the image on the local disk.

Set Image Type to QCOW2 or Raw.

Fill in the Disk Option fields. See Add Virtual Disk dialogue entries for a description of the relevant fields.

Click OK.

A progress bar will indicate the status of the upload. You can also pause, cancel, or resume uploads from the Upload drop-down.

Importing a Disk Image from an Imported Storage Domain

Import floating virtual disks from an imported storage domain using the Disk Import tab of the details pane.

Note: Only QEMU-compatible disks can be imported into the Engine.

Importing a Disk Image

Select a storage domain that has been imported into the data center.

In the details pane, click Disk Import.

Select one or more disk images and click Import to open the Import Disk(s) window.

Select the appropriate Disk Profile for each disk.

Click OK to import the selected disks.

Importing an Unregistered Disk Image from an Imported Storage Domain

Import floating virtual disks from a storage domain using the Disk Import tab of the details pane. Floating disks created outside of an oVirt environment are not registered with the Engine. Scan the storage domain to identify unregistered floating disks to be imported.

Note: Only QEMU-compatible disks can be imported into the Engine.

Importing a Disk Image

Select a storage domain that has been imported into the data center.

Right-click the storage domain and select Scan Disks so that the Engine can identify unregistered disks.

In the details pane, click Disk Import.

Select one or more disk images and click Import to open the Import Disk(s) window.

Select the appropriate Disk Profile for each disk.

Click OK to import the selected disks.

Importing a Virtual Disk Image from an OpenStack Image Service

Summary

Virtual disk images managed by an OpenStack Image Service can be imported into the oVirt Engine if that OpenStack Image Service has been added to the Engine as an external provider.

Click the Storage resource tab and select the OpenStack Image Service domain from the results list.

Select the image to import in the Images tab of the details pane.

Click Import to open the Import Image(s) window.

From the Data Center drop-down menu, select the data center into which the virtual disk image will be imported.

From the Domain Name drop-down menu, select the storage domain in which the virtual disk image will be stored.

Optionally, select a quota from the Quota drop-down menu to apply a quota to the virtual disk image.

Click OK to import the image.

Result

The image is imported as a floating disk and is displayed in the results list of the Disks resource tab. It can now be attached to a virtual machine.

Exporting a Virtual Machine Disk to an OpenStack Image Service

Summary

Virtual machine disks can be exported to an OpenStack Image Service that has been added to the Engine as an external provider.

Click the Disks resource tab.

Select the disks to export.

Click the Export button to open the Export Image(s) window.

From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported.

From the Quota drop-down list, select a quota for the disks if a quota is to be applied.

Click OK.

Result

The virtual machine disks are exported to the specified OpenStack Image Service where they are managed as virtual machine disk images.

Important: Virtual machine disks can only be exported if they do not have multiple volumes, are not thinly provisioned, and do not have any snapshots.
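A sketch of that eligibility rule (the `disk` dictionary and its keys are hypothetical; the real check is performed by the Engine):

```python
def can_export_to_glance(disk):
    """Model of the documented export restrictions: a disk qualifies
    only if it has a single volume, is not thinly provisioned, and
    has no snapshots. Illustrative only, not Engine code."""
    return (disk["volumes"] == 1
            and not disk["thin_provisioned"]
            and disk["snapshots"] == 0)
```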

 

Virtual Disks and Permissions

Managing System Permissions for a Virtual Disk

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.

oVirt Engine provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the User Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.

The virtual disk creator role permits the following actions:

Create, edit, and remove virtual disks associated with a virtual machine or other resources.

Edit user permissions for virtual disks.

Note: You can only assign roles and permissions to existing users.

Virtual Disk User Roles Explained

Virtual Disk User Permission Roles

The table below describes the user roles and privileges applicable to using and administrating virtual machine disks in the User Portal.

oVirt System Administrator Roles

Role

Privileges

Notes

DiskOperator

Virtual disk user.

Can use, view and edit virtual disks. Inherits permissions to use the virtual machine to which the virtual disk is attached.

DiskCreator

Can create, edit, manage and remove virtual machine disks within assigned clusters or data centers.

This role is not applied to a specific virtual disk; apply this role to a user for the whole environment with the Configure window. Alternatively apply this role for specific data centers, clusters, or storage domains.
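The two roles can be summarized as a small lookup table (hypothetical structure, for illustration only; role and action names follow the table above):

```python
# Illustrative mapping of the two default disk user roles to the
# actions described above; not an Engine data structure.
ROLE_ACTIONS = {
    "DiskOperator": {"use", "view", "edit"},
    "DiskCreator": {"create", "edit", "manage", "remove"},
}

def allowed(role, action):
    """Return True if the given role permits the given action."""
    return action in ROLE_ACTIONS.get(role, set())
```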

Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Assigning a Role to a Resource

Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.

Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.

Click Add.

Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.

Select a role from the Role to Assign: drop-down list.

Click OK.

You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Removing a Role from a Resource

Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.

Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.

Select the user to remove from the resource.

Click Remove. The Remove Permission window opens to confirm permissions removal.

Click OK.

You have removed the user's role, and the associated permissions, from the resource.


Virtual Machine Pools

Introduction to Virtual Machine Pools

A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.

Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool if any are available. That virtual machine will have the same operating system and configuration as that of the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool depending on the configuration of that pool.

Virtual machines in a virtual machine pool are stateless, meaning that data is not persistent across reboots. However, if a user configures console options for a virtual machine taken from a virtual machine pool, those options will be set as the default for that user for that virtual machine pool.

In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines will consume system resources even while not in use due to being idle.
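The hand-out behaviour described above can be sketched as follows (a toy model, not the Engine's scheduler; the `pool` list of dictionaries is hypothetical):

```python
def take_vm(pool):
    """Hand out a virtual machine from a pool.

    Toy model of the documented behaviour: prefer a pre-started
    (already up) machine so the user gets immediate access,
    otherwise power up an idle one. `pool` is a list of
    {"name", "state"} dictionaries.
    """
    # First, hand out a pre-started machine if one is free.
    for vm in pool:
        if vm["state"] == "up" and not vm.get("taken"):
            vm["taken"] = True
            return vm["name"]
    # Otherwise, power up a stopped machine.
    for vm in pool:
        if vm["state"] == "down" and not vm.get("taken"):
            vm["state"] = "up"
            vm["taken"] = True
            return vm["name"]
    return None  # pool exhausted
```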

Note: Virtual machines taken from a pool are not stateless when accessed from the Administration Portal. This is because administrators need to be able to write changes to the disk if necessary.

Virtual Machine Pool Tasks

Creating a Virtual Machine Pool

You can create a virtual machine pool that contains multiple virtual machines that have been created based on a common template.

Creating a Virtual Machine Pool

  1. Click the Pools tab.
  1. Click the New button to open the New Pool window.
  1. Use the drop-down list to select the Cluster or use the selected default.
  1. Use the Template drop-down menu to select the required template and version or use the selected default. A template provides standard settings for all the virtual machines in the pool.
  1. Use the Operating System drop-down list to select an Operating System or use the default provided by the template.
  1. Use the Optimized for drop-down list to optimize virtual machines for either Desktop use or Server use.
  1. Enter a Name and Description, any Comments, and the Number of VMs for the pool.
  1. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  1. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is one.
  2. Select the Delete Protection check box to enable delete protection.
  1. Optionally, click the Show Advanced Options button and perform the following steps:
    1. Click the Type tab and select a Pool Type:
      • Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. The virtual machine reverts to the original base image after the administrator returns it to the pool.
      • Automatic - When the virtual machine is shut down, it automatically reverts to its base image and is returned to the virtual machine pool.
    1. Select the Console tab. At the bottom of the tab window, select the Override SPICE Proxy check box to enable the Overridden SPICE proxy address text field. Specify the address of a SPICE proxy to override the global SPICE proxy.
  1. Click OK.

You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in the Virtual Machines resource tab, or in the details pane of the Pools resource tab; a virtual machine in a pool is distinguished from independent virtual machines by its icon.

Explanation of Settings and Controls in the New Pool and Edit Pool Windows

New Pool and Edit Pool General Settings Explained

The following table details the information required on the General tab of the New Pool and Edit Pool windows that are specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window.

General settings

Field Name

Description

Template

The template on which the virtual machine pool is based.

Description

A meaningful description of the virtual machine pool.

Comment

A field for adding plain text human-readable comments regarding the virtual machine pool.

Prestarted VMs

Allows you to specify the number of virtual machines in the virtual machine pool that will be started before they are taken and kept in that state to be taken by users. The value of this field must be between 0 and the total number of virtual machines in the virtual machine pool.

Number of VMs/Increase number of VMs in pool by

Allows you to specify the number of virtual machines to be created and made available in the virtual machine pool. In the edit window it allows you to increase the number of virtual machines in the virtual machine pool by the specified number. By default, the maximum number of virtual machines you can create in a pool is 1000. This value can be configured using the MaxVmsInPool key of the engine-config command.

Maximum number of VMs per user

Allows you to specify the maximum number of virtual machines a single user can take from the virtual machine pool at any one time. The value of this field must be between 1 and 32,767.

Delete Protection

Allows you to prevent the virtual machines in the pool from being deleted.
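The value ranges in this table can be expressed as a single validation sketch (illustrative only; `max_vms_in_pool` defaults to the documented MaxVmsInPool value of 1000):

```python
def validate_pool_settings(num_vms, prestarted, max_per_user,
                           max_vms_in_pool=1000):
    """Check the documented value ranges for the New Pool window.

    Illustrative model; the real checks are performed by the Engine.
    """
    # Pool size must be positive and within the MaxVmsInPool limit.
    if not 0 < num_vms <= max_vms_in_pool:
        return False
    # Prestarted VMs must be between 0 and the pool size.
    if not 0 <= prestarted <= num_vms:
        return False
    # Maximum VMs per user must be between 1 and 32,767.
    if not 1 <= max_per_user <= 32767:
        return False
    return True
```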

New and Edit Pool Type Settings Explained

The following table details the information required on the Type tab of the New Pool and Edit Pool windows.

Type settings

Field Name

Description

Pool Type

This drop-down menu allows you to specify the type of the virtual machine pool. The following options are available:

  • Automatic: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is automatically returned to the virtual machine pool.
  • Manual: After a user finishes using a virtual machine taken from a virtual machine pool, that virtual machine is only returned to the virtual machine pool when an administrator manually returns the virtual machine.

New Pool and Edit Pool Console Settings Explained

The following table details the information required on the Console tab of the New Pool or Edit Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine and Edit Virtual Machine windows.

Console settings

Field Name

Description

Override SPICE proxy

Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the User Portal) is outside of the network where the hosts reside.

Overridden SPICE proxy address

The proxy by which the SPICE client will connect to virtual machines. This proxy overrides both the global SPICE proxy defined for the oVirt environment and the SPICE proxy defined for the cluster to which the virtual machine pool belongs, if any. The address must be in the following format:

protocol://[host]:[port]
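A loose check of that address format, assuming standard URL syntax (a sketch, not the exact validation the Engine performs):

```python
from urllib.parse import urlparse

def valid_spice_proxy(address):
    """Loosely verify that an override address matches the
    protocol://[host]:[port] format. Illustrative only."""
    parsed = urlparse(address)
    # All three components must be present.
    return bool(parsed.scheme and parsed.hostname and parsed.port)
```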

Virtual Machine Pool Host Settings Explained

The following table details the options available on the Host tab of the New Pool and Edit Pool windows.

Virtual Machine Pool: Host Settings

Field Name

Sub-element

Description

Start Running On

 

Defines the preferred host on which the virtual machine is to run. Select either:

  • Any Host in Cluster - The virtual machine can start and run on any available host in the cluster.
  • Specific - The virtual machine will start running on a particular host in the cluster. However, the Engine or an administrator can migrate the virtual machine to a different host in the cluster depending on the migration and high-availability settings of the virtual machine. Select the specific host or group of hosts from the list of available hosts.

Migration Options

Migration mode

Defines options to run and migrate the virtual machine. If the options here are not used, the virtual machine will run or migrate according to its cluster's policy.

  • Allow manual and automatic migration - The virtual machine can be automatically migrated from one host to another in accordance with the status of the environment, or manually by an administrator.
  • Allow manual migration only - The virtual machine can only be migrated from one host to another manually by an administrator.
  • Do not allow migration - The virtual machine cannot be migrated, either automatically or manually.

 

Use custom migration policy

Defines the migration convergence policy. If the check box is left unselected, the host determines the policy.

  • Legacy - Legacy behavior of version 3.6. Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled.
  • Minimal downtime - Allows the virtual machine to migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled.
  • Suspend workload if needed - Allows the virtual machine to migrate in most situations, including when the virtual machine is running a heavy workload. Virtual machines may experience a more significant downtime. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled.

 

Use custom migration downtime

This check box allows you to specify the maximum number of milliseconds the virtual machine can be down during live migration. Configure different maximum downtimes for each virtual machine according to its workload and SLA requirements. Enter 0 to use the VDSM default value.

 

Auto Converge migrations

Only activated with Legacy migration policy. Allows you to set whether auto-convergence is used during live migration of the virtual machine. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. Auto-convergence is disabled globally by default.

  • Select Inherit from cluster setting to use the auto-convergence setting that is set at the cluster level. This option is selected by default.
  • Select Auto Converge to override the cluster setting or global setting and allow auto-convergence for the virtual machine.
  • Select Don't Auto Converge to override the cluster setting or global setting and prevent auto-convergence for the virtual machine.

 

Enable migration compression

Only activated with Legacy migration policy. The option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default.

  • Select Inherit from cluster setting to use the compression setting that is set at the cluster level. This option is selected by default.
  • Select Compress to override the cluster setting or global setting and allow compression for the virtual machine.
  • Select Don't compress to override the cluster setting or global setting and prevent compression for the virtual machine.

 

Pass-Through Host CPU

This check box allows virtual machines to take advantage of the features of the physical CPU of the host on which they are situated. This option can only be enabled when Do not allow migration is selected.

Configure NUMA

NUMA Node Count

The number of virtual NUMA nodes to assign to the virtual machine. If the Tune Mode is Preferred, this value must be set to 1.

 

Tune Mode

The method used to allocate memory.

  • Strict: Memory allocation will fail if the memory cannot be allocated on the target node.
  • Preferred: Memory is allocated from a single preferred node. If sufficient memory is not available, memory can be allocated from other nodes.
  • Interleave: Memory is allocated across nodes in a round-robin algorithm.

 

NUMA Pinning

Opens the NUMA Topology window. This window shows the host's total CPUs, memory, and NUMA nodes, and the virtual machine's virtual NUMA nodes. Pin virtual NUMA nodes to host NUMA nodes by clicking and dragging each vNUMA from the box on the right to a NUMA node on the left.
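The NUMA constraints described in this table can be sketched as follows (illustrative only, not Engine code):

```python
def validate_numa(node_count, tune_mode):
    """Check the documented NUMA settings: the tune mode must be
    strict, preferred, or interleave, and Preferred requires
    exactly one virtual NUMA node. Illustrative model only."""
    if tune_mode not in ("strict", "preferred", "interleave"):
        return False
    if tune_mode == "preferred" and node_count != 1:
        return False
    return node_count >= 1
```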

Editing a Virtual Machine Pool

Editing a Virtual Machine Pool

After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool except that the Number of VMs property is replaced by Increase number of VMs in pool by.

Note: When editing a virtual machine pool, the changes introduced affect only new virtual machines. Virtual machines that existed already at the time of the introduced changes remain unaffected.

Editing a Virtual Machine Pool

  1. Click the Pools resource tab, and select a virtual machine pool from the results list.
  1. Click Edit to open the Edit Pool window.
  1. Edit the properties of the virtual machine pool.
  1. Click OK.

Prestarting Virtual Machines in a Pool

The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.

Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines which are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.

Prestarting Virtual Machines in a Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  1. Click Edit to open the Edit Pool window.
  1. Enter the number of virtual machines to be prestarted in the Prestarted VMs field.
  1. Select the Type tab. Ensure Pool Type is set to Automatic.
  1. Click OK.

You have set a number of prestarted virtual machines in a pool. The prestarted machines are running and available for use.

Adding Virtual Machines to a Virtual Machine Pool

If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.

Adding Virtual Machines to a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  1. Click Edit to open the Edit Pool window.
  1. Enter the number of additional virtual machines to add in the Increase number of VMs in pool by field.
  1. Click OK.

You have added more virtual machines to the virtual machine pool.

Detaching Virtual Machines from a Virtual Machine Pool

You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool, and it becomes an independent virtual machine.

Detaching Virtual Machines from a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  1. Click the Virtual Machines tab in the details pane to list the virtual machines in the pool.
  1. Ensure the virtual machines have a status of Down; you cannot detach a running virtual machine.
  1. Select one or more virtual machines and click Detach to open the Detach Virtual Machine(s) confirmation window.
  1. Click OK to detach the virtual machine from the pool.

Note: The virtual machine still exists in the environment and can be viewed and accessed from the Virtual Machines resource tab. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine.

You have detached a virtual machine from the virtual machine pool.

Removing a Virtual Machine Pool

You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines.

Removing a Virtual Machine Pool

  1. Use the Pools resource tab, tree mode, or the search function to find and select the virtual machine pool in the results list.
  1. Click Remove to open the Remove Pool(s) confirmation window.
  1. Click OK to remove the pool.

You have removed the pool from the data center.

Pools and Permissions

Managing System Permissions for a Virtual Machine Pool

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.

A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources.

The virtual machine pool administrator role permits the following actions:

  • Create, edit, and remove pools.
  • Add and detach virtual machines from the pool.

Note: You can only assign roles and permissions to existing users.

Virtual Machine Pool Administrator Roles Explained

Pool Permission Roles

The table below describes the administrator roles and privileges applicable to pool administration.

oVirt System Administrator Roles

Role

Privileges

Notes

VmPoolAdmin

System Administrator role of a virtual pool.

Can create, delete, and configure a virtual pool, assign and remove virtual pool users, and perform basic operations on a virtual machine.

ClusterAdmin

Cluster Administrator

Can use, create, delete, and manage all virtual machine pools in a specific cluster.

Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  1. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  1. Click Add.
  1. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  1. Select a role from the Role to Assign: drop-down list.
  1. Click OK.

You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.

You have removed the user's role, and the associated permissions, from the resource.

Trusted Compute Pools

Trusted compute pools are secure clusters based on Intel Trusted Execution Technology (Intel TXT). Trusted clusters only allow hosts that are verified by Intel's OpenAttestation, which measures the integrity of the host's hardware and software against a White List database. Trusted hosts and the virtual machines running on them can be assigned tasks that require higher security. For more information on Intel TXT, trusted systems, and attestation, see https://software.intel.com/en-us/articles/intel-trusted-execution-technology-intel-txt-enabling-guide.

Creating a trusted compute pool involves the following steps:

  • Configuring the Engine to communicate with an OpenAttestation server.
  • Creating a trusted cluster that can only run trusted hosts.
  • Adding trusted hosts to the trusted cluster. Hosts must be running the OpenAttestation agent to be verified as trusted by the OpenAttestation server.

For information on installing an OpenAttestation server, installing the OpenAttestation agent on hosts, and creating a White List database, see https://github.com/OpenAttestation/OpenAttestation/wiki.

Connecting an OpenAttestation Server to the Engine

Before you can create a trusted cluster, the oVirt Engine must be configured to recognize the OpenAttestation server. Use engine-config to add the OpenAttestation server's FQDN or IP address:

# engine-config -s AttestationServer=attestationserver.example.com

The following settings can also be changed if required:

OpenAttestation Settings for engine-config

Option

Default Value

Description

AttestationServer

oat-server

The FQDN or IP address of the OpenAttestation server. This must be set for the Engine to communicate with the OpenAttestation server.

AttestationPort

8443

The port used by the OpenAttestation server to communicate with the Engine.

AttestationTruststore

TrustStore.jks

The trust store used for securing communication with the OpenAttestation server.

AttestationTruststorePass

password

The password used to access the trust store.

AttestationFirstStageSize

10

Used for quick initialization. Changing this value without good reason is not recommended.

SecureConnectionWithOATServers

true

Enables or disables secure communication with OpenAttestation servers.

PollUri

AttestationService/resources/PollHosts

The URI used for accessing the OpenAttestation service.
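As an example, the settings above can be applied in one pass with engine-config and verified with its -g option; the server name and trust store password below are placeholders, and the engine must be restarted for changes to take effect:

```shell
# Example values only; substitute your own server, port, and trust store password.
engine-config -s AttestationServer=attestationserver.example.com
engine-config -s AttestationPort=8443
engine-config -s AttestationTruststorePass=password
engine-config -s SecureConnectionWithOATServers=true

# Verify a stored value, then restart the engine to apply the changes.
engine-config -g AttestationServer
systemctl restart ovirt-engine
```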

Creating a Trusted Cluster

Trusted clusters communicate with an OpenAttestation server to assess the security of hosts. When a host is added to a trusted cluster, the OpenAttestation server measures the host's hardware and software against a White List database. Virtual machines can be migrated between trusted hosts in the trusted cluster, allowing for high availability in a secure environment.

Creating a Trusted Cluster

  1. Select the Clusters tab.
  2. Click New.
  3. Enter a Name for the cluster.
  4. Select the Enable Virt Service radio button.
  5. In the Scheduling Policy tab, select the Enable Trusted Service check box.
  6. Click OK.

Adding a Trusted Host

Enterprise Linux hosts can be added to trusted clusters and measured against a White List database by the OpenAttestation server. Hosts must meet the following requirements to be trusted by the OpenAttestation server:

  • Intel TXT is enabled in the BIOS.
  • The OpenAttestation agent is installed and running.
  • Software running on the host matches the OpenAttestation server's White List database.

Adding a Trusted Host

  1. Select the Hosts tab.
  2. Click New.
  3. Select a trusted cluster from the Host Cluster drop-down list.
  4. Enter a Name for the host.
  5. Enter the Address of the host.
  6. Enter the host's root Password.
  7. Click OK.

After the host is added to the trusted cluster, it is assessed by the OpenAttestation server. If a host is not trusted by the OpenAttestation server, it will move to a Non Operational state and should be removed from the trusted cluster.
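Hosts can also be added to a cluster through the REST API rather than the portal. This sketch assumes a trusted cluster named trusted-cluster already exists; the engine URL, host name, address, and credentials are all placeholder values:

```shell
# Hypothetical example: add a host to an existing cluster via the REST API.
ENGINE="https://engine.example.com/ovirt-engine/api"

curl -s -k -u "admin@internal:password" \
  -H "Content-Type: application/xml" \
  -X POST "${ENGINE}/hosts" \
  -d "<host>
        <name>trusted-host-01</name>
        <address>host01.example.com</address>
        <root_password>password</root_password>
        <cluster><name>trusted-cluster</name></cluster>
      </host>"
```

After the request is accepted, the host is installed and then assessed by the OpenAttestation server exactly as if it had been added through the portal.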

 

Snapshots

Will be added soon ...

 

Templates

A template is a pre-installed, pre-configured virtual machine. Templates are beneficial when we need to deploy a large number of similar virtual machines: they reduce both the time needed to deploy each virtual machine and the amount of disk space required. A template does not need to be cloned for every instance; instead, a small overlay can be placed on top of the base image to store just the changes for one particular instance.

 

To convert a virtual machine into a template, we must first generalize the virtual machine, a process also known as sealing it.

 

We will use this virtual machine and convert it into a template. Refer to the following steps:

 

Step 1: Log in to the virtual machine console

SSH into the virtual machine as the root user.

 

Step 2: Remove the SSH host keys using the rm command

[root@linuxtechi ~]# rm -f /etc/ssh/ssh_host_*

 

 

Step 3: Remove the hostname and set it to localhost

[root@linuxtechi ~]# hostnamectl set-hostname 'localhost'

 

Step 4: Remove host-specific information

Remove the following:

  • udev rules
  • MAC Address & UUID

[root@linuxtechi ~]# rm -f /etc/udev/rules.d/*-persistent-*.rules
[root@linuxtechi ~]# sed -i '/^HWADDR=/d' /etc/sysconfig/network-scripts/ifcfg-*
[root@linuxtechi ~]# sed -i '/^UUID=/d' /etc/sysconfig/network-scripts/ifcfg-*
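To see what the sed expressions do without touching a live interface file, you can run them against a scratch copy first. This is an illustration only; the file contents below are made up:

```shell
# Create a scratch ifcfg file containing the host-specific lines
# that the sealing step removes.
cat > /tmp/ifcfg-demo <<'EOF'
DEVICE=eth0
BOOTPROTO=dhcp
HWADDR=52:54:00:AA:BB:CC
UUID=5f4e0e6b-0000-1111-2222-333344445555
ONBOOT=yes
EOF

# Same expressions as the sealing step, pointed at the scratch file.
sed -i '/^HWADDR=/d' /tmp/ifcfg-demo
sed -i '/^UUID=/d' /tmp/ifcfg-demo

# Only the generic settings remain.
cat /tmp/ifcfg-demo
```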

 

Step 5: Remove the RHN systemid associated with the virtual machine

[root@linuxtechi ~]# rm -f /etc/sysconfig/rhn/systemid

 

Step 6: Run the sys-unconfig command

Run sys-unconfig to complete the process; it also shuts down the virtual machine.

[root@linuxtechi ~]# sys-unconfig

 

Our virtual machine is now ready to be turned into a template.

Right-click the virtual machine and select the “Make Template” option.

Specify the Name and Description of the template and click OK.

It will take a couple of minutes to create the template from the virtual machine. Once done, go to the Templates tab and verify that the newly created template is listed.

 

Now deploy a virtual machine from the template.

Go to the Virtual Machines tab, click “New VM”, and select the template created in the steps above. Specify the VM Name and Description.

When we click OK, the virtual machine is created from the template.

After a couple of minutes, the virtual machine “test_server1” is successfully launched from the template.
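The same template workflow can be driven from the REST API. The sketch below first creates a template from the sealed virtual machine and then deploys a new virtual machine from that template; the engine URL, credentials, VM ID, and the names rhel7-base, Default, and test_server1 are placeholder values:

```shell
# Hypothetical example: create a template, then a VM from it, via the REST API.
ENGINE="https://engine.example.com/ovirt-engine/api"
AUTH="admin@internal:password"

# Create a template from the sealed virtual machine (referenced by VM ID).
curl -s -k -u "$AUTH" -H "Content-Type: application/xml" \
  -X POST "${ENGINE}/templates" \
  -d "<template>
        <name>rhel7-base</name>
        <vm id=\"11111111-1111-1111-1111-111111111111\"/>
      </template>"

# Deploy a new virtual machine from that template into a cluster.
curl -s -k -u "$AUTH" -H "Content-Type: application/xml" \
  -X POST "${ENGINE}/vms" \
  -d "<vm>
        <name>test_server1</name>
        <cluster><name>Default</name></cluster>
        <template><name>rhel7-base</name></template>
      </vm>"
```

Both calls return immediately while the engine performs the copy in the background, so poll the template or VM status before using it.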

 

Storage

  1. Adding FCP Storage
    1. Click Storage → Domains to list all storage domains.
    2. Click New Domain.
    3. Enter the Name of the storage domain.
    4. Use the Data Center drop-down menu to select an FCP data center.
If you do not yet have an appropriate FCP data center, select (none).
    5. Select the Domain Function and the Storage Type from the drop-down menus. The storage domain types that are not compatible with the chosen data center are not available.
    6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
Important
All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
    7. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
    8. Optionally, configure the advanced parameters.
      1. Click Advanced Parameters.
      2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
      3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
      4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
      5. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available for block storage domains.
    9. Click OK to create the storage domain and close the window.
The new FCP data domain is displayed in Storage Domains. It remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
 
  1. Local ISO Domain
    1. Changing Local ISO Domain Permissions
      1. Log in to the Manager machine.
      2. Edit the /etc/exports file, and add the hosts, or the subnets to which they belong, to the access control list:
/var/lib/exports/iso 10.1.2.0/255.255.255.0(rw) host01.example.com(rw) host02.example.com(rw)
The example above allows read and write access to a single /24 network and two specific hosts. /var/lib/exports/iso is the default file path for the ISO domain. See the exports(5) man page for further formatting options.
      3. Apply the changes and verify that the directory is exported:
exportfs -ra
showmount -e
Note that if you manually edit the /etc/exports file after running engine-setup, running engine-cleanup later will not undo the changes.
    2. Attaching the Local ISO Domain
      1. In the Administration Portal, click Compute → Data Centers and select the appropriate data center.
      2. Click the data center’s name to go to the details view.
      3. Click the Storage tab to list the storage domains already attached to the data center.
      4. Click Attach ISO to open the Attach ISO Library window.
      5. Click the radio button for the local ISO domain.
      6. Click OK.
The ISO domain is now attached to the data center and is automatically activated.
 
  1. Enabling Gluster on Gluster Storage
    1. Click Compute → Clusters.
    2. Click New.
    3. Click the General tab and select the Enable Gluster Service check box. Enter the address, SSH fingerprint, and password as necessary. The address and password fields can be filled in only when the Import existing Gluster configuration check box is selected.
    4. Click OK.
 
  1. Preparing and Installing a Remote PostgreSQL DB
    1. Preparing the PostgreSQL Database
      1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
subscription-manager register
      2. Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and note down the pool IDs:
subscription-manager list --available
      3. Use the pool IDs to attach the subscriptions to the system:
subscription-manager attach --pool=poolid
Note
To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist
      4. Disable all existing repositories:
subscription-manager repos --disable=*
      5. Enable the Red Hat Enterprise Linux and Red Hat Virtualization Manager repositories:
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms
    2. Initializing the PostgreSQL Database
      1. Install the PostgreSQL server packages:
yum install rh-postgresql95 rh-postgresql95-postgresql-contrib
      2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql
      3. Connect to the psql command line interface as the postgres user:
su - postgres -c 'scl enable rh-postgresql95 -- psql'
      4. Create a default user. The Manager’s default user is engine and the Data Warehouse’s default user is ovirt_engine_history:
postgres=# create role user_name with login encrypted password 'password';
      5. Create a database. The Manager’s default database name is engine and the Data Warehouse’s default database name is ovirt_engine_history:
postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
      6. Connect to the new database:
postgres=# \c database_name
      7. Add the uuid-ossp extension:
database_name=# CREATE EXTENSION "uuid-ossp";
      8. Add the plpgsql language if it does not exist:
database_name=# CREATE LANGUAGE plpgsql;
      9. Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf file, and add the following line immediately underneath the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager or the Data Warehouse machine:
host    database_name    user_name    X.X.X.X/32    md5
      10. Allow TCP/IP connections to the database. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf file and add the following line:
listen_addresses='*'
      11. Update the PostgreSQL server’s configuration. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf file and add the following lines:
autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'
      12. Open the default port used for PostgreSQL database connections, and save the updated firewall rules:
firewall-cmd --zone=public --add-service=postgresql
firewall-cmd --permanent --zone=public --add-service=postgresql
      13. Restart the postgresql service:
systemctl restart rh-postgresql95-postgresql
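Once the service is restarted, you can confirm from the Manager machine that the remote database accepts md5-authenticated TCP connections before continuing with engine-setup. The host, user, database, and password below are placeholders; substitute the values you created:

```shell
# Placeholder values; substitute the database host, user, database, and password.
PGPASSWORD='password' psql -h postgres.example.com -p 5432 -U engine -d engine -c 'SELECT 1;'
```

If the connection is refused, re-check pg_hba.conf, listen_addresses, and the firewall rules from the steps above.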
 
  1. Preparing and Installing a Local PostgreSQL DB
    1. Preparing the PostgreSQL Database
      1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:
subscription-manager register
      2. Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and note down the pool IDs:
subscription-manager list --available
      3. Use the pool IDs to attach the subscriptions to the system:
subscription-manager attach --pool=poolid
Note
To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist
      4. Disable all existing repositories:
subscription-manager repos --disable=*
      5. Enable the Red Hat Enterprise Linux and Red Hat Virtualization Manager repositories:
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms
    2. Initializing the PostgreSQL Database
      1. Install the PostgreSQL server packages:
yum install rh-postgresql95 rh-postgresql95-postgresql-contrib
      2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:
scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql
      3. Connect to the psql command line interface as the postgres user:
su - postgres -c 'scl enable rh-postgresql95 -- psql'
      4. Create a user for the Manager to use when it writes to and reads from the database. The default user name on the Manager is engine:
postgres=# create role user_name with login encrypted password 'password';
      5. Create a database in which to store data about the Red Hat Virtualization environment. The default database name on the Manager is engine:
postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
      6. Connect to the new database:
postgres=# \c database_name
      7. Add the uuid-ossp extension:
database_name=# CREATE EXTENSION "uuid-ossp";
      8. Add the plpgsql language if it does not exist:
database_name=# CREATE LANGUAGE plpgsql;
      9. Enable md5 client authentication. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf file, and add the following lines immediately underneath the line starting with local at the bottom of the file, replacing database_name and user_name with the names you created. Restrict the address ranges to the Manager's IP address where possible:
host    database_name    user_name    0.0.0.0/0    md5
host    database_name    user_name    ::/0         md5
      10. Update the PostgreSQL server’s configuration. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf file and add the following lines:
autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'
      11. Restart the postgresql service:
systemctl restart rh-postgresql95-postgresql