Administration

 

Logical Networks

Logical Network Tasks

Performing Networking Tasks

Network → Networks provides a central location for users to perform logical network-related operations and search for logical networks based on each network’s property or association with other resources. The New, Edit, and Remove buttons allow you to create, change the properties of, and delete logical networks within data centers.

Click on each network name and use the tabs in the details view to perform functions including:

  • Attaching or detaching the networks to clusters and hosts
  • Removing network interfaces from virtual machines and templates
  • Adding and removing permissions for users to access and manage networks

These functions are also accessible through each individual resource tab.

Warning: Do not change networking in a data center or a cluster if any hosts are running as this risks making the host unreachable.

Important: If you plan to use oVirt nodes to provide any services, remember that the services will stop if the oVirt environment stops operating.

This applies to all services, but you should be especially aware of the hazards of running the following on oVirt:

  • Directory Services
  • DNS
  • Storage

Creating a New Logical Network in a Data Center or Cluster

Create a logical network and define its use in a data center, or in clusters in a data center.

Creating a New Logical Network in a Data Center or Cluster

  1. Click Compute → Data Centers or Compute → Clusters.
  2. Click the data center or cluster name to open the details view.
  3. Click the Logical Networks tab.
  4. Open the New Logical Network window:
    • From the Data Centers details pane, click New.
    • From the Clusters details pane, click Add Network.
  5. Enter a Name, Description, and Comment for the logical network.
  6. Optionally select the Enable VLAN tagging check box.
  7. Optionally clear the VM Network check box.
  8. Optionally select the Create on external provider check box. This disables the Network Label, VM Network, and MTU options.
  9. Select the External Provider. The External Provider list does not include external providers that are in read-only mode.

You can create an internal, isolated network by selecting ovirt-provider-ovn from the External Provider list and leaving Connect to physical network unselected.

  10. Enter a new label or select an existing label for the logical network in the Network Label text field.
  11. Set the MTU value to Default (1500) or Custom.
  12. From the Cluster tab, select the clusters to which the network will be assigned. You can also specify whether the logical network will be a required network.
  13. If Create on external provider is selected, the Subnet tab will be visible. From the Subnet tab, select Create subnet and enter a Name, CIDR, and Gateway address, and select an IP Version for the subnet that the logical network will provide. You can also add DNS servers as required.
  14. From the vNIC Profiles tab, add vNIC profiles to the logical network as required.
  15. Click OK.

If you entered a label for the logical network, it is automatically added to all host network interfaces with that label.

Note: When creating a new logical network or making changes to an existing logical network that is used as a display network, any running virtual machines that use that network must be rebooted before the network becomes available or the changes are applied.
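
For scripted setups, a logical network can also be created through the REST API. The following is a minimal sketch, assuming placeholder values for MANAGER_FQDN and PASSWORD and the Default data center; adapt it to your environment:

    # curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
        -X POST https://MANAGER_FQDN/ovirt-engine/api/networks \
        -d '<network><name>mynetwork</name><data_center><name>Default</name></data_center></network>'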

Editing a Logical Network

Edit the settings of a logical network.

Important: A logical network cannot be edited or moved to another interface if it is not synchronized with the network configuration on the host. See Editing host network interfaces on how to synchronize your networks.

Editing a Logical Network

  1. Click Compute → Data Centers.
  2. Click the data center’s name to open the details view.
  3. Click the Logical Networks tab and select a logical network.
  4. Click Edit.
  5. Edit the necessary settings.
    Note: You can edit the name of a new or existing network, with the exception of the default network, without having to stop the virtual machines.
  6. Click OK.
    Note: Multi-host network configuration automatically applies updated network settings to all of the hosts within the data center to which the network is assigned. Changes can only be applied when virtual machines using the network are down. You cannot rename a logical network that is already configured on a host. You cannot disable the VM Network option while virtual machines or templates using that network are running.

Removing a Logical Network

You can remove a logical network from Network → Networks or Compute → Data Centers. The following procedure shows you how to remove logical networks associated to a data center. For a working oVirt environment, you must have at least one logical network used as the ovirtmgmt management network.

Removing Logical Networks

  1. Click Compute → Data Centers.
  2. Click the data center’s name to open the details view.
  3. Click the Logical Networks tab to list the logical networks in the data center.
  4. Select a logical network and click Remove.
  5. Optionally, select the Remove external network(s) from the provider(s) as well check box to remove the logical network both from the Manager and from the external provider if the network is provided by an external provider. The check box is grayed out if the external provider is in read-only mode.
  6. Click OK.

The logical network is removed from the Engine and is no longer available.
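
The same operation can be scripted through the REST API. A minimal sketch, assuming placeholder values for MANAGER_FQDN, PASSWORD, and NETWORK_ID (which you can look up with a GET request on /ovirt-engine/api/networks):

    # curl -k -u admin@internal:PASSWORD -X DELETE \
        https://MANAGER_FQDN/ovirt-engine/api/networks/NETWORK_ID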

Configuring a Non-Management Logical Network as the Default Route

The default route used by hosts in a cluster is through the management network (ovirtmgmt). The following procedure provides instructions to configure a non-management logical network as the default route.

Prerequisite:

  • If you are using the default_route custom property, you need to clear the custom property from all attached hosts and then follow this procedure.

Configuring the Default Route Role

  1. Click Network → Networks.
  2. Click the name of the non-management logical network that you want to configure as the default route to open its details view.
  3. Click the Clusters tab.
  4. Click Manage Network to open the Manage Network window.
  5. Select the Default Route check box for the appropriate cluster(s).
  6. Click OK.

When networks are attached to a host, the default route of the host will be set on the network of your choice. It is recommended to configure the default route role before any host is added to your cluster. If your cluster already contains hosts, they may remain out-of-sync until you synchronize the change to them.

Viewing or Editing the Gateway for a Logical Network

Users can define the gateway, along with the IP address and subnet mask, for a logical network. This is necessary when multiple networks exist on a host and traffic should be routed through the specified network, rather than the default gateway.

If multiple networks exist on a host and the gateways are not defined, return traffic will be routed through the default gateway, which may not reach the intended destination. This would result in users being unable to ping the host.

oVirt handles multiple gateways automatically whenever an interface goes up or down.

Viewing or Editing the Gateway for a Logical Network

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab to list the network interfaces attached to the host and their configurations.
  4. Click Setup Host Networks.
  5. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.

The Edit Management Network window displays the network name, the boot protocol, and the IP, subnet mask, and gateway addresses. The address information can be manually edited by selecting a Static boot protocol.

Logical Network General Settings Explained

The table below describes the settings for the General tab of the New Logical Network and Edit Logical Network windows.

New Logical Network and Edit Logical Network Settings

Field Name

Description

Name

The name of the logical network. This text field has a 15-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description

The description of the logical network. This text field has a 40-character limit.

Comment

A field for adding plain text, human-readable comments regarding the logical network.

Create on external provider

Allows you to create the logical network on an OpenStack Networking instance that has been added to the Manager as an external provider.

External Provider - Allows you to select the external provider on which the logical network will be created.

Enable VLAN tagging

VLAN tagging is a security feature that gives all network traffic carried on the logical network a special characteristic. VLAN-tagged traffic cannot be read by interfaces that do not also have that characteristic. Use of VLANs on logical networks also allows a single network interface to be associated with multiple, differently VLAN-tagged logical networks. Enter a numeric value in the text entry field if VLAN tagging is enabled.

VM Network

Select this option if only virtual machines use this network. If the network is used for traffic that does not involve virtual machines, such as storage communications, do not select this check box.

MTU

Choose either Default, which sets the maximum transmission unit (MTU) to the value shown in parentheses, or Custom to set a custom MTU for the logical network. You can use this to match the MTU supported by your new logical network to the MTU supported by the hardware it interfaces with. Enter a numeric value in the text entry field if Custom is selected.

Network Label

Allows you to specify a new label for the network or select from existing labels already attached to host network interfaces. If you select an existing label, the logical network will be automatically assigned to all host network interfaces with that label.

Logical Network Cluster Settings Explained

The table below describes the settings for the Cluster tab of the New Logical Network window.

New Logical Network Settings

Field Name

Description

Attach/Detach Network to/from Cluster(s)

Allows you to attach or detach the logical network from clusters in the data center and specify whether the logical network will be a required network for individual clusters.

Name - the name of the cluster to which the settings will apply. This value cannot be edited.

Attach All - Allows you to attach or detach the logical network to or from all clusters in the data center. Alternatively, select or clear the Attach check box next to the name of each cluster to attach or detach the logical network to or from a given cluster.

Required All - Allows you to specify whether the logical network is a required network on all clusters. Alternatively, select or clear the Required check box next to the name of each cluster to specify whether the logical network is a required network for a given cluster.

Logical Network vNIC Profiles Settings Explained

The table below describes the settings for the vNIC Profiles tab of the New Logical Network window.

New Logical Network Settings

Field Name

Description

vNIC Profiles

Allows you to specify one or more vNIC profiles for the logical network. You can add or remove a vNIC profile to or from the logical network by clicking the plus or minus button next to the vNIC profile. The first field is for entering a name for the vNIC profile.

Public - Allows you to specify whether the profile is available to all users.

QoS - Allows you to apply a network quality of service (QoS) profile to the vNIC profile.

Designate a Specific Traffic Type for a Logical Network with the Manage Networks Window

Specify the traffic type for the logical network to optimize the network traffic flow.

Specifying Traffic Types for Logical Networks

  1. Click Compute → Clusters.
  2. Click the cluster’s name to open the details view.
  3. Select the Logical Networks tab.
  4. Click Manage Networks.
  5. Select appropriate check boxes and radio buttons.
  6. Click OK.

Note: Logical networks offered by external providers must be used as virtual machine networks; they cannot be assigned special cluster roles such as display or migration.

Explanation of Settings in the Manage Networks Window

The table below describes the settings for the Manage Networks window.

Manage Networks Settings

Field

Description/Action

Assign

Assigns the logical network to all hosts in the cluster.

Required

A network marked "required" must remain operational in order for the hosts associated with it to function properly. If a required network ceases to function, any hosts associated with it become non-operational.

VM Network

A logical network marked "VM Network" carries network traffic relevant to the virtual machine network.

Display Network

A logical network marked "Display Network" carries network traffic relevant to SPICE and to the virtual network controller.

Migration Network

A logical network marked "Migration Network" carries virtual machine and storage migration traffic. If an outage occurs on this network, the management network (ovirtmgmt by default) will be used instead.

Editing the Virtual Function Configuration on a NIC

Single Root I/O Virtualization (SR-IOV) enables a single PCIe endpoint to be used as multiple separate devices. This is achieved through the introduction of two PCIe functions: physical functions (PFs) and virtual functions (VFs). A PCIe card can have between one and eight PFs, but each PF can support many more VFs (dependent on the device).

You can edit the configuration of SR-IOV-capable Network Interface Controllers (NICs) through the oVirt Engine, including the number of VFs on each NIC and the virtual networks allowed to access the VFs.

Once VFs have been created, each can be treated as a standalone NIC. This includes having one or more logical networks assigned to them, creating bonded interfaces with them, and directly assigning vNICs to them for direct device passthrough.

A vNIC must have the passthrough property enabled in order to be directly attached to a VF. See Marking vNIC as Passthrough.

Editing the Virtual Function Configuration on a NIC

  1. Click Compute → Hosts.
  2. Click the name of an SR-IOV-capable host to open the details view.
  3. Click the Network Interfaces tab.
  4. Click Setup Host Networks.
  5. Select an SR-IOV-capable NIC (marked with an SR-IOV icon) and click the pencil icon.
  6. To edit the number of virtual functions, click the Number of VFs setting drop-down button and edit the Number of VFs text field.
    Important: Changing the number of VFs will delete all previous VFs on the network interface before creating new VFs. This includes any VFs that have virtual machines directly attached.
  7. The All Networks check box is selected by default, allowing all networks to access the virtual functions. To specify the virtual networks allowed to access the virtual functions, select the Specific networks radio button to list all networks. You can then either select the check box for desired networks, or you can use the Labels text field to automatically select networks based on one or more network labels.
  8. Click OK.
  9. In the Setup Host Networks window, click OK.

Virtual Network Interface Cards

vNIC Profile Overview

A Virtual Network Interface Card (vNIC) profile is a collection of settings that can be applied to individual virtual network interface cards in the Manager. A vNIC profile allows you to apply Network QoS profiles to a vNIC, enable or disable port mirroring, and add or remove custom properties. A vNIC profile also offers an added layer of administrative flexibility in that permission to use (consume) these profiles can be granted to specific users. In this way, you can control the quality of service that different users receive from a given network.

Creating or Editing a vNIC Profile

Create or edit a Virtual Network Interface Controller (vNIC) profile to regulate network bandwidth for users and groups.

Note: If you are enabling or disabling port mirroring, all virtual machines using the associated profile must be in a down state before editing.

Creating or editing a vNIC Profile

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the vNIC Profiles tab.
  4. Click New or Edit.
  5. Enter the Name and Description of the profile.
  6. Select the relevant Quality of Service policy from the QoS list.
  7. Select a Network Filter from the drop-down list to manage the traffic of network packets to and from virtual machines.
  8. Select the Passthrough check box to enable passthrough of the vNIC and allow direct device assignment of a virtual function. Enabling the passthrough property will disable QoS and port mirroring as these are not compatible. For more information on passthrough, see Marking vNIC as Passthrough.
  9. If Passthrough is selected, optionally deselect the Migratable check box to disable migration for vNICs using this profile.
  10. Use the Port Mirroring and Allow all users to use this Profile check boxes to toggle these options.
  11. Select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
  12. Click OK.

Apply this profile to users and groups to regulate their network bandwidth. Note that if you edited a vNIC profile, you must either restart the virtual machine or hot unplug and then hot plug the vNIC.
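
vNIC profiles can also be created through the REST API. A minimal sketch, assuming placeholder values for MANAGER_FQDN, PASSWORD, and NETWORK_ID:

    # curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
        -X POST https://MANAGER_FQDN/ovirt-engine/api/vnicprofiles \
        -d '<vnic_profile><name>myprofile</name><network id="NETWORK_ID"/></vnic_profile>'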

Explanation of Settings in the VM Interface Profile Window

VM Interface Profile Window

Field Name

Description

Network

A drop-down menu of the available networks to which the vNIC profile can be applied.

Name

The name of the vNIC profile. This must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores between 1 and 50 characters.

Description

The description of the vNIC profile. This field is recommended but not mandatory.

QoS

A drop-down menu of the available Network Quality of Service policies to apply to the vNIC profile. QoS policies regulate inbound and outbound network traffic of the vNIC.

Network Filter

A drop-down list of the available network filters to apply to the vNIC profile. Network filters improve network security by filtering the type of packets that can be sent to and from virtual machines. The default filter is vdsm-no-mac-spoofing, which is a combination of no-mac-spoofing and no-arp-mac-spoofing.

Passthrough

A check box to toggle the passthrough property. Passthrough allows a vNIC to connect directly to a virtual function of a host NIC. The passthrough property cannot be edited if the vNIC profile is attached to a virtual machine.

Both QoS and port mirroring are disabled in the vNIC profile if passthrough is enabled.

Migratable

A check box to toggle whether or not vNICs using this profile can be migrated. Migration is enabled by default on regular vNIC profiles; the check box is selected and cannot be changed. When the Passthrough check box is selected, Migratable becomes available and can be deselected, if required, to disable migration of passthrough vNICs.

Port Mirroring

A check box to toggle port mirroring. Port mirroring copies layer 3 network traffic on the logical network to a virtual interface on a virtual machine. It is not selected by default.

Device Custom Properties

A drop-down menu to select available custom properties to apply to the vNIC profile. Use the + and - buttons to add and remove properties respectively.

Allow all users to use this Profile

A check box to toggle the availability of the profile to all users in the environment. It is selected by default.


Enabling Passthrough on a vNIC Profile

The passthrough property of a vNIC profile enables a vNIC to be directly connected to a virtual function (VF) of an SR-IOV-enabled NIC. The vNIC will then bypass the software network virtualization and connect directly to the VF for direct device assignment.

The passthrough property cannot be enabled if the vNIC profile is already attached to a vNIC; this procedure creates a new profile to avoid this. If a vNIC profile has passthrough enabled, QoS and port mirroring are disabled for the profile.

Enabling Passthrough

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the vNIC Profiles tab to list all vNIC profiles for that logical network.
  4. Click New.
  5. Enter the Name and Description of the profile.
  6. Select the Passthrough check box.
  7. Optionally deselect the Migratable check box to disable migration for vNICs using this profile.
  8. If necessary, select a custom property from the custom properties list, which displays Please select a key… by default. Use the + and - buttons to add or remove custom properties.
  9. Click OK.

The vNIC profile is now passthrough-capable. To use this profile to directly attach a virtual machine to a NIC or PCI VF, attach the logical network to the NIC and create a new vNIC on the desired virtual machine that uses the passthrough vNIC profile. For more information on these procedures respectively, see “Editing host network interfaces” and “Adding a New Network Interface” in the Virtual Machine Management Guide.

Removing a vNIC Profile

Remove a vNIC profile to delete it from your virtualized environment.

Removing a vNIC Profile

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the vNIC Profiles tab to display available vNIC profiles.
  4. Select one or more profiles and click Remove.
  5. Click OK.

Assigning Security Groups to vNIC Profiles

Note: This feature is only available for users who are integrating with OpenStack Neutron. Security groups cannot be created with oVirt Engine. You must create security groups within OpenStack.

You can assign security groups to the vNIC profile of networks that have been imported from an OpenStack Networking instance and that use the Open vSwitch plug-in. A security group is a collection of strictly enforced rules that allow you to filter inbound and outbound traffic over a network interface. The following procedure outlines how to attach a security group to a vNIC profile.

Note: A security group is identified using the ID of that security group as registered in the OpenStack Networking instance. You can find the IDs of security groups for a given tenant by running the following command on the system on which OpenStack Networking is installed:

    # neutron security-group-list
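
On deployments where the unified openstack command-line client is installed, the equivalent command is:

    # openstack security group list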

Assigning Security Groups to vNIC Profiles

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the vNIC Profiles tab.
  4. Click New, or select an existing vNIC profile and click Edit.
  5. From the custom properties drop-down list, select SecurityGroups. Leaving the custom property drop-down blank applies the default security settings, which permit all outbound traffic and intercommunication but deny all inbound traffic from outside of the default security group. Note that removing the SecurityGroups property later will not affect the applied security group.
  6. In the text field, enter the ID of the security group to attach to the vNIC profile.
  7. Click OK.

You have attached a security group to the vNIC profile. All traffic through the logical network to which that profile is attached will be filtered in accordance with the rules defined for that security group.

User Permissions for vNIC Profiles

Configure user permissions to assign users to certain vNIC profiles. Assign the VnicProfileUser role to a user to enable them to use the profile. Restrict users from certain profiles by removing their permission for that profile.

User Permissions for vNIC Profiles

  1. Click Network → vNIC Profile.
  2. Click the vNIC profile’s name to open the details view.
  3. Click the Permissions tab to show the current user permissions for the profile.
  4. Click Add or Remove to change user permissions for the vNIC profile.
  5. In the Add Permissions to User window, click My Groups to display your user groups. You can use this option to grant permissions to other users in your groups.

You have configured user permissions for a vNIC profile.

Configuring vNIC Profiles for UCS Integration

Cisco’s Unified Computing System (UCS) is used to manage data center aspects such as computing, networking, and storage resources.

The vdsm-hook-vmfex-dev hook allows virtual machines to connect to Cisco’s UCS-defined port profiles by configuring the vNIC profile. The UCS-defined port profiles contain the properties and settings used to configure virtual interfaces in UCS. The vdsm-hook-vmfex-dev hook is installed by default with VDSM. See VDSM and Hooks for more information.

When a virtual machine that uses the vNIC profile is created, it will use the Cisco vNIC.

The procedure to configure the vNIC profile for UCS integration involves first configuring a custom device property. When configuring the custom device property, any existing value it contained is overwritten. When combining new and existing custom properties, include all of the custom properties in the command used to set the key’s value. Multiple custom properties are separated by a semi-colon.

Note: A UCS port profile must be configured in Cisco UCS before configuring the vNIC profile.

Configuring the Custom Device Property

  1. On the oVirt Engine, configure the vmfex custom property and set the cluster compatibility level using --cver.
     # engine-config -s CustomDeviceProperties='{type=interface;prop={vmfex=^[a-zA-Z0-9_.-]{2,32}$}}' --cver=3.6
  2. Verify that the vmfex custom device property was added.
     # engine-config -g CustomDeviceProperties
  3. Restart the engine.
     # systemctl restart ovirt-engine.service

The vNIC profile to configure can belong to a new or existing logical network. See Creating a new logical network in a data center or cluster for instructions to configure a new logical network.

Configuring a vNIC Profile for UCS Integration

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Select the vNIC Profiles tab.
  4. Click New or select a vNIC profile and click Edit.
  5. Enter the Name and Description of the profile.
  6. Select the vmfex custom property from the custom properties list and enter the UCS port profile name.
  7. Click OK.

External Provider Networks

Importing Networks From External Providers

To use networks from an external network provider (OpenStack Networking or any third-party provider that implements the OpenStack Neutron REST API), register the provider with the Manager. See Adding an OpenStack Network Service Neutron for Network Provisioning or Adding an External Network Provider for more information. Then, use the following procedure to import the networks provided by that provider into the Manager so the networks can be used by virtual machines.

Importing a Network From an External Provider

  1. Click Network → Networks.
  2. Click Import.
  3. From the Network Provider drop-down list, select an external provider. The networks offered by that provider are automatically discovered and listed in the Provider Networks list.
  4. Using the check boxes, select the networks to import in the Provider Networks list and click the down arrow to move those networks into the Networks to Import list.
  5. It is possible to customize the name of the network that you are importing. To customize the name, click on the network’s name in the Name column, and change the text.
  6. From the Data Center drop-down list, select the data center into which the networks will be imported.
  7. Optionally, clear the Allow All check box for a network in the Networks to Import list to prevent that network from being available to all users.
  8. Click Import.

The selected networks are imported into the target data center and can be attached to virtual machines. See “Adding a New Network Interface” in the Virtual Machine Management Guide for more information.

Limitations to Using External Provider Networks

The following limitations apply to using logical networks imported from an external provider in an oVirt environment.

  • Logical networks offered by external providers must be used as virtual machine networks, and cannot be used as display networks.
  • The same logical network can be imported more than once, but only to different data centers.
  • You cannot edit logical networks offered by external providers in the Manager. To edit the details of a logical network offered by an external provider, you must edit the logical network directly from the external provider that provides that logical network.
  • Port mirroring is not available for virtual network interface cards connected to logical networks offered by external providers.
  • If a virtual machine uses a logical network offered by an external provider, that provider cannot be deleted from the Manager while the logical network is still in use by the virtual machine.
  • Networks offered by external providers are non-required. As such, scheduling for clusters in which such logical networks have been imported will not take those logical networks into account during host selection. Moreover, it is the responsibility of the user to ensure the availability of the logical network on hosts in clusters in which such logical networks have been imported.

Configuring Subnets on External Provider Logical Networks

A logical network provided by an external provider can only assign IP addresses to virtual machines if one or more subnets have been defined on that logical network. If no subnets are defined, virtual machines will not be assigned IP addresses. If there is one subnet, virtual machines will be assigned an IP address from that subnet, and if there are multiple subnets, virtual machines will be assigned an IP address from any of the available subnets. The DHCP service provided by the external network provider on which the logical network is hosted is responsible for assigning these IP addresses.

While the oVirt Engine automatically discovers predefined subnets on imported logical networks, you can also add or remove subnets to or from logical networks from within the Manager.

Adding Subnets to External Provider Logical Networks

Create a subnet on a logical network provided by an external provider.

Adding Subnets to External Provider Logical Networks

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the Subnets tab.
  4. Click New.
  5. Enter a Name and CIDR for the new subnet.
  6. From the IP Version drop-down menu, select either IPv4 or IPv6.
  7. Click OK.

Removing Subnets from External Provider Logical Networks

Remove a subnet from a logical network provided by an external provider.

Removing Subnets from External Provider Logical Networks

  1. Click Network → Networks.
  2. Click the logical network’s name to open the details view.
  3. Click the Subnets tab.
  4. Select a subnet and click Remove.
  5. Click OK.

Hosts and Networking

Refreshing Host Capabilities

When a network interface card is added to a host, the capabilities of the host must be refreshed to display that network interface card in the Engine.

To Refresh Host Capabilities

  1. Click Compute → Hosts and select a host.
  2. Click Management → Refresh Capabilities.

The list of network interface cards in the Network Interfaces tab of the details pane for the selected host is updated. Any new network interface cards can now be used in the Manager.
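
Host capabilities can also be refreshed through the REST API. A minimal sketch, assuming placeholder values for MANAGER_FQDN, PASSWORD, and HOST_ID:

    # curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
        -X POST https://MANAGER_FQDN/ovirt-engine/api/hosts/HOST_ID/refresh \
        -d '<action/>'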

Editing Host Network Interfaces and Assigning Logical Networks to Hosts

You can change the settings of physical host network interfaces, move the management network from one physical host network interface to another, and assign logical networks to physical host network interfaces. Bridge and ethtool custom properties are also supported.

Warning: The only way to change the IP address of a host in oVirt is to remove the host and then add it again.

Important: You cannot assign logical networks offered by external providers to physical host network interfaces; such networks are dynamically assigned to hosts as they are required by virtual machines.

Note: If the switch has been configured to provide Link Layer Discovery Protocol (LLDP) information, you can hover your cursor over a physical network interface to view the switch port’s current configuration. This can help to prevent incorrect configuration. It is recommended to check the following information prior to assigning logical networks:

  • Port Description (TLV type 4) and System Name (TLV type 5) help to detect to which ports and on which switch the host’s interfaces are patched.
  • Port VLAN ID shows the native VLAN ID configured on the switch port for untagged ethernet frames. All VLANs configured on the switch port are shown as VLAN Name and VLAN ID combinations.

Editing Host Network Interfaces and Assigning Logical Networks to Hosts

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab.
  4. Click Setup Host Networks.
  5. Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
  6. Attach a logical network to a physical host network interface by selecting and dragging the logical network into the Assigned Logical Networks area next to the physical host network interface.

Alternatively, right-click the logical network and select a network interface from the drop-down menu.

  7. Configure the logical network:
    i. Hover your cursor over an assigned logical network and click the pencil icon to open the Edit Management Network window.
    ii. From the IPv4 tab, select a Boot Protocol from None, DHCP, or Static. If you selected Static, enter the IP, Netmask / Routing Prefix, and the Gateway.
    Note: Each logical network can have a separate gateway defined from the management network gateway. This ensures traffic that arrives on the logical network will be forwarded using the logical network’s gateway instead of the default gateway used by the management network.
    Note: The IPv6 tab should not be used as it is currently not supported.
    iii. Use the QoS tab to override the default host network quality of service. Select Override QoS and enter the desired values in the following fields:
    • Weighted Share: Signifies how much of the logical link’s capacity a specific network should be allocated, relative to the other networks attached to the same logical link. The exact share depends on the sum of shares of all networks on that link. By default this is a number in the range 1-100.
    • Rate Limit [Mbps]: The maximum bandwidth to be used by a network.
    • Committed Rate [Mbps]: The minimum bandwidth required by a network. The Committed Rate requested is not guaranteed and will vary depending on the network infrastructure and the Committed Rate requested by other networks on the same logical link.
      For more information on configuring host network quality of service, see Host Network Quality of Service.
      iv. To configure a network bridge, click the Custom Properties tab and select bridge_opts from the drop-down list. Enter a valid key and value with the following syntax: key=value. Separate multiple entries with a whitespace character. The following keys are valid, with the values provided as examples. For more information on these parameters, see Explanation of bridge opts Parameters.
           forward_delay=1500
           gc_timer=3765
           group_addr=1:80:c2:0:0:0
           group_fwd_mask=0x0
           hash_elasticity=4
           hash_max=512
           hello_time=200
           hello_timer=70
           max_age=2000
           multicast_last_member_count=2
           multicast_last_member_interval=100
           multicast_membership_interval=26000
           multicast_querier=0
           multicast_querier_interval=25500
           multicast_query_interval=13000
           multicast_query_response_interval=1000
           multicast_query_use_ifaddr=0
           multicast_router=1
           multicast_snooping=1
           multicast_startup_query_count=2
           multicast_startup_query_interval=3125


      v. To configure ethtool properties, click the Custom Properties tab and select ethtool_opts from the drop-down list. Enter a valid value using the format of the command-line arguments of ethtool. For example:
           --coalesce em1 rx-usecs 14 sample-interval 3 --offload em2 rx on lro on tso off --change em1 speed 1000 duplex half

      This field can accept wildcards. For example, to apply the same option to all of this network’s interfaces, use:
           --coalesce * rx-usecs 14 sample-interval 3

      The ethtool_opts option is not available by default; you need to add it using the engine configuration tool (see the example after this procedure). See “How to Set Up oVirt Engine to Use Ethtool” in Appendix B for more information. For more information on ethtool properties, see the manual page by typing man ethtool in the command line.
      vi. To configure Fibre Channel over Ethernet (FCoE), click the Custom Properties tab and select fcoe from the drop-down list. Enter a valid key and value with the following syntax: key=value. At least enable=yes is required. You can also add dcb=[yes|no] and auto_vlan=[yes|no]. Separate multiple entries with a whitespace character. The fcoe option is not available by default; you need to add it using the engine configuration tool. See How to Set Up RHVM to Use FCoE for more information.
      Note: A separate, dedicated logical network is recommended for use with FCoE.
      vii. To change the default network used by the host from the management network (ovirtmgmt) to a non-management network, configure the non-management network’s default route.
      viii. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.
  8. Select the Verify connectivity between Host and Engine check box to check network connectivity; this action will only work if the host is in maintenance mode.
  9. Select the Save network configuration check box to make the changes persistent when the environment is rebooted.
  10. Click OK.
    Note: If not all network interface cards for the host are displayed, click Management → Refresh Capabilities to update the list of network interface cards available for that host.
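
As referenced in step 7.v, the ethtool_opts custom property can be enabled with the engine configuration tool. A minimal sketch; note that setting UserDefinedNetworkCustomProperties overwrites any existing value, so include all previously defined custom properties in the same command:

    # engine-config -s UserDefinedNetworkCustomProperties='ethtool_opts=.*'
    # systemctl restart ovirt-engine.service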

Synchronizing Host Networks

The Manager defines a network interface as out-of-sync when the definition of the interface on the host differs from the definitions stored by the Manager. Out-of-sync networks are marked with an Out-of-sync icon in the host’s Network Interfaces tab and in the Setup Host Networks window.

When a host’s network is out of sync, the only activities that you can perform on the unsynchronized network in the Setup Host Networks window are detaching the logical network from the network interface or synchronizing the network.

Understanding How a Host Becomes out-of-sync

A host will become out of sync if:

  • You make configuration changes on the host rather than using the Edit Logical Networks window, for example:
    • Changing the VLAN identifier on the physical host.
    • Changing the Custom MTU on the physical host.
  • You move a host to a different data center with the same network name, but with different values/parameters.
  • You change a network’s VM Network property by manually removing the bridge from the host.
  • You update definitions using the Edit Logical Networks window, without selecting the Save network configuration check box when saving your changes. After rebooting the host, it may become unsynchronized.

Preventing Hosts from Becoming Unsynchronized

Following these best practices will prevent your host from becoming unsynchronized:

  • Ensure that the Save network configuration check box is selected when saving your changes in the Edit Logical Networks window (it is selected by default).
  • Use the Administration Portal to make changes rather than making changes locally on the host.
  • Edit VLAN settings according to the instructions in the “Editing a Host’s VLAN Settings” section.

Synchronizing Hosts

Synchronizing a host’s network interface definitions involves using the definitions from the Manager and applying them to the host. If these are not the definitions that you require, update the definitions from the Administration Portal after synchronizing your hosts. You can synchronize a host’s networks on three levels:

  • Per logical network
  • Per host
  • Per cluster

Synchronizing Host Networks on the Logical Network Level

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab.
  4. Click Setup Host Networks.
  5. Hover your cursor over the unsynchronized network and click the pencil icon to open the Edit Network window.
  6. Select the Sync network check box.
  7. Click OK.
  8. Select the Save network configuration check box in the Setup Host Networks window to make the changes persistent when the environment is rebooted.
  9. Click OK.

Synchronizing a Host’s Networks on the Host level

  • Click the Sync All Networks button in the host’s Network Interfaces tab to synchronize all of the host’s unsynchronized network interfaces.

Synchronizing a Host’s Networks on the Cluster level

  • Click the Sync All Networks button in the cluster’s Logical Networks tab to synchronize all unsynchronized logical network definitions for the entire cluster.
    Note: You can also synchronize a host’s networks via the REST API, as in the example below.
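
A minimal sketch of the host-level sync action through the REST API, assuming placeholder values for MANAGER_FQDN, PASSWORD, and HOST_ID:

    # curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
        -X POST https://MANAGER_FQDN/ovirt-engine/api/hosts/HOST_ID/syncallnetworks \
        -d '<action/>'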

Editing a Host’s VLAN Settings

To change the VLAN settings of a host, the host must be removed from the Manager, reconfigured, and re-added to the Manager.

To keep networking synchronized, do the following:

  1. Put the host in maintenance mode.
  2. Manually remove the management network from the host. This will make the host reachable over the new VLAN.
  3. Add the host to the cluster. Virtual machines that are not connected directly to the management network can be migrated between hosts safely.

The following warning message appears when the VLAN ID of the management network is changed:

    Changing certain properties (e.g. VLAN, MTU) of the management network could lead to loss of connectivity to hosts in the data center, if its underlying network infrastructure isn't configured to accommodate the changes. Are you sure you want to proceed?

Proceeding causes all of the hosts in the data center to lose connectivity to the Manager and causes the migration of hosts to the new management network to fail. The management network will be reported as “out-of-sync”.

Adding Multiple VLANs to a Single Network Interface Using Logical Networks

Multiple VLANs can be added to a single network interface to separate traffic on the one host.

Important: You must have created more than one logical network, all with the Enable VLAN tagging check box selected in the New Logical Network or Edit Logical Network windows.

Adding Multiple VLANs to a Network Interface using Logical Networks

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab.
  4. Click Setup Host Networks.
  5. Drag your VLAN-tagged logical networks into the Assigned Logical Networks area next to the physical network interface. The physical network interface can have multiple logical networks assigned due to the VLAN tagging.
  6. Edit the logical networks:

i. Hover your cursor over an assigned logical network and click the pencil icon.

ii. If your logical network definition is not synchronized with the network configuration on the host, select the Sync network check box.

iii. Select a Boot Protocol:

  • None
  • DHCP
  • Static

iv. Provide the IP and Subnet Mask.

v. Click OK.

  7. Select the Verify connectivity between Host and Engine check box to run a network check; this will only work if the host is in maintenance mode.
  8. Select the Save network configuration check box.
  9. Click OK.

Add the logical network to each host in the cluster by editing a NIC on each host. After this is done, the network will become operational.

You have added multiple VLAN-tagged logical networks to a single interface. This process can be repeated multiple times, selecting and editing the same network interface each time on each host to add logical networks with different VLAN tags to a single network interface.

Assigning Additional IPv4 Addresses to a Host Network

A host network, such as the ovirtmgmt management network, is created with only one IP address when initially set up. This means that if a NIC’s configuration file (for example, /etc/sysconfig/network-scripts/ifcfg-eth01) is configured with multiple IP addresses, only the first listed IP address will be assigned to the host network. Additional IP addresses may be required if connecting to storage, or to a server on a separate private subnet using the same NIC.

The vdsm-hook-extra-ipv4-addrs hook allows you to configure additional IPv4 addresses for host networks. For more information about hooks, see Appendix A, VDSM and Hooks.

In the following procedure, the host-specific tasks must be performed on each host for which you want to configure additional IP addresses.

Assigning Additional IPv4 Addresses to a Host Network

  1. On the host that you want to configure additional IPv4 addresses for, install the VDSM hook package. The package is available by default on oVirt Nodes but needs to be installed on Enterprise Linux hosts.
    # yum install vdsm-hook-extra-ipv4-addrs
  2. On the Engine, run the following command to add the key:
    # engine-config -s 'UserDefinedNetworkCustomProperties=ipv4_addrs=.\*'
  3. Restart the ovirt-engine service:
    # systemctl restart ovirt-engine.service
  4. In the Administration Portal, click Compute → Hosts.
  5. Click the host’s name to open the details view.
  6. Click the Network Interfaces tab and click Setup Host Networks.
  7. Edit the host network interface by hovering the cursor over the assigned logical network and clicking the pencil icon.
  8. Select ipv4_addrs from the Custom Properties drop-down list and add the additional IP address and prefix (for example 5.5.5.5/24). Multiple IP addresses must be comma-separated.
  9. Click OK.
  10. Select the Save network configuration check box.
  11. Click OK.

The additional IP addresses will not be displayed in the Engine, but you can run the command ip addr show on the host to confirm that they have been added.
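
For example, if the additional addresses were added to the management network, whose bridge device on the host is named ovirtmgmt by default:

    # ip addr show ovirtmgmt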

Adding Network Labels to Host Network Interfaces

Using network labels allows you to greatly simplify the administrative workload associated with assigning logical networks to host network interfaces.

Note: Setting a label on a role network (for instance, a migration network or a display network) causes a mass deployment of that network on all hosts. Such mass additions of networks are achieved through the use of DHCP. This method of mass deployment was chosen over a method of typing in static addresses, because of the unscalable nature of the task of typing in many static IP addresses.

Adding Network Labels to Host Network Interfaces

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab.
  4. Click Setup Host Networks.
  5. Click Labels, and right-click [New Label]. Select a physical network interface to label.
  6. Enter a name for the network label in the Label text field.
  7. Click OK.

You have added a network label to a host network interface. Any newly created logical networks with the same label will be automatically assigned to all host network interfaces with that label. Also, removing a label from a logical network will automatically remove that logical network from all host network interfaces with that label.
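
Labels can also be attached to a host network interface through the REST API. A minimal sketch, assuming placeholder values for MANAGER_FQDN, PASSWORD, HOST_ID, and NIC_ID; the label text is passed as the id attribute:

    # curl -k -u admin@internal:PASSWORD -H "Content-Type: application/xml" \
        -X POST https://MANAGER_FQDN/ovirt-engine/api/hosts/HOST_ID/nics/NIC_ID/networklabels \
        -d '<network_label id="mylabel"/>'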

Bonding Logic in oVirt

The oVirt Engine Administration Portal allows you to create bond devices using a graphical interface. There are several distinct bond creation scenarios, each with its own logic.

Two factors that affect bonding logic are:

  • Are either of the devices already carrying logical networks?
  • Are the devices carrying compatible logical networks?

Bonding Scenarios and Their Results

Bonding Scenario

Result

NIC + NIC

The Create New Bond window is displayed, and you can configure a new bond device.

If the network interfaces carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.

NIC + Bond

The NIC is added to the bond device. Logical networks carried by the NIC and the bond are all added to the resultant bond device if they are compatible.

If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.

Bond + Bond

If the bond devices are not attached to logical networks, or are attached to compatible logical networks, a new bond device is created. It contains all of the network interfaces, and carries all logical networks, of the component bond devices. The Create New Bond window is displayed, allowing you to configure your new bond.

If the bond devices carry incompatible logical networks, the bonding operation fails until you detach incompatible logical networks from the devices forming your new bond.

Bonding Modes

A bond is an aggregation of multiple network interface cards into a single software-defined device. Because bonded network interfaces combine the transmission capability of the network interface cards included in the bond to act as a single network interface, they can provide greater transmission speed than that of a single network interface card. Also, because all network interface cards in the bond must fail for the bond itself to fail, bonding provides increased fault tolerance. However, one limitation is that the network interface cards that form a bonded network interface must be of the same make and model to ensure that all network interface cards in the bond support the same options and modes.

The packet dispersal algorithm for a bond is determined by the bonding mode used.

Important: Modes 1, 2, 3, and 4 support both virtual machine (bridged) and non-virtual machine (bridgeless) network types. Modes 0, 5 and 6 support non-virtual machine (bridgeless) networks only.

oVirt uses Mode 4 by default, but supports the following common bonding modes:

Mode 0 (round-robin policy)

Transmits packets through network interface cards in sequential order. Packets are transmitted in a loop that begins with the first available network interface card in the bond and ends with the last available network interface card in the bond. All subsequent loops then start with the first available network interface card. Mode 0 offers fault tolerance and balances the load across all network interface cards in the bond. However, Mode 0 cannot be used in conjunction with bridges, and is therefore not compatible with virtual machine logical networks.

Mode 1 (active-backup policy)

Sets all network interface cards to a backup state while one network interface card remains active. In the event of failure in the active network interface card, one of the backup network interface cards replaces that network interface card as the only active network interface card in the bond. The MAC address of the bond in Mode 1 is visible on only one port to prevent any confusion that might otherwise be caused if the MAC address of the bond changed to reflect that of the active network interface card. Mode 1 provides fault tolerance and is supported in oVirt.

Mode 2 (XOR policy)

Selects the network interface card through which to transmit packets based on the result of an XOR operation on the source and destination MAC addresses, modulo the number of network interface cards (slaves) in the bond. This calculation ensures that the same network interface card is selected for each destination MAC address used. Mode 2 provides fault tolerance and load balancing and is supported in oVirt.
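
As an illustration of the selection rule (not oVirt-specific code), consider two network interface cards in a bond and a frame whose source and destination MAC addresses end in 0x5e and 0x1a. The default hash uses (source XOR destination) modulo the slave count:

    $ echo $(( (0x5e ^ 0x1a) % 2 ))    # 0x5e XOR 0x1a = 0x44 = 68; 68 mod 2 = 0
    0

The result 0 selects the first network interface card in the bond.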

Mode 3 (broadcast policy)

Transmits all packets to all network interface cards. Mode 3 provides fault tolerance and is supported in oVirt.

Mode 4 (IEEE 802.3ad policy)

Creates aggregation groups in which the interfaces share the same speed and duplex settings. Mode 4 uses all network interface cards in the active aggregation group in accordance with the IEEE 802.3ad specification and is supported in oVirt.

Mode 5 (adaptive transmit load balancing policy)

Ensures the distribution of outgoing traffic accounts for the load on each network interface card in the bond and that the current network interface card receives all incoming traffic. If the network interface card assigned to receive traffic fails, another network interface card is assigned to the role of receiving incoming traffic. Mode 5 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.

Mode 6 (adaptive load balancing policy)

Combines Mode 5 (adaptive transmit load balancing policy) with receive load balancing for IPv4 traffic without any special switch requirements. ARP negotiation is used for balancing the receive load. Mode 6 cannot be used in conjunction with bridges, therefore it is not compatible with virtual machine logical networks.

Creating a Bond Device Using the Administration Portal

You can bond compatible network devices together. This type of configuration can increase available bandwidth and reliability. You can bond multiple network interfaces, pre-existing bond devices, and combinations of the two. A bond can carry both VLAN tagged and non-VLAN traffic.

Creating a Bond Device using the Administration Portal

  1. Click Compute → Hosts.
  2. Click the host’s name to open the details view.
  3. Click the Network Interfaces tab to list the physical network interfaces attached to the host.
  4. Click Setup Host Networks.
  5. Optionally, hover your cursor over a host network interface to view configuration information provided by the switch.
  6. Select and drag one of the devices over the top of another device and drop it to open the Create New Bond window. Alternatively, right-click the device and select another device from the drop-down menu.
    If the devices are incompatible, the bond operation fails and suggests how to correct the compatibility issue.
  7. Select the Bond Name and Bonding Mode from the drop-down menus.
    Bonding modes 1, 2, 4, and 5 can be selected. Any other mode can be configured using the Custom option.
  8. Click OK to create the bond and close the Create New Bond window.
  9. Assign a logical network to the newly created bond device.
  10. Optionally choose to Verify connectivity between Host and Engine and Save network configuration.
  11. Click OK.

Your network devices are linked into a bond device and can be edited as a single interface. The bond device is listed in the Network Interfaces tab of the details pane for the selected host.

Bonding must be enabled for the ports of the switch used by the host. The process by which bonding is enabled is slightly different for each switch; consult the manual provided by your switch vendor for detailed information on how to enable bonding.

Note: For a bond in Mode 4, all slaves must be configured properly on the switch. If none of them is configured properly on the switch, the ad_partner_mac is reported as 00:00:00:00:00:00. The Manager will display a warning in the form of an exclamation mark icon on the bond in the Network Interfaces tab. No warning is provided if any of the slaves are up and running.
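
You can verify the 802.3ad negotiation on the host by inspecting the bonding driver’s status file; bond0 here is a placeholder for your bond device name:

    # grep -i 'partner mac' /proc/net/bonding/bond0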

Example Uses of Custom Bonding Options with Host Interfaces

You can create customized bond devices by selecting Custom from the Bonding Mode of the Create New Bond window. The following examples should be adapted for your needs. For a comprehensive list of bonding options and their descriptions, see the Linux Ethernet Bonding Driver HOWTO on Kernel.org.

xmit_hash_policy

This option defines the transmit load balancing policy for bonding modes 2 and 4. For example, if the majority of your traffic is between many different IP addresses, you may want to set a policy to balance by IP address. You can set this load-balancing policy by selecting a Custom bonding mode, and entering the following into the text field:

mode=4 xmit_hash_policy=layer2+3

ARP Monitoring

ARP monitor is useful for systems which can’t or don’t report link-state properly via ethtool. Set an arp_interval on the bond device of the host by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 arp_interval=1 arp_ip_target=192.168.0.2

Primary

You may want to designate a NIC with higher throughput as the primary interface in a bond device. Designate which NIC is primary by selecting a Custom bonding mode, and entering the following into the text field:

mode=1 primary=eth0
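After creating a bond with custom options, you can confirm that the options were applied by reading the bonding attributes that the kernel exposes under sysfs. A quick check, again assuming the bond is named bond0:

    # Each bonding option is exposed as a file under sysfs
    cat /sys/class/net/bond0/bonding/mode
    cat /sys/class/net/bond0/bonding/xmit_hash_policy
    cat /sys/class/net/bond0/bonding/arp_interval
    cat /sys/class/net/bond0/bonding/primary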

Changing the FQDN of a Host

Use the following procedure to change the fully qualified domain name of hosts.

Updating the FQDN of a Host

  1. Place the host into maintenance mode so the virtual machines are live migrated to another host. See Moving a host to maintenance mode for more information. Alternatively, manually shut down or migrate all the virtual machines to another host. See “Manually Migrating Virtual Machines” in the Virtual Machine Management Guide for more information.
  2. Click Remove, and click OK to remove the host from the Administration Portal.
  3. Use the hostnamectl tool to update the host name.
     # hostnamectl set-hostname NEW_FQDN
  4. Reboot the host.
  5. Re-register the host with the Manager. See Adding a Host for more information.
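If you script host lifecycle operations, the re-registration step can also be performed against the REST API instead of the Administration Portal. A minimal sketch; the engine address, credentials, and host details are placeholders to adapt:

    # Add the renamed host back to the Manager via the REST API
    curl -s -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<host><name>host1</name><address>NEW_FQDN</address><root_password>HOST_ROOT_PASSWORD</root_password></host>' \
      https://engine.example.com/ovirt-engine/api/hosts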

 

Data Centers

Introduction to Data Centers

A data center is a logical entity that defines the set of resources used in a specific environment. A data center is considered a container resource, in that it is comprised of logical resources, in the form of clusters and hosts; network resources, in the form of logical networks and physical NICs; and storage resources, in the form of storage domains.

A data center can contain multiple clusters, which can contain multiple hosts; it can have multiple storage domains associated with it; and it can support multiple virtual machines on each of its hosts. An oVirt environment can contain multiple data centers; the data center infrastructure allows you to keep these centers separate.

All data centers are managed from the single Administration Portal.

Figure: Data Centers

oVirt creates a default data center during installation. You can configure the default data center, or set up new appropriately named data centers.

Figure: Data Center Objects

The Storage Pool Manager

The Storage Pool Manager (SPM) is a role given to one of the hosts in the data center enabling it to manage the storage domains of the data center. The SPM entity can be run on any host in the data center; the oVirt Engine grants the role to one of the hosts. The SPM does not preclude the host from its standard operation; a host running as SPM can still host virtual resources.

The SPM entity controls access to storage by coordinating the metadata across the storage domains. This includes creating, deleting, and manipulating virtual disks (images), snapshots, and templates, and allocating storage for sparse block devices (on SAN). This is an exclusive responsibility: only one host can be the SPM in the data center at one time to ensure metadata integrity.

The oVirt Engine ensures that the SPM is always available. The Manager moves the SPM role to a different host if the SPM host encounters problems accessing the storage. When the SPM starts, it ensures that it is the only host granted the role; therefore it will acquire a storage-centric lease. This process can take some time.

SPM Priority

The SPM role uses some of a host’s available resources. The SPM priority setting of a host alters the likelihood of the host being assigned the SPM role: a host with high SPM priority will be assigned the SPM role before a host with low SPM priority. Critical virtual machines on hosts with low SPM priority will not have to contend with SPM operations for host resources.

You can change a host’s SPM priority in the SPM tab in the Edit Host window.
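The SPM priority is also exposed through the REST API as the host’s spm element. A hedged sketch, assuming the priority can be updated with a PUT request; the engine address, credentials, and host ID are placeholders:

    # Set a host's SPM priority (assumes the spm element is mutable via PUT)
    curl -s -k -u 'admin@internal:PASSWORD' \
      -X PUT -H 'Content-Type: application/xml' \
      -d '<host><spm><priority>10</priority></spm></host>' \
      https://engine.example.com/ovirt-engine/api/hosts/HOST_ID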

Data Center Tasks

Creating a New Data Center

This procedure creates a data center in your virtualization environment. The data center requires a functioning cluster, host, and storage domain to operate.

Note: Once the Compatibility Version is set, it cannot be lowered at a later time; version regression is not allowed.

The option to specify a MAC pool range for a data center has been disabled, and is now done at the cluster level.

Creating a New Data Center

  1. Click Compute → Data Centers.
  2. Click New.
  3. Enter the Name and Description of the data center.
  4. Select the storage Type, Compatibility Version, and Quota Mode of the data center from the drop-down menus.
  5. Click OK to create the data center and open the Data Center - Guide Me window.
  6. The Guide Me window lists the entities that need to be configured for the data center. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the data center and clicking More Actions → Guide Me.

The new data center is added to the virtualization environment. It will remain Uninitialized until a cluster, host, and storage domain are configured for it; use Guide Me to configure these entities.
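Data center creation can also be scripted through the REST API. A minimal sketch with placeholder credentials and values:

    # Create a shared (non-local) data center with a 4.0 compatibility version
    curl -s -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<data_center><name>mydatacenter</name><local>false</local><version><major>4</major><minor>0</minor></version></data_center>' \
      https://engine.example.com/ovirt-engine/api/datacenters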

Explanation of Settings in the New Data Center and Edit Data Center Windows

The table below describes the settings of a data center as displayed in the New Data Center and Edit Data Center windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

Data Center Properties

Field

Description/Action

Name

The name of the data center. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description

The description of the data center. This field is recommended but not mandatory.

Type

The storage type. Choose one of the following:

  • Shared
  • Local

The type of data domain dictates the type of the data center and cannot be changed after creation without significant disruption. Multiple types of storage domains (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center, though local and shared domains cannot be mixed.

Compatibility Version

The version of oVirt.

After upgrading the oVirt Engine, the hosts, clusters and data centers may still be in the earlier version. Ensure that you have upgraded all the hosts, then the clusters, before you upgrade the Compatibility Level of the data center.

Quota Mode

Quota is a resource limitation tool provided with oVirt. Choose one of:

  • Disabled: Select if you do not want to implement Quota
  • Audit: Select if you want to edit the Quota settings
  • Enforced: Select to implement Quota

Comment

Optionally add a plain text comment about the data center.

Re-Initializing a Data Center: Recovery Procedure

This recovery procedure replaces the master data domain of your data center with a new master data domain; this is necessary if the master data domain becomes corrupted. Re-initializing a data center allows you to restore all other resources associated with the data center, including clusters, hosts, and non-problematic storage domains.

You can import any backup or exported virtual machines or templates into your new master data domain.

Re-Initializing a Data Center

  1. Click Compute → Data Centers and select the data center to re-initialize.
  2. Ensure that any storage domains attached to the data center are in maintenance mode.
  3. Click More Actions → Re-Initialize Data Center.
  4. The Data Center Re-Initialize window lists all available (detached; in maintenance mode) storage domains. Click the radio button for the storage domain you are adding to the data center.
  5. Select the Approve operation check box.
  6. Click OK.

The storage domain is attached to the data center as the master data domain and activated. You can now import any backup or exported virtual machines or templates into your new master data domain.

Removing a Data Center

An active host is required to remove a data center. Removing a data center will not remove the associated resources.

Removing a Data Center

  1. Ensure the storage domains attached to the data center are in maintenance mode.
  2. Click Compute → Data Centers and select the data center to remove.
  3. Click Remove.
  4. Click OK.

Force Removing a Data Center

A data center becomes Non Responsive if the attached storage domain is corrupt or if the host becomes Non Responsive. You cannot Remove the data center under either circumstance.

Force Remove does not require an active host. It also permanently removes the attached storage domain.

It may be necessary to Destroy a corrupted storage domain before you can Force Remove the data center.

Force Removing a Data Center

  1. Click Compute → Data Centers and select the data center to remove.
  2. Click More Actions → Force Remove.
  3. Select the Approve operation check box.
  4. Click OK.

The data center and attached storage domain are permanently removed from the oVirt environment.

Changing the Data Center Storage Type

You can change the storage type of the data center after it has been initialized. This is useful for data domains that are used to move virtual machines or templates around.

Limitations

  • Shared to Local - Possible only for a data center that does not contain more than one host or more than one cluster, since a local data center does not support multiple hosts or clusters.
  • Local to Shared - Possible only for a data center that does not contain a local storage domain.

Changing the Data Center Storage Type

  1. Click Compute → Data Centers and select the data center to change.
  2. Click Edit.
  3. Change the Storage Type to the desired value.
  4. Click OK.

Changing the Data Center Compatibility Version

oVirt data centers have a compatibility version. The compatibility version indicates the version of oVirt that the data center is intended to be compatible with. All clusters in the data center must support the desired compatibility level.

Important: To change the data center compatibility version, you must have first updated all the clusters in your data center to a level that supports your desired compatibility level.

Procedure

  1. From the Administration Portal, click Compute → Data Centers.
  2. Select the data center to change from the list displayed.
  3. Click Edit.
  4. Change the Compatibility Version to the desired value.
  5. Click OK to open the Change Data Center Compatibility Version confirmation window.
  6. Click OK to confirm.

You have updated the compatibility version of the data center.

Important: Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center.

Data Centers and Storage Domains

Attaching an Existing Data Domain to a Data Center

Data domains that are Unattached can be attached to a data center. Shared storage domains of multiple types (iSCSI, NFS, FC, POSIX, and Gluster) can be added to the same data center.

Attaching an Existing Data Domain to a Data Center

  1. Click Compute → Data Centers.
  2. Click a data center’s name to open the details view.
  3. Click the Storage tab to list the storage domains already attached to the data center.
  4. Click Attach Data.
  5. Select the check box for the data domain to attach to the data center. You can select multiple check boxes to attach multiple data domains.
  6. Click OK.

The data domain is attached to the data center and is automatically activated.
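The same attach operation is available through the REST API by posting the storage domain into the data center’s storagedomains sub-collection. A sketch with placeholder credentials and IDs:

    # Attach an existing, unattached storage domain to a data center
    curl -s -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<storage_domain id="SD_ID"/>' \
      https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/storagedomains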

Attaching an Existing ISO domain to a Data Center

An ISO domain that is Unattached can be attached to a data center. The ISO domain must be of the same Storage Type as the data center.

Only one ISO domain can be attached to a data center.

Attaching an Existing ISO Domain to a Data Center

  1. Click Compute → Data Centers.
  2. Click a data center’s name to open the details view.
  3. Click the Storage tab to list the storage domains already attached to the data center.
  4. Click Attach ISO.
  5. Click the radio button for the appropriate ISO domain.
  6. Click OK.

The ISO domain is attached to the data center and is automatically activated.

Attaching an Existing Export Domain to a Data Center

Note: The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disk images, and templates can then be uploaded from the imported storage domain to the attached data center. See Importing Existing Storage Domains for information on importing storage domains.

An export domain that is Unattached can be attached to a data center. Only one export domain can be attached to a data center.

Attaching an Existing Export Domain to a Data Center

  1. Click Compute → Data Centers.
  2. Click a data center’s name to open the details view.
  3. Click the Storage tab to list the storage domains already attached to the data center.
  4. Click Attach Export.
  5. Click the radio button for the appropriate export domain.
  6. Click OK.

The export domain is attached to the data center and is automatically activated.

Detaching a Storage Domain from a Data Center

Detaching a storage domain from a data center will stop the data center from associating with that storage domain. The storage domain is not removed from the oVirt environment; it can be attached to another data center.

Data, such as virtual machines and templates, remains attached to the storage domain.

Note: The master storage, if it is the last available storage domain, cannot be removed.

Detaching a Storage Domain from a Data Center

  1. Click Compute → Data Centers.
  2. Click a data center’s name to open the details view.
  3. Click the Storage tab to list the storage domains already attached to the data center.
  4. Select the storage domain to detach. If the storage domain is Active, click Maintenance.
  5. Click OK to initiate maintenance mode.
  6. Click Detach.
  7. Click OK.

You have detached the storage domain from the data center. It can take up to several minutes for the storage domain to disappear from the details pane.

 

Data Centers and Permissions

Managing System Permissions for a Data Center

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.

A data center administrator is a system administration role for a specific data center only. This is useful in virtualization environments with multiple data centers where each data center requires an administrator. The DataCenterAdmin role is a hierarchical model; a user assigned the data center administrator role for a data center can manage all objects in the data center with the exception of storage for that data center. Use the Configure button in the header bar to assign a data center administrator for all data centers in the environment.

The data center administrator role permits the following actions:

  • Create and remove clusters associated with the data center.
  • Add and remove hosts, virtual machines, and pools associated with the data center.
  • Edit user permissions for virtual machines associated with the data center.

Note: You can only assign roles and permissions to existing users.

You can change the system administrator of a data center by removing the existing system administrator and adding the new system administrator.

Data Center Administrator Roles Explained

Data Center Permission Roles

The table below describes the administrator roles and privileges applicable to data center administration.

oVirt System Administrator Roles

Role

Privileges

Notes

DataCenterAdmin

Data Center Administrator

Can use, create, delete, manage all physical and virtual resources within a specific data center except for storage, including clusters, hosts, templates and virtual machines.

NetworkAdmin

Network Administrator

Can configure and manage the network of a particular data center. A network administrator of a data center inherits network permissions for virtual machines within the data center as well.

Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Assigning a Role to a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Click Add.
  4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
  5. Select a role from the Role to Assign: drop-down list.
  6. Click OK.

You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.
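Role assignment can also be scripted by posting a permission to the resource’s permissions sub-collection in the REST API. A sketch for granting DataCenterAdmin on a single data center; the credentials and all IDs are placeholders:

    # Grant a user the DataCenterAdmin role on one data center
    # (look up role IDs at /ovirt-engine/api/roles and user IDs at /ovirt-engine/api/users)
    curl -s -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<permission><role id="ROLE_ID"/><user id="USER_ID"/></permission>' \
      https://engine.example.com/ovirt-engine/api/datacenters/DC_ID/permissions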

Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Removing a Role from a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
  3. Select the user to remove from the resource.
  4. Click Remove. The Remove Permission window opens to confirm permissions removal.
  5. Click OK.

You have removed the user's role, and the associated permissions, from the resource.

 

Clusters

Introduction to Clusters

A cluster is a logical grouping of hosts that share the same storage domains and have the same type of CPU (either Intel or AMD). If the hosts have different generations of CPU models, they use only the features present in all models.

Each cluster in the system must belong to a data center, and each host in the system must belong to a cluster. Virtual machines are dynamically allocated to any host in a cluster and can be migrated between them, according to policies defined on the Clusters tab and in the Configuration tool during runtime. The cluster is the highest level at which power and load-sharing policies can be defined.

The number of hosts and number of virtual machines that belong to a cluster are displayed in the results list under Host Count and VM Count, respectively.

Clusters run virtual machines or Gluster Storage Servers. These two purposes are mutually exclusive: A single cluster cannot support virtualization and storage hosts together.

oVirt creates a default cluster in the default data center during installation.

Figure: Cluster

Cluster Tasks

Creating a New Cluster

A data center can contain multiple clusters, and a cluster can contain multiple hosts. All hosts in a cluster must be of the same CPU type (Intel or AMD). It is recommended that you create your hosts before you create your cluster to ensure CPU type optimization. However, you can configure the hosts at a later time using the Guide Me button.

Creating a New Cluster

  1. Select the Clusters resource tab.
  2. Click New.
  3. Select the Data Center the cluster will belong to from the drop-down list.
  4. Enter the Name and Description of the cluster.
  5. Select a network from the Management Network drop-down list to assign the management network role.
  6. Select the CPU Architecture and CPU Type from the drop-down lists. It is important to match the CPU processor family with the minimum CPU processor type of the hosts you intend to attach to the cluster, otherwise the host will be non-operational.
    Note: For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest. If your cluster includes hosts with different CPU models, select the oldest CPU model.
  7. Select the Compatibility Version of the cluster from the drop-down list.
  8. Select either the Enable Virt Service or Enable Gluster Service radio button to define whether the cluster will be populated with virtual machine hosts or with Gluster-enabled nodes.
  9. Optionally select the Enable to set VM maintenance reason check box to enable an optional reason field when a virtual machine is shut down from the Manager, allowing the administrator to provide an explanation for the maintenance.
  10. Optionally select the Enable to set Host maintenance reason check box to enable an optional reason field when a host is placed into maintenance mode from the Manager, allowing the administrator to provide an explanation for the maintenance.
  11. Select either the /dev/random source (Linux-provided device) or /dev/hwrng source (external hardware device) check box to specify the random number generator device that all hosts in the cluster will use.
  12. Click the Optimization tab to select the memory page sharing threshold for the cluster, and optionally enable CPU thread handling and memory ballooning on the hosts in the cluster.
  13. Click the Migration Policy tab to define the virtual machine migration policy for the cluster.
  14. Click the Scheduling Policy tab to optionally configure a scheduling policy, configure scheduler optimization settings, enable trusted service for hosts in the cluster, enable HA Reservation, and add a custom serial number policy.
  15. Click the Console tab to optionally override the global SPICE proxy, if any, and specify the address of a SPICE proxy for hosts in the cluster.
  16. Click the Fencing policy tab to enable or disable fencing in the cluster, and select fencing options.
  17. Click OK to create the cluster and open the New Cluster - Guide Me window.

The Guide Me window lists the entities that need to be configured for the cluster. Configure these entities or postpone configuration by clicking the Configure Later button; configuration can be resumed by selecting the cluster and clicking the Guide Me button.

The new cluster is added to the virtualization environment.
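Cluster creation can likewise be scripted through the REST API. A minimal sketch with placeholder credentials and IDs; the CPU type string must match one of the types listed in the settings table below:

    # Create a virtualization cluster in an existing data center
    curl -s -k -u 'admin@internal:PASSWORD' \
      -H 'Content-Type: application/xml' \
      -d '<cluster><name>mycluster</name><cpu><type>Intel Conroe Family</type></cpu><data_center id="DC_ID"/></cluster>' \
      https://engine.example.com/ovirt-engine/api/clusters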

Explanation of Settings and Controls in the New Cluster and Edit Cluster Windows

General Cluster Settings Explained

 

The table below describes the settings for the General tab in the New Cluster and Edit Cluster windows. Invalid entries are outlined in orange when you click OK, preventing the changes from being accepted. In addition, field prompts indicate the expected values or range of values.

General Cluster Settings

Field

Description/Action

Data Center

The data center that will contain the cluster. The data center must be created before adding a cluster.

Name

The name of the cluster. This text field has a 40-character limit and must be a unique name with any combination of uppercase and lowercase letters, numbers, hyphens, and underscores.

Description / Comment

The description of the cluster or additional notes. These fields are recommended but not mandatory.

Management Network

The logical network which will be assigned the management network role. The default is ovirtmgmt. On existing clusters, the management network can only be changed via the Manage Networks button in the Logical Networks tab in the details pane.

CPU Architecture

The CPU architecture of the cluster. Different CPU types are available depending on which CPU architecture is selected.

undefined: All CPU types are available.

x86_64: All Intel and AMD CPU types are available.

ppc64: Only IBM POWER 8 is available.

CPU Type

The CPU type of the cluster. Choose one of:

Intel Conroe Family

Intel Penryn Family

Intel Nehalem Family

Intel Westmere Family

Intel SandyBridge Family

Intel Haswell-noTSX Family

Intel Haswell Family

Intel Broadwell-noTSX Family

Intel Broadwell Family

Intel Skylake Family

AMD Opteron G1

AMD Opteron G2

AMD Opteron G3

AMD Opteron G4

AMD Opteron G5

IBM POWER 8

All hosts in a cluster must run either Intel, AMD, or IBM POWER 8 CPU type; this cannot be changed after creation without significant disruption. The CPU type should be set to the oldest CPU model in the cluster. Only features present in all models can be used. For both Intel and AMD CPU types, the listed CPU models are in logical order from the oldest to the newest.

Compatibility Version

The version of oVirt. Choose one of:

3.6

4.0

You will not be able to select a version older than the version specified for the data center.

Enable Virt Service

If this radio button is selected, hosts in this cluster will be used to run virtual machines.

Enable Gluster Service

If this radio button is selected, hosts in this cluster will be used as Gluster Storage Server nodes, and not for running virtual machines. You cannot add an oVirt Node to a cluster with this option enabled.

Import existing gluster configuration

This check box is only available if the Enable Gluster Service radio button is selected. This option allows you to import an existing Gluster-enabled cluster and all its attached hosts to oVirt Engine.

The following options are required for each host in the cluster that is being imported:

Address: Enter the IP or fully qualified domain name of the Gluster host server.

Fingerprint: oVirt Engine fetches the host's fingerprint, to ensure you are connecting with the correct host.

Root Password: Enter the root password required for communicating with the host.

Enable to set VM maintenance reason

If this check box is selected, an optional reason field will appear when a virtual machine in the cluster is shut down from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the virtual machine is powered on again.

Enable to set Host maintenance reason

If this check box is selected, an optional reason field will appear when a host in the cluster is moved into maintenance mode from the Manager. This allows you to provide an explanation for the maintenance, which will appear in the logs and when the host is activated again.

Required Random Number Generator sources:

If one of the following check boxes is selected, all hosts in the cluster must have that device available. This enables passthrough of entropy from the random number generator device to virtual machines.

/dev/random source - The Linux-provided random number generator.

/dev/hwrng source - An external hardware generator.

Optimization Settings Explained

Memory page sharing allows virtual machines to use up to 200% of their allocated memory by utilizing unused memory in other virtual machines. This process is based on the assumption that the virtual machines in your oVirt environment will not all be running at full capacity at the same time, allowing unused memory to be temporarily allocated to a particular virtual machine.

CPU Thread Handling allows hosts to run virtual machines with a total number of processor cores greater than number of cores in the host. This is useful for non-CPU-intensive workloads, where allowing a greater number of virtual machines to run can reduce hardware requirements. It also allows virtual machines to run with CPU topologies that would otherwise not be possible, specifically if the number of guest cores is between the number of host cores and number of host threads.

The table below describes the settings for the Optimization tab in the New Cluster and Edit Cluster windows.

Optimization Settings

Field

Description/Action

Memory Optimization

None - Disable memory overcommit: Disables memory page sharing.

For Server Load - Allow scheduling of 150% of physical memory: Sets the memory page sharing threshold to 150% of the system memory on each host.

For Desktop Load - Allow scheduling of 200% of physical memory: Sets the memory page sharing threshold to 200% of the system memory on each host.

CPU Threads

Selecting the Count Threads As Cores check box allows hosts to run virtual machines with a total number of processor cores greater than the number of cores in the host.

The exposed host threads would be treated as cores which can be utilized by virtual machines. For example, a 24-core system with 2 threads per core (48 threads total) can run virtual machines with up to 48 cores each, and the algorithms to calculate host CPU load would compare load against twice as many potential utilized cores.

Memory Balloon

Selecting the Enable Memory Balloon Optimization check box enables memory overcommitment on virtual machines running on the hosts in this cluster. When this option is set, the Memory Overcommit Manager (MoM) will start ballooning where and when possible, with a limitation of the guaranteed memory size of every virtual machine.

To have a balloon running, the virtual machine needs to have a balloon device with relevant drivers. Each virtual machine includes a balloon device unless specifically removed. Each host in this cluster receives a balloon policy update when its status changes to Up. If necessary, you can manually update the balloon policy on a host without having to change the status. See Updating the MoM Policy on Hosts in a Cluster.

It is important to understand that in some scenarios ballooning may collide with KSM. In such cases MoM will try to adjust the balloon size to minimize collisions. Additionally, in some scenarios ballooning may cause sub-optimal performance for a virtual machine. Administrators are advised to use ballooning optimization with caution.

KSM control

Selecting the Enable KSM check box enables MoM to run Kernel Same-page Merging (KSM) when necessary and when it can yield a memory saving benefit that outweighs its CPU cost.

Migration Policy Settings Explained

A migration policy defines the conditions for live migrating virtual machines in the event of host failure. These conditions include the downtime of the virtual machine during migration, network bandwidth, and how the virtual machines are prioritized.

Migration Policies Explained

Policy

Description

Legacy

Legacy behavior of 3.6 version. Overrides in vdsm.conf are still applied. The guest agent hook mechanism is disabled.

Minimal downtime

A policy that lets virtual machines migrate in typical situations. Virtual machines should not experience any significant downtime. The migration will be aborted if the virtual machine migration does not converge after a long time (dependent on QEMU iterations, with a maximum of 500 milliseconds). The guest agent hook mechanism is enabled.

Suspend workload if needed

A policy that lets virtual machines migrate in most situations, including virtual machines running heavy workloads. Virtual machines may experience a more significant downtime. The migration may still be aborted for extreme workloads. The guest agent hook mechanism is enabled.

The bandwidth settings define the maximum bandwidth of both outgoing and incoming migrations per host.

Bandwidth Explained

Policy

Description

Auto

Bandwidth is copied from the Rate Limit [Mbps] setting in the data center Host Network QoS. If the rate limit has not been defined, it is computed as a minimum of link speeds of sending and receiving network interfaces. If rate limit has not been set, and link speeds are not available, it is determined by local VDSM setting on sending host.

Hypervisor default

Bandwidth is controlled by local VDSM setting on sending Host.

Custom

Defined by user (in Mbps).

The resilience policy defines how the virtual machines are prioritized in the migration.

Resilience Policy Settings

Field

Description/Action

Migrate Virtual Machines

Migrates all virtual machines in order of their defined priority.

Migrate only Highly Available Virtual Machines

Migrates only highly available virtual machines to prevent overloading other hosts.

Do Not Migrate Virtual Machines

Prevents virtual machines from being migrated.

The Additional Properties are only applicable to the Legacy migration policy.

Additional Properties Explained

Property

Description

Auto Converge migrations

Allows you to set whether auto-convergence is used during live migration of virtual machines. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machine. Auto-convergence is disabled globally by default.

Select Inherit from global setting to use the auto-convergence setting that is set at the global level. This option is selected by default.

Select Auto Converge to override the global setting and allow auto-convergence for the virtual machine.

Select Don't Auto Converge to override the global setting and prevent auto-convergence for the virtual machine.

Enable migration compression

The option allows you to set whether migration compression is used during live migration of the virtual machine. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default.

Select Inherit from global setting to use the compression setting that is set at the global level. This option is selected by default.

Select Compress to override the global setting and allow compression for the virtual machine.

Select Don't compress to override the global setting and prevent compression for the virtual machine.

Scheduling Policy Settings Explained

Scheduling policies allow you to specify the usage and distribution of virtual machines between available hosts. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster.

To add a scheduling policy to an existing cluster, click the Clusters tab and click the Edit button, then click the Scheduling Policy tab.

 

The table below describes the settings for the Scheduling Policy tab.

Scheduling Policy Tab Properties

Field

Description/Action

Select Policy

Select a policy from the drop-down list.

none: Set the policy value to none to have no load or power sharing between hosts. This is the default mode.

evenly_distributed: Distributes the memory and CPU processing load evenly across all hosts in the cluster. Additional virtual machines attached to a host will not start if that host has reached the defined Maximum Service Level.

InClusterUpgrade: Distributes virtual machines based on host operating system version. Hosts with a newer operating system than the virtual machine currently runs on are given priority over hosts with the same operating system. Virtual machines that migrate to a host with a newer operating system will not migrate back to an older operating system. A virtual machine can restart on any host in the cluster. The policy allows hosts in the cluster to be upgraded by allowing the cluster to have mixed operating system versions. Preconditions must be met before the policy can be enabled.

power_saving: Distributes the memory and CPU processing load across a subset of available hosts to reduce power consumption on underutilized hosts. Hosts with a CPU load below the low utilization value for longer than the defined time interval will migrate all virtual machines to other hosts so that it can be powered down. Additional virtual machines attached to a host will not start if that host has reached the defined high utilization value.

vm_evenly_distributed: Distributes virtual machines evenly between hosts based on a count of the virtual machines. The cluster is considered unbalanced if any host is running more virtual machines than the HighVmCount and there is at least one host with a virtual machine count that falls outside of the MigrationThreshold.

Properties

The following properties appear depending on the selected policy, and can be edited if necessary:

HighVmCount: Sets the maximum number of virtual machines that can run on each host. Exceeding this limit qualifies the host as overloaded. The default value is 10.

MigrationThreshold: Defines a buffer before virtual machines are migrated from the host. It is the maximum inclusive difference in virtual machine count between the most highly-utilized host and the least-utilized host. The cluster is balanced when every host in the cluster has a virtual machine count that falls inside the migration threshold. The default value is 5.

SpmVmGrace: Defines the number of slots for virtual machines to be reserved on SPM hosts. The SPM host will have a lower load than other hosts, so this variable defines how many fewer virtual machines than other hosts it can run. The default value is 5.

CpuOverCommitDurationMinutes: Sets the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action. The defined time interval protects against temporary spikes in CPU load activating scheduling policies and instigating unnecessary virtual machine migration. Maximum two characters. The default value is 2.

HighUtilization: Expressed as a percentage. If the host runs with CPU usage at or above the high utilization value for the defined time interval, the oVirt Engine migrates virtual machines to other hosts in the cluster until the host's CPU load is below the maximum service threshold. The default value is 80.

LowUtilization: Expressed as a percentage. If the host runs with CPU usage below the low utilization value for the defined time interval, the oVirt Engine will migrate virtual machines to other hosts in the cluster. The Manager will power down the original host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. The default value is 20.

ScaleDown: Reduces the impact of the HA Reservation weight function, by dividing a host's score by the specified amount. This is an optional property that can be added to any policy, including none.

HostsInReserve: Specifies a number of hosts to keep running even though there are no running virtual machines on them. This is an optional property that can be added to the power_saving policy.

EnableAutomaticHostPowerManagement: Enables automatic power management for all hosts in the cluster. This is an optional property that can be added to the power_saving policy. The default value is true.

MaxFreeMemoryForOverUtilized: Sets the maximum free memory required in MB for the minimum service level. If the host's memory usage runs at, or above this value, the oVirt Engine migrates virtual machines to other hosts in the cluster until the host's available memory is below the minimum service threshold. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0 MB disables memory based balancing. This is an optional property that can be added to the power_saving and evenly_distributed policies.

MinFreeMemoryForUnderUtilized: Sets the minimum free memory required in MB before the host is considered underutilized. If the host's memory usage runs below this value, the oVirt Engine migrates virtual machines to other hosts in the cluster and will automatically power down the host machine, and restart it again when load balancing requires or there are not enough free hosts in the cluster. Setting both MaxFreeMemoryForOverUtilized and MinFreeMemoryForUnderUtilized to 0 MB disables memory based balancing. This is an optional property that can be added to the power_saving and evenly_distributed policies.

Scheduler Optimization

Optimize scheduling for host weighing/ordering.

Optimize for Utilization: Includes weight modules in scheduling to allow best selection.

Optimize for Speed: Skips host weighting in cases where there are more than ten pending requests.

Enable Trusted Service

Enable integration with an OpenAttestation server. Before this can be enabled, use the engine-config tool to enter the OpenAttestation server's details. For more information, see Trusted Compute Pools.

Enable HA Reservation

Enable the Manager to monitor cluster capacity for highly available virtual machines. The Manager ensures that appropriate capacity exists within a cluster for virtual machines designated as highly available to migrate in the event that their existing host fails unexpectedly.

Provide custom serial number policy

This check box allows you to specify a serial number policy for the virtual machines in the cluster. Select one of the following options:

Host ID: Sets the host's UUID as the virtual machine's serial number.

Vm ID: Sets the virtual machine's UUID as its serial number.

Custom serial number: Allows you to specify a custom serial number.

Auto Converge migrations

This option allows you to set whether auto-convergence is used during live migration of virtual machines in the cluster. Large virtual machines with high workloads can dirty memory more quickly than the transfer rate achieved during live migration, and prevent the migration from converging. Auto-convergence capabilities in QEMU allow you to force convergence of virtual machine migrations. QEMU automatically detects a lack of convergence and triggers a throttle-down of the vCPUs on the virtual machines. Auto-convergence is disabled globally by default.

Select Inherit from global setting to use the auto-convergence setting that is set at the global level with engine-config. This option is selected by default.

Select Auto Converge to override the global setting and allow auto-convergence for virtual machines in the cluster.

Select Don't Auto Converge to override the global setting and prevent auto-convergence for virtual machines in the cluster.

Enable migration compression

This option allows you to set whether migration compression is used during live migration of virtual machines in the cluster. This feature uses Xor Binary Zero Run-Length-Encoding to reduce virtual machine downtime and total live migration time for virtual machines running memory write-intensive workloads or for any application with a sparse memory update pattern. Migration compression is disabled globally by default.

Select Inherit from global setting to use the compression setting that is set at the global level with engine-config. This option is selected by default.

Select Compress to override the global setting and allow compression for virtual machines in the cluster.

Select Don't compress to override the global setting and prevent compression for virtual machines in the cluster.

When a host's free memory drops below 20%, ballooning commands such as mom.Controllers.Balloon - INFO Ballooning guest:half1 from 1096400 to 1991580 are logged to /var/log/vdsm/mom.log, the Memory Overcommit Manager log file.
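To confirm ballooning activity on a particular host, you can read that log directly:

    # Show the most recent ballooning operations recorded by MoM
    grep 'Ballooning' /var/log/vdsm/mom.log | tail -n 20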

Cluster Console Settings Explained

The table below describes the settings for the Console tab in the New Cluster and Edit Cluster windows.

Console Settings

Field

Description/Action

Define SPICE Proxy for Cluster

Select this check box to enable overriding the SPICE proxy defined in global configuration. This feature is useful in a case where the user (who is, for example, connecting via the User Portal) is outside of the network where the hypervisors reside.

Overridden SPICE proxy address

The proxy by which the SPICE client will connect to virtual machines. The address must be in the following format:

protocol://[host]:[port]

Fencing Policy Settings Explained

The table below describes the settings for the Fencing Policy tab in the New Cluster and Edit Cluster windows.

Fencing Policy Settings

Field

Description/Action

Enable fencing

Enables fencing on the cluster. Fencing is enabled by default, but can be disabled if required; for example, if temporary network issues are occurring or expected, administrators can disable fencing until diagnostics or maintenance activities are completed. Note that if fencing is disabled, highly available virtual machines running on non-responsive hosts will not be restarted elsewhere.

Skip fencing if host has live lease on storage

If this check box is selected, any hosts in the cluster that are Non Responsive and still connected to storage will not be fenced.

Skip fencing on cluster connectivity issues

If this check box is selected, fencing will be temporarily disabled if the percentage of hosts in the cluster that are experiencing connectivity issues is greater than or equal to the defined Threshold. The Threshold value is selected from the drop-down list; available values are 25, 50, 75, and 100.

Editing a Resource

Summary

Edit the properties of a resource.

Editing a Resource

  1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
  2. Click Edit to open the Edit window.
  3. Change the necessary properties and click OK.

Result

The new properties are saved to the resource. The Edit window will not close if a property field is invalid.

Setting Load and Power Management Policies for Hosts in a Cluster

The evenly_distributed and power_saving scheduling policies allow you to specify acceptable memory and CPU usage values, and the point at which virtual machines must be migrated to or from a host. The vm_evenly_distributed scheduling policy distributes virtual machines evenly between hosts based on a count of the virtual machines. Define the scheduling policy to enable automatic load balancing across the hosts in a cluster. For a detailed explanation of each scheduling policy, see Cluster Scheduling Policy Settings.

Setting Load and Power Management Policies for Hosts

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Click Edit to open the Edit Cluster window.
  3. Select one of the following policies:
    • none
    • vm_evenly_distributed
      1. Set the maximum number of virtual machines that can run on each host in the HighVmCount field.
      2. Define the maximum acceptable difference between the number of virtual machines on the most highly-utilized host and the number of virtual machines on the least-utilized host in the MigrationThreshold field.
      3. Define the number of slots for virtual machines to be reserved on SPM hosts in the SpmVmGrace field.
    • evenly_distributed
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
      3. Enter the minimum required free memory in MB at which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field.
      4. Enter the maximum required free memory in MB at which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field.
    • power_saving
      1. Set the time (in minutes) that a host can run a CPU load outside of the defined utilization values before the scheduling policy takes action in the CpuOverCommitDurationMinutes field.
      2. Enter the CPU utilization percentage below which the host will be considered under-utilized in the LowUtilization field.
      3. Enter the CPU utilization percentage at which virtual machines start migrating to other hosts in the HighUtilization field.
      4. Enter the minimum required free memory in MB at which virtual machines start migrating to other hosts in the MinFreeMemoryForUnderUtilized field.
      5. Enter the maximum required free memory in MB at which virtual machines start migrating to other hosts in the MaxFreeMemoryForOverUtilized field.
  4. Choose one of the following as the Scheduler Optimization for the cluster:
    • Select Optimize for Utilization to include weight modules in scheduling to allow best selection.
    • Select Optimize for Speed to skip host weighting in cases where there are more than ten pending requests.
  5. If you are using an OpenAttestation server to verify your hosts, and have set up the server's details using the engine-config tool, select the Enable Trusted Service check box.
  6. Optionally select the Enable HA Reservation check box to enable the Manager to monitor cluster capacity for highly available virtual machines.
  7. Optionally select the Provide custom serial number policy check box to specify a serial number policy for the virtual machines in the cluster, and then select one of the following options:
    • Select Host ID to set the host's UUID as the virtual machine's serial number.
    • Select Vm ID to set the virtual machine's UUID as its serial number.
    • Select Custom serial number, and then specify a custom serial number in the text field.
  8. Click OK.

Updating the MoM Policy on Hosts in a Cluster

The Memory Overcommit Manager handles memory balloon and KSM functions on a host. Changes to these functions at the cluster level are only passed to hosts the next time a host moves to a status of Up after being rebooted or in maintenance mode. However, if necessary you can apply important changes to a host immediately by synchronizing the MoM policy while the host is Up. The following procedure must be performed on each host individually.

Synchronizing MoM Policy on a Host

  1. Click the Clusters tab and select the cluster to which the host belongs.
  2. Click the Hosts tab in the details pane and select the host that requires an updated MoM policy.
  3. Click Sync MoM Policy.

The MoM policy on the host is updated without having to move the host to maintenance mode and back Up.

CPU Profiles

CPU profiles define the maximum amount of processing capability a virtual machine in a cluster can access on the host on which it runs, expressed as a percent of the total processing capability available to that host. CPU profiles are created based on CPU profiles defined under data centers, and are not automatically applied to all virtual machines in a cluster; they must be manually assigned to individual virtual machines for the profile to take effect.

Creating a CPU Profile

Create a CPU profile. This procedure assumes you have already defined one or more CPU quality of service entries under the data center to which the cluster belongs.

Creating a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Click New.
  4. Enter a name for the CPU profile in the Name field.
  5. Enter a description for the CPU profile in the Description field.
  6. Select the quality of service to apply to the CPU profile from the QoS list.
  7. Click OK.

You have created a CPU profile, and that CPU profile can be applied to virtual machines in the cluster.

Removing a CPU Profile

Remove an existing CPU profile from your oVirt environment.

Removing a CPU Profile

  1. Click the Clusters resource tab and select a cluster.
  2. Click the CPU Profiles sub tab in the details pane.
  3. Select the CPU profile to remove.
  4. Click Remove.
  5. Click OK.

You have removed a CPU profile, and that CPU profile is no longer available. If the CPU profile was assigned to any virtual machines, those virtual machines are automatically assigned the default CPU profile.

Importing an Existing Gluster Storage Cluster

You can import a Gluster Storage cluster and all hosts belonging to the cluster into oVirt Engine.

When you provide details such as the IP address or host name and the password of any host in the cluster, the gluster peer status command is executed on that host through SSH, and a list of the hosts that are part of the cluster is displayed. You must manually verify the fingerprint of each host and provide passwords for them. You will not be able to import the cluster if one of the hosts in the cluster is down or unreachable. As the newly imported hosts do not have VDSM installed, the bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them.
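You can run the same check yourself before starting the import. On any host in the Gluster cluster, as root:

    # List the peers this host knows about and their connection state
    gluster peer status

Every peer should report State: Peer in Cluster (Connected); a host that is down or unreachable will cause the import to fail.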

Importing an Existing Gluster Storage Cluster to oVirt Engine

  1. Select the Clusters resource tab to list all clusters in the results list.
  2. Click New to open the New Cluster window.
  3. Select the Data Center the cluster will belong to from the drop-down menu.
  4. Enter the Name and Description of the cluster.
  5. Select the Enable Gluster Service radio button and the Import existing gluster configuration check box.
    The Import existing gluster configuration field is displayed only if you select the Enable Gluster Service radio button.
  6. In the Address field, enter the host name or IP address of any server in the cluster.
    The host Fingerprint displays to ensure you are connecting with the correct host. If a host is unreachable or if there is a network error, Error in fetching fingerprint is displayed in the Fingerprint field.
  7. Enter the Root Password for the server, and click OK.
    The Add Hosts window opens, and a list of hosts that are a part of the cluster displays.
  8. For each host, enter the Name and the Root Password.
    If you wish to use the same password for all hosts, select the Use a Common Password check box to enter the password in the provided text field. Click Apply to set the entered password on all hosts.
  9. Make sure the fingerprints are valid and submit your changes by clicking OK.

The bootstrap script installs all the necessary VDSM packages on the hosts after they have been imported, and reboots them. You have now successfully imported an existing Gluster Storage cluster into oVirt Engine.

Explanation of Settings in the Add Hosts Window

The Add Hosts window allows you to specify the details of the hosts imported as part of a Gluster-enabled cluster. This window appears after you have selected the Enable Gluster Service radio button in the New Cluster window and provided the necessary host details.

Add Gluster Hosts Settings

Field

Description

Use a common password

Tick this check box to use the same password for all hosts belonging to the cluster. Enter the password in the Password field, then click the Apply button to set the password on all hosts.

Name

Enter the name of the host.

Hostname/IP

This field is automatically populated with the fully qualified domain name or IP of the host you provided in the New Cluster window.

Root Password

Enter a password in this field to use a different root password for each host. This field overrides the common password provided for all hosts in the cluster.

Fingerprint

The host fingerprint is displayed to ensure you are connecting with the correct host. This field is automatically populated with the fingerprint of the host you provided in the New Cluster window.

Removing a Cluster

Summary

Move all hosts out of a cluster before removing it.

Note: You cannot remove the Default cluster, as it holds the Blank template. You can however rename the Default cluster and add it to a new data center.

Removing a Cluster

  1. Use the resource tabs, tree mode, or the search function to find and select the cluster in the results list.
  2. Ensure there are no hosts in the cluster.
  3. Click Remove to open the Remove Cluster(s) confirmation window.
  4. Click OK.

Result

The cluster is removed.

Changing the Cluster Compatibility Version

oVirt clusters have a compatibility version. The cluster compatibility version indicates the features of oVirt supported by all of the hosts in the cluster. The cluster compatibility is set according to the version of the least capable host operating system in the cluster.

Note: To change the cluster compatibility version, you must have first updated all the hosts in your cluster to a level that supports your desired compatibility level.

Changing the Cluster Compatibility Version

  1. From the Administration Portal, click the Clusters tab.
  2. Select the cluster to change from the list displayed.
  3. Click Edit.
  4. Change the Compatibility Version to the desired value.
  5. Click OK to open the Change Cluster Compatibility Version confirmation window.
  6. Click OK to confirm.

You have updated the compatibility version of the cluster. Once you have updated the compatibility version of all clusters in a data center, you can then change the compatibility version of the data center itself.

Important: Upgrading the compatibility will also upgrade all of the storage domains belonging to the data center.
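The cluster compatibility version can also be raised through the REST API by updating the cluster’s version element. A sketch with placeholder credentials and IDs:

    # Raise a cluster's compatibility version to 4.0
    curl -s -k -u 'admin@internal:PASSWORD' \
      -X PUT -H 'Content-Type: application/xml' \
      -d '<cluster><version><major>4</major><minor>0</minor></version></cluster>' \
      https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID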

Clusters and Permissions

Managing System Permissions for a Cluster

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.

A cluster administrator is a system administration role for a specific cluster only. This is useful in data centers with multiple clusters, where each cluster requires a system administrator. The ClusterAdmin role is hierarchical: a user assigned the cluster administrator role for a cluster can manage all objects in that cluster. Use the Configure button in the header bar to assign a cluster administrator for all clusters in the environment.

The cluster administrator role permits the following actions:

Create and remove associated clusters.

Add and remove hosts, virtual machines, and pools associated with the cluster.

Edit user permissions for virtual machines associated with the cluster.

Note: You can only assign roles and permissions to existing users.

You can also change the system administrator of a cluster by removing the existing system administrator and adding the new system administrator.

Cluster Administrator Roles Explained

Cluster Permission Roles

The table below describes the administrator roles and privileges applicable to cluster administration.

oVirt System Administrator Roles

ClusterAdmin (Cluster Administrator): Can use, create, delete, and manage all physical and virtual resources in a specific cluster, including hosts, templates, and virtual machines. Can configure network properties within the cluster, such as designating display networks or marking a network as required or non-required. Note that a ClusterAdmin does not have permission to attach or detach networks from a cluster; NetworkAdmin permissions are required for that.

NetworkAdmin (Network Administrator): Can configure and manage the network of a particular cluster. A network administrator of a cluster also inherits network permissions for virtual machines within the cluster.

Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Assigning a Role to a Resource

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
2. Click the Permissions tab in the details pane to list the assigned users, each user's role, and the inherited permissions for the selected resource.
3. Click Add.
4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
5. Select a role from the Role to Assign: drop-down list.
6. Click OK.

You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.
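The same assignment can be automated through the REST API by posting to the resource’s permissions collection; a minimal sketch that grants ClusterAdmin on a cluster (the engine FQDN, credentials, and IDs are placeholders you must substitute):

# Grants the ClusterAdmin role on the cluster to the given user
curl -k -u 'admin@internal:password' -X POST \
  -H 'Content-Type: application/xml' \
  -d '<permission><role><name>ClusterAdmin</name></role><user id="USER_ID"/></permission>' \
  https://engine.example.com/ovirt-engine/api/clusters/CLUSTER_ID/permissions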

Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Removing a Role from a Resource

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
2. Click the Permissions tab in the details pane to list the assigned users, each user's role, and the inherited permissions for the selected resource.
3. Select the user to remove from the resource.
4. Click Remove. The Remove Permission window opens to confirm permissions removal.
5. Click OK.

You have removed the user's role, and the associated permissions, from the resource.

 

Storage

Adding FCP Storage

1. Click Storage → Domains to list all storage domains.
2. Click New Domain.
3. Enter the Name of the storage domain.
4. Use the Data Center drop-down menu to select an FCP data center. If you do not yet have an appropriate FCP data center, select (none).
5. Select the Domain Function and the Storage Type from the drop-down menus. Storage domain types that are not compatible with the chosen data center are not available.
6. Select an active host in the Use Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.

Important
All communication to the storage domain is through the selected host and not directly from the Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured (a quick per-host check is sketched after this procedure).

7. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
8. Optionally, configure the advanced parameters:
    1. Click Advanced Parameters.
    2. Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
    3. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
    4. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so does not change the wipe after delete property of disks that already exist.
    5. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created, and is only available for block storage domains.
9. Click OK to create the storage domain and close the window.

The new FCP data domain appears in Storage → Domains. It remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
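Before creating the domain, you can confirm that each host actually sees the FC LUNs; a minimal per-host check (device and adapter names vary by system, host0 below is an example):

# List multipath devices; the LUNs intended for the domain should appear here
multipath -ll
# If a newly presented LUN is missing, rescan the FC host adapters
echo "- - -" > /sys/class/scsi_host/host0/scan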
 
Local ISO Domain

Changing Local ISO Domain Permissions

1. Log in to the Manager machine.
2. Edit the /etc/exports file, and add the hosts, or the subnets to which they belong, to the access control list:

/var/lib/exports/iso 10.1.2.0/255.255.255.0(rw) host01.example.com(rw) host02.example.com(rw)

The example above allows read and write access to a single /24 network and two specific hosts. /var/lib/exports/iso is the default file path for the ISO domain. See the exports(5) man page for further formatting options.

3. Apply the changes, then verify the export list:

exportfs -r
showmount -e

Note that if you manually edit the /etc/exports file after running engine-setup, running engine-cleanup later will not undo the changes.
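To confirm that a host in the access control list can actually reach the export, you can perform a test mount from that host; a minimal sketch, assuming the Manager is reachable at engine.example.com:

# Run on a host listed in the ACL
mkdir -p /mnt/isotest
mount -t nfs engine.example.com:/var/lib/exports/iso /mnt/isotest
ls /mnt/isotest
umount /mnt/isotest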
Attaching a Local ISO Domain

1. In the Administration Portal, click Compute → Data Centers and select the appropriate data center.
2. Click the data center’s name to open the details view.
3. Click the Storage tab to list the storage domains already attached to the data center.
4. Click Attach ISO to open the Attach ISO Library window.
5. Click the radio button for the local ISO domain.
6. Click OK.

The ISO domain is now attached to the data center and is automatically activated.
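Once the domain is attached and active, ISO images can be uploaded from the Manager machine with the ISO uploader tool; a minimal sketch (the domain name and file path are examples):

# Run on the Manager machine; prompts for the admin@internal password
engine-iso-uploader --iso-domain=ISODomain upload /path/to/example.iso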
 
Enabling Gluster on Gluster Storage

1. Click Compute → Clusters.
2. Click New.
3. Click the General tab and select the Enable Gluster Service check box. Enter the address, SSH fingerprint, and password as necessary. The address and password fields can be filled in only when the Import existing Gluster configuration check box is selected.
4. Click OK.
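After the cluster is created, you can confirm that the Gluster daemon is healthy on its hosts; a minimal per-host check:

# glusterd should be active on every host providing Gluster storage
systemctl status glusterd
# All peers should report "Peer in Cluster (Connected)"
gluster peer status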
 
Preparing and Installing a Remote PostgreSQL Database

Preparing the PostgreSQL Database

1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

subscription-manager register

2. Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and note down the pool IDs:

subscription-manager list --available

3. Use the pool IDs to attach the subscriptions to the system:

subscription-manager attach --pool=pool_id

Note
To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist

4. Disable all existing repositories:

subscription-manager repos --disable='*'

5. Enable the Red Hat Enterprise Linux and Red Hat Virtualization Manager repositories:

subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms

Initializing the PostgreSQL Database

1. Install the PostgreSQL server packages:

yum install rh-postgresql95 rh-postgresql95-postgresql-contrib

2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:

scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql

3. Connect to the psql command line interface as the postgres user:

su - postgres -c 'scl enable rh-postgresql95 -- psql'

4. Create a default user. The Manager’s default user is engine and the Data Warehouse’s default user is ovirt_engine_history:

postgres=# create role user_name with login encrypted password 'password';

5. Create a database. The Manager’s default database name is engine and the Data Warehouse’s default database name is ovirt_engine_history:

postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';

6. Connect to the new database:

postgres=# \c database_name

7. Add the uuid-ossp extension:

database_name=# CREATE EXTENSION "uuid-ossp";

8. Add the plpgsql language if it does not exist:

database_name=# CREATE LANGUAGE plpgsql;

9. Ensure the database can be accessed remotely by enabling md5 client authentication. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf file, and add the following line immediately below the line starting with local at the bottom of the file, replacing X.X.X.X with the IP address of the Manager or the Data Warehouse machine (for an IPv6 address, use a /128 suffix instead of /32):

host    database_name    user_name    X.X.X.X/32    md5

10. Allow TCP/IP connections to the database. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf file and add the following line:

listen_addresses='*'

11. Update the PostgreSQL server’s configuration. In the same postgresql.conf file, add the following lines:

autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'

12. Open the default port used for PostgreSQL database connections, and make the rule persistent across reboots:

firewall-cmd --zone=public --add-service=postgresql
firewall-cmd --permanent --zone=public --add-service=postgresql

13. Restart the postgresql service (a quick remote connectivity check is sketched after this procedure):

systemctl restart rh-postgresql95-postgresql
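After the restart, you can verify remote connectivity from the Manager machine; a minimal sketch, assuming the database host is reachable at postgres.example.com and using the user and database created above:

# Prompts for the password set for user_name; a result of "1" confirms access
psql -h postgres.example.com -p 5432 -U user_name -d database_name -c 'SELECT 1;'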
 
Preparing and Installing a Local PostgreSQL Database

Preparing the PostgreSQL Database

1. Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted:

subscription-manager register

2. Find the Red Hat Enterprise Linux Server and Red Hat Virtualization subscription pools and note down the pool IDs:

subscription-manager list --available

3. Use the pool IDs to attach the subscriptions to the system:

subscription-manager attach --pool=pool_id

Note
To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist

4. Disable all existing repositories:

subscription-manager repos --disable='*'

5. Enable the Red Hat Enterprise Linux and Red Hat Virtualization Manager repositories:

subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms

Initializing the PostgreSQL Database

1. Install the PostgreSQL server packages:

yum install rh-postgresql95 rh-postgresql95-postgresql-contrib

2. Initialize the PostgreSQL database, start the postgresql service, and ensure that this service starts on boot:

scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql

3. Connect to the psql command line interface as the postgres user:

su - postgres -c 'scl enable rh-postgresql95 -- psql'

4. Create a user for the Manager to use when it writes to and reads from the database. The default user name on the Manager is engine:

postgres=# create role user_name with login encrypted password 'password';

5. Create a database in which to store data about the Red Hat Virtualization environment. The default database name on the Manager is engine:

postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';

6. Connect to the new database:

postgres=# \c database_name

7. Add the uuid-ossp extension:

database_name=# CREATE EXTENSION "uuid-ossp";

8. Add the plpgsql language if it does not exist:

database_name=# CREATE LANGUAGE plpgsql;

9. Enable md5 client authentication for the database. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/pg_hba.conf file, and add the following lines immediately below the line starting with local at the bottom of the file:

host    [database name]    [user name]    0.0.0.0/0   md5
host    [database name]    [user name]    ::0/0       md5

10. Update the PostgreSQL server’s configuration. Edit the /var/opt/rh/rh-postgresql95/lib/pgsql/data/postgresql.conf file and add the following lines:

autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'

11. Restart the postgresql service (a quick local check is sketched after this procedure):

systemctl restart rh-postgresql95-postgresql
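You can then confirm the service and the new database locally; a minimal check:

# Verify the service is running
systemctl status rh-postgresql95-postgresql
# List databases as the postgres user; database_name should appear in the output
su - postgres -c 'scl enable rh-postgresql95 -- psql -l'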