Administration
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Virtual Machines |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
This chapter describes the steps required to install a Linux virtual machine. When all of these steps are complete, the new virtual machine is functional and ready to perform tasks.
Creating a Linux Virtual Machine

Create a new virtual machine and configure the required settings.

Creating Linux Virtual Machines
The new virtual machine is created and displays in the list of virtual machines with a status of Down.
Starting a Virtual Machine
Starting Virtual Machines
The Status of the virtual machine changes to Up, and the operating system installation begins. Open a console to the virtual machine if one does not open automatically.

**Note:** A virtual machine will not start on a host whose CPU is overloaded. By default, a host's CPU is considered overloaded if it has a load of more than 80% for 5 minutes, but these values can be changed using scheduling policies.

Opening a Console to a Virtual Machine

Use Remote Viewer to connect to a virtual machine.

Connecting to Virtual Machines
Opening a Serial Console to a Virtual Machine

You can access a virtual machine's serial console from the command line, instead of opening a console from the Administration Portal or the VM Portal. The serial console is emulated through VirtIO channels, using SSH and key pairs. The Engine acts as a proxy for the connection, provides information about virtual machine placement, and stores the authentication keys. You can add public keys for each user from either the Administration Portal or the VM Portal. You can access serial consoles only for those virtual machines for which you have appropriate permissions.

**Important:** To access the serial console of a virtual machine, the user must have **UserVmManager**, **SuperUser**, or **UserInstanceManager** permission on that virtual machine. These permissions must be explicitly defined for each user. It is not enough to assign these permissions to **Everyone**.

The serial console is accessed through TCP port 2222 on the Engine. This port is opened during engine-setup on new installations. To change the port, see ovirt-vmconsole/README.

The serial console relies on the ovirt-vmconsole and ovirt-vmconsole-proxy packages on the Engine, and on the ovirt-vmconsole and ovirt-vmconsole-host packages on the virtualization hosts. These packages are installed by default on new installations. To install the packages on existing installations, reinstall the host.

Enabling a Virtual Machine's Serial Console
Connecting to a Virtual Machine's Serial Console

On the client machine, connect to the virtual machine's serial console:
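For example, assuming the Engine is reachable at manager.example.com and your SSH public key has been added through the portal, the connection is made over TCP port 2222 as described above. The hostname and virtual machine name below are placeholders; verify the exact ovirt-vmconsole syntax against your version's help output:

```
# List the virtual machines whose serial consoles you can access:
ssh -t -p 2222 ovirt-vmconsole@manager.example.com list

# Connect to a specific virtual machine's serial console by name:
ssh -t -p 2222 ovirt-vmconsole@manager.example.com connect --vm-name=myvm
```

Running the first form without arguments presents an interactive menu of available consoles.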
Disconnecting from a Virtual Machine's Serial Console

Press any key followed by ~ . to close a serial console session. If the serial console session is disconnected abnormally, a TCP timeout occurs. You will be unable to reconnect to the virtual machine's serial console until the timeout period expires.

Automatically Connecting to a Virtual Machine

Once you have logged in, you can automatically connect to a single running virtual machine. This can be configured in the VM Portal.

Automatically Connecting to a Virtual Machine
The next time you log into the VM Portal, if you have only one running virtual machine, you will automatically connect to that machine.
Installing oVirt Guest Agents and Drivers
The oVirt guest agents and drivers provide additional information and functionality for Enterprise Linux and Windows virtual machines. Key features include the ability to monitor resource usage and to gracefully shut down or reboot virtual machines from the User Portal and the Administration Portal. Install the oVirt guest agents and drivers on each virtual machine on which this functionality is to be available.

oVirt Guest Drivers
oVirt Guest Agents and Tools
Installing the Guest Agents and Drivers on Enterprise Linux

The oVirt guest agents and drivers are installed on Enterprise Linux virtual machines using the ovirt-engine-guest-agent package provided by the oVirt agent repository.

Installing the Guest Agents and Drivers on Enterprise Linux
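Assuming the oVirt agent repository is already enabled inside the guest, the installation typically amounts to the following commands (the package name is the one given above; repository setup varies by distribution):

```
# Inside the Enterprise Linux guest:
yum install ovirt-engine-guest-agent

# Start the ovirt-guest-agent service and enable it at boot:
systemctl start ovirt-guest-agent.service
systemctl enable ovirt-guest-agent.service
```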
The guest agent now passes usage information to the oVirt Engine. The agent runs as a service called ovirt-guest-agent, which you can configure via the ovirt-guest-agent.conf configuration file in the /etc/ directory.

Installing the Guest Agent on an Atomic Host

On Atomic hosts, the oVirt guest agent should be installed as a system container.

Installing the Guest Agent on an Atomic Host

The command for a CentOS 7 Atomic host is essentially:

# atomic pull --storage=ostree ovirtguestagent/centos7-atomic

Note: Official documentation and a separate image registry are available for those with a RHV subscription.
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Virtual Machine Disks |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
Understanding Virtual Machine Storage

oVirt supports three storage types: NFS, iSCSI, and FCP. In each type, a host known as the Storage Pool Manager (SPM) manages access between hosts and storage. The SPM host is the only node that has full access within the storage pool; the SPM can modify the storage domain metadata and the pool's metadata. All other hosts can only access virtual machine hard disk image data.

By default, in an NFS, local, or POSIX-compliant data center, the SPM creates the virtual disk as a file in a file system, using a thin provisioned format. In iSCSI and other block-based data centers, the SPM creates a volume group on top of the Logical Unit Numbers (LUNs) provided, and makes logical volumes to use as virtual machine disks. Virtual machine disks on block-based storage are preallocated by default.

If the virtual disk is preallocated, a logical volume of the specified size in GB is created. The virtual disk can be mounted on a Red Hat Enterprise Linux server using kpartx, vgscan, vgchange, or mount to investigate the virtual machine's processes or problems.

If the virtual disk is thinly provisioned, a 1 GB logical volume is created. The logical volume is continuously monitored by the host on which the virtual machine is running. As soon as the usage nears a threshold, the host notifies the SPM, and the SPM extends the logical volume by 1 GB. The host is responsible for resuming the virtual machine after the logical volume has been extended. If the virtual machine goes into a paused state, it means that the SPM could not extend the disk in time. This occurs if the SPM is too busy or if there is not enough storage space.

A virtual disk with a preallocated (RAW) format has significantly faster write speeds than a virtual disk with a thin provisioning (QCOW2) format. Thin provisioning takes significantly less time to create a virtual disk. The thin provisioned format is suitable for non-I/O-intensive virtual machines.
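The 1 GB extension cycle described above can be sketched as a toy model. The function name and the 1 GB watermark are illustrative only; the real thresholds are internal to VDSM:

```shell
# allocated_after_writes WRITTEN_GB VIRTUAL_GB
# Prints the logical volume size (in GB) after the guest has written
# WRITTEN_GB: the volume starts at 1 GB, and the SPM extends it in 1 GB
# steps whenever usage comes within 1 GB of the current size, never
# exceeding the disk's virtual size.
allocated_after_writes() {
    local written=$1 virtual=$2 allocated=1
    while (( allocated < virtual && allocated - written < 1 )); do
        (( allocated += 1 ))    # SPM extends the logical volume by 1 GB
    done
    echo "$allocated"
}

allocated_after_writes 0 20    # freshly created disk: prints 1
allocated_after_writes 5 20    # after 5 GB written: prints 6
allocated_after_writes 20 20   # fully written disk: prints 20
```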
The preallocated format is recommended for virtual machines with high I/O writes. If a virtual machine is able to write more than 1 GB every four seconds, use preallocated disks where possible.

Understanding Virtual Disks

oVirt features Preallocated (thick provisioned) and Sparse (thin provisioned) storage options.

Preallocated: A preallocated virtual disk allocates all the storage required for a virtual machine up front. For example, a 20 GB preallocated logical volume created for the data partition of a virtual machine will take up 20 GB of storage space immediately upon creation.

Sparse: A sparse allocation allows an administrator to define the total storage to be assigned to the virtual machine, but the storage is only allocated when required. For example, a 20 GB thin provisioned logical volume would take up 0 GB of storage space when first created. When the operating system is installed, it takes up the size of the installed files, and the disk continues to grow as data is added, up to a maximum of 20 GB.

The size of a disk is listed in the Disks sub-tab for each virtual machine and template. The Virtual Size of a disk is the total amount of disk space that the virtual machine can use; it is the number that you enter in the Size(GB) field when a disk is created or edited. The Actual Size of a disk is the amount of disk space that has been allocated to the virtual machine so far. Preallocated disks show the same value for both fields. Sparse disks may show a different value in the Actual Size field from the value in the Virtual Size field, depending on how much of the disk space has been allocated.

Note: When creating a Cinder virtual disk, the format and type of the disk are handled internally by Cinder and are not managed by oVirt.

The possible combinations of storage types and formats are described in the following table.

Permitted Storage Combinations
Settings to Wipe Virtual Disks After Deletion

The wipe_after_delete flag, viewed in the Administration Portal as the Wipe After Delete check box, replaces used data with zeros when a virtual disk is deleted. If it is set to false, which is the default, deleting the disk opens up those blocks for reuse but does not wipe the data. It is therefore possible for this data to be recovered, because the blocks have not been returned to zero.

The wipe_after_delete flag only works on block storage. On file storage, for example NFS, the option does nothing, because the file system will ensure that no data exists.

Enabling wipe_after_delete for virtual disks is more secure, and is recommended if the virtual disk has contained any sensitive data. This is a more intensive operation, and users may experience degradation in performance and prolonged delete times.

Note: The wipe after delete functionality is not the same as secure delete, and cannot guarantee that the data is removed from the storage, just that new disks created on the same storage will not expose data from old disks.

The wipe_after_delete flag default can be changed to true during the setup process (see "Configuring the oVirt Engine" in the Installation Guide), or by using the engine configuration tool on the oVirt Engine. Restart the engine for the setting change to take effect.

Setting SANWipeAfterDelete to Default to True Using the Engine Configuration Tool

Run the engine configuration tool with the --set action:

# engine-config --set SANWipeAfterDelete=true

Restart the engine for the change to take effect:

# systemctl restart ovirt-engine.service

The /var/log/vdsm/vdsm.log file located on the host can be checked to confirm that a virtual disk was successfully wiped and deleted. For a successful wipe, the log file will contain the entry storage_domain_id/volume_id was zeroed and will be deleted.
For example:

a9cb0625-d5dc-49ab-8ad1-72722e82b0bf/a49351a7-15d8-4932-8d67-512a369f9d61 was zeroed and will be deleted

For a successful deletion, the log file will contain the entry finished with VG:storage_domain_id LVs: list_of_volume_ids, img: image_id. For example:

finished with VG:a9cb0625-d5dc-49ab-8ad1-72722e82b0bf LVs: {'a49351a7-15d8-4932-8d67-512a369f9d61': ImgsPar(imgs=['11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d'], parent='00000000-0000-0000-0000-000000000000')}, img: 11f8b3be-fa96-4f6a-bb83-14c9b12b6e0d

An unsuccessful wipe will display the log message zeroing storage_domain_id/volume_id failed. Zero and remove this volume manually, and an unsuccessful delete will display Remove failed for some of VG: storage_domain_id zeroed volumes: list_of_volume_ids.
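Checking the log for a particular volume can be wrapped in a small helper; the helper name is ours, and the search string is the successful-wipe entry quoted above:

```shell
# disk_wiped LOGFILE VOLUME_ID
# Succeeds when LOGFILE records that VOLUME_ID was zeroed. On a
# virtualization host, pass /var/log/vdsm/vdsm.log as LOGFILE.
disk_wiped() {
    grep -q "$2 was zeroed and will be deleted" "$1"
}

# Usage on a host:
#   disk_wiped /var/log/vdsm/vdsm.log a49351a7-15d8-4932-8d67-512a369f9d61
```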
Shareable Disks in oVirt

Some applications require storage to be shared between servers. oVirt allows you to mark virtual machine hard disks as Shareable and attach those disks to virtual machines. That way, a single virtual disk can be used by multiple cluster-aware guests.

Shared disks are not to be used in every situation. For applications like clustered database servers, and other highly available services, shared disks are appropriate. Attaching a shared disk to multiple guests that are not cluster-aware is likely to cause data corruption, because their reads and writes to the disk are not coordinated.

You cannot take a snapshot of a shared disk. Virtual disks that have snapshots taken of them cannot later be marked shareable. You can mark a disk shareable either when you create it, or by editing the disk later.
Read Only Disks in oVirt

Some applications require administrators to share data with read-only rights. You can do this by selecting the Read Only check box when creating or editing a disk attached to a virtual machine, via the Disks tab in the details pane of the virtual machine. That way, a single disk can be read by multiple cluster-aware guests, while an administrator maintains writing privileges. You cannot change the read-only status of a disk while the virtual machine is running.

Important: Mounting a journaled file system requires read-write access. Using the Read Only option is not appropriate for virtual machine disks that contain such file systems (e.g. EXT3, EXT4, or XFS).
Virtual Disk Tasks

Creating Floating Virtual Disks

You can create a virtual disk that does not belong to any virtual machine. You can then attach this disk to a single virtual machine, or to multiple virtual machines if the disk is shareable. Image disk creation is managed entirely by the Engine. Direct LUN disks require externally prepared targets that already exist. Cinder disks require access to an instance of OpenStack Volume that has been added to the oVirt environment using the External Providers window; see Adding an OpenStack Volume Cinder Instance for Storage Management for more information.

Creating Floating Virtual Disks

1. Select the Disks resource tab.
2. Click New.
3. Use the radio buttons to specify whether the virtual disk will be an Image, Direct LUN, or Cinder disk. Select the options required for your virtual disk. The options change based on the disk type selected. See Add Virtual Disk dialogue entries for more details on each option for each disk type.
4. Click OK.

Explanation of Settings in the New Virtual Disk Window

New Virtual Disk Settings: Image
The Direct LUN settings can be displayed in either Targets > LUNs or LUNs > Targets. Targets > LUNs sorts available LUNs according to the host on which they are discovered, whereas LUNs > Targets displays a single list of LUNs. New Virtual Disk Settings: Direct LUN
Fill in the fields in the Discover Targets section and click Discover to discover the target server. You can then click the Login All button to list the available LUNs on the target server and, using the radio buttons next to each LUN, select the LUN to add.

Using LUNs directly as virtual machine hard disk images removes a layer of abstraction between your virtual machines and their data. The following considerations must be made when using a direct LUN as a virtual machine hard disk image:

- Live storage migration of direct LUN hard disk images is not supported.
- Direct LUN disks are not included in virtual machine exports.
- Direct LUN disks are not included in virtual machine snapshots.

The Cinder settings form will be disabled if there are no available OpenStack Volume storage domains on which you have permissions to create a disk in the relevant Data Center. Cinder disks require access to an instance of OpenStack Volume that has been added to the oVirt environment using the External Providers window; see Adding an OpenStack Volume Cinder Instance for Storage Management for more information.

New Virtual Disk Settings: Cinder
Overview of Live Storage Migration

Virtual machine disks can be migrated from one storage domain to another while the virtual machine to which they are attached is running. This is referred to as live storage migration. When a disk attached to a running virtual machine is migrated, a snapshot of that disk's image chain is created in the source storage domain, and the entire image chain is replicated in the destination storage domain. As such, ensure that you have sufficient storage space in both the source storage domain and the destination storage domain to host both the disk image chain and the snapshot. A new snapshot is created on each live storage migration attempt, even when the migration fails.

Consider the following when using live storage migration:

- You can live migrate multiple disks at one time.
- Multiple disks for the same virtual machine can reside across more than one storage domain, but the image chain for each disk must reside on a single storage domain.
- You can live migrate disks between any two storage domains in the same data center.
- You cannot live migrate direct LUN hard disk images or disks marked as shareable.

Moving a Virtual Disk

Move a virtual disk that is attached to a virtual machine, or that acts as a floating virtual disk, from one storage domain to another. You can move a virtual disk that is attached to a running virtual machine; this is referred to as live storage migration. Alternatively, shut down the virtual machine before continuing.

Consider the following when moving a disk:

- You can move multiple disks at the same time.
- You can move disks between any two storage domains in the same data center.
- If the virtual disk is attached to a virtual machine that was created based on a template and used the thin provisioning storage allocation option, you must copy the disks for the template on which the virtual machine was based to the same storage domain as the virtual disk.

Moving a Virtual Disk

1. Select the Disks tab.
2. Select one or more virtual disks to move.
3. Click Move to open the Move Disk(s) window.
4. From the Target list, select the storage domain to which the virtual disk(s) will be moved.
5. From the Disk Profile list, select a profile for the disk(s), if applicable.
6. Click OK.

The virtual disks are moved to the target storage domain, and have a status of Locked while being moved.

Copying a Virtual Disk

You can copy a virtual disk from one storage domain to another. The copied disk can be attached to virtual machines.

1. Select the Disks tab.
2. Select the virtual disks to copy.
3. Click the Copy button to open the Copy Disk(s) window.
4. Optionally, enter an alias in the Alias text field.
5. Use the Target drop-down menus to select the storage domain to which the virtual disk will be copied.
6. Click OK.

The virtual disks are copied to the target storage domain, and have a status of Locked while being copied.

Uploading a Disk Image to a Storage Domain

QEMU-compatible virtual machine disk images can be uploaded from your local machine to an oVirt storage domain and attached to virtual machines. Virtual machine disk image types must be either QCOW2 or Raw. Disks created from a QCOW2 disk image cannot be shareable, and the QCOW2 disk image file must not have a backing file.

Prerequisites:

- You must configure the Image I/O Proxy when running engine-setup. See "Configuring the oVirt Engine" in the Installation Guide for more information.
- You must import the required certificate authority into the web browser used to access the Administration Portal.
- Internet Explorer 10, Firefox 35, or Chrome 13 or greater is required to perform this upload procedure. Previous browser versions do not support the required HTML5 APIs.

Note: To import the certificate authority, browse to https://engine_address/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA and select all the trust settings.
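Before uploading, you can confirm that a QCOW2 image satisfies the no-backing-file requirement with qemu-img (available from the qemu-img package on most distributions; the image file names below are placeholders):

```
# Inspect the image. The output must report "file format: qcow2"
# (or "raw") and must NOT contain a "backing file:" line.
qemu-img info disk.qcow2

# An image that does reference a backing file can be flattened into a
# standalone copy before uploading:
qemu-img convert -O qcow2 disk.qcow2 standalone.qcow2
```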
Uploading a Disk Image to a Storage Domain

1. Open the Upload Image screen. From the Disks tab, select Start from the Upload drop-down. Alternatively, from the Storage tab, select the storage domain, then select the Disks sub-tab, and select Start from the Upload drop-down.
2. In the Upload Image screen, click Browse and select the image on the local disk.
3. Set Image Type to QCOW2 or Raw.
4. Fill in the Disk Option fields. See Add Virtual Disk dialogue entries for a description of the relevant fields.
5. Click OK.

A progress bar will indicate the status of the upload. You can also pause, cancel, or resume uploads from the Upload drop-down.

Importing a Disk Image from an Imported Storage Domain

Import floating virtual disks from an imported storage domain using the Disk Import tab of the details pane.

Note: Only QEMU-compatible disks can be imported into the Engine.

Importing a Disk Image

1. Select a storage domain that has been imported into the data center.
2. In the details pane, click Disk Import.
3. Select one or more disk images and click Import to open the Import Disk(s) window.
4. Select the appropriate Disk Profile for each disk.
5. Click OK to import the selected disks.

Importing an Unregistered Disk Image from an Imported Storage Domain

Import floating virtual disks from a storage domain using the Disk Import tab of the details pane. Floating disks created outside of an oVirt environment are not registered with the Engine. Scan the storage domain to identify unregistered floating disks to be imported.

Note: Only QEMU-compatible disks can be imported into the Engine.

Importing a Disk Image

1. Select a storage domain that has been imported into the data center.
2. Right-click the storage domain and select Scan Disks so that the Engine can identify unregistered disks.
3. In the details pane, click Disk Import.
4. Select one or more disk images and click Import to open the Import Disk(s) window.
5. Select the appropriate Disk Profile for each disk.
6. Click OK to import the selected disks.
Importing a Virtual Disk Image from an OpenStack Image Service

Virtual disk images managed by an OpenStack Image Service can be imported into the oVirt Engine if that OpenStack Image Service has been added to the Engine as an external provider.

1. Click the Storage resource tab and select the OpenStack Image Service domain from the results list.
2. Select the image to import in the Images tab of the details pane.
3. Click Import to open the Import Image(s) window.
4. From the Data Center drop-down menu, select the data center into which the virtual disk image will be imported.
5. From the Domain Name drop-down menu, select the storage domain in which the virtual disk image will be stored.
6. Optionally, select a quota from the Quota drop-down menu to apply a quota to the virtual disk image.
7. Click OK to import the image.

The image is imported as a floating disk and is displayed in the results list of the Disks resource tab. It can now be attached to a virtual machine.

Exporting a Virtual Machine Disk to an OpenStack Image Service

Virtual machine disks can be exported to an OpenStack Image Service that has been added to the Engine as an external provider.

1. Click the Disks resource tab.
2. Select the disks to export.
3. Click the Export button to open the Export Image(s) window.
4. From the Domain Name drop-down list, select the OpenStack Image Service to which the disks will be exported.
5. From the Quota drop-down list, select a quota for the disks if a quota is to be applied.
6. Click OK.

The virtual machine disks are exported to the specified OpenStack Image Service, where they are managed as virtual machine disk images.

Important: Virtual machine disks can only be exported if they do not have multiple volumes, are not thinly provisioned, and do not have any snapshots.
Virtual Disks and Permissions

Managing System Permissions for a Virtual Disk

As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center, with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster.

oVirt Engine provides two default virtual disk user roles, but no default virtual disk administrator roles. One of these user roles, the DiskCreator role, enables the administration of virtual disks from the User Portal. This role can be applied to specific virtual machines, to a data center, to a specific storage domain, or to the whole virtualized environment; this is useful to allow different users to manage different virtual resources.

The virtual disk creator role permits the following actions:

- Create, edit, and remove virtual disks associated with a virtual machine or other resources.
- Edit user permissions for virtual disks.

Note: You can only assign roles and permissions to existing users.

Virtual Disk User Roles Explained

Virtual Disk User Permission Roles

The table below describes the user roles and privileges applicable to using and administrating virtual machine disks in the User Portal.

oVirt System Administrator Roles
Assigning an Administrator or User Role to a Resource

Assign administrator or user roles to resources to allow users to access or manage that resource.

Assigning a Role to a Resource

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
3. Click Add.
4. Enter the name or user name of an existing user into the Search text box and click Go. Select a user from the resulting list of possible matches.
5. Select a role from the Role to Assign: drop-down list.
6. Click OK.

You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource.

Removing an Administrator or User Role from a Resource

Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource.

Removing a Role from a Resource

1. Use the resource tabs, tree mode, or the search function to find and select the resource in the results list.
2. Click the Permissions tab in the details pane to list the assigned users, the user's role, and the inherited permissions for the selected resource.
3. Select the user to remove from the resource.
4. Click Remove. The Remove Permission window opens to confirm permissions removal.
5. Click OK.

You have removed the user's role, and the associated permissions, from the resource.
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Virtual Machine Pools |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
Introduction to Virtual Machine Pools

A virtual machine pool is a group of virtual machines that are all clones of the same template and that can be used on demand by any user in a given group. Virtual machine pools allow administrators to rapidly configure a set of generalized virtual machines for users.

Users access a virtual machine pool by taking a virtual machine from the pool. When a user takes a virtual machine from a pool, they are provided with any one of the virtual machines in the pool, if any are available. That virtual machine will have the same operating system and configuration as the template on which the pool was based, but users may not receive the same member of the pool each time they take a virtual machine. Users can also take multiple virtual machines from the same virtual machine pool, depending on the configuration of that pool.

Virtual machines in a virtual machine pool are stateless, meaning that data is not persistent across reboots. However, if a user configures console options for a virtual machine taken from a virtual machine pool, those options will be set as the default for that user for that virtual machine pool.

In principle, virtual machines in a pool are started when taken by a user, and shut down when the user is finished. However, virtual machine pools can also contain pre-started virtual machines. Pre-started virtual machines are kept in an up state, and remain idle until they are taken by a user. This allows users to start using such virtual machines immediately, but these virtual machines consume system resources even while not in use, due to sitting idle.

Note: Virtual machines taken from a pool are not stateless when accessed from the Administration Portal, because administrators need to be able to write changes to the disk if necessary.
Virtual Machine Pool Tasks

Creating a Virtual Machine Pool

You can create a virtual machine pool that contains multiple virtual machines that have been created based on a common template.

Creating a Virtual Machine Pool
You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in the Virtual Machines resource tab, or in the details pane of the Pools resource tab; a virtual machine in a pool is distinguished from independent virtual machines by its icon.

Explanation of Settings and Controls in the New Pool and Edit Pool Windows

New Pool and Edit Pool General Settings Explained

The following table details the information required on the General tab of the New Pool and Edit Pool windows that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine window.

General settings
New and Edit Pool Type Settings Explained

The following table details the information required on the Type tab of the New Pool and Edit Pool windows.

Type settings
New Pool and Edit Pool Console Settings Explained

The following table details the information required on the Console tab of the New Pool or Edit Pool window that is specific to virtual machine pools. All other settings are identical to those in the New Virtual Machine and Edit Virtual Machine windows.

Console settings
Virtual Machine Pool Host Settings Explained

The following table details the options available on the Host tab of the New Pool and Edit Pool windows.

Virtual Machine Pool: Host Settings
Editing a Virtual Machine Pool

After a virtual machine pool has been created, its properties can be edited. The properties available when editing a virtual machine pool are identical to those available when creating a new virtual machine pool, except that the Number of VMs property is replaced by Increase number of VMs in pool by.

Note: When editing a virtual machine pool, the changes introduced affect only new virtual machines. Virtual machines that already existed at the time of the changes remain unaffected.

Editing a Virtual Machine Pool
Prestarting Virtual Machines in a Pool

The virtual machines in a virtual machine pool are powered down by default. When a user requests a virtual machine from a pool, a machine is powered up and assigned to the user. In contrast, a prestarted virtual machine is already running and waiting to be assigned to a user, decreasing the amount of time a user has to wait before being able to access a machine. When a prestarted virtual machine is shut down, it is returned to the pool and restored to its original state. The maximum number of prestarted virtual machines is the number of virtual machines in the pool.

Prestarted virtual machines are suitable for environments in which users require immediate access to virtual machines that are not specifically assigned to them. Only automatic pools can have prestarted virtual machines.

Prestarting Virtual Machines in a Pool
You have set a number of prestarted virtual machines in a pool. The prestarted machines are running and available for use.

Adding Virtual Machines to a Virtual Machine Pool

If you require more virtual machines than originally provisioned in a virtual machine pool, add more machines to the pool.

Adding Virtual Machines to a Virtual Machine Pool
You have added more virtual machines to the virtual machine pool.

Detaching Virtual Machines from a Virtual Machine Pool

You can detach virtual machines from a virtual machine pool. Detaching a virtual machine removes it from the pool, so that it becomes an independent virtual machine.

Detaching Virtual Machines from a Virtual Machine Pool
Click the Virtual Machines tab in the details pane to list the virtual machines in the pool.
Note: The virtual machine still exists in the environment and can be viewed and accessed from the Virtual Machines resource tab. Note that the icon changes to denote that the detached virtual machine is an independent virtual machine. You have detached a virtual machine from the virtual machine pool. Removing a Virtual Machine Pool You can remove a virtual machine pool from a data center. You must first either delete or detach all of the virtual machines in the pool. Detaching virtual machines from the pool will preserve them as independent virtual machines. Removing a Virtual Machine Pool
You have removed the pool from the data center. Pools and Permissions Managing System Permissions for a Virtual Machine Pool As the SuperUser, the system administrator manages all aspects of the Administration Portal. More specific administrative roles can be assigned to other users. These restricted administrator roles are useful for granting a user administrative privileges that limit them to a specific resource. For example, a DataCenterAdmin role has administrator privileges only for the assigned data center with the exception of the storage for that data center, and a ClusterAdmin has administrator privileges only for the assigned cluster. A virtual machine pool administrator is a system administration role for virtual machine pools in a data center. This role can be applied to specific virtual machine pools, to a data center, or to the whole virtualized environment; this is useful to allow different users to manage certain virtual machine pool resources. The virtual machine pool administrator role permits the following actions:
Note: You can only assign roles and permissions to existing users. Virtual Machine Pool Administrator Roles Explained Pool Permission Roles The table below describes the administrator roles and privileges applicable to pool administration. oVirt System Administrator Roles
Assigning an Administrator or User Role to a Resource Assign administrator or user roles to resources to allow users to access or manage that resource. Assigning a Role to a Resource
You have assigned a role to a user; the user now has the inherited permissions of that role enabled for that resource. Removing an Administrator or User Role from a Resource Remove an administrator or user role from a resource; the user loses the inherited permissions associated with the role for that resource. Removing a Role from a Resource
You have removed the user's role, and the associated permissions, from the resource. Trusted Compute Pools Trusted compute pools are secure clusters based on Intel Trusted Execution Technology (Intel TXT). Trusted clusters only allow hosts that are verified by Intel's OpenAttestation, which measures the integrity of the host's hardware and software against a White List database. Trusted hosts and the virtual machines running on them can be assigned tasks that require higher security. For more information on Intel TXT, trusted systems, and attestation, see https://software.intel.com/en-us/articles/intel-trusted-execution-technology-intel-txt-enabling-guide. Creating a trusted compute pool involves the following steps:
For information on installing an OpenAttestation server, installing the OpenAttestation agent on hosts, and creating a White List database, see https://github.com/OpenAttestation/OpenAttestation/wiki. Connecting an OpenAttestation Server to the Engine Before you can create a trusted cluster, the oVirt Engine must be configured to recognize the OpenAttestation server. Use engine-config to add the OpenAttestation server's FQDN or IP address: # engine-config -s AttestationServer=attestationserver.example.com The following settings can also be changed if required: OpenAttestation Settings for engine-config
Creating a Trusted Cluster Trusted clusters communicate with an OpenAttestation server to assess the security of hosts. When a host is added to a trusted cluster, the OpenAttestation server measures the host's hardware and software against a White List database. Virtual machines can be migrated between trusted hosts in the trusted cluster, allowing for high availability in a secure environment. Creating a Trusted Cluster
Adding a Trusted Host Enterprise Linux hosts can be added to trusted clusters and measured against a White List database by the OpenAttestation server. Hosts must meet the following requirements to be trusted by the OpenAttestation server:
Adding a Trusted Host
After the host is added to the trusted cluster, it is assessed by the OpenAttestation server. If a host is not trusted by the OpenAttestation server, it will move to a Non Operational state and should be removed from the trusted cluster.
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Snapshots |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Will be added soon ...
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Templates |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
|
A template is a pre-installed, pre-configured virtual machine. Templates are beneficial when you need to deploy a large number of similar virtual machines: they reduce both the time needed to deploy each virtual machine and the amount of disk space required. A template does not need to be cloned for every instance; instead, a small overlay can be placed on top of the base image to store only the changes for one particular instance.
To convert a virtual machine into a template, you must first generalize, or "seal", the virtual machine.
The following steps seal an existing virtual machine so that it can be converted into a template:
Step 1: Log in to the virtual machine console Connect to the virtual machine over SSH as the root user.
Step 2: Remove the SSH host keys using the rm command. [root@linuxtechi ~]# rm -f /etc/ssh/ssh_host_*
Step 3: Reset the hostname to localhost [root@linuxtechi ~]# hostnamectl set-hostname 'localhost'
Step 4: Remove host-specific information Remove the persistent udev rules:
[root@linuxtechi ~]# rm -f /etc/udev/rules.d/*-persistent-*.rules
Step 5: Remove the RHN system ID associated with the virtual machine [root@linuxtechi ~]# rm -f /etc/sysconfig/rhn/systemid
Step 6: Run sys-unconfig Run the sys-unconfig command to complete the sealing process; it also shuts down the virtual machine. [root@linuxtechi ~]# sys-unconfig
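The cleanup commands in Steps 2 through 5 can be collected into one short script. The sketch below dry-runs the same rm patterns against a scratch directory tree so it can be tested without touching a live guest; on a real virtual machine you would run the commands against the absolute paths, then finish with hostnamectl and sys-unconfig.

```shell
# Dry-run of the template-sealing cleanup against a scratch tree.
ROOT="$(mktemp -d)"
mkdir -p "$ROOT/etc/ssh" "$ROOT/etc/udev/rules.d" "$ROOT/etc/sysconfig/rhn"
touch "$ROOT/etc/ssh/ssh_host_rsa_key" \
      "$ROOT/etc/udev/rules.d/70-persistent-net.rules" \
      "$ROOT/etc/sysconfig/rhn/systemid"

# Step 2: remove SSH host keys so each clone generates its own.
rm -f "$ROOT"/etc/ssh/ssh_host_*
# Step 4: remove host-specific persistent udev rules.
rm -f "$ROOT"/etc/udev/rules.d/*-persistent-*.rules
# Step 5: remove the RHN system ID.
rm -f "$ROOT"/etc/sysconfig/rhn/systemid
```

On a real guest, omit the scratch-tree setup and use the absolute /etc paths shown in the steps above.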
The virtual machine is now ready to become a template. Right-click the machine and select the "Make Template" option. Specify the Name and Description of the template and click OK. It takes a couple of minutes to create the template from the virtual machine. Once done, go to the Templates tab and verify that the newly created template is listed.
You can now deploy virtual machines from the template. Go to the Virtual Machines tab, click "New VM", and select the template created in the steps above. Specify the VM name and description. When you click OK, the virtual machine is created from the template. After a couple of minutes, the virtual machine "test_server1" is successfully launched from the template.
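Deployment from a template can likewise be scripted against the oVirt REST API. The sketch below only constructs the request body; the template name, cluster name, and engine details are illustrative placeholders, and the POST itself is left commented out so the snippet can be dry-run.

```shell
# Request body for creating a VM from a template (sketch; names are placeholders).
BODY='<vm><name>test_server1</name><cluster><name>Default</name></cluster><template><name>my_template</name></template></vm>'
echo "$BODY"
# To apply against a live engine (requires valid credentials):
# curl -k -u admin@internal:"$PASSWORD" -X POST \
#   -H 'Content-Type: application/xml' -d "$BODY" \
#   "https://engine.example.com/ovirt-engine/api/vms"
```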
|
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Storage |
||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
If you do not yet have an appropriate FCP data center, select (none).
Important: All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
The new FCP data domain displays in Storage → Domains. It will remain with a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center.
/var/lib/exports/iso 10.1.2.0/255.255.255.0(rw) host01.example.com(rw) host02.example.com(rw)
The example above allows read and write access to a single /24 network and two specific hosts. /var/lib/exports/iso is the default file path for the ISO domain. See the exports(5) man page for further formatting options.
exportfs -ra (or exportfs -av)
showmount -e
Note that if you manually edit the /etc/exports file after running engine-setup, running engine-cleanup later will not undo the changes.
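The export entry can be staged and checked before touching the live configuration. The sketch below writes the example entry to a scratch file so it can be dry-run; on the real server you would append the line to /etc/exports instead and then re-export.

```shell
# Dry-run: stage the ISO-domain export line in a scratch exports file.
EXPORTS="$(mktemp)"
echo '/var/lib/exports/iso 10.1.2.0/255.255.255.0(rw) host01.example.com(rw) host02.example.com(rw)' >> "$EXPORTS"
cat "$EXPORTS"
# On the real server, append the line to /etc/exports instead, then:
# exportfs -ra    # re-export everything listed in /etc/exports
# showmount -e    # verify the export is visible
```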
The ISO domain is now attached to the data center and is automatically activated.
subscription-manager register
subscription-manager list --available
subscription-manager attach --pool=poolid
Note: To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms
yum install rh-postgresql95 rh-postgresql95-postgresql-contrib
scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql
su - postgres -c 'scl enable rh-postgresql95 -- psql'
postgres=# create role user_name with login encrypted password 'password';
postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
postgres=# \c database_name
database_name=# CREATE EXTENSION "uuid-ossp";
database_name=# CREATE LANGUAGE plpgsql;
host database_name user_name ::0/32 md5
host database_name user_name ::0/128 md5
listen_addresses='*'
autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'
firewall-cmd --zone=public --add-service=postgresql
firewall-cmd --permanent --zone=public --add-service=postgresql
systemctl restart rh-postgresql95-postgresql
subscription-manager register
subscription-manager list --available
subscription-manager attach --pool=poolid
Note: To find out which subscriptions are currently attached, run:
subscription-manager list --consumed
To list all enabled repositories, run:
yum repolist
subscription-manager repos --disable=*
subscription-manager repos --enable=rhel-7-server-rpms
subscription-manager repos --enable=rhel-7-server-rhv-4.2-manager-rpms
yum install rh-postgresql95 rh-postgresql95-postgresql-contrib
scl enable rh-postgresql95 -- postgresql-setup --initdb
systemctl enable rh-postgresql95-postgresql
systemctl start rh-postgresql95-postgresql
su - postgres -c 'scl enable rh-postgresql95 -- psql'
postgres=# create role user_name with login encrypted password 'password';
postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';
postgres=# \c database_name
database_name=# CREATE EXTENSION "uuid-ossp";
database_name=# CREATE LANGUAGE plpgsql;
host [database name] [user name] 0.0.0.0/0 md5
host [database name] [user name] ::/32 md5
host [database name] [user name] ::/128 md5
autovacuum_vacuum_scale_factor='0.01'
autovacuum_analyze_scale_factor='0.075'
autovacuum_max_workers='6'
maintenance_work_mem='65536'
max_connections='150'
work_mem='8192'
systemctl restart rh-postgresql95-postgresql |