Automated Oracle VM up and running in just 7 declarative statements

Without the correct tools, setting up an Oracle VM farm is very difficult. Using our ovm_config Puppet module, you can now automatically create and maintain your Oracle VM infrastructure with just 7 simple declarative Puppet statements.

The demo setup

In our last blog post we showed you how to set up a development or learning environment on VirtualBox. In this blog post, we are going to show you the Oracle VM Puppet setup we use in that demo. We will explain what we do and why, and show you how easy it is to get your Oracle VM infrastructure up and running with our Puppet module.

Connecting to the Oracle VM Manager

Before we can let Puppet manage the Oracle VM infrastructure, Puppet needs to know how it can connect to the Oracle VM Manager. The Puppet module uses the RESTful interface of the Oracle VM Manager to fetch information and to execute actions if and when needed.

To do so, we use the ovm_manager Puppet type and tell it how we need to connect to the RESTful interface.

ovm_manager { 'default':
  username    => 'admin',
  password    => $password,
  manager_url => '',
  port        => 7002,
}

You need to specify the username, password, the manager URL and the port number required to connect.

After this Puppet declaration, you can use all Oracle VM Puppet types, and it will know where to get the information, and trigger required actions.

In our example we use the name default. But you can use any name you want. When you have a large infrastructure with multiple Oracle VM Managers, you can use the name to distinguish between them. This means you can use one set of Puppet manifests to manage multiple Oracle VM managers.
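For instance, managing two managers from one set of manifests could look like this (a hypothetical sketch: the titles 'production' and 'test', the hostnames, and the password variables are assumptions, not values from the demo):

```puppet
# Hypothetical sketch: two Oracle VM Managers managed from one manifest.
# The titles and connection details below are assumed placeholders.
ovm_manager { 'production':
  username    => 'admin',
  password    => $production_password,
  manager_url => 'https://ovmm-prod.example.com',
  port        => 7002,
}

ovm_manager { 'test':
  username    => 'admin',
  password    => $test_password,
  manager_url => 'https://ovmm-test.example.com',
  port        => 7002,
}
```

Resources for the second manager would then use titles starting with test/ instead of default/.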

Setting up the server pool and a VM server

Although not required by Oracle VM, we strongly suggest you always create a server pool. Here is the Oracle documentation on what a server pool is and why and how you should use it.

The Puppet code to declare a server pool is:

ovm_server_pool { 'default/server_pool_1':
  ensure          => 'present',
  description     => 'Server Pool Number 1',
  keymap          => 'en-us',
  virtual_ip      => '',
  vm_start_policy => 'BEST_SERVER',
}

As you can see, it is very easy. The next step is making sure our VM server is known to Oracle VM and added to the server pool. Here is the Puppet declaration to do this:

ovm_server { 'default/server_pool_1/':
  ensure     => 'present',
  ip_address => '',
  password   => $password,
}

Again, a very simple and readable piece of Puppet code.

Adding storage

Before we can create VMs, we need to add some storage to the Oracle VM infrastructure. In the example, we use an NFS-based setup, but you can also add other types of storage.

Again, we have to tell the Oracle VM Manager how to find the NFS server and how to access its content.

ovm_file_server { 'default/nfs_server_1':
  ensure          => 'present',
  access_host     => '',
  admin_servers   => [''],
  uniform_exports => 'true',
}

Creating a repository

The next step before we can create a VM is to create a location where Oracle VM can put the virtual disks, virtual ISOs, and templates. This is called a repository. Here is the Oracle documentation explaining what a repository is and how you should use it. The Puppet code to declare a repo is:

ovm_repository { 'default/repo01':
  ensure               => 'present',
  share_path           => '/repo01/',
  file_system_name     => 'nfs on',
  presented_to_servers => [''],
}

It needs the name of the earlier-declared NFS server and the name of an exported directory on it as the value of the file_system_name property.

Ready to create a VM

The Puppet code seen so far is all that is required for the base setup. Now we are ready to create as many VMs and virtual disks as you need and your servers support.

Let’s first create a virtual disk for our VM:

ovm_virtual_disk { 'default/repo01/vm01_disk':
  ensure    => 'present',
  disk_type => 'VIRTUAL_DISK',
  size      => '2G',
}

As you can see in the title part (between the { and the :), we create a disk called vm01_disk in the Oracle VM repository we declared earlier, called repo01.

The next step is to create the VM in the declared server pool and connect the newly declared virtual disk:

ovm_vm { 'default/repo01/server_pool_1/':
  ensure                  => 'present',
  boot_order              => ['DISK'],
  cpu_count               => 2,
  cpu_count_limit         => 2,
  cpu_priority            => 50,
  cpu_utilization_cap     => 100,
  description             => 'Test 1',
  keymap_name             => 'en-us',
  memory                  => 192,
  memory_limit            => 192,
  os_type                 => 'Oracle Linux 7',
  restart_action_on_crash => 'RESTART',
  vm_domain_type          => 'XEN_HVM',
  vm_mouse_type           => 'OS_DEFAULT',
  vm_run_state            => 'RUNNING',
  vm_start_policy         => 'BEST_SERVER',
  virtual_disks           => ['0:repo01/vm01_disk'],
}

Now we have a VM running in our Oracle VM infrastructure.
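Because the setup is declarative, lifecycle changes become simple property changes. Assuming the module accepts a value like 'STOPPED' for vm_run_state (an assumption on our part; the post only shows 'RUNNING'), stopping the VM would just mean changing that one property and re-applying the manifest:

```puppet
ovm_vm { 'default/repo01/server_pool_1/':
  ensure       => 'present',
  # ...other properties as declared above...
  vm_run_state => 'STOPPED',  # was 'RUNNING'; 'STOPPED' is an assumed value
}
```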

Extending the setup

In the example, we structured the code using the Puppet roles-and-profiles pattern. Here is the Oracle VM-specific profile for this example:

class profile::example() {
  contain ::profile::example::base_setup
  contain ::profile::example::storage_setup
  contain ::profile::example::network_setup
  contain ::profile::example::vms
}


You can see the structure: first the base setup, then the storage and network setup, and finally the VMs. The Puppet code to declare the VMs looks like this:

class profile::example::vms() {
  require ::profile::example::vms::myvm1
}

As you can see, it is very simple to extend the Puppet code to create and define all the required VMs for your infrastructure.
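A class like ::profile::example::vms::myvm1 could then bundle the disk and VM declarations shown earlier (a sketch: the per-VM class layout is our assumption; the resources themselves are the ones from this post, abbreviated):

```puppet
# Hypothetical sketch: one profile class per VM, bundling its resources.
class profile::example::vms::myvm1() {
  ovm_virtual_disk { 'default/repo01/vm01_disk':
    ensure    => 'present',
    disk_type => 'VIRTUAL_DISK',
    size      => '2G',
  }

  ovm_vm { 'default/repo01/server_pool_1/':
    ensure        => 'present',
    cpu_count     => 2,
    memory        => 192,
    os_type       => 'Oracle Linux 7',
    vm_run_state  => 'RUNNING',
    virtual_disks => ['0:repo01/vm01_disk'],
    # ...remaining properties as in the ovm_vm example above
  }
}
```

Adding a second VM is then just another class plus another require line in profile::example::vms.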


We hope this article gives you a glimpse of how easy it becomes to set up, manage, and extend your Oracle VM infrastructure using our ovm_config Oracle VM Puppet module. If you are interested, contact us so we can discuss how you can use these Puppet modules to speed up your Oracle VM provisioning and deployment.