Overview

This resource allows you to manage file persistence stores in a WebLogic domain.

Here is an example of how to use wls_file_persistence_store:

# this will use default as wls_setting identifier
wls_file_persistence_store{'jmsFile1':
  ensure     => 'present',
  directory  => 'persistence1',
  target     => ['wlsServer1'],
  targettype => ['Server'],
}

In this example, you are managing a file persistence store in the default domain. When you want to manage a file persistence store in a specific domain, you can use:

wls_file_persistence_store{'mydomain/jmsFile2':
  ensure     => 'present',
  directory  => 'persistence2',
  target     => ['wlsServer2'],
  targettype => ['Server'],
}

Experience the Power of Puppet for WebLogic

If you want to play and experiment with Puppet and WebLogic, take a look at our playgrounds, where we provide a pre-installed environment in which you can experiment quickly and easily.


Attributes

Attribute Name Short Description
block_size The smallest addressable block, in bytes, of a file.
cache_directory The location of the cache directory for Direct-Write-With-Cache, ignored for other policies.
deployment_order A priority that the server uses to determine when it deploys an item.
directory The file persistence store directory name.
disable_autorequire Puppet supports automatic ordering of resources by autorequire.
disable_corrective_change Disable the modification of a resource when Puppet decides it is a corrective change.
disable_corrective_ensure Disable the creation or removal of a resource when Puppet decides it is a corrective change.
distribution_policy Specifies how the instances of a configured JMS artifact are named and distributed when deployed to a cluster.
domain With this parameter, you identify the domain your object is in.
ensure The basic property that the resource should be in.
failback_delay_seconds Specifies the amount of time, in seconds, to delay before failing a cluster targeted JMS artifact instance back to its preferred server after the preferred server failed and was restarted.
file_locking_enabled Determines whether OS file locking is used.
file_persistence_name The file persistence name.
initial_boot_delay_seconds Specifies the amount of time, in seconds, to delay before starting a cluster targeted JMS instance on a newly booted WebLogic server.
initial_size The initial file size, in bytes.
io_buffer_size The I/O buffer size, in bytes, automatically rounded down to the nearest power of 2.
logical_name The name used by subsystems to refer to different stores on different servers using the same name.
max_file_size The maximum file size, in bytes.
max_window_buffer_size The maximum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM’s address space per primary store file.
migration_policy Controls migration and restart behavior of cluster targeted JMS service artifact instances.
min_window_buffer_size The minimum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM’s address space per primary store file.
name The name.
notes Optional information that you can include to describe this configuration.
number_of_restart_attempts Specifies the number of restart attempts before migrating a failed JMS artifact instance to another server in the WebLogic cluster.
partial_cluster_stability_delay_seconds Specifies the amount of time, in seconds, to delay before a partially started cluster starts all cluster targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure.
provider The specific backend to use for this resource.
restart_in_place Enables periodic automatic restart of failed cluster targeted JMS artifact instance(s) running on healthy WebLogic Server instances.
seconds_between_restarts Specifies the amount of time, in seconds, to wait in between attempts to restart a failed service instance.
synchronous_write_policy The disk write policy that determines how the file store writes data to disk.
tags Return all tags on this Configuration MBean.
target An array of target names.
targettype An array of target types.
timeout Timeout for applying a resource.
xa_resource_name Overrides the name of the XAResource that this store registers with JTA.

block_size

The smallest addressable block, in bytes, of a file. When a native wlfileio driver is available and the block size has not been configured by the user, the store selects the minimum OS specific value for unbuffered (direct) I/O, if it is within the range [512, 8192]. A file store’s block size does not change once the file store creates its files. Changes to block size only take effect for new file stores or after the current files have been deleted. See “Tuning the Persistent Store” in Tuning Performance of Oracle WebLogic Server.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   block_size => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:block_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

cache_directory

The location of the cache directory for Direct-Write-With-Cache, ignored for other policies. When Direct-Write-With-Cache is specified as the SynchronousWritePolicy, cache files are created in addition to primary files (see Directory for the location of primary files). If a cache directory location is specified, the cache file path is CacheDirectory/WLStoreCache/StoreNameFileNum.DAT.cache. When specified, Oracle recommends using absolute paths, but if the directory location is a relative path, then CacheDirectory is created relative to the WebLogic Server instance's home directory. If "" or Null is specified, the Cache Directory is located in the current operating system temp directory as determined by the java.io.tmpdir Java System property (JDK's default: /tmp on UNIX, %TEMP% on Windows) and is TempDirectory/WLStoreCache/DomainName/unique-id/StoreNameFileNum.DAT.cache. The value of java.io.tmpdir varies between operating systems and configurations, and can be overridden by passing -Djava.io.tmpdir=My_path on the JVM command line. Considerations:

  • Security: Some users may want to set specific directory permissions to limit access to the cache directory, especially if there are custom configured user access limitations on the primary directory. For a complete guide to WebLogic security, see “Securing a Production Environment for Oracle WebLogic Server.”
  • Additional Disk Space Usage: Cache files consume the same amount of disk space as the primary store files that they mirror. See Directory for the location of primary store files.
  • Performance: For the best performance, a cache directory should be located in local storage instead of NAS/SAN (remote) storage, preferably in the operating system’s temp directory. Relative paths should be avoided, as relative paths are located based on the domain installation, which is typically on remote storage. It is safe to delete a cache directory while the store is not running, but this may slow down the next store boot.
  • Preventing Corruption and File Locking: Two same named stores must not be configured to share the same primary or cache directory. There are store file locking checks that are designed to detect such conflicts and prevent corruption by failing the store boot, but it is not recommended to depend on the file locking feature for correctness. See Enable File Locking.
  • Boot Recovery: Cache files are reused to speed up the File Store boot and recovery process, but only if the store’s host WebLogic Server instance has been shut down cleanly prior to the current boot. For example, cache files are not re-used and are instead fully recreated: after a kill -9, after an OS or JVM crash, or after an off-line change to the primary files, such as a store admin compaction. When cache files are recreated, a Warning log message 280102 is generated.
  • Fail-Over/Migration Recovery: A file store safely recovers its data without its cache directory. Therefore, a cache directory does not need to be copied or otherwise made accessible after a fail-over or migration, and similarly does not need to be placed in NAS/SAN storage. A Warning log message 280102, which is generated to indicate the need to recreate the cache on the new host system, can be ignored.
  • Cache File Cleanup: To prevent unused cache files from consuming disk space, test and developer environments should periodically delete cache files.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   cache_directory => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:cache_directory'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

deployment_order

A priority that the server uses to determine when it deploys an item. The priority is relative to other deployable items of the same type. For example, the server prioritizes and deploys all EJBs before it prioritizes and deploys startup classes. Items with the lowest Deployment Order value are deployed first. There is no guarantee on the order of deployments with equal Deployment Order values. There is no guarantee of ordering across clusters.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   deployment_order => '1000',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:deployment_order'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

directory

The file persistence store directory name.

Back to overview of wls_file_persistence_store

disable_autorequire

Puppet supports automatic ordering of resources by autorequire. Sometimes, however, this causes issues. Setting this parameter to true disables autorequire for this specific resource.

USE WITH CAUTION!!

Here is an example of how to use this:

...{'domain_name/...':
  disable_autorequire => true,
  ...
}

Back to overview of wls_file_persistence_store

disable_corrective_change

Disable the modification of a resource when Puppet decides it is a corrective change.

(requires easy_type V2.11.0 or higher)

When using a Puppet Server, Puppet knows about adaptive and corrective changes. A corrective change occurs when Puppet notices that the resource has changed, but the catalog has not. This can happen, for example, when a user, by accident or deliberately, changed something on the system that Puppet is managing. The normal Puppet process then repairs this and puts the resource back in the state defined in the catalog. This is precisely what you want most of the time, but not always. It can also happen after a hardware or network error: Puppet then cannot correctly determine the current state of the system and thinks the resource has changed, while in fact it has not. Letting Puppet recreate, remove, or change the resource in these cases is NOT what you want.

Using the disable_corrective_change parameter, you can disable corrective changes on the current resource.

Here is an example of this:

crucial_resource {'be_carefull':
  ...
  disable_corrective_change => true,
  ...
}

When a corrective change does happen on the resource, Puppet will not modify the resource and will signal an error:

    Error: Corrective change present requested by catalog, but disabled by parameter disable_corrective_change
    Error: /Stage[main]/Main/Crucial_resource[be_carefull]/parameter: change from '10' to '20' failed: Corrective change present requested by catalog, but disabled by parameter disable_corrective_change. (corrective)

Back to overview of wls_file_persistence_store

disable_corrective_ensure

Disable the creation or removal of a resource when Puppet decides it is a corrective change.

(requires easy_type V2.11.0 or higher)

When using a Puppet Server, Puppet knows about adaptive and corrective changes. A corrective change occurs when Puppet notices that the resource has changed, but the catalog has not. This can happen, for example, when a user, by accident or deliberately, changed something on the system that Puppet is managing. The normal Puppet process then repairs this and puts the resource back in the state defined in the catalog. This is precisely what you want most of the time, but not always. It can also happen after a hardware or network error: Puppet then cannot correctly determine the current state of the system and thinks the resource has changed, while in fact it has not. Letting Puppet recreate, remove, or change the resource in these cases is NOT what you want.

Using the disable_corrective_ensure parameter, you can disable corrective ensure present or ensure absent actions on the current resource.

Here is an example of this:

crucial_resource {'be_carefull':
  ensure                    => 'present',
  ...
  disable_corrective_ensure => true,
  ...
}

When a corrective ensure does happen on the resource, Puppet will not create or remove the resource and will signal an error:

    Error: Corrective ensure present requested by catalog, but disabled by parameter disable_corrective_ensure.
    Error: /Stage[main]/Main/Crucial_resource[be_carefull]/ensure: change from 'absent' to 'present' failed: Corrective ensure present requested by catalog, but disabled by parameter disable_corrective_ensure. (corrective)

Back to overview of wls_file_persistence_store

distribution_policy

Specifies how the instances of a configured JMS artifact are named and distributed when deployed to a cluster. When this setting is configured on a Store, it applies to all JMS artifacts that reference the store. Valid options:

  • Distributed creates an artifact instance on each cluster member in a cluster. Required for all SAF Agents and for cluster targeted or resource group scoped JMS Servers that host distributed destinations.
  • Singleton creates one artifact instance on a single cluster member of a cluster. Required for cluster targeted or resource group scoped JMS Servers that host standalone (non-distributed) destinations and for cluster targeted or resource group scoped Path Services. The Migration Policy must be On-Failure or Always when using this option with a JMS Server, On-Failure when using this option with a Messaging Bridge, and Always when using this option with a Path Service.

The DistributionPolicy determines the instance name suffix for cluster targeted JMS artifacts. The suffix for a cluster targeted Singleton is -01 and for a cluster targeted Distributed is @ClusterMemberName.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   distribution_policy => 'Distributed',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:distribution_policy'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

domain

With this parameter, you identify the domain your object is in.

The domain name is part of the fully qualified name of any WebLogic object on a system. Let's say we want to describe a WebLogic server. The fully qualified name is:

wls_server{'domain_name/server_name':
  ensure => present,
  ...
}

When you don't specify a domain name, Puppet will use default as the domain name. For every domain you want to manage, you'll have to put a wls_setting resource in your manifest.
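For example, to manage stores in two domains, declare one wls_setting per domain and prefix resource titles with the domain name. The bodies of the wls_setting resources are elided here, since the required connection attributes depend on your installation:

wls_setting{'default':
  ...
}

wls_setting{'mydomain':
  ...
}

wls_file_persistence_store{'mydomain/jmsFile2':
  ensure     => 'present',
  directory  => 'persistence2',
  target     => ['wlsServer2'],
  targettype => ['Server'],
}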

Back to overview of wls_file_persistence_store

ensure

The basic property that the resource should be in.

Valid values are present, absent.
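For example, using the store from the overview, setting ensure to absent removes it:

wls_file_persistence_store{'jmsFile1':
  ensure => 'absent',
}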

Back to overview of wls_file_persistence_store

failback_delay_seconds

Specifies the amount of time, in seconds, to delay before failing a cluster targeted JMS artifact instance back to its preferred server after the preferred server failed and was restarted. This delay allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.

  • A value > 0 specifies the time, in seconds, to delay before failing a JMS artifact back to its user preferred server.
  • A value of 0 specifies there is no delay and the dynamic load balancer manages the failback process.
  • A value of -1 specifies the default delay value is used.

Note: This setting only applies when the JMS artifact is cluster targeted and the Migration Policy is set to On-Failure or Always.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   failback_delay_seconds => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:failback_delay_seconds'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

file_locking_enabled

Determines whether OS file locking is used. When file locking protection is enabled, a store boot fails if another store instance already has opened the store files. Do not disable this setting unless you have procedures in place to prevent multiple store instances from opening the same file. File locking is not required but helps prevent corruption in the event that two same-named file store instances attempt to operate in the same directories. This setting applies to both primary and cache files.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   file_locking_enabled => 1,
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:file_locking_enabled'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Valid values are absent, 1, 0.

Back to overview of wls_file_persistence_store

file_persistence_name

The file persistence name.

Back to overview of wls_file_persistence_store

initial_boot_delay_seconds

Specifies the amount of time, in seconds, to delay before starting a cluster targeted JMS instance on a newly booted WebLogic server. When this setting is configured on a Store, it applies to all JMS artifacts that reference the store. This allows time for the system to stabilize and dependent services to be restarted, preventing a system failure during a reboot.

  • A value > 0 is the time, in seconds, to delay before loading resources after a failure and restart.
  • A value of 0 specifies no delay.
  • A value of -1 specifies the default delay value is used.

Note: This setting only applies when the JMS artifact is cluster targeted and the Migration Policy is set to On-Failure or Always.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   initial_boot_delay_seconds => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:initial_boot_delay_seconds'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

initial_size

The initial file size, in bytes.

  • Set InitialSize to pre-allocate file space during a file store boot. If InitialSize exceeds MaxFileSize, a store creates multiple files (number of files = InitialSize/MaxFileSize rounded up).
  • A file store automatically reuses the space from deleted records and automatically expands a file if there is not enough space for a new write request.
  • Use InitialSize to limit or prevent file expansions during runtime, as file expansion introduces temporary latencies that may be noticeable under rare circumstances.
  • Changes to initial size only take effect for new file stores, or after any current files have been deleted prior to restart.
  • See Maximum File Size.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   initial_size => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:initial_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

io_buffer_size

The I/O buffer size, in bytes, automatically rounded down to the nearest power of 2.

  • For the Direct-Write-With-Cache policy when a native wlfileio driver is available, IOBufferSize describes the maximum portion of a cache view that is passed to a system call. This portion does not consume off-heap (native) or Java heap memory.
  • For the Direct-Write and Cache-Flush policies, IOBufferSize is the size of a per store buffer which consumes off-heap (native) memory, where one buffer is allocated during run-time, but multiple buffers may be temporarily created during boot recovery.
  • When a native wlfileio driver is not available, the setting applies to off-heap (native) memory for all policies (including Disabled).
  • For the best runtime performance, Oracle recommends setting IOBufferSize so that it is larger than the largest write (multiple concurrent store requests may be combined into a single write).
  • For the best boot recovery time performance of large stores, Oracle recommends setting IOBufferSize to at least 2 megabytes.
  • See AllocatedIOBufferBytes to find out the actual allocated off-heap (native) memory amount. It is a multiple of IOBufferSize for the Direct-Write and Cache-Flush policies, or zero.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   io_buffer_size => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:io_buffer_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

logical_name

The name used by subsystems to refer to different stores on different servers using the same name. For example, an EJB that uses the timer service may refer to its store using the logical name, and this name may be valid on multiple servers in the same cluster, even if each server has a store with a different physical name. Multiple stores in the same domain or the same cluster may share the same logical name. However, a given logical name may not be assigned to more than one store on the same server.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   logical_name => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:logical_name'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

max_file_size

The maximum file size, in bytes.

  • The MaxFileSize value affects the number of files needed to accommodate a store of a particular size (number of files = store size/MaxFileSize rounded up).
  • A file store automatically reuses space freed by deleted records and automatically expands individual files up to MaxFileSize if there is not enough space for a new record. If there is no space left in existing files for a new record, a store creates an additional file.
  • A small number of larger files is normally preferred over a large number of smaller files as each file allocates Window Buffer and file handles.
  • If MaxFileSize is larger than 2^24 * BlockSize, then MaxFileSize is ignored, and the value becomes 2^24 * BlockSize. The default BlockSize is 512, and 2^24 * 512 is 8 GB.
  • See Initial Size.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   max_file_size => '1342177280',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:max_file_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

max_window_buffer_size

The maximum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM's address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled, but only when the native wlfileio library is loaded. A window buffer does not consume Java heap memory, but does consume off-heap (native) memory. If the store is unable to allocate the requested buffer size, it allocates smaller and smaller buffers until it reaches MinWindowBufferSize, and then fails if it cannot honor MinWindowBufferSize. Oracle recommends setting the max window buffer size to more than double the size of the largest write (multiple concurrently updated records may be combined into a single write), and greater than or equal to the file size, unless there are other constraints. 32-bit JVMs may impose a total limit of between 2 and 4 GB for combined Java heap plus off-heap (native) memory usage.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   max_window_buffer_size => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:max_window_buffer_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

migration_policy

Controls migration and restart behavior of cluster targeted JMS service artifact instances. When this setting is configured on a Store, it applies to all JMS artifacts that reference the store. Valid options:

  • Off disables migration and restart support for cluster targeted JMS service objects, including the ability to restart a failed persistent store instance and its associated services. This policy cannot be combined with the Singleton Migration Policy.
  • On-Failure enables automatic migration and restart of instances on the failure of a subsystem Service or WebLogic Server instance, including automatic fail-back and load balancing of instances.
  • Always provides the same behavior as On-Failure and automatically migrates instances even in the event of a graceful shutdown or a partial cluster start.

Note: Cluster leasing must be configured for On-Failure and Always.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   migration_policy => 'Off',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:migration_policy'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

min_window_buffer_size

The minimum amount of data, in bytes and rounded down to the nearest power of 2, mapped into the JVM’s address space per primary store file. Applies to synchronous write policies Direct-Write-With-Cache and Disabled, but only when a native wlfileio library is loaded. See Maximum Window Buffer Size.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   min_window_buffer_size => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:min_window_buffer_size'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

name

The name.

Back to overview of wls_file_persistence_store

notes

Optional information that you can include to describe this configuration. WebLogic Server saves this note in the domain's configuration file (config.xml) as XML PCDATA. All left angle brackets (<) are converted to the XML entity &lt;. Carriage returns/line feeds are preserved.

Note: If you create or edit a note from the Administration Console, the Administration Console does not preserve carriage returns/line feeds.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   notes => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:notes'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

number_of_restart_attempts

Specifies the number of restart attempts before migrating a failed JMS artifact instance to another server in the WebLogic cluster.

  • A value > 0 specifies the number of restart attempts before migrating a failed service instance.
  • A value of 0 specifies the same behavior as setting restart_in_place to false.
  • A value of -1 specifies the service is never migrated. Instead, it continues to attempt to restart until it either starts or the server instance shuts down.

An example of how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   number_of_restart_attempts => '6',
   ...
}

This is an extended property. Before you can use it, add it to the wls_settings property extra_properties:

wls_setting{'domain':
   ...
   extra_properties => ['wls_file_persistence_store:number_of_restart_attempts'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

partial_cluster_stability_delay_seconds

Specifies the amount of time, in seconds, to delay before a partially started cluster starts all cluster targeted JMS artifact instances that are configured with a Migration Policy of Always or On-Failure. Before this timeout expires or all servers are running, a cluster starts a subset of such instances based on the total number of servers running and the configured cluster size. Once the timeout expires or all servers have started, the system considers the cluster stable and starts any remaining services. This delay ensures that services are balanced across a cluster even if the servers are started sequentially. It is ignored once a cluster is fully started (stable) or when individual servers are started.

  • A value > 0 specifies the time, in seconds, to delay before a partially started cluster starts dynamically configured services.
  • A value of 0 specifies no delay.
  • A value of -1 specifies a default delay value of 240 seconds.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   partial_cluster_stability_delay_seconds => '-1',
   ...
}

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:partial_cluster_stability_delay_seconds'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

provider

The specific backend to use for this wls_file_persistence_store resource. You will seldom need to specify this; Puppet will usually discover the appropriate provider for your platform. Available providers are:

simple
Manage file persistence stores

Back to overview of wls_file_persistence_store

restart_in_place

Enables periodic automatic restart of failed cluster targeted JMS artifact instances running on healthy WebLogic Server instances. Restart attempts occur before attempts to migrate an instance to a different server in the cluster. When this setting is configured on a store, it applies to all JMS artifacts that reference the store.

  • Restarts occur when Restart In Place is set to true, the JMS artifact is cluster targeted, and the Migration Policy is set to On-Failure or Always.
  • This attribute is not used by WebLogic Messaging Bridges, which automatically restart internal connections as needed.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   restart_in_place => 1,
   ...
}

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:restart_in_place'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Valid values are absent, 1, and 0.

Back to overview of wls_file_persistence_store

seconds_between_restarts

Specifies the amount of time, in seconds, to wait between attempts to restart a failed service instance.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   seconds_between_restarts => '30',
   ...
}

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:seconds_between_restarts'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.
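
The three restart-related extended properties (restart_in_place, number_of_restart_attempts, and seconds_between_restarts) are typically tuned together. Because extra_properties accepts an array, they can be registered in a single wls_setting; this is a sketch combining the individual registrations documented above:

wls_setting{'domain':
   ...
  extra_properties => [
    'wls_file_persistence_store:restart_in_place',
    'wls_file_persistence_store:number_of_restart_attempts',
    'wls_file_persistence_store:seconds_between_restarts',
  ],
   ...
}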

Back to overview of wls_file_persistence_store

synchronous_write_policy

The disk write policy that determines how the file store writes data to disk. This policy also affects the JMS file store's performance, scalability, and reliability. Oracle recommends Direct-Write-With-Cache, which tends to have the highest performance. The default value is Direct-Write. The valid policy options are:

  • Direct-Write Direct I/O is supported on all platforms. When available, file stores in direct I/O mode automatically load the native I/O wlfileio driver. This option tends to out-perform Cache-Flush and tends to be slower than Direct-Write-With-Cache. This mode does not require a native wlfileio driver, but performs faster when one is available.
  • Direct-Write-With-Cache Store records are written synchronously to primary files in the directory specified by the Directory attribute and asynchronously to a corresponding cache file in the Cache Directory. The Cache Directory provides information about disk space, locking, security, and performance implications. This mode requires a native wlfileio driver. If the native driver cannot be loaded, the write mode automatically switches to Direct-Write. See Cache Directory.
  • Cache-Flush Transactions cannot complete until all of their writes have been flushed down to disk. This policy is reliable and scales well as the number of simultaneous users increases. It is transactionally safe, but tends to be a lower performer than the direct-write policies.
  • Disabled Transactions are complete as soon as their writes are cached in memory, instead of waiting for the writes to successfully reach the disk. This is the fastest policy because write requests do not block waiting to be synchronized to disk, but, unlike the other policies, it is not transactionally safe in the event of operating system or hardware failures. Such failures can lead to duplicate or lost data/messages. This option does not require native wlfileio drivers, but may run faster when they are available. Some non-WebLogic JMS vendors default to a policy that is equivalent to Disabled.

Notes:

  • When available, file stores load WebLogic wlfileio native drivers, which can improve performance. These drivers are included with Windows, Solaris, Linux, and AIX WebLogic installations.
  • Certain older versions of Microsoft Windows may incorrectly report storage device synchronous write completion if the Windows default Write Cache Enabled setting is used. This violates the transactional semantics of transactional products (not specific to Oracle), including file stores configured with a Direct-Write (default) or Direct-Write-With-Cache policy, because a system crash or power failure can lead to a loss or a duplication of records/messages. One visible symptom of this problem is persistent message/transaction throughput that exceeds the physical capabilities of your storage device. You can address the problem by applying a Microsoft-supplied patch, disabling the Windows Write Cache Enabled setting, or by using a power-protected storage device. See http://support.microsoft.com/kb/281672 and http://support.microsoft.com/kb/332023.
  • NFS storage note: On some operating systems, native driver memory-mapping is incompatible with NFS when files are locked. Stores with synchronous write policies Direct-Write-With-Cache or Disabled, and WebLogic JMS paging stores enhance performance by using the native wlfileio driver to perform memory-map operating system calls. When a store detects an incompatibility between NFS, file locking, and memory mapping, it automatically downgrades to conventional read/write system calls instead of memory mapping. For best performance, Oracle recommends investigating alternative NFS client drivers, configuring a non-NFS storage location, or in controlled environments and at your own risk, disabling the file locks (See Enable File Locking). For more information, see “Tuning the WebLogic Persistent Store” in Tuning Performance of Oracle WebLogic Server.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   synchronous_write_policy => 'Direct-Write',
   ...
}
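
A variant using the recommended Direct-Write-With-Cache policy together with the cache_directory attribute (documented in the attribute overview); the cache directory value shown here is purely illustrative:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   synchronous_write_policy => 'Direct-Write-With-Cache',
   cache_directory          => 'cache1',
   ...
}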

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:synchronous_write_policy'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

tags

Returns all tags on this Configuration MBean.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   tags => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:tags'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store

target

An array of target names.

The array of targets for this resource. A target can be a WebLogic Server, a WebLogic cluster, or a JMS Server. When specifying a target, you'll also have to specify targettype. Here is an example of how you can specify a target.

...{ 'aResource':
  ...
  target     => ['myServer','myCluster'],
  targettype => ['Server','Cluster'],
  ...
}

Here is an example of specifying the target and targettype for a regular WebLogic cluster:

wls_cluster{ 'aCluster':
  ...
  target     => ['myServer','myCluster'],
  targettype => ['Server','Cluster'],
  ...
}

Back to overview of wls_file_persistence_store

targettype

An array of target types.

The array of target types for this resource. A target can be a WebLogic Server, a WebLogic cluster, or a JMS Server. When specifying a targettype, you'll also have to specify a target. Here is an example of how you can specify a target.

...{ 'aResource':
  ...
  target     => ['myServer','myCluster'],
  targettype => ['Server','Cluster'],
  ...
}

Here is an example of specifying the target and targettype for a regular WebLogic cluster:

wls_cluster{ 'aCluster':
  ...
  target     => ['myServer','myCluster'],
  targettype => ['Server','Cluster'],
  ...
}

Back to overview of wls_file_persistence_store

timeout

Timeout for applying a resource.

To ensure that no Puppet operation hangs the Puppet daemon, all operations have a timeout. When this timeout expires, Puppet aborts the current operation and signals an error in the Puppet run.

With this parameter, you can specify the length of the timeout. The value is specified in seconds. In this example, the timeout is set to 600 seconds.

wls_server{'my_server':
  ...
  timeout => 600,
}

The default value for timeout is 120 seconds.

Back to overview of wls_file_persistence_store

xa_resource_name

Overrides the name of the XAResource that this store registers with JTA. You should not normally set this attribute. Its purpose is to allow the name of the XAResource to be overridden when a store has been upgraded from an older release and the store contained prepared transactions. The generated name should be used in all other cases.

An example on how to use this:

wls_file_persistence_store {'a_wls_file_persistence_store':
   ...
   xa_resource_name => 'a_value',
   ...
}

This is an extended property. Before you can use it, add it to the extra_properties property of the wls_setting resource.

wls_setting{'domain':
   ...
  extra_properties => ['wls_file_persistence_store:xa_resource_name'],
   ...
}

This help text was generated from the MBean text of the WebLogic server.

Back to overview of wls_file_persistence_store