This document provides a general overview of what a script is, how it is written, and how it is interfaced with.


The reader should be familiar with the CIM (Common Information Model) and have a general idea of what OpenLMI is and what it does. They should also get familiar with LMIShell, a Python binary shipped with the OpenLMI client components.

The user should also be familiar with standard *nix command line utilities [1].


By a script in this document we mean:

  • A Python library utilizing LMIShell for instrumenting CIM providers through a CIMOM broker. It resides in the lmi.scripts.<script_name> package, where <script_name> usually corresponds to some LMI profile name.
  • Command wrappers for this library, written as a set of classes inheriting from LmiBaseCommand. These may create a tree-like hierarchy of commands. They are the entry points of the LMI metacommand to the wrapped functionality of the library.

Command wrappers are part of the library, usually grouped in a single module named after the lmi command, or simply cmd:


Writing a library

The library shall consist of a set of functions taking a namespace or connection object as a first argument. There are no special requirements on how to divide these functions into submodules; use common sense. Smaller scripts can keep all functionality in the lmi/scripts/<script_name>/ module, with wrappers usually contained in lmi/scripts/<script_name>/

The library should be written with ease of use in mind. Functions should represent possible use cases of what can be done with particular providers, instead of wrapping a CIM class's methods 1-to-1 in Python functions.

Any function that shall be called by a command wrapper and communicates with a CIMOM must accept a namespace object named ns. It's an instance of LMINamespace, providing quick access to the represented CIM namespace [2] and its classes. It's also possible to specify that a function shall be passed a raw LMIConnection object. For details see Function invocation.

Service example

Suppose we have a service profile in need of a Python interface. The real provider implementation can be found in the src/service directory in upstream git [3]. For more information please refer to the service description.

As you may see, this implements a single CIM class, LMI_Service, with a few useful methods such as:

  • StartService()
  • StopService()

We’d like to provide a way to list system services, get details for one of them, and allow starting, stopping and restarting them.

Simplified [4] version of some of these functions may look like this:

def list_services(ns, kind='enabled'):
    for service in sorted(ns.LMI_Service.instances(),
                key=lambda i: i.Name):
        if kind == 'disabled' and service.EnabledDefault != \
                ns.LMI_Service.EnabledDefaultValues.Disabled:
            continue
        if kind == 'enabled' and service.EnabledDefault != \
                ns.LMI_Service.EnabledDefaultValues.Enabled:
            # list only enabled
            continue
        yield service

It yields instances of the LMI_Service CIM class. We prefer yield over return when enumerating instances because it reduces memory usage. For example, when the user limits the number of instances listed, yield automatically reduces the number of iterations.
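The saving can be illustrated with plain Python generators and itertools.islice (the service names below are hypothetical stand-in data, not real LMIShell calls):

```python
from itertools import islice

def fake_instances(produced):
    # Lazily "enumerate" services, recording each one actually produced.
    for name in ('sshd', 'crond', 'httpd', 'cups'):
        produced.append(name)
        yield name

def list_names(produced):
    # Like list_services(), hand instances over one by one with yield.
    for name in fake_instances(produced):
        yield name

produced = []
first_two = list(islice(list_names(produced), 2))
print(first_two)      # ['sshd', 'crond']
print(len(produced))  # 2 -- the remaining services were never enumerated
```

Note that the sorted() call in list_services() forces full enumeration; the saving applies when results are consumed lazily without sorting.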

from lmi.shell import LMIInstanceName
from lmi.scripts.common import get_logger
from lmi.scripts.common.errors import LmiFailed

LOG = get_logger(__name__)

def start_service(ns, service):
    if isinstance(service, basestring):
        # let's accept service as a string
        inst = ns.LMI_Service.first_instance(key="Name", value=service)
        name = service
    else:   # or as LMIInstance or LMIInstanceName
        inst = service
        name = inst.path['Name']
    if inst is None:
        raise LmiFailed('No such service "%s".' % name)
    if isinstance(inst, LMIInstanceName):
        # we need LMIInstance
        inst = inst.to_instance()
    res = inst.StartService()
    if res == 0:
        LOG().debug('Started service "%s" on hostname "%s".',
                    name, ns.hostname)
    return res

In a similar fashion, stop_service, restart_service and others could be written.
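A hedged sketch of such a stop_service following the same pattern; the Fake* classes below are only stand-ins so the sketch runs without a CIMOM, they are not part of LMIShell:

```python
class FakeInstance(object):
    """Stand-in for an LMIInstance of LMI_Service."""
    def __init__(self, name):
        self.path = {'Name': name}
    def StopService(self):
        return 0   # 0 means success, as with StartService()

class FakeServiceClass(object):
    def first_instance(self, key, value):
        return FakeInstance(value) if value == 'sshd' else None

class FakeNamespace(object):
    LMI_Service = FakeServiceClass()
    hostname = 'localhost'

def stop_service(ns, service):
    # Accept the service either as a name or as an instance, like start_service().
    if isinstance(service, str):
        inst = ns.LMI_Service.first_instance(key="Name", value=service)
        name = service
    else:
        inst = service
        name = inst.path['Name']
    if inst is None:
        # real code would raise LmiFailed here
        raise ValueError('No such service "%s".' % name)
    return inst.StopService()

print(stop_service(FakeNamespace(), 'sshd'))  # 0
```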

The ns argument typically represents the root/cimv2 namespace, which is the main implementation namespace for OpenLMI providers. One could also make these functions act upon a connection object like this:

def get_instance(c, service):
    inst = c.root.cimv2.LMI_Service.first_instance(
                key="Name", value=service)
    if inst is None:
        raise LmiFailed('No such service "%s".' % service)
    return inst

Users can then easily access any other namespace they may need. Command classes need to be informed about the object type the wrapped function expects (see Function invocation).

The LOG variable provides access to the logger of this module. Messages logged in this way end up in a log file [5] and on the console. By default, only warnings and higher priority messages are logged to the console. This can be changed with the metacommand’s parameters.

If an operation fails due to some unexpected error, please raise an LmiFailed exception with a human-readable description.

See also

Exceptions for conventions on using exceptions.

Upstream git for more real world examples.

Command wrappers overview

They are a set of command classes wrapping up the library’s functionality. They are structured in a tree-like hierarchy where the root [6] command appears in the help message of the LMI metacommand. All commands are subclasses of LmiBaseCommand.

Behaviour of commands is controlled by class properties such as these:

class Show(command.LmiShowInstance):
    CALLABLE = 'lmi.scripts.service:get_instance'
    PROPERTIES = (
            'Name',
            ('Enabled', lambda i: i.EnabledDefault == 2),
            ('Active', 'Started'))

The example above contains the definition of the Show command wrapper for instances of LMI_Service. Its associated function is get_instance() located in the lmi.scripts.service module [7]. The properties used will be described in detail later. Let’s just say that PROPERTIES specifies how the instance is rendered.

Top-level commands

Are entry points of a script library. They are direct subcommands of lmi. For example:

$ lmi help
$ lmi service list
$ lmi sw show openlmi-providers

help, service and sw are top-level commands. One script (such as service above) can provide one or more of them. They need to be listed in the setup script in the entry_points argument of the setup() function. More details will be noted later in Setup script.

They contain a usage string, which is both documentation and a prescription of command-line arguments in one string. This string is printed when the user requests the command’s help:

$ lmi help service

Usage string

It looks like this:

System service management.

    %(cmd)s list [--all | --disabled]
    %(cmd)s start <service>

    --all       List all services available.
    --disabled  List only disabled services.

The format of this string is very important. It’s parsed by the docopt command line parser, which uses it for parsing command-line arguments. Please refer to its documentation for details.


There is one deviation from a common usage string: the use of the %(cmd)s formatting mark. It is replaced with the command’s full name. Full name means that all subcommands and the binary name prefixing the current command on the command line are part of it. So, for example, the full name of the command list in the following string passed to the command line:

lmi sw list pkgs

is lmi sw list.

When parsing the sw usage string, it is just lmi sw.

The formatting mark is mandatory.
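The %(cmd)s mark is ordinary Python %-style dictionary interpolation; the substitution of the full command name can be sketched as follows (the full name "lmi service" is an assumed example):

```python
usage = """System service management.

    %(cmd)s list [--all | --disabled]
    %(cmd)s start <service>
"""

# Before the string is handed to docopt, the framework substitutes the
# command's full name for the %(cmd)s mark:
rendered = usage % {'cmd': 'lmi service'}
print('lmi service list' in rendered)  # True
```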

Options and arguments given on the command line are pre-processed before they are passed to an end-point command. You should get familiar with this before writing your own usage strings.

End-point commands

Are associated with one or more functions of the script library. They handle the following:

  1. call docopt parser on command line arguments
  2. make some name pre-processing on them (see Options pre-processing)
  3. verify them (see End-point commands)
  4. transform them (see End-point commands)
  5. pass them to associated function
  6. collect results
  7. render them and print them

A developer of command wrappers needs to be familiar with each step. We will describe them in detail later.

The following end-point commands are available for subclassing:

They differ in how they render the result obtained from the associated function.

These are documented in depth in End-point commands.

Command multiplexers

Provide a way to group multiple commands under one. Suppose you want to list packages, repositories and files. All of these use cases need different arguments and render different information, thus they should be represented by independent end-point commands. What binds them together is the user’s intent to list something. The user may wish to do other operations like show, add, remove, etc. with the same subject. Having all combinations of these intents and subjects would generate a lot of commands under the top-level one. Let’s instead group them under a particular intent like this:

  • sw list packages
  • sw list repositories
  • sw list files
  • sw show package

To reflect it in our commands hierarchy, we need to use LmiCommandMultiplexer command.

class Lister(command.LmiCommandMultiplexer):
    """ List information about packages, repositories or files. """
    COMMANDS = {
            'packages'     : PkgLister,
            'repositories' : RepoLister,
            'files'        : FileLister
    }

Where the COMMANDS property maps command names to their classes. Each command multiplexer consumes one command argument from the command line, denoting its direct subcommand, and passes the rest of the options to it. In this way we can create arbitrarily tall command trees.
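The consume-one-argument behaviour can be sketched as follows (a simplified stand-in, not the real LmiCommandMultiplexer implementation; the command classes are represented by plain strings here):

```python
def multiplex(commands, argv):
    # Consume exactly one token naming the subcommand, pass the rest on.
    if not argv or argv[0] not in commands:
        raise ValueError('Unknown subcommand: %r' % (argv[:1],))
    return commands[argv[0]], argv[1:]

COMMANDS = {
    'packages'     : 'PkgLister',
    'repositories' : 'RepoLister',
    'files'        : 'FileLister',
}
print(multiplex(COMMANDS, ['packages', '--all']))
# ('PkgLister', ['--all'])
```

Nesting such multiplexers, each consuming one token, is what yields the arbitrarily tall command trees mentioned above.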

A top-level command is nothing else than a subclass of LmiCommandMultiplexer.

Specifying profile and class requirements

Most commands require some provider installed on the managed machine to work properly. Each such provider should be represented by an instance of CIM_RegisteredProfile on the remote broker. This instance looks like this (in MOF syntax):

instance of CIM_RegisteredProfile {
    InstanceID = "OpenLMI+OpenLMI-Software+0.4.2";
    RegisteredOrganization = 1;
    OtherRegisteredOrganization = "OpenLMI";
    RegisteredVersion = "0.4.2";
    AdvertiseTypes = [2];
    RegisteredName = "OpenLMI-Software";
};

We are interested just in the RegisteredName and RegisteredVersion properties, which we’ll use for requirement specification.

A requirement is written in the LMIReSpL language. For its formal definition refer to the documentation of the parser. Since the language is quite simple, a few examples should suffice:

'OpenLMI-Software < 0.4.2'
Requires OpenLMI Software provider to be installed in version lower than 0.4.2.
'OpenLMI-Hardware == 0.4.2 & Openlmi-Software >= 0.4.2'
Requires both hardware and software providers to be installed in particular versions. Short-circuit evaluation is utilized here: in this example, OpenLMI Software won’t be queried unless OpenLMI Hardware is installed with the desired version.
'profile "OpenLMI-Logical File" > 0.4.2'
If the profile name contains spaces, surround it in double quotes. The profile keyword is optional; it could also be present in the previous examples.

Version requirements are not limited to profiles only. CIM classes may be specified as well:

'class LMI_SoftwareIdentity >= 0.3.0 & OpenLMI-LogicalFile'
In case of class requirements the class keyword is mandatory. As you can see, version requirement is optional.
'! (class LMI_SoftwareIdentity | class LMI_UnixFile)'
Complex expressions can be created with the use of brackets and other operators.

One requirement is evaluated in these steps:

Profile requirement
  1. Query CIM_RegisteredProfile for instances with RegisteredName matching given name. If found, go to 2. Otherwise query CIM_RegisteredSubProfile [10] for instances with RegisteredName matching given name. If not found return False.
  2. Select the (sub)profile with highest version and go to 3.
  3. If the requirement has version specification then compare it to the value of RegisteredVersion using given operator. If the relation does not apply, return False.
  4. Return True.
Class requirement
  1. Get specified class. If not found, return False.
  2. If the requirement has version specification then compare it to the value of Version [11] qualifier of obtained class using given operator. And if the relation does not apply, return False.
  3. Return True.
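The version comparison used in these steps can be sketched like this; the real LMIReSpL parser lives elsewhere in lmi.scripts.common and may compare versions differently, so this only illustrates the idea under the assumption of purely numeric, dot-separated versions:

```python
import operator

OPS = {'<': operator.lt, '<=': operator.le, '==': operator.eq,
       '>=': operator.ge, '>': operator.gt}

def version_tuple(version):
    # '0.4.2' -> (0, 4, 2)
    return tuple(int(part) for part in version.split('.'))

def version_satisfied(registered, op, required):
    """Compare RegisteredVersion against the requirement's version."""
    return OPS[op](version_tuple(registered), version_tuple(required))

print(version_satisfied('0.4.2', '>=', '0.4.2'))  # True
print(version_satisfied('0.3.0', '>=', '0.4.2'))  # False
```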

Now let’s take a look at where these requirements can be specified. There is a special select command used to specify which command to load for a particular version on the remote broker. It can be written like this:

from lmi.scripts.common.command import LmiSelectCommand

class SoftwareCMD(LmiSelectCommand):

    SELECT = [
          ( 'OpenLMI-Software >= 0.4.2 & OpenLMI-LogicalFile'
          , '')
        , ( 'OpenLMI-Software >= 0.4.2'
          , '')
        , ('OpenLMI-Software', '')
        ]

It says to load the SwLFCmd command in case both OpenLMI Software and OpenLMI LogicalFile providers are installed. If not, load the SwCmd from the current module for a recent version of OpenLMI Software, and fall back to SwCmd for anything else. If the OpenLMI Software provider is not available at all, no command will be loaded and an exception will be raised.

The previous command could be used as an entry point in your script (see Entry points). There is also a utility that makes it look better:

from lmi.scripts.common.command import select_command

SoftwareCMD = select_command('SoftwareCMD',
      ( 'OpenLMI-Software >= 0.4.2 & OpenLMI-LogicalFile'
      , ''),
      ( 'OpenLMI-Software >= 0.4.2', ''),
      ('OpenLMI-Software', '')
      )

See also

Documentation of LmiSelectCommand and select_command.

And also notes on related LmiSelectCommand properties.

Command wrappers module

Usually consists of:

  1. license header
  2. usage docstring - parseable by docopt
  3. end-point command wrappers
  4. single top-level command

The top-level command is usually defined like this:

Service = command.register_subcommands(
        'Service', __doc__,
        { 'list'    : Lister
        , 'show'    : Show
        , 'start'   : Start
        , 'stop'    : Stop
        , 'restart' : Restart
        })

Where __doc__ is the usage string and the module’s doc string at the same time (mentioned in point 2). Service is the name which will be listed in the entry_points dictionary described below. The global variable’s name we assign to should be the same as the value of the first argument to register_subcommands(). The last argument here is the dictionary mapping subcommand names to all the subcommand classes of service [8].

Egg structure

The script library is distributed as a Python egg, making it easy to distribute and install either to a system or user directory.

The following tree shows the directory structure of the service egg residing in upstream git:

├── lmi
│   ├── __init__.py
│   └── scripts
│       ├── __init__.py
│       └── service
│           ├── __init__.py
│           └── cmd.py
├── Makefile
├── setup.cfg

This library then can be imported with:

from lmi.scripts import service

commands/service/lmi/scripts/service must be a package (a directory with an __init__.py file) because lmi.scripts is a namespace package. It can have an arbitrary number of modules and subpackages. Care should be taken to make the API easy to use and learn, though.

Use the provided commands/ script to generate it.

Setup script

A minimal example of a setup script for the service library follows:

from setuptools import setup

setup(
    name='openlmi-scripts-service',
    version='@@VERSION@@',
    description='LMI command for system service administration.',
    namespace_packages=['lmi', 'lmi.scripts'],
    packages=['lmi', 'lmi.scripts', 'lmi.scripts.service'],
    entry_points={
        'lmi.scripts.cmd': [
            'service = lmi.scripts.service.cmd:Service',
            ],
        },
    )

It’s a template with just one variable, @@VERSION@@, which is replaced with the recent scripts version by running the make setup command.

Entry points

The most notable argument here is entry_points, which is a dictionary containing Python namespaces where plugins are registered. In this case, we register a single top-level command called service in the lmi.scripts.cmd namespace. This particular namespace is used by the LMI metacommand when searching for registered user commands. Service is a command multiplexer, created with a call to register_subcommands(), grouping end-point commands together.

The next example shows a setup with more top-level commands [9]:

    'lmi.scripts.cmd': [
        'fs =',
        'partition =',
        'raid =',
        'lv =',
        'vg =',
        'storage =',
        'mount =',


There are several conventions you should try to follow in your shiny scripts.

Logging messages

In each module where logging facilities are going to be used, define a global variable LOG like this:

from lmi.scripts.common import get_logger

LOG = get_logger(__name__)

It’s a callable, used throughout the particular module in this way:

LOG().warn('All the data of "%s" will be lost!', partition)

Each message should be a whole sentence: it shall begin with an upper case letter and end with a dot or other sentence terminator.

Bad example:

LOG().info('processing %s', card)
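The same conventions can be shown with plain stdlib logging (OpenLMI's get_logger wraps this; the logger name and card value here are illustrative):

```python
import logging

logging.basicConfig(format='%(levelname)s: %(message)s', level=logging.INFO)
LOG = logging.getLogger('lmi.scripts.example')

# Good: a whole sentence, capitalized and terminated. Lazy %-interpolation
# (passing args instead of pre-formatting) defers the string formatting
# until the record is actually emitted.
LOG.info('Processing card "%s".', 'eth0')
```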


Again, all exceptions should be initialized with messages forming a whole sentence.

They will be caught and printed on stderr by the LMI metacommand. If the Trace option in section [Main] is on, a traceback will be printed. There is just one exception: if the exception inherits from LmiError, the traceback won’t be printed unless the verbosity level is the highest one as well.


This is a feature allowing common error use-cases to be handled gracefully. In your scripts you should stick to using LmiFailed for such exceptions.

The following is an example of such a common error case, where printing a traceback would not add any interesting information:

iname = ns.LMI_Service.new_instance_name({
    "Name": service,
    "CreationClassName" : "LMI_Service",
    "SystemName" : cs.Name,
    "SystemCreationClassName" : cs.CreationClassName
    })
inst = iname.to_instance()
if inst is None:
    raise errors.LmiFailed('No such service "%s".' % service)
# process the service instance

service is a name provided by the user. If such a service is not found, inst will be assigned None. In this case we don’t want to continue in the script’s execution, thus we raise an exception. We provide a very clear message that needs no further comment. We don’t want any traceback to be printed, hence the use of LmiFailed.


To hunt down problems with your script during its development, the metacommand comes with a few options to assist you:

This option turns on logging of tracebacks. Any exception but LmiError will be logged with a traceback to stderr unless the --quiet option is on. LmiError will be logged with a traceback only if the verbosity (-v) is highest as well.
Raises the verbosity. Pass it twice to make the verbosity highest. That will cause a lot of messages to be produced on stderr. It also turns on logging of tracebacks for LmiError if the --trace option is on as well.
Allows specifying an output file where logging takes place. Its logging level is not affected by the -v option and can be set in the configuration file.

While debugging, it’s convenient to put the above options in your configuration file ~/.lmirc:

[Main]
# Print tracebacks.
Trace = True

[Log]
OutputFile = /tmp/lmi.log
# Logging level for OutputFile.
Level = DEBUG

See also

Docopt documentation, Command classes and Command properties.

[1]Described by POSIX.
[2]Default namespace is "root/cimv2".
[3]view: git: ssh://
[4]Simplified here means that there are no documentation strings and no type checking.
[5]If logging to a file is enabled in configuration.
[6]Also called a top-level command.
[7]Precisely, in the __init__ module of this package.
[8]Taken from older version of storage script.
[9]These names must exactly match the names in usage strings.
[10]This is a subclass of CIM_RegisteredProfile thus it has the same properties.
[11]If the Version qualifier is missing, -1 will be used for comparison instead of empty string.