Services

Implement network automation in your NSO deployment using services.

Services are the cornerstone of network automation with NSO. A service is not just a reusable recipe for provisioning network configurations; it allows you to manage the full configuration life-cycle with minimal effort.

This section examines in greater detail how services work, how to design them, and the different ways to implement them.

For a quicker introduction and a simple showcase of services, see Develop a Simple Service.

Introduction

In NSO, the term service has a special meaning and represents an automation construct that orchestrates the create, modify, and delete of a service instance, translating it into the resulting native commands to devices in the network. In its simplest form, a service takes some input parameters and maps them to device-specific configurations. It is a recipe, or a set of instructions.

Much like you can bake many cakes using a single cake recipe, you can create many service instances using the same service. But unlike with cakes, having the recipe produce exactly the same output every time is not very useful. That is why service instances define a set of input parameters, which the service uses to customize the produced configuration.

A network engineer on the CLI, or an API call from a northbound system, provides the values for input parameters when requesting a new service instance, and NSO uses the service recipe, called a 'service mapping', to configure the network.

A similar process takes place when deleting the service instance or modifying the input parameters. The main task of a service is therefore: from a given set of input parameters, calculate the minimal set of device operations to achieve the desired service change. Here, it is very important that the service supports any change: create, delete, and update of any service parameter.

Device configuration is usually the primary goal of a service. However, there may be other supporting functions that are expected from the service, such as service-specific actions. The complete service application, implementing all the service functionality, is packaged in an NSO service package.

The following definitions are used throughout this section:

  • Service type: Often referred to simply as a service, denotes a specific type of service, such as "L2 VPN", "L3 VPN", "Firewall", or "DNS".

  • Service instance: A specific instance of a service type, such as "L3 VPN for ACME" or "Firewall for user X".

  • Service model: The schema definition for a service type, defined in YANG. It specifies the names and format of input parameters for the service.

  • Service mapping: The instructions that implement a service by mapping the input parameters for a service instance to device configuration.

  • Device configuration: Network devices are configured to perform network functions. A service instance results in corresponding device configuration changes.

  • Service application: The code and models implementing the complete service functionality, including service mapping, actions, models for auxiliary data, and so on.

Service Mapping

Developing a service that transforms a service instance request to the relevant device configurations is done differently in NSO than in most other tools on the market. As a service developer, you create a mapping from a YANG service model to the corresponding device YANG model.

This is a declarative, model-to-model mapping. Irrespective of the underlying device type and its native device interface, the mapping is towards a YANG device model and not the native CLI (or any other protocol/API). As you write the service mapping, you do not have to worry about the syntax of different CLI commands or in which order these commands are sent to the device. It is all taken care of by the NSO device manager and device NEDs. Implementing a service in NSO is reduced to transforming the input data structure, described in YANG, to device data structures, also described in YANG.
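As a concrete sketch of what such a service model can look like, consider the following minimal YANG module (the module name, namespace, and leaves are illustrative assumptions, not taken from this document; uses ncs:service-data and ncs:servicepoint are the standard NSO constructs that mark a list as a service):

```yang
// Hypothetical minimal service model: a VLAN service with two inputs.
module vlan-service {
  namespace "http://example.com/vlan-service";
  prefix vlan;

  import tailf-ncs { prefix ncs; }

  list vlan-service {
    key name;

    // Standard NSO service boilerplate: service metadata, plus the
    // service point where the mapping code or template is attached.
    uses ncs:service-data;
    ncs:servicepoint vlan-servicepoint;

    leaf name {
      type string;
    }
    leaf vlan-id {
      type uint16;
    }
  }
}
```

The mapping logic attached at the service point then only needs to transform these input leaves into nodes in the device YANG models.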

Who writes the models?

  • Developing the service model is part of developing the service application and is covered later in this section.

  • Every device NED comes with a corresponding device YANG model. This model has been designed by the NED developer to capture the configuration data that is supported by the device.

A service application then has two primary artifacts: a YANG service model and a mapping definition to the device YANG, as illustrated in the following figure.

(Figure: A High-level View of Services in NSO. Service Model and Mapping.)

To reiterate:

  • The mapping is not defined using workflows, or sequences of device commands.

  • The mapping is not defined in the native device interface language.

This approach may seem somewhat unorthodox at first, but it allows NSO to streamline and greatly simplify how you implement services.

A common problem for traditional automation systems is that a set of instructions needs to be defined for every possible service instance change. Take for example a VPN service. During a service life cycle, you want to:

  1. Create the initial VPN.

  2. Add a new site or leg to the VPN.

  3. Remove a site or leg from the VPN.

  4. Modify the parameters of a VPN leg, such as the IP addresses used.

  5. Change the interface used for the VPN on a device.

  6. ...

  7. Delete the VPN.

The possible run-time changes for an existing service instance are numerous. If a developer must define instructions for every possible change, such as a script or a workflow, the task is daunting, error-prone, and never-ending.

NSO reduces this problem to a single data-mapping definition for the "create" scenario. At run-time, NSO renders the minimum resulting change for any possible change in the service instance. It achieves this with the FASTMAP algorithm.

Another challenge in traditional systems is that a lot of code goes into managing error scenarios. The NSO built-in transaction manager takes that burden away from the developer of the service application by providing automatic rollback of incomplete changes.

Another benefit of this approach is that NSO can automatically generate the northbound APIs and database schema from the YANG models, enabling a true DevOps way of working with service models. A new service model can be defined as part of a package and loaded into NSO. An existing service model can be modified and the package upgraded, and all northbound APIs and User Interfaces are automatically regenerated to reflect the new or updated models.

Embedded Erlang Applications

Start user-provided Erlang applications.

NSO is capable of starting user-provided Erlang applications embedded in the same Erlang VM as NSO.

The Erlang code is packaged into applications which are automatically started and stopped by NSO if they are located at the proper place. NSO will search all packages for top-level directories called erlang-lib. The structure of such a directory is the same as a standard lib directory in Erlang. The directory may contain multiple Erlang applications. Each one must have a valid .app file. See the Erlang documentation of application and app for more info.

An Erlang package skeleton can be created by making use of the ncs-make-package command:

    ncs-make-package --erlang-skeleton --erlang-application-name <appname> <package-name>

Multiple applications can be generated by using the option --erlang-application-name NAME multiple times with different names.

All application code should use the prefix ec_ for module names, application names, registered processes (if any), and named ets tables (if any), to avoid conflict with existing or future names used by NSO itself.

Erlang API

The Erlang API to NSO is implemented as an Erlang/OTP application called econfd. This application comes in two flavors. One is built into NSO to support applications running in the same Erlang VM as NSO. The other is a separate library which is included in source form in the NSO release, in the $NCS_DIR/erlang directory. Building econfd as described in the $NCS_DIR/erlang/econfd/README file will compile the Erlang code and generate the documentation.

This API can be used by applications written in Erlang in much the same way as the C and Java APIs are used, i.e. code running in an Erlang VM can use the econfd API functions to make socket connections to NSO for the data provider, MAAPI, CDB, etc. access. However, the API is also available internally in NSO, which makes it possible to run Erlang application code inside the NSO daemon, without the overhead imposed by the socket communication.

When the application is started, one of its processes should make initial connections to the NSO subsystems, register callbacks, etc. This is typically done in the init/1 function of a gen_server or similar. While the internal connections are made using the exact same API functions (e.g. econfd_maapi:connect/2) as for an application running in an external Erlang VM, any Address and Port arguments are ignored, and instead, standard Erlang inter-process communication is used.
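A sketch of such an init/1 follows (the state record is an assumption made for illustration; econfd_maapi:connect/2 is the API function named above, and its Address and Port arguments are ignored when the code runs inside NSO):

```erlang
%% Sketch: initial MAAPI connection from an embedded application's
%% gen_server init/1. Inside the NSO Erlang VM the Address and Port
%% arguments are ignored; standard Erlang inter-process communication
%% is used instead of a socket.
-record(state, {maapi}).

init([]) ->
    {ok, M} = econfd_maapi:connect({127, 0, 0, 1}, 4569),
    {ok, #state{maapi = M}}.
```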

There is little or no support for testing and debugging Erlang code executing internally in NSO since NSO provides a very limited runtime environment for Erlang to minimize disk and memory footprints. Thus the recommended method is to develop Erlang code targeted for this by using econfd in a separate Erlang VM, where an interactive Erlang shell and all the other development support included in the standard Erlang/OTP releases are available. When development and testing are completed, the code can be deployed to run internally in NSO without changes.

For information about the Erlang programming language and development tools, refer to www.erlang.org and the available books about Erlang (some are referenced on the website).

The --printlog option to ncs, which prints the contents of the NSO error log, is normally only useful for Cisco support and developers, but it may also be relevant for debugging problems with application code running inside NSO. The error log collects the events sent to the OTP error_logger, e.g. crash reports as well as info generated by calls to functions in the error_logger(3) module. Another possibility for primitive debugging is to run ncs with the --foreground option, where calls to io:format/2 etc will print to standard output. Printouts may also be directed to the developer log by using econfd:log/3.

While Erlang application code running in an external Erlang VM can use basically any version of Erlang/OTP, this is not the case for code running inside NSO, since the Erlang VM is evolving and provides limited backward/forward compatibility. To avoid incompatibility issues when loading the beam files, the Erlang compiler erlc should be of the same version as was used to build the NSO distribution.

NSO provides the VM, erlc and the kernel, stdlib, and crypto OTP applications.

Application code running internally in the NSO daemon can have an impact on the execution of the standard NSO code. Thus, it is critically important that the application code is thoroughly tested and verified before being deployed for production in a system using NSO.

Application Configuration

Applications may have dependencies on other applications. These dependencies affect the start order. If the dependent application resides in another package, this should be expressed by using the required package in the package-meta-data.xml file. Application dependencies within the same package should be expressed in the .app file. See below.
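For illustration, a minimal .app file might look as follows (the application and module names are hypothetical; note the ec_ naming convention and the NSO-specific env keys, which are described below):

```erlang
%% Hypothetical ec_example.app for an application under erlang-lib.
{application, ec_example,
 [{description, "Example embedded NSO application"},
  {vsn, "1.0"},
  {modules, [ec_example_app, ec_example_server]},
  {registered, [ec_example_server]},
  %% Same-package dependencies: started before this application.
  {applications, [kernel, stdlib]},
  {mod, {ec_example_app, []}},
  %% NSO-specific keys: start phase and restart impact.
  {env, [{ncs_start_phase, phase2},
         {ncs_restart_type, temporary}]}]}.
```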

The following config settings in the .app file are explicitly treated by NSO:

  • applications: A list of applications that need to be started before this application can be started. This info is used to compute a valid start order.

  • included_applications: A list of applications that are started on behalf of this application. This info is used to compute a valid start order.

  • env: A property list of [{Key,Val}] tuples. Besides other keys used by the application itself, a few predefined keys are used by NSO. The key ncs_start_phase is used by NSO to determine which start phase the application is to be started in. Valid values are early_phase0, phase0, phase1, phase1_delayed, and phase2. The default is phase1. If the application is not required in the early phases of startup, set ncs_start_phase to phase2 to avoid issues with NSO services being unavailable to the application. The key ncs_restart_type is used by NSO to determine the impact a restart of the application will have. This is the same as the restart_type() type in application. Valid values are permanent, transient, and temporary. The default is temporary.

Example

The examples.ncs/getting-started/developing-with-ncs/18-simple-service-erlang example in the bundled collection shows how to create a service written in Erlang and execute it internally in NSO. This Erlang example is a subset of the Java example examples.ncs/getting-started/developing-with-ncs/4-rfs-service.

Service Handling of Ambiguous Device Models

Perform handling of ambiguous device models.

When new NED versions with diverging XML namespaces are introduced, adaptations might be needed in the services for these new NEDs. But not necessarily; it depends on where in the specific NED models the ambiguities reside. Existing services might not refer to these parts of the model and in that case, they do not need any adaptations.

Finding out if and where services need adaptations can be non-trivial. An important exception is template services which check and point out ambiguities at load time (NSO startup). In Java or Python code this is harder and essentially falls back to code reviews and testing.

The changes in service code to handle ambiguities are straightforward but different for templates and code.

Template Services

In templates, there are two new processing instructions, if-ned-id and elif-ned-id. When the template specifies a node in an XML namespace where an ambiguity exists, the if-ned-id processing instruction is used to resolve that ambiguity.

The processing instruction else can be used in conjunction with if-ned-id and elif-ned-id to capture all other NED IDs.

For the nodes in the XML namespace where no ambiguities occur, these processing instructions are not necessary. For example:

    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device foreach="{apache-device}">
          <name>{current()}</name>
          <config>
            <?if-ned-id apache-nc-1.0:apache-nc-1.0?>
              <vhosts xmlns="urn:apache">
                <vhost>
                  <hostname>{/vhost}</hostname>
                  <doc-root>/srv/www/{/vhost}</doc-root>
                </vhost>
              </vhosts>
            <?elif-ned-id apache-nc-1.1:apache-nc-1.1?>
              <public xmlns="urn:apache">
                <vhosts>
                  <vhost>
                    <hostname>{/vhost}</hostname>
                    <aliases>{/vhost}.public</aliases>
                    <doc-root>/srv/www/{/vhost}</doc-root>
                  </vhost>
                </vhosts>
              </public>
            <?end?>
          </config>
        </device>
      </devices>
    </config-template>

Java Services

In Java, the service code must handle the ambiguities by testing the device's ned-id before setting the nodes and values for the diverging paths.

The ServiceContext class has a new convenience method, getNEDIdByDeviceName, which helps retrieve the ned-id from the device name string. For example:

        @ServiceCallback(servicePoint="websiteservice",
                         callType=ServiceCBType.CREATE)
        public Properties create(ServiceContext context,
                                 NavuNode service,
                                 NavuNode root,
                                 Properties opaque)
                                 throws DpCallbackException {

    ...

                    NavuLeaf elemName = elem.leaf(Ncs._name_);
                    NavuContainer md = root.container(Ncs._devices_).
                        list(Ncs._device_).elem(elemName.toKey());

                    String ipv4Str = baseIp + ((subnet<<3) + server);
                    String ipv6Str = "::ff:ff:" + ipv4Str;
                    String ipStr = ipv4Str;
                    String nedIdStr =
                        context.getNEDIdByDeviceName(elemName.valueAsString());
                    if ("webserver-nc-1.0:webserver-nc-1.0".equals(nedIdStr)) {
                        ipStr = ipv4Str;
                    } else if ("webserver2-nc-1.0:webserver2-nc-1.0"
                               .equals(nedIdStr)) {
                        ipStr = ipv6Str;
                    }

                    md.container(Ncs._config_).
                        container(webserver.prefix, webserver._wsConfig_).
                        list(webserver._listener_).
                        sharedCreate(new String[] {ipStr, ""+8008});

                    ms.list(lb._backend_).sharedCreate(
                        new String[]{baseIp + ((subnet<<3) + server++),
                                     ""+8008});
    ...

                return opaque;
            } catch (Exception e) {
                throw new DpCallbackException("Service create failed", e);
            }

        }

Python Services

In the Python API, there is likewise a need to handle ambiguities by checking the ned-id before setting the diverging paths. Use get_ned_id() from ncs.application to resolve NED IDs. For example:

    import ncs
    from ncs.application import Service

    def _get_device(service, name):
        dev_path = '/ncs:devices/ncs:device{%s}' % (name, )
        return ncs.maagic.cd(service, dev_path)

    class ServiceCallbacks(Service):
        @Service.create
        def cb_create(self, tctx, root, service, proplist):
            self.log.info('Service create(service=', service._path, ')')

            for name in service.apache_device:
                self.create_apache_device(service, name)

            template = ncs.template.Template(service)
            self.log.info(
                'applying web-server-template for device {}'.format(name))
            template.apply('web-server-template')
            self.log.info(
                'applying load-balancer-template for device {}'.format(name))
            template.apply('load-balancer-template')

        def create_apache_device(self, service, name):
            dev = _get_device(service, name)
            if 'apache-nc-1.0:apache-nc-1.0' == ncs.application.get_ned_id(dev):
                self.create_apache1_device(dev)
            elif 'apache-nc-1.1:apache-nc-1.1' == ncs.application.get_ned_id(dev):
                self.create_apache2_device(dev)
            else:
                raise Exception(
                    'unknown ned-id {}'.format(ncs.application.get_ned_id(dev)))

        def create_apache1_device(self, dev):
            self.log.info(
                'creating config for apache1 device {}'.format(dev.name))
            dev.config.ap__listen_ports.listen_port.create(("*", 8080))
            dev.config.ap__clash = dev.name

        def create_apache2_device(self, dev):
            self.log.info(
                'creating config for apache2 device {}'.format(dev.name))
            dev.config.ap__system.listen_ports.listen_port.create(("*", 8080))
            dev.config.ap__clash = dev.name

    NSO SNMP Agent

    Description of SNMP agent.

    The SNMP agent in NSO is used mainly for monitoring and notifications. It supports SNMPv1, SNMPv2c, and SNMPv3.

    The following standard MIBs are supported by the SNMP agent:

    • SNMPv2-MIB RFC 3418

    • SNMP-FRAMEWORK-MIB RFC 3411

    • SNMP-USER-BASED-SM-MIB RFC 3414

    • SNMP-VIEW-BASED-ACM-MIB RFC 3415

    • SNMP-COMMUNITY-MIB RFC 3584

    • SNMP-TARGET-MIB and SNMP-NOTIFICATION-MIB RFC 3413

    • SNMP-MPD-MIB RFC 3412

    • TRANSPORT-ADDRESS-MIB RFC 3419

    • SNMP-USM-AES-MIB RFC 3826

    • IPV6-TC RFC 2465

    The usmHMACMD5AuthProtocol authentication protocol and the usmDESPrivProtocol privacy protocol specified in SNMP-USER-BASED-SM-MIB are not supported, since they are not considered secure. The usmHMACSHAAuthProtocol authentication protocol specified in SNMP-USER-BASED-SM-MIB and the usmAesCfb128Protocol privacy protocol specified in SNMP-USM-AES-MIB are supported.

    Configuring the SNMP Agent

    The SNMP agent is configured through any of the normal NSO northbound interfaces. It is possible to control most aspects of the agent through, for example, the CLI.

    The YANG models describing all configuration capabilities of the SNMP agent reside under $NCS_DIR/src/ncs/snmp/snmp-agent-config/*.yang in the NSO distribution.

    An example session configuring the SNMP agent through the CLI may look like:

    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# snmp agent udp-port 3457
    admin@ncs(config)# snmp community public name foobaz
    admin@ncs(config-community-public)# commit
    Commit complete.
    admin@ncs(config-community-public)# top
    admin@ncs(config)# show full-configuration snmp
    snmp agent enabled
    snmp agent ip    0.0.0.0
    snmp agent udp-port 3457
    snmp agent version v1
    snmp agent version v2c
    snmp agent version v3
    snmp agent engine-id enterprise-number 32473
    snmp agent engine-id from-text testing
    snmp agent max-message-size 50000
    snmp system contact ""
    snmp system name ""
    snmp system location ""
    snmp usm local user initial
     auth sha password GoTellMom
     priv aes password GoTellMom
    !
    snmp target monitor
     ip       127.0.0.1
     udp-port 162
     tag      [ monitor ]
     timeout  1500
     retries  3
     v2c sec-name public
    !
    snmp community public
     name     foobaz
     sec-name public
    !
    snmp notify foo
     tag  monitor
     type trap
    !
    snmp vacm group initial
     member initial
      sec-model [ usm ]
     !
     access usm no-auth-no-priv
      read-view   internet
      notify-view internet
     !
     access usm auth-no-priv
      read-view   internet
      notify-view internet
     !
     access usm auth-priv
      read-view   internet
      notify-view internet
     !
    !
    snmp vacm group public
     member public
      sec-model [ v1 v2c ]
     !
     access any no-auth-no-priv
      read-view   internet
      notify-view internet
     !
    !
    snmp vacm view internet
     subtree 1.3.6.1
      included
     !
    !
    snmp vacm view restricted
     subtree 1.3.6.1.6.3.11.2.1
      included
     !
     subtree 1.3.6.1.6.3.15.1.1
      included
     !
    !

    The SNMP agent configuration data is stored in CDB as any other configuration data, but is handled as a transformation between the data shown above and the data stored in the standard MIBs.

    If you want to have a default configuration of the SNMP agent, you must provide that in an XML file. The initialization data of the SNMP agent is stored in an XML file that has precisely the same format as CDB initialization XML files, but it is not loaded by CDB, rather it is loaded at first startup by the SNMP agent. The XML file must be called snmp_init.xml and it must reside in the load path of NSO. In the NSO distribution, there is such an initialization file in $NCS_DIR/etc/ncs/snmp/snmp_init.xml. It is strongly recommended that this file be customized with another engine ID and other community strings and v3 users.

    If no snmp_init.xml file is found in the load path, a default configuration with the agent disabled is loaded. Thus, the easiest way to start NSO without the SNMP agent is to ensure that the directory $NCS_DIR/etc/ncs/snmp/ is not part of the NSO load path.

    Note that this only relates to initialization the first time NSO is started. On subsequent starts, all the SNMP agent configuration data is stored in CDB, and the snmp_init.xml file is never used again.

    Alarm MIB

    The NSO SNMP alarm MIB is designed for ease of use in alarm systems. It defines a table of alarms and SNMP alarm notifications corresponding to alarm state changes. Based on the alarm model in NSO (see NSO Alarms), the notifications as well as the alarm table contain the parameters that are required for alarm standards compliance (X.733 and 3GPP). The MIB files are located in $NCS_DIR/src/ncs/snmp/mibs.

    • TAILF-TOP-MIB.mib The tail-f enterprise OID.

    • TAILF-TC-MIB.mib Textual conventions for the alarm mib.

    • TAILF-ALARM-MIB.mib The actual alarm MIB.

    • IANA-ITU-ALARM-TC-MIB.mib Import of IETF mapping of X.733 parameters.

    • ITU-ALARM-TC-MIB.mib Import of IETF mapping of X.733 parameters.

    The alarm table has the following columns:

    • tfAlarmIndex An imaginary index for the alarm row that is persistent between restarts.

    • tfAlarmType This provides an identification of the alarm type and together with tfAlarmSpecificProblem forms a unique identification of the alarm.

    • tfAlarmDevice The alarming network device - can be NSO itself.

    • tfAlarmObject The alarming object within the device.

    • tfAlarmObjectOID In case the original alarm notification was an SNMP notification, this column identifies the alarming SNMP object.

    • tfAlarmObjectStr Name of the alarm object based on any other naming.

    • tfAlarmSpecificProblem This object is used when the tfAlarmType object cannot uniquely identify the alarm type.

    • tfAlarmEventType The event type according to X.733, based on the mapping of the alarm type in the NSO alarm model.

    • tfAlarmProbableCause The probable cause according to X.733, based on the mapping of the alarm type in the NSO alarm model. Note that you can configure this to match the probable cause values in the receiving alarm system.

    • tfAlarmOrigTime The time of the first occurrence of this alarm.

    • tfAlarmTime The time of the last state change of this alarm.

    • tfAlarmSeverity The latest severity (non-clear) reported for this alarm.

    • tfAlarmCleared Boolean indicating whether the latest state change reports a clear.

    • tfAlarmText The latest alarm text.

    • tfAlarmOperatorState The latest operator alarm state, such as ack.

    • tfAlarmOperatorNote The latest operator note.

    The MIB defines separate notifications for every severity level to support SNMP managers that only can map severity levels to individual notifications. Every notification contains the parameters of the alarm table.

    SNMP Object Identifiers

    Example: Object Identifiers

     tfAlarmMIB             node         1.3.6.1.4.1.24961.2.103
     tfAlarmObjects         node         1.3.6.1.4.1.24961.2.103.1
     tfAlarms               node         1.3.6.1.4.1.24961.2.103.1.1
     tfAlarmNumber          scalar       1.3.6.1.4.1.24961.2.103.1.1.1
     tfAlarmLastChanged     scalar       1.3.6.1.4.1.24961.2.103.1.1.2
     tfAlarmTable           table        1.3.6.1.4.1.24961.2.103.1.1.5
     tfAlarmEntry           row          1.3.6.1.4.1.24961.2.103.1.1.5.1
     tfAlarmIndex           column       1.3.6.1.4.1.24961.2.103.1.1.5.1.1
     tfAlarmType            column       1.3.6.1.4.1.24961.2.103.1.1.5.1.2
     tfAlarmDevice          column       1.3.6.1.4.1.24961.2.103.1.1.5.1.3
     tfAlarmObject          column       1.3.6.1.4.1.24961.2.103.1.1.5.1.4
     tfAlarmObjectOID       column       1.3.6.1.4.1.24961.2.103.1.1.5.1.5
     tfAlarmObjectStr       column       1.3.6.1.4.1.24961.2.103.1.1.5.1.6
     tfAlarmSpecificProblem column       1.3.6.1.4.1.24961.2.103.1.1.5.1.7
     tfAlarmEventType       column       1.3.6.1.4.1.24961.2.103.1.1.5.1.8
     tfAlarmProbableCause   column       1.3.6.1.4.1.24961.2.103.1.1.5.1.9
     tfAlarmOrigTime        column       1.3.6.1.4.1.24961.2.103.1.1.5.1.10
     tfAlarmTime            column       1.3.6.1.4.1.24961.2.103.1.1.5.1.11
     tfAlarmSeverity        column       1.3.6.1.4.1.24961.2.103.1.1.5.1.12
     tfAlarmCleared         column       1.3.6.1.4.1.24961.2.103.1.1.5.1.13
     tfAlarmText            column       1.3.6.1.4.1.24961.2.103.1.1.5.1.14
     tfAlarmOperatorState   column       1.3.6.1.4.1.24961.2.103.1.1.5.1.15
     tfAlarmOperatorNote    column       1.3.6.1.4.1.24961.2.103.1.1.5.1.16
     tfAlarmNotifications   node         1.3.6.1.4.1.24961.2.103.2
     tfAlarmNotifsPrefix    node         1.3.6.1.4.1.24961.2.103.2.0
     tfAlarmNotifsObjects   node         1.3.6.1.4.1.24961.2.103.2.1
     tfAlarmStateChangeText scalar       1.3.6.1.4.1.24961.2.103.2.1.1
     tfAlarmIndeterminate   notification 1.3.6.1.4.1.24961.2.103.2.0.1
     tfAlarmWarning         notification 1.3.6.1.4.1.24961.2.103.2.0.2
     tfAlarmMinor           notification 1.3.6.1.4.1.24961.2.103.2.0.3
     tfAlarmMajor           notification 1.3.6.1.4.1.24961.2.103.2.0.4
     tfAlarmCritical        notification 1.3.6.1.4.1.24961.2.103.2.0.5
     tfAlarmClear           notification 1.3.6.1.4.1.24961.2.103.2.0.6
     tfAlarmConformance     node         1.3.6.1.4.1.24961.2.103.10
     tfAlarmCompliances     node         1.3.6.1.4.1.24961.2.103.10.1
     tfAlarmCompliance      compliance   1.3.6.1.4.1.24961.2.103.10.1.1
     tfAlarmGroups          node         1.3.6.1.4.1.24961.2.103.10.2
     tfAlarmNotifs          group        1.3.6.1.4.1.24961.2.103.10.2.1
     tfAlarmObjs            group        1.3.6.1.4.1.24961.2.103.10.2.2

    Using the SNMP Alarm MIB

    Alarm managers should subscribe to the notifications and read the alarm table to synchronize the alarm list. To do this, you need an access view that matches the alarm MIB, and you need to create an SNMP target. The default SNMP settings in NSO let you read the alarm MIB with v2c and community public. A target is set up in the following way (assuming the SNMP alarm manager has IP address 192.168.1.1 and wants community string public in the v2c notifications):

    Example: Subscribing to SNMP Alarms

    $ ncs_cli -u admin -C
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# snmp notify monitor type trap tag monitor
    admin@ncs(config-notify-monitor)# snmp target alarm-system ip 192.168.1.1 udp-port 162 \
            tag monitor v2c sec-name public
    admin@ncs(config-target-alarm-system)# commit
    Commit complete.
    admin@ncs(config-target-alarm-system)# show full-configuration snmp target
    snmp target alarm-system
     ip       192.168.1.1
     udp-port 162
     tag      [ monitor ]
     timeout  1500
     retries  3
     v2c sec-name public
    !
    snmp target monitor
     ip       127.0.0.1
     udp-port 162
     tag      [ monitor ]
     timeout  1500
     retries  3
     v2c sec-name public
    !
    admin@ncs(config-target-alarm-system)#


    NSO Python VM

    Run your Python code using Python Virtual Machine (VM).

    NSO is capable of starting one or several Python VMs where Python code in user-provided packages can run.

    An NSO package containing a python directory will be considered to be a Python Package. By default, a Python VM will be started for each Python package that has a python-class-name defined in its package-meta-data.xml file. In this Python VM, the PYTHONPATH environment variable will be pointing to the python directory in the package.

    If any required package that is listed in the package-meta-data.xml contains a python directory, the path to that directory will be added to the PYTHONPATH of the started Python VM and thus its accompanying Python code will be accessible.

    Several Python packages can be started in the same Python VM if their corresponding package-meta-data.xml files contain the same python-package/vm-name.
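    As a plain-Python illustration of this grouping rule (the dictionary shape and the fallback to the package name are assumptions for this sketch, not NSO API):

    ```python
    from collections import defaultdict

    def group_into_vms(packages):
        """Illustrative sketch: packages sharing python-package/vm-name run in
        one Python VM; a package without a vm-name gets a VM of its own, here
        keyed by the package name."""
        vms = defaultdict(list)
        for pkg in packages:
            vms[pkg.get('vm-name') or pkg['name']].append(pkg['name'])
        return dict(vms)

    pkgs = [
        {'name': 'l3vpn', 'vm-name': 'shared-vm'},
        {'name': 'l2vpn', 'vm-name': 'shared-vm'},
        {'name': 'stats'},
    ]
    print(group_into_vms(pkgs))
    # → {'shared-vm': ['l3vpn', 'l2vpn'], 'stats': ['stats']}
    ```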

    A Python package skeleton can be created by making use of the ncs-make-package command:

    YANG Model

    The tailf-ncs-python-vm.yang submodule defines the python-vm container which, along with ncs.conf, is the entry point for controlling the NSO Python VM functionality. Study the content of the YANG model in the example below (The Python VM YANG Model). For a full explanation of all the configuration data, see the YANG file and man ncs.conf. The most important configuration parameters are described below.

    Note that some of the nodes beneath python-vm are by default invisible due to a hidden attribute. To make everything under python-vm visible in the CLI, two steps are required:

    1. First, the following XML snippet must be added to ncs.conf:

    2. Next, the unhide command may be used in the CLI session:

    The sanity-checks/self-assign-warning setting controls self-assignment warnings for Python services; the available modes are off, log, and alarm (the default). An example of a self-assignment:

    As several service invocations may run in parallel, self-assignment will likely cause difficult-to-debug issues. An alarm or a log entry will contain a warning and a keypath to the service instance that caused the warning. Example log entry:

    With the logging/level setting, the amount of logged information can be controlled. This is a global setting applied to all started Python VMs unless explicitly overridden for a particular VM. The levels correspond to the pre-defined Python levels in the Python logging module, ranging from level-critical to level-debug.

    Refer to the official Python documentation for the logging module for more information about the log levels.
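    As a rough illustration, the correspondence can be written out as a mapping (the authoritative enumeration is the py-log-level-type in the YANG model; the exact names used here are an assumption for this sketch):

    ```python
    import logging

    # Assumed mapping from NSO python-vm level names to standard Python
    # logging levels (illustrative only; consult tailf-ncs-python-vm.yang
    # for the authoritative enumeration).
    NSO_TO_PYTHON_LEVEL = {
        'level-critical': logging.CRITICAL,
        'level-error': logging.ERROR,
        'level-warning': logging.WARNING,
        'level-info': logging.INFO,
        'level-debug': logging.DEBUG,
    }
    ```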

    The logging/log-file-prefix defines the prefix part of the log file path used for the Python VMs. This prefix is appended with a Python VM-specific suffix based on the Python package name or the python-package/vm-name from the package-meta-data.xml file. The default prefix is logs/ncs-python-vm, so if, for example, a Python package named l3vpn is started, a logfile with the name logs/ncs-python-vm-l3vpn.log will be created.
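    The naming scheme described above can be expressed as a small helper (illustrative only):

    ```python
    def python_vm_log_file(prefix, vm_name):
        """Compute the per-VM log file path from the configured prefix and
        the package name (or vm-name), mirroring the naming scheme above."""
        return '{0}-{1}.log'.format(prefix, vm_name)

    print(python_vm_log_file('logs/ncs-python-vm', 'l3vpn'))
    # → logs/ncs-python-vm-l3vpn.log
    ```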

    The status/start and status/current lists contain operational data. The status/start command shows information about which Python classes, as declared in the package-meta-data.xml file, were started and whether the outcome was successful. The status/current command shows which Python classes are currently running in a separate thread. The latter assumes that the user-provided code cooperates by informing NSO about any threads started by the user code.

    The start and stop actions make it possible to start and stop a particular Python VM.

    Structure of the User-provided Code

    The package-meta-data.xml file must contain a component of type application with a python-class-name specified as shown in the example below.

    The component name (L3VPN Service in the example) is a human-readable name of this application component. It will be shown when doing show python-vm in the CLI. The python-class-name should specify the Python class that implements the application entry point. Note that it needs to be specified using Python's dot notation and should be fully qualified (given that PYTHONPATH points to the package's python directory).

    Study the excerpt of the directory listing from a package named l3vpn below.

    Look closely at the python directory above. Note that directly under this directory is another directory named after the package (l3vpn) that contains the user code. This is an important structural choice that eliminates the risk of code clashes between dependent packages (provided, of course, that all dependent packages follow the same pattern).

    As you can see, service.py is located according to the description above. There is also an (empty) __init__.py, which makes Python treat the l3vpn directory as a package.

    Note the _namespaces/l3vpn_ns.py file. It is generated from the l3vpn.yang model using the ncsc --emit-python command and contains constants representing the namespace and the various components of the YANG model, which the user code can import and make use of.

    The service.py file should include a class definition named Service, which acts as the component's entry point.

    Notice that there is also a file named upgrade.py, which holds the implementation of the upgrade component specified in the package-meta-data.xml excerpt above.

    The application Component

    The Python class specified in the package-meta-data.xml file will be started in a Python thread which we call a component thread. This Python class should inherit ncs.application.Application and should implement the methods setup() and teardown().

    NSO supports two different modes for executing the implementations of the registered callpoints, threading and multiprocessing.

    The default threading mode will use a single thread pool for executing the callbacks for all callpoints.

    The multiprocessing mode will start a subprocess for each callpoint. Depending on the user code, this can greatly improve the performance on systems with a lot of parallel requests, as a separate worker process will be created for each Service, Nano Service, and Action.

    The behavior is controlled by three factors:

    • callpoint-model setting in the package-meta-data.xml file.

    • Number of registered callpoints in the Application.

    • Operating System support for killing child processes when the parent exits.

    If the callpoint-model is set to multiprocessing, more than one callpoint is registered in the Application, and the Operating System supports killing child processes when the parent exits, NSO will enable multiprocessing mode.
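    The decision can be summarized as a predicate over the three factors above (a sketch, not actual NSO code):

    ```python
    def use_multiprocessing(callpoint_model, num_callpoints, os_kills_children):
        """Sketch of when multiprocessing mode is enabled, per the three
        factors described in the text: the package's callpoint-model
        setting, the number of registered callpoints, and operating system
        support for killing child processes when the parent exits."""
        return (callpoint_model == 'multiprocessing'
                and num_callpoints > 1
                and os_kills_children)
    ```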

    The Service class will be instantiated by NSO when started or whenever packages are reloaded. Custom initialization, such as registering service and action callbacks, should be done in the setup() method. If any cleanup is needed when NSO finishes or when packages are reloaded, it should be placed in the teardown() method.

    The log functions are named after the standard Python log levels, so in the example above the self.log object provides the functions debug, info, warning, error, and critical. Where to log, and at what level, is controlled from NSO.

    The upgrade Component

    The Python class specified in the upgrade section of package-meta-data.xml will be run by NSO in a separately started Python VM. The class must be instantiable using the empty constructor and it must have a method called upgrade as in the example below. It should inherit ncs.upgrade.Upgrade.

    Debugging of Python Packages

    Python code packages do not run with an attached console; the standard output from the Python VMs is collected and put into the common log file ncs-python-vm.log. Python compilation errors will also end up in this file.

    Normally the logging objects provided by the Python APIs are used. They are based on the standard Python logging module. This gives the possibility to control the logging if needed, e.g., getting a module local logger to increase logging granularity.
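    For example, using the standard logging module directly, a module-local logger can be given a finer level than the package-wide one (the logger names here are illustrative):

    ```python
    import logging

    logging.basicConfig(level=logging.INFO)

    pkg_log = logging.getLogger('l3vpn')          # package-wide logger
    mod_log = logging.getLogger('l3vpn.service')  # module-local child logger
    mod_log.setLevel(logging.DEBUG)               # more detail from this module only

    mod_log.debug('visible: this module logs at DEBUG')
    pkg_log.debug('suppressed: the package still logs at INFO')
    ```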

    The default logging level is set to info. For debugging purposes, it is very useful to increase the logging level:

    This sets the global logging level and will affect all started Python VMs. It is also possible to set the logging level for a single package (or multiple packages running in the same VM), which will take precedence over the global setting:

    The debugging output is printed to separate files for each package, and the log files are named ncs-python-vm-pkg_name.log.

    Log file output example for package l3vpn:

    Using Non-standard Python

    There are occasions where the standard Python installation is incompatible with, or not preferred for use with, NSO. In such cases, there are several ways to tell NSO to use another Python installation for starting a Python VM.

    By default NSO will use the file $NCS_DIR/bin/ncs-start-python-vm when starting a new Python VM. The last few lines in that file read:

    As seen above, NSO first looks for python3 and, if found, uses it to start the VM. If python3 is not found, NSO tries the command python instead. Below, we describe a couple of options for deciding which Python NSO should start.

    Configure NSO to Use a Custom Start Command (recommended)

    NSO can be configured to use a custom start command for starting a Python VM. This is done by first copying the file $NCS_DIR/bin/ncs-start-python-vm to a new file and then changing the last lines of that file to start the desired version of Python. After that, edit ncs.conf and configure the new file as the start command for a new Python VM. When ncs.conf has been changed, reload its content by executing the command ncs --reload.

    Example:

    Add the following snippet to ncs.conf:

    The new start-command will take effect upon the next restart or configuration reload.

    Changing the Path to python3 or python

    Another way of telling NSO to start a specific Python executable is to configure the environment so that executing python3 or python starts the desired Python. This may be done system-wide or can be made specific for the user running NSO.

    Updating the Default Start Command (not recommended)

    Changing the last line of $NCS_DIR/bin/ncs-start-python-vm is of course an option, but altering any of the installation files of NSO is discouraged.

    Caveats

    Using Multiprocessing

    Using the multiprocessing library from Python components, where the callpoint-model is set to threading, can cause unexpected disconnects from NSO if errors occur in the code executed by the multiprocessing library.

    As a workaround to this, either use multiprocessing as the callpoint-model or force the start method to be spawn by executing:

    Packages

    Run user code in NSO using packages.

    All user code that needs to run in NSO must be part of a package. A package is basically a directory of files with a fixed file structure. A package consists of code, YANG modules, custom Web UI widgets, etc., that are needed to add an application or function to NSO. Packages are a controlled way to manage the loading and versions of custom applications.

    A package is a directory where the package name is the same as the directory name. At the top level of this directory, a file called package-meta-data.xml must exist. The structure of that file is defined by the YANG model $NCS_DIR/src/ncs/yang/tailf-ncs-packages.yang. A package may also be a tar archive with the same directory layout. The tar archive can be either uncompressed with the suffix .tar, or gzip-compressed with the suffix .tar.gz or .tgz. The archive file should also follow one of two acceptable naming conventions. The first, introduced with CDM in NSO 5.1, is ncs-<ncs-version>-<package-name>-<package-version>.<suffix>, e.g., ncs-5.3-my-package-1.0.tar.gz.

    NSO Concurrency Model

    Learn how NSO enhances transactional efficiency with parallel transactions.

    From version 6.0, NSO uses the so-called 'optimistic concurrency', which greatly improves parallelism. With this approach, NSO avoids the need for serialization and a global lock to run user code which would otherwise limit the number of requests the system can process in a given time unit.

    Using this concurrency model, your code, such as a service mapping or custom validation code, can run in parallel, either with another instance of the same service or an entirely different service (or any other provisioning code, for that matter). As a result, the system can take better advantage of available resources, especially the additional CPU cores, making it a lot more performant.

    Optimistic Concurrency

    Transactional systems, such as NSO, must process each request in a way that preserves the so-called ACID properties, such as atomicity and isolation of requests. A traditional approach to ensuring this behavior is to use locking and apply requests or transactions one by one. The main downside is that requests are processed sequentially and may not fully utilize the available resources.
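    The general idea behind the optimistic alternative can be illustrated with a classic compare-and-swap retry loop (a generic sketch, not NSO internals): each transaction reads a version, computes its change without holding a lock, and simply retries if another transaction committed in the meantime.

    ```python
    import threading

    class Store:
        """Minimal optimistic store: a commit succeeds only if no other
        transaction has committed since this one read its snapshot."""
        def __init__(self, value=0):
            self.value, self.version = value, 0
            self._lock = threading.Lock()   # protects only the tiny commit step

        def read(self):
            return self.value, self.version

        def commit(self, new_value, read_version):
            with self._lock:
                if read_version != self.version:
                    return False            # conflict: somebody committed first
                self.value, self.version = new_value, self.version + 1
                return True

    def transact(store, fn):
        """Run fn optimistically: no lock is held while fn computes its
        result, and the transaction retries on conflict."""
        while True:
            value, version = store.read()
            if store.commit(fn(value), version):
                return store.value
    ```

    For example, `transact(store, lambda v: v + 1)` increments the stored value, retrying transparently if a concurrent transaction wins the race.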

    ncs-make-package --service-skeleton python <package-name>
    <hide-group>
       <name>debug</name>
    </hide-group>
    admin@ncs(config)# unhide debug
    admin@ncs(config)#
    class ServiceCallbacks(Service):
        @Service.create
        def cb_create(self, tctx, root, service, proplist):
            self.counter = 42
    <WARNING> ... Assigning to self is not thread safe: /mysrvc:mysrvc{2}
    Example: The Python VM YANG Model
    > yanger -f tree tailf-ncs-python-vm.yang
              
    submodule: tailf-ncs-python-vm (belongs-to tailf-ncs)
      +--rw python-vm
         +--rw sanity-checks
         |  +--rw self-assign-warning?   enumeration
         +--rw logging
         |  +--rw log-file-prefix?   string
         |  +--rw level?             py-log-level-type
         |  +--rw vm-levels* [node-id]
         |     +--rw node-id    string
         |     +--rw level      py-log-level-type
         +--rw status
         |  +--ro start* [node-id]
         |  |  +--ro node-id     string
         |  |  +--ro packages* [package-name]
         |  |     +--ro package-name    string
         |  |     +--ro components* [component-name]
         |  |        +--ro component-name    string
         |  |        +--ro class-name?       string
         |  |        +--ro status?           enumeration
         |  |        +--ro error-info?       string
         |  +--ro current* [node-id]
         |     +--ro node-id     string
         |     +--ro packages* [package-name]
         |        +--ro package-name    string
         |        +--ro components* [component-name]
         |           +--ro component-name    string
         |           +--ro class-names* [class-name]
         |              +--ro class-name    string
         |              +--ro status?       enumeration
         +---x stop
         |  +---w input
         |  |  +---w name    string
         |  +--ro output
         |     +--ro result?   string
         +---x start
            +---w input
            |  +---w name    string
            +--ro output
               +--ro result?   string
    Example: package-meta-data.xml Excerpt
    <component>
      <name>L3VPN Service</name>
      <application>
        <python-class-name>l3vpn.service.Service</python-class-name>
      </application>
    </component>
    <component>
      <name>L3VPN Service model upgrade</name>
      <upgrade>
        <python-class-name>l3vpn.upgrade.Upgrade</python-class-name>
      </upgrade>
    </component>
    Example: Python Package Directory Structure
    packages/
    +-- l3vpn/
        +-- package-meta-data.xml
        +-- python/
        |   +-- l3vpn/
        |       +-- __init__.py
        |       +-- service.py
        |       +-- upgrade.py
        |       +-- _namespaces/
        |           +-- __init__.py
        |           +-- l3vpn_ns.py
        +-- src
            +-- Makefile
            +-- yang/
                +-- l3vpn.yang
    Example: Component Class Skeleton
    import ncs
    
    class Service(ncs.application.Application):
        def setup(self):
            # The application class sets up logging for us. It is accessible
            # through 'self.log' and is a ncs.log.Log instance.
            self.log.info('Service RUNNING')
    
            # Service callbacks require a registration for a 'service point',
            # as specified in the corresponding data model.
            #
            self.register_service('l3vpn-servicepoint', ServiceCallbacks)
    
            # If we registered any callback(s) above, the Application class
            # took care of creating a daemon (related to the service/action point).
    
            # When this setup method is finished, all registrations are
            # considered done and the application is 'started'.
    
        def teardown(self):
            # When the application is finished (which would happen if NCS went
            # down, packages were reloaded or some error occurred) this teardown
            # method will be called.
    
            self.log.info('Service FINISHED')
    Example: Upgrade Class Example
    import ncs
    import _ncs
    
    
    class Upgrade(ncs.upgrade.Upgrade):
        """An upgrade 'class' that will be instantiated by NSO.
    
        This class can be named anything as long as NSO can find it using the
        information specified in <python-class-name> for the <upgrade>
        component in package-meta-data.xml.
    
    It should inherit ncs.upgrade.Upgrade.
    
    NSO will instantiate this class using the empty constructor.
        The class MUST have a method named 'upgrade' (as in the example below)
        which will be called by NSO.
        """
    
        def upgrade(self, cdbsock, trans):
            """The upgrade 'method' that will be called by NSO.
    
            Arguments:
            cdbsock -- a connected CDB data socket for reading current (old) data.
            trans -- a ncs.maapi.Transaction instance connected to the init
                     transaction for writing (new) data.
    
            There is no need to connect a CDB data socket to NSO - that part is
            already taken care of and the socket is passed in the first argument
            'cdbsock'. A session against the DB needs to be started though. The
            session doesn't need to be ended and the socket doesn't need to be
            closed - NSO will do that automatically.
    
            The second argument 'trans' is already attached to the init transaction
            and ready to be used for writing the changes. It can be used to create a
            maagic object if that is preferred. There's no need to detach or finish
            the transaction, and, remember to NOT apply() the transaction when work
            is finished.
    
            The method should return True (or None, which means that a return
            statement is not needed) if everything was OK.
            If something went wrong the method should return False or throw an
            error. The northbound client initiating the upgrade will be alerted
            with an error message.
    
            Anything written to stdout/stderr will end up in the general log file
            for various output from Python VMs. If not configured the file will
            be named ncs-python-vm.log.
            """
    
            # start a session against running
            _ncs.cdb.start_session2(cdbsock, ncs.cdb.RUNNING,
                                    ncs.cdb.LOCK_SESSION | ncs.cdb.LOCK_WAIT)
    
            # loop over a list and do some work
            num = _ncs.cdb.num_instances(cdbsock, '/path/to/list')
            for i in range(0, num):
                # read the key (which in this example is 'name') as a ncs.Value
                value = _ncs.cdb.get(cdbsock, '/path/to/list[{0}]/name'.format(i))
                # create a mandatory leaf 'level' (enum - low, normal, high)
                key = str(value)
                trans.set_elem('normal', '/path/to/list{{{0}}}/level'.format(key))
    
            # not really needed
            return True
    
            # Error return example:
            #
        # This indicates a failure and the string written to stdout below will
        # be written to the general log file for various output from Python VMs.
            #
            # print('Error: not implemented yet')
            # return False
        $ ncs_cli -u admin
        admin@ncs> config
        admin@ncs% set python-vm logging level level-debug
        admin@ncs% commit
        $ ncs_cli -u admin
        admin@ncs> config
        admin@ncs% set python-vm logging vm-levels pkg_name level level-debug
        admin@ncs% commit
        $ tail -f logs/ncs-python-vm-l3vpn.log
        2016-04-13 11:24:07 - l3vpn - DEBUG - Waiting for Json msgs
        2016-04-13 11:26:09 - l3vpn - INFO - action name: double
        2016-04-13 11:26:09 - l3vpn - INFO - action input.number: 21
            if [ -x "$(which python3)" ]; then
                echo "Starting python3 -u $main $*"
                exec python3 -u "$main" "$@"
            fi
            echo "Starting python -u $main $*"
            exec python -u "$main" "$@"
    $ cd $NCS_DIR/bin
    $ pwd
    /usr/local/nso/bin
    $ cp ncs-start-python-vm my-start-python-vm
    $ # Use your favourite editor to update the last lines of the new
    $ # file to start the desired Python executable.
    <python-vm>
        <start-command>/usr/local/nso/bin/my-start-python-vm</start-command>
    </python-vm>
    Example: Set Start Method to spawn
    if multiprocessing.get_start_method() != 'spawn':
        multiprocessing.set_start_method('spawn', force=True)
    The other convention is <package-name>-<package-version>.<suffix>, e.g., my-package-1.0.tar.gz.
    • package-name: should use letters and digits and may include underscores (_) or dashes (-), but no other punctuation; a digit may not immediately follow an underscore or dash.

    • package-version: should use numbers and dots (.).
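    The naming rules above can be captured with a regular expression (a best-effort reading; the exact pattern NSO enforces may differ, and requiring the name to start with a letter is an assumption):

    ```python
    import re

    # Letters and digits, with optional '_' or '-' separators, where a
    # digit may not immediately follow a separator (assumed interpretation
    # of the rules; illustrative only).
    PACKAGE_NAME_RE = re.compile(r'^[A-Za-z][A-Za-z0-9]*(?:[_-][A-Za-z][A-Za-z0-9]*)*$')

    def is_valid_package_name(name):
        return bool(PACKAGE_NAME_RE.match(name))
    ```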

    Package Model

    Packages are composed of components. The following types of components are defined: NED, Callback, Application, and Upgrade.

    The file layout of a package is:

    The package-meta-data.xml defines several important aspects of the package, such as the name, dependencies on other packages, the package's components, etc. This will be thoroughly described later in this chapter.

    When NSO starts, it needs to search for packages to load. The ncs.conf parameter /ncs-config/load-path defines a list of directories. At initial startup, NSO searches these directories for packages and copies the packages to a private directory tree in the directory defined by the /ncs-config/state-dir parameter in ncs.conf, and loads and starts all the packages found. All .fxs (compiled YANG files) and .ccl (compiled CLI spec files) files found in the directory load-dir in a package are loaded. On subsequent startups, NSO will by default only load and start the copied packages - see Loading Packages for different ways to get NSO to search the load path for changed or added packages.
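    A much-simplified version of the initial package search might look like this (the load-path directories and the top-level package-meta-data.xml criterion come from the text above; everything else is illustrative — NSO's actual logic also handles tar archives and the state-dir copy):

    ```python
    import os

    def find_packages(load_path):
        """Sketch: for each directory in the load path, collect
        subdirectories that contain a top-level package-meta-data.xml."""
        found = []
        for directory in load_path:
            for name in sorted(os.listdir(directory)):
                pkg_dir = os.path.join(directory, name)
                if os.path.isfile(os.path.join(pkg_dir, 'package-meta-data.xml')):
                    found.append(pkg_dir)
        return found
    ```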

    A package usually contains Java code. This Java code is loaded by a class loader in the NSO Java VM. A package that contains Java code must compile it so that the compilation results are divided into .jar files, where code that is supposed to be shared among multiple packages is compiled into one set of .jar files, and code that is private to the package itself is compiled into another set. The shared and the private jar files go into the shared-jar and private-jar directories, respectively. By putting, for example, the code for a specific service in a private jar, NSO can dynamically upgrade the service without affecting any other service.

    The optional webui directory contains the WEB UI customization files.

    An Example Package

    The NSO example collection for developers contains a number of small self-contained examples. The collection resides at $NCS_DIR/examples.ncs/getting-started/developing-with-ncs. Each of these examples defines a package. Let's take a look at some of these packages. The example 3-aggregated-stats has a package ./packages/stats. The package-meta-data.xml file for that package looks like this:

    The file structure in the package looks like this:

    The package-meta-data.xml File

    The package-meta-data.xml file defines the name of the package, additional settings, and one component. Its settings are defined by the $NCS_DIR/src/ncs/yang/tailf-ncs-packages.yang YANG model, where the package list name gets renamed to ncs-package. See the tailf-ncs-packages.yang module where all options are described in more detail. To get an overview, use the IETF RFC 8340-based YANG tree diagram.

    The order of the XML entries in a package-meta-data.xml must be in the same order as the model shown above.

    A sample package configuration is taken from the $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-vrouter example:

    Below is a brief list of the configurables in the tailf-ncs-packages.yang YANG model that applies to the metadata file. A more detailed description can be found in the YANG model:

    • name - the name of the package. All packages in the system must have unique names.

    • package-version - the version of the package. This is for administrative purposes only; NSO cannot simultaneously handle two versions of the same package.

    • ncs-min-version - the oldest known NSO version where the package works.

    • ncs-max-version - the latest known NSO version where the package works.

    • python-package - Python-specific package data.

      • vm-name - the Python VM name for the package. The default is the package name. Packages with the same vm-name run in the same Python VM. Applicable only when callpoint-model = threading.

    • directory - the path to the directory of the package.

    • templates - the templates defined by the package.

    • template-loading-mode - control if the templates are interpreted in strict or relaxed mode.

    • supported-ned-id - the list of ned-ids supported by this package. An example of the expected format taken from the $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-vrouter example:

    • supported-ned-id-match - the list of regular expressions for ned-ids supported by this package. Ned-ids in the system that matches at least one of the regular expressions in this list are added to the supported-ned-id list. The following example demonstrates how all minor versions with a major number of 1 of the router-nc NED can be added to a package's list of supported ned-ids:

    • required-package - a list of names of other packages that are required for this package to work.

    • component - Each package defines zero or more components.

    Components

    Each component in a package has a name. The names of all the components must be unique within the package. The YANG model for packages contains:

    Lots of additional information can be found in the YANG module itself. The mandatory choice that defines a component must be one of ned, callback, application, or upgrade. These are described below.

    Component Types

    NED

    A Network Element Driver component is used southbound of NSO to communicate with managed devices (described in Network Element Drivers (NEDs)). The easiest NED to understand is the NETCONF NED, which is built into NSO.

    There are four different types of NEDs:

    • NETCONF: used for NETCONF-enabled devices such as Juniper routers, ConfD-powered devices, or any device that speaks proper NETCONF and also has YANG models. Plenty of packages in the NSO example collection have NETCONF NED components, for example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network/packages/router.

    • SNMP: Used for SNMP devices.

      The example $NCS_DIR/examples.ncs/snmp-ned/basic has a package that has an SNMP NED component.

    • CLI: used for CLI devices. The package $NCS_DIR/packages/neds/cisco-ios is an example of a package that has a CLI NED component.

    • Generic: used for generic NED devices. The example $NCS_DIR/examples.ncs/generic-ned/xmlrpc-device has a package called xml-rpc which defines a NED component of type generic.

    A CLI NED and a generic NED component must also come with additional user-written Java code, whereas a NETCONF NED and an SNMP NED have no Java code.

    Callback

    This defines a component with one or many Java classes that implement callbacks using the Java callback annotations.

    If we look at the components in the stats package above we have:

    The Stats class here implements a read-only data provider. See DP API.

    The callback type of component is used for a wide range of callback-type Java applications, one of the most important being the Service Callbacks. The following list of Java callback annotations applies to callback components.

    • ServiceCallback to implement service-to-device mappings. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service, and see Developing NSO Services for a thorough introduction to services.

    • ActionCallback to implement user-defined tailf:actions or YANG RPCs. See the example: $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/2-actions.

    • DataCallback to implement the data getters and setters for a data provider. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/3-aggregated-stats.

    • TransCallback to implement the transaction portions of a data provider callback. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/3-aggregated-stats.

    • DBCallback to implement an external database. See the example: $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/6-extern-db.

    • SnmpInformResponseCallback to implement an SNMP listener - See the example $NCS_DIR/examples.ncs/snmp-notification-receiver.

    • TransValidateCallback, ValidateCallback to implement a user-defined validation hook that gets invoked on every commit.

    • AuthCallback to implement a user hook that gets called whenever a user is authenticated by the system.

    • AuthorizationCallback to implement an authorization hook that allows/disallows users to do operations and/or access data. Note, that this callback should normally be avoided since, by nature, invoking a callback for any operation and/or data element is a performance impairment.

    A package that has a callback component usually has some YANG code and then also some Java code that relates to that YANG code. By convention, the YANG and the Java code reside in a src directory in the component. When the source of the package is built, any resulting .fxs files (compiled YANG files) must reside in the load-dir of the package, and any resulting Java compilation results must reside in the shared-jar and private-jar directories. Study the 3-aggregated-stats example to see how this is achieved.

    Application

    Used for Java applications that do not fit into the callback type. Typically, this is functionality that should run in separate threads and work autonomously.

    The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/1-cdb contains three components that are of type application. These components must also contain a java-class-name element. For application components, that Java class must implement the ApplicationComponent Java interface.

    Upgrade

    Used to migrate data for packages where the YANG model has changed and the automatic CDB upgrade is not sufficient. The upgrade component consists of a Java class with a main method that is expected to run only once.

    The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service illustrates user CDB upgrades using upgrade components.

    Creating Packages

    NSO ships with a tool ncs-make-package that can be used to create packages. Package Development discusses in depth how to develop a package.

    Creating a NETCONF NED Package

    This use case applies if we have a set of YANG files that define a managed device. If we wish to develop an EMS solution for an existing device and that device has YANG files and also speaks NETCONF, we need to create a package for that device to be able to manage it. Assuming all YANG files for the device are stored in ./acme-router-yang-files, we can create a package for the router as:

The above command creates a package called acme in ./acme. The acme package can be used for two things: managing real acme routers, and serving as input to the ncs-netsim tool to simulate a network of acme routers.

In the first case, managing real acme routers, all we need to do is put the newly generated package in the load path of NSO, start NSO with a package reload (see Loading Packages), and then add one or more acme routers as managed devices. The ncs-setup tool can be used to do this.

An ncs-setup invocation such as ncs-setup --package ./acme --dest ./ncs-project generates a directory ./ncs-project that is suitable for running NSO. Assume we have an existing router at the IP address 10.2.3.4 and that we can log into that router over the NETCONF interface using the username bob and password secret. Setting up NSO to manage this router is then a matter of adding it to the NSO device list with its address, port, and authentication credentials.

We can also use the newly generated acme package to simulate a network of acme routers, which is especially useful during development. The ncs-netsim tool can create such a simulated network with its create-network command, e.g., ncs-netsim create-network ./acme 3 acme.

Finally, ncs-setup (with its --netsim-dir option) can be used to initialize an environment where NSO manages all devices in an ncs-netsim network.

    Creating an SNMP NED Package

    Similarly, if we have a device that has a set of MIB files, we can use ncs-make-package to generate a package for that device. An SNMP NED package can, similarly to a NETCONF NED package, be used to both manage real devices and also be fed to ncs-netsim to generate a simulated network of SNMP devices.

Assuming we have a set of MIB files in ./mibs, we can generate a package for a device with those MIBs using the --snmp-ned option of ncs-make-package.

    Creating a CLI NED Package or a Generic NED Package

For CLI NEDs and Generic NEDs, we cannot (yet) generate the package. The best option for such packages is usually to start with one of the examples. A good starting point for a CLI NED is $NCS_DIR/packages/neds/cisco-ios, and a good starting point for a Generic NED is the example $NCS_DIR/examples.ncs/generic-ned/xmlrpc-device.

    Creating a Service Package or a Data Provider Package

The ncs-make-package tool can also generate empty skeleton packages for a data provider or a simple service, using the --data-provider-skeleton and --service-skeleton flags, respectively.

Alternatively, one of the examples can be modified to provide a good starting point, for example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service.

    Optimistic concurrency, on the other hand, allows transactions to run in parallel. It works on the premise that data conflicts are rare, so most of the time the transactions can be applied concurrently and will retain the required properties. NSO ensures this by checking that there are no conflicts with other transactions just before each transaction is committed. In particular, NSO will verify that all the data accessed as part of the transaction is still valid when applying changes. Otherwise, the system will reject the transaction.

    Such a model makes sense because a lot of the time concurrent transactions deal with separate sets of data. Even if multiple transactions share some data in a read-only fashion, it is fine as they still produce the same result.

    Nonconflicting Concurrent Transactions

    In the figure, svc1 in the T1 transaction and svc2 in the T2 transaction both read (but do not change) the same, shared piece of data and can proceed as usual, unperturbed.

On the other hand, a conflict occurs when a piece of data that has been read by one transaction is changed by another transaction before the first transaction is committed. In this case, by the time the first transaction completes, it is already working with stale data and must be rejected, as the following figure shows.

    Conflicting Concurrent Transactions

In the figure, the transaction T1 reads dns-server to use in the provisioning of svc1, but transaction T2 changes the dns-server value in the meantime. The two transactions conflict, and T1 is rejected because T2 completed first.

To be precise, for a transaction to experience a conflict, both of the following have to be true:

    1. It reads some data that is changed after being read and before the transaction is completed.

    2. It commits a set of changes in NSO.

This means a set of read-only transactions, or transactions where nothing is changed, will never conflict. It is also possible that multiple write-only transactions won't conflict, even when they update the same data nodes.

Allowing multiple concurrent transactions to write (and only write, not read) the same data without conflict may seem odd at first. But a transaction that never reads a value does not depend on its current state. Had the value changed the previous day instead, the transaction would have done the exact same thing, and you wouldn't consider that a conflict. So, the last write wins, regardless of the time elapsed between the two transactions.
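These rules can be sketched as a simple predicate over per-transaction read and write sets. The following is a pure-Python illustration of the principle, not how NSO is implemented:

```python
# A transaction records what it read and what it intends to write. At
# commit time, it is rejected only if (1) it writes something and (2) a
# path it read was meanwhile changed by an already-committed transaction.
class Txn:
    def __init__(self):
        self.reads = set()   # paths read by this transaction
        self.writes = {}     # path -> new value

def conflicts(txn, paths_committed_by_others):
    if not txn.writes:               # changes nothing: never a conflict
        return False
    return bool(txn.reads & paths_committed_by_others)

# T1 read /dns-server, then another transaction committed a change to it:
# T1 is now working with stale data and must be rejected.
t1 = Txn()
t1.reads.add("/dns-server")
t1.writes["/svc1"] = "provisioned"
assert conflicts(t1, {"/dns-server"})

# A write-only transaction touching the same leaf is fine: last write wins.
t3 = Txn()
t3.writes["/dns-server"] = "10.1.1.138"
assert not conflicts(t3, {"/dns-server"})
```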

    It is extremely important that you do not mix multiple transactions, because it will prevent NSO from detecting conflicts properly. For example, starting multiple separate transactions and using one to write data, based on what was read from a different one, can result in subtle bugs that are hard to troubleshoot.

    While the optimistic concurrency model allows transactions to run concurrently most of the time, ultimately some synchronization (a global lock) is still required to perform the conflict checks and serialize data writes to the CDB and devices. The following figure shows everything that happens after a client tries to apply a configuration change, including acquiring and releasing the lock. This process takes place, for example, when you enter the commit command on the NSO CLI or when a PUT request of the RESTCONF API is processed.

    Stages of a Transaction Commit

    As the figure shows (and you can also observe it in the progress trace output), service mapping, validation, and transforms all happen in the transaction before taking a (global) transaction lock.

    At the same time, NSO tracks all of the data reads and writes from the start of the transaction, right until the lock and conflict check. This includes service mapping callbacks and XML templates, as well as transform and custom validation hooks if you are using any. It even includes reads done as part of the YANG validation and rollback creation that NSO performs automatically.

    If reads do not overlap with writes from other transactions, the conflict check passes. The change is written to the CDB and disseminated to the affected network devices, through the prepare and commit phases. Kickers and subscribers are called and, finally, the global lock can be released.

    On the other hand, if there is overlap and the system detects a conflict, the transaction obviously cannot proceed. To recover if this happens, the transaction should be retried. Sometimes the system can do it automatically and sometimes the client itself must be prepared to retry it.

    An ingenious developer might consider avoiding the need for retries by using explicit locking, in the way the NETCONF lock command does. However, be aware that such an approach is likely to significantly degrade the throughput of the whole system and is discouraged. If explicit locking is required, it should be considered with caution and sufficient testing.

    In general, what affects the chance of conflict is the actual data that is read and written by each transaction. So, if there is more data, the surface for potential conflict is bigger. But you can minimize this chance by accounting for it in the application design.

    Identifying Conflicts

When a transaction conflict occurs, NSO logs an entry in the developer log, often found at logs/devel.log or a similar path. Suppose you have Python code that opens a write transaction t, reads the value of the /mysvc-dns leaf into a dns_server variable, and then calls t.apply().

If the /mysvc-dns leaf changes while the code is executing, the t.apply() call fails and the conflict is recorded in the developer log.

In a typical entry, one transaction (say, with id 3347) reads the value of /mysvc-dns as “10.1.2.2”, but that value was changed by another transaction (with id 3346) to “10.1.1.138” by the time the first transaction called t.apply(). The entry also contains additional data, such as the user that initiated the other transaction and the low-level operations that resulted in the conflict.

At the same time, the Python code raises an ncs.error.Error exception, with confd_errno set to the value of ncs.ERR_TRANSACTION_CONFLICT and a corresponding error text.

    In Java code, a matching com.tailf.conf.ConfException is thrown, with errorCode set to the com.tailf.conf.ErrorCode.ERR_TRANSACTION_CONFLICT value.

A thing to keep in mind when examining conflicts is that the transaction that performed the read operations is the one that gets the error and causes the log entry, while the other transaction, which performed the write operations to the same path, has already completed successfully.

The error includes a reference to the work phase, which tells you which part of the transaction encountered the conflict. The work phase signifies changes in an open transaction before it is applied. In practice, this is a direct read in the code that started the transaction, before calling the apply() or applyTrans() function: in the example, the read of the /mysvc-dns value into dns_server.

On the other hand, if two transactions configure two service instances and the conflict arises in the mapping code, the phase shows transform instead. It is also possible for a conflict to occur in more than one place, such as the phase transform,work, denoting a conflict in both the service mapping code and the initial transaction.

    The complete list of conflict sources, that is, the possible values for the phase, is as follows:

    • work: read in an open transaction before it is applied

    • rollback: read during rollback file creation

    • pre-transform: read while validating service input parameters according to the service YANG model

    • transform: read during service (FASTMAP) or another transform invocation

    • validation: read while validating the final configuration (YANG validation)

    For example, pre-transform indicates that the service YANG model validation is the source of the conflict. This can help tremendously when you try to narrow down the conflicting code in complex scenarios. In addition, the phase information is useful when you troubleshoot automatic transaction retries in case of conflict: when the phase includes work, automatic retry is not possible.

    Automatic Retries

    In some situations, NSO can retry a transaction that first failed to apply due to a conflict. A prerequisite is that NSO knows which code caused the conflict and that it can run that code again.

    Changes done in the work phase are changes made directly by an external agent, such as a Python script connecting to the NSO or a remote NETCONF client. Since NSO is not in control of and is not aware of the logic in the external agent, it can only reject the conflicting transaction.

    However, for the phases that follow the work phase, all the logic is implemented in NSO and NSO can run it on demand. For example, NSO is in charge of calling the service mapping code and the code can be run as many times as needed (a requirement for service re-deploy and similar). So, in case of a conflict, NSO can rerun all of the necessary logic to provision or de-provision a service.

NSO keeps checkpoints for each transaction so that it can restart from the conflicting phase and, if possible, avoid redoing the work of the preceding phases. NSO automatically skips a checkpoint if the transaction's read or write set grows too large; this allows larger transactions to go through without exhausting memory. When all checkpoints are skipped, no transaction retries are possible and the transaction fails. When later-stage checkpoints are skipped, the transaction retry takes more time.

Moreover, in the case of conflicts during service mapping, NSO optimizes the process even further: it tracks the conflicting services so as not to schedule them concurrently in the future. This automatic retry behavior is enabled by default.

For services, retries can be configured further, or even disabled, under /services/global-settings. You can also find the service conflicts NSO knows about by running the show services scheduling conflict command.

Since a given service may not always conflict and can evolve over time, NSO reverts to default scheduling after an expiry time, unless new conflicts occur.

Sometimes, you know in advance that a service will conflict, either with itself or with another service. You can encode this information in the service YANG model using the conflicts-with parameter under the servicepoint definition.

    The parameter ensures that NSO will never schedule and execute this service concurrently with another service using the specified servicepoint. It adds a non-expiring static scheduling conflict entry. This way, you can avoid the unnecessary occasional retry when the dynamic scheduling conflict entry expires.

    Declaring a conflict with itself is especially useful when you have older, non-thread-safe service code that cannot be easily updated to avoid threading issues.

For the NSO CLI and JSON-RPC (WebUI) interfaces, a commit of a transaction that results in a conflict triggers an automatic rebase and retry when the resulting configuration is the same despite the conflict. If the rebase does not resolve the conflict, the transaction fails; in some CLI cases, the conflict can then be resolved manually. A successful automatic rebase and retry is recorded in the developer log (at trace log level).

    Handling Conflicts

    When a transaction fails to apply due to a read-write conflict in the work phase, NSO rejects the transaction and returns a corresponding error. In such a case, you must start a new transaction and redo all the changes.

Why is this necessary? Suppose you have code, let's say as part of a CDB subscriber or a standalone program, that reads a configuration flag such as mysvc-use-dhcp and provisions the device accordingly.
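The pattern can be sketched in plain Python. The read helper and the provisioning strings below are hypothetical stand-ins, not the NSO API:

```python
def provision(read_leaf):
    # read_leaf stands in for reading a leaf in the open transaction.
    # The branch taken depends entirely on the value read here.
    if read_leaf("/mysvc-use-dhcp"):
        return "configure DHCP client"
    return "configure static address"

# If /mysvc-use-dhcp flips between this read and the final commit, the
# chosen branch no longer matches the datastore, so NSO must reject the
# transaction rather than commit a half-matching configuration.
print(provision(lambda path: True))
```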

    If mysvc-use-dhcp has one value when your code starts provisioning but is changed mid-process, your code needs to restart from the beginning or you can end up with a broken system. To guard against such a scenario, NSO needs to be conservative and return an error.

    Since there is a chance of a transaction failing to apply due to a conflict, robust code should implement a retry scheme. You can implement the retry algorithm yourself, or you can use one of the provided helpers.
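If you implement the retry algorithm yourself, the control flow might look like this sketch. ConflictError is a hypothetical stand-in; real code would catch ncs.error.Error and check for ncs.ERR_TRANSACTION_CONFLICT:

```python
class ConflictError(Exception):
    """Stand-in for the error raised when apply() hits a conflict."""

def apply_with_retry(do_work, max_retries=10):
    # Re-run the whole unit of work in a fresh transaction on each
    # conflict; partial state from a failed attempt must never be reused.
    for _ in range(max_retries):
        try:
            return do_work()      # open transaction, read, write, apply
        except ConflictError:
            continue              # start over with fresh reads
    raise ConflictError("retries exhausted")

# Simulate work that conflicts twice before succeeding.
attempts = []
def work():
    attempts.append(1)
    if len(attempts) < 3:
        raise ConflictError()
    return "applied"

assert apply_with_retry(work) == "applied"
assert len(attempts) == 3
```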

In Python, the Maapi class has a run_with_retry() method, which creates a new transaction and calls a user-supplied function to perform the work. On conflict, run_with_retry() recreates the transaction and calls the user function again. For details, see the relevant API documentation.

The same functionality is available in Java as the Maapi.ncsRunWithRetry() method. It differs from the Python implementation in that it expects the operation to be implemented in a MaapiRetryableOp object.

    As an alternative option, available only in Python, you can use the retry_on_conflict() function decorator.

    Example code for each of these approaches is shown next. In addition, the examples.ncs/development-guide/concurrency-model/retry example showcases this functionality as part of a concrete service.

    Example Retrying Code in Python

Suppose you have Python code that establishes its own write transaction, reads some data from NSO, and writes configuration based on what it read.

    Since the code performs reads and writes of data in NSO through a newly established transaction, there is a chance of encountering a conflict with another, concurrent transaction.

    On the other hand, if this was a service mapping code, you wouldn't be creating a new transaction yourself because the system would already provide one for you. You wouldn't have to worry about the retry because, again, the system would handle it for you through the automatic mechanism described earlier.

    Yet, you may find such code in CDB subscribers, standalone scripts, or action implementations. As a best practice, the code should handle conflicts.

If you have an existing ncs.maapi.Maapi object already available, the simplest option might be to refactor the actual logic into a separate function and call it through run_with_retry().

If the new function is not entirely independent and needs additional values passed as parameters, you can wrap it inside an anonymous (lambda) function.

An alternative implementation with a decorator is also possible and might be easier if the code relies on single_write_trans() or a similar function. Here, the code does not change unless it has to be refactored into a separate function; the function is then adorned with the @ncs.maapi.retry_on_conflict() decorator.

The major benefit of this approach is when the code is already in a function and only a decorator needs to be added. It can also be used with methods of the Action class and the like.

For actions in particular, note that the order of decorators is important and that the decorator is only useful when you start your own write transaction in the wrapped function (which is what single_write_trans() does), because the old transaction cannot be used any longer in case of a conflict.
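The decorator's behavior can be sketched in plain Python. This mirrors what @ncs.maapi.retry_on_conflict() does but is not the real implementation; ConflictError stands in for the real conflict exception:

```python
import functools

class ConflictError(Exception):
    """Stand-in for ncs.error.Error with ERR_TRANSACTION_CONFLICT."""

def retry_on_conflict(retries=10):
    # Rerun the wrapped function on conflict; the function is expected to
    # open a fresh write transaction on each invocation.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for _ in range(retries):
                try:
                    return fn(*args, **kwargs)
                except ConflictError:
                    continue
            raise ConflictError("retries exhausted")
        return wrapper
    return decorator

calls = []

@retry_on_conflict()
def do_provisioning():
    # Real code would call ncs.maapi.single_write_trans() here and apply
    # its changes; this mock conflicts once, then succeeds.
    calls.append(1)
    if len(calls) < 2:
        raise ConflictError()
    return "done"

assert do_provisioning() == "done"
assert calls == [1, 1]
```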

    Example Retrying Code in Java

Suppose you have Java code that starts its own transaction, reads some data from NSO, and writes configuration based on what it read.

    To read and write some data in NSO, the code starts a new transaction with the help of NavuContext.startRunningTrans() but could have called Maapi.startTrans() directly as well. Regardless of the way such a transaction is started, there is a chance of encountering a read-write conflict. To handle those cases, the code can be rewritten to use Maapi.ncsRunWithRetry().

The ncsRunWithRetry() call creates and manages a new transaction, then delegates the work to an object implementing the com.tailf.maapi.MaapiRetryableOp interface. So, you need to move the code that does the work into a new class, let's say MyProvisioningOp.

This class no longer starts its own transaction but uses the transaction handle tid provided by the ncsRunWithRetry() wrapper.

You can create MyProvisioningOp as an inner or nested class if you wish, but note that, depending on your code, you may need to make it a static class to use it directly as shown here.

If the code requires some extra parameters when called, you can also define additional properties on the new class and use them for this purpose. With the new class ready, you instantiate it and call into it with the ncsRunWithRetry() function.

    And what if your use case requires you to customize how the transaction is started or applied? ncsRunWithRetry() can take additional parameters that allow you to control those aspects. Please see the relevant API documentation for the full reference.

    Designing for Concurrency

    In general, transaction conflicts in NSO cannot be avoided altogether, so your code should handle them gracefully with retries. Retries are required to ensure correctness but do take up additional time and resources. Since a high percentage of retries will notably decrease the throughput of the system, you should endeavor to construct your data models and logic in a way that minimizes the chance of conflicts.

    A conflict arises when one transaction changes a value that one or more other ongoing transactions rely on. From this, you can make a couple of observations that should help guide your implementation.

    First, if the shared data changes infrequently, it will rarely cause a conflict (regardless of the number of reads) because it only affects the transactions happening at the time it is changed. Conversely, a frequent change can clash with other transactions much more often and warrants spending some effort to analyze and possibly make conflict-free.

    Next, if a transaction runs a long time, a greater number of other write transactions can potentially run in the meantime, increasing the chances of a conflict. For this reason, you should avoid long-running read-write transactions.

    Likewise, the more data nodes and the different parts of the data tree the transaction touches, the more likely it is to run into a conflict. Limiting the scope and the amount of the changes to shared data is an important design aspect.

Also, when considering possible conflicts, you must account for all the changes in the transaction, including changes propagated to other parts of the data model through dependencies. For example, in a model where every mysvc list item has a when statement referencing a provision-dns leaf, changing that single provision-dns leaf also changes every mysvc list item.

    Ultimately, what matters is the read-write overlap with other transactions. Thus, you should avoid needless reads in your code: if there are no reads of the changed values, there can't be any conflicts.

    Avoiding Needless Reads

    A technique used in some existing projects, in service mapping code and elsewhere, is to first prepare all the provisioning parameters by reading a number of things from the CDB. But some of these parameters, or even most, may not really be needed for that particular invocation.

    Consider the following service mapping code:
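A sketch of such eager-reading mapping code follows. The paths, the read helper, and the read log are illustrative; a real service would read through its maagic objects:

```python
reads = []  # track which paths the mapping code reads

def read_leaf(path):
    reads.append(path)
    data = {"/ntp-servers": ["10.0.0.1", "10.0.0.2"],
            "/dns-server": "10.1.1.1"}
    return data[path]

def configure_service(do_ntp):
    # Every parameter is read up front, whether or not it is needed.
    ntp_servers = read_leaf("/ntp-servers")   # needless read if NTP is off
    dns_server = read_leaf("/dns-server")
    config = {"dns": dns_server}
    if do_ntp:
        config["ntp"] = ntp_servers[0]
    return config

configure_service(do_ntp=False)
assert "/ntp-servers" in reads   # read even though NTP is disabled
```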

    Here, a service performs NTP configuration when enabled through the do_ntp switch. But even if the switch is off, there are still a lot of reads performed. If one of the values changes during provisioning, such as the list of the available NTP servers in ntp_servers, it will cause a conflict and a retry.

    An improved version of the code only calculates the NTP server value if it is actually needed:
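A sketch of the improved, lazy-reading variant, with the same kind of illustrative helpers (not the real NSO API):

```python
reads = []  # track which paths the mapping code reads

def read_leaf(path):
    reads.append(path)
    data = {"/ntp-servers": ["10.0.0.1", "10.0.0.2"],
            "/dns-server": "10.1.1.1"}
    return data[path]

def configure_service(do_ntp):
    config = {"dns": read_leaf("/dns-server")}
    if do_ntp:
        # The NTP server list is only read when actually needed, so a
        # concurrent change to it cannot conflict with this invocation.
        config["ntp"] = read_leaf("/ntp-servers")[0]
    return config

configure_service(do_ntp=False)
assert "/ntp-servers" not in reads   # no read, hence no possible conflict
```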

    Handling Dependent Services

Another thing to consider, in addition to the individual service implementation, is the placement and interaction of the service within the system. What happens if one service is used to generate input for another service? If the two services run concurrently, the writes of the first service will invalidate the reads of the other, pretty much guaranteeing a conflict. It is then wasteful to run both services concurrently; they should really run serially.

    A way to achieve this is through a design pattern called stacked services. You create a third service that instantiates the first service (generating the input data) before the second one (dependent on the generated data).

    Searching and Enumerating Lists

    When there is a need to search or filter a list for specific items, you will often find for-loops or similar constructs in the code. For example, to configure NTP, you might have the following:
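Such a search might be sketched like this (the server list and its active flag are hypothetical):

```python
# Hypothetical list of NTP servers; finding the first active one means
# visiting (reading) list items one by one, and every item when there is
# no match at all.
ntp_servers = [
    {"name": "ntp1", "active": False},
    {"name": "ntp2", "active": True},
    {"name": "ntp3", "active": False},
]

def first_active(servers):
    for server in servers:      # each iteration is a read of a list item
        if server["active"]:
            return server["name"]
    return None

assert first_active(ntp_servers) == "ntp2"
```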

    This approach is especially prevalent in ordered-by-user lists since the order of the items and their processing is important.

    The interesting bit is that such code reads every item in the list. If the list is changed while the transaction is ongoing, you get a conflict with the message identifying the get_next operation (which is used for list traversal). This is not very surprising: if another active item is added or removed, it changes the result of your algorithm. So, this behavior is expected and desirable to ensure correctness.

    However, you can observe the same conflict behavior in less obvious scenarios. If the list model contains a unique YANG statement, NSO performs the same kind of enumeration of list items for you to verify the unique constraint. Likewise, a must or when statement can also trigger the evaluation of every item during validation, depending on the XPath expression.

NSO distinguishes between access to specific list items by key value, where it tracks reads of only those particular items, and enumeration of the list, where no key value is supplied and the list with all its elements is treated as a single item. This works for your code as well as for XPath expressions (in YANG and elsewhere). As you can imagine, adding or removing items doesn't cause conflicts in the first case, while in the second one, it does.

    In the end, it depends on the situation whether list enumeration can affect throughput or not. In the example, the NTP servers could be configured manually, by the operator, so they would rarely change, making it a non-issue. But your use case might differ.

    Python Assigning to Self

    As several service invocations may run in parallel, Python self-assignment in service handling code can cause difficult-to-debug issues. Therefore, NSO checks for such patterns and issues an alarm (default) or a log entry containing a warning and a keypath to the service instance that caused the warning. See NSO Python VM for details.

    NSO Java VM

    Run your Java code using Java Virtual Machine (VM).

    The NSO Java VM is the execution container for all Java classes supplied by deployed NSO packages.

The classes and other resources are packaged in jar files, and the specific use of these classes is described by the component tags in the respective package-meta-data.xml files. The NSO Java VM also acts as a framework that starts and controls other utilities used by these components. To accomplish this, a main class, com.tailf.ncs.NcsMain, implementing the Runnable interface, is started as a thread. This thread can be the main thread (running in a Java main()) or be embedded into another Java program.

    When the NcsMain thread starts it establishes a socket connection towards NSO. This is called the NSO Java VM control socket. It is the responsibility of NcsMain to respond to command requests from NSO and pass these commands as events to the underlying finite state machine (FSM). The NcsMain FSM will execute all actions as requested by NSO. This includes class loading and instantiation as well as registration and start of services, NEDs, etc.

    When NSO detects the control socket connection from the NSO Java VM, it starts an initialization process:

1. First, NSO sends an INIT_JVM request to the NSO Java VM. At this point, the NSO Java VM will load the schemas, i.e., retrieve all known YANG module definitions. The NSO Java VM responds when all modules are loaded.

    2. Then, NSO sends a LOAD_SHARED_JARS request for each deployed NSO package. This request contains the URLs for the jars situated in the shared-jar directory in the respective NSO package. The classes and resources in these jars will be globally accessible for all deployed NSO packages.

3. The next step is to send a LOAD_PACKAGE request for each deployed NSO package. This request contains the URLs for the jars in the private-jar directory of the respective package and instructs the NSO Java VM to load the package components.

See below for tips on customizing startup behavior and debugging problems when the Java VM fails to start.

    YANG Model

The file tailf-ncs-java-vm.yang defines the java-vm container which, along with ncs.conf, is the entry point for controlling the NSO Java VM functionality. Study the content of the YANG model; for a full explanation of all the configuration data, look at the YANG file and the ncs.conf man page.

    Many of the nodes beneath java-vm are by default invisible due to a hidden attribute. To make everything under java-vm visible in the CLI, two steps are required:

1. First, a hide-group element with the name debug must be added to ncs.conf.

2. Next, the unhide debug command may be used in the CLI session.

    Java Packages and the Class Loader

Each NSO package has a specific Java class loader instance that loads its private jar classes. These package class loaders refer to a single shared class loader instance as their parent. The shared class loader loads the shared jar classes of all deployed NSO packages.

The jars in the shared-jar and private-jar directories should NOT be part of the Java classpath.

The purpose of this is, first, to keep integrity between packages, which should not have access to each other's classes other than the ones contained in the shared jars. Second, this makes it possible to hot-redeploy the private jars and classes of a specific package while keeping other packages in a run state.

    Should this class loading scheme not be desired, it is possible to suppress it by starting the NSO Java VM with the system property TAILF_CLASSLOADER set to false.

This forces the NSO Java VM to use the standard Java system class loader. For this to work, all jars from all deployed NSO packages need to be on the classpath. The drawback is that all classes become globally accessible and hot redeploy has no effect.

    There are four types of components that the NSO Java VM can handle:

• The ned type. The NSO Java VM handles NEDs of the sub-types cli and generic, which are the ones that have a Java implementation.

    • The callback type. These are any forms of callbacks that are defined by the DP API.

• The application type. Java programs that implement the ApplicationComponent interface and run autonomously in separate threads.

• The upgrade type. Java classes that migrate CDB data when the package YANG model has changed.

In some situations, several NSO packages are expected to use the same code base, e.g., when third-party libraries are used or the code is structured with some common parts. Instead of duplicating jars in several NSO packages, it is possible to create a new NSO package, add these jars to its shared-jar directory, and let the package-meta-data.xml file contain no component definitions at all. The NSO Java VM will load these shared jars, and they will be accessible from all other NSO packages.

Inside the NSO Java VM, each component type has a specific component manager. The responsibility of these managers is to manage a set of component classes for each NSO package. A component manager acts as an FSM that controls when a component should be registered, started, stopped, and so on.

    For instance, the DpMuxManager controls all callback implementations (services, actions, data providers, etc). It can load, register, start, and stop such callback implementations.

    The NED Component Type

NEDs can be of type netconf, snmp, cli, or generic. Only the cli and generic types are relevant for the NSO Java VM, because these are the ones that have a Java implementation. Normally, these NED components come in self-contained, prefabricated NSO packages for some equipment or class of equipment. It is, however, possible to tailor-make NEDs for any protocol; for more information, see NED Development.

    The Callback Component Type

Callbacks are the collective name for a number of different functions that can be implemented in Java. Among the most important are the service callbacks, but actions, transaction control, and data provider callbacks are also in common use in an NSO implementation. For more on how to program callbacks, see the DP API documentation.

    The Application Component Type

For programs that are none of the above types but still need to access NSO as a daemon process, it is possible to use the ApplicationComponent Java interface. The ApplicationComponent interface expects the implementing classes to implement an init(), a finish(), and a run() method.

The NSO Java VM starts each such class in a separate thread. The init() method is called before the thread is started. The run() method runs in the thread, similar to the run() method of the standard Java Runnable interface. The finish() method is called when the NSO Java VM wants the application thread to stop. It is the responsibility of the programmer to stop the application thread, i.e., stop the execution in the run() method, when finish() is called. Making the thread stop when finish() is called is important so that the NSO Java VM does not hang at a STOP_VM request.
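The init()/run()/finish() contract can be sketched in Python threading terms. The method names mirror the ApplicationComponent interface, but this is an illustrative sketch, not the Java API:

```python
import threading

class ExampleApp:
    """Sketch of the init()/run()/finish() lifecycle contract."""

    def init(self):
        self._stop = threading.Event()  # set up state before the thread starts

    def run(self):
        # The main loop must poll the stop flag so finish() can end it.
        while not self._stop.is_set():
            self._stop.wait(0.01)       # placeholder for real work
        return "stopped"

    def finish(self):
        self._stop.set()                # ask the run() loop to exit

app = ExampleApp()
app.init()
t = threading.Thread(target=app.run)
t.start()
app.finish()
t.join(timeout=1)
assert not t.is_alive()                 # the thread stopped; no STOP_VM hang
```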

An example of an application component implementation is found in the 1-cdb example mentioned above.

    The Resource Manager

User implementations typically need resources like Maapi, Maapi transactions, Cdb, and Cdb sessions to fulfill their tasks. These resources can be instantiated and used directly in the user code, but this implies that the user code needs to handle the connection and closing of the additional sockets used by these resources. The recommended alternative is to use the Resource Manager, which is capable of injecting these resources into the user code. The principle is that the programmer annotates the field that should refer to the resource rather than instantiating it.

    This way, the NSO Java VM and the Resource Manager keep control over the resources in use and can intervene, e.g., close sockets, at forced shutdowns.

    The Resource Manager can handle two types of resources: MAAPI and CDB.

    For both the Maapi and Cdb resource types, the Resource Manager opens a socket connection towards NSO. At a stop, the Resource Manager disconnects these sockets before ending the program. User programs can also tell the Resource Manager that its resources are no longer needed with a call to ResourceManager.unregisterResources().

    The resource annotation has three attributes:

    • type defines the resource type.

    • scope defines if this resource should be unique for each instance of the Java class (Scope.INSTANCE) or shared between different instances and classes (Scope.CONTEXT). For CONTEXT scope the sharing is confined to the defining NSO package, i.e., a resource cannot be shared between NSO packages.

    • qualifier is an optional string that identifies the resource as a unique resource. All instances that share the same context-scoped resource need to have the same qualifier. If the qualifier is not given, it defaults to the value DEFAULT, i.e., the resource is shared between all instances that have the DEFAULT qualifier.

    When the NSO Java VM starts, it receives from NSO the component classes to load. Note that the component classes are the classes referred to in the package-meta-data.xml file. For each component class, the Resource Manager scans for annotations and injects resources as specified.

    However, the package jars can contain many classes in addition to the component classes. These are loaded at runtime, are unknown to the NSO Java VM, and are therefore not handled automatically by the Resource Manager. These classes can also use resource injection, but a specific call to the Resource Manager is needed for the mechanism to take effect: before the resources are used for the first time, a call to ResourceManager.registerResources(...) forces the injection of the resources. If the same class is registered several times, the Resource Manager detects this and avoids multiple resource injections.

    The Alarm Centrals

    The AlarmSourceCentral and AlarmSinkCentral, which are part of the NSO Alarm API, can be used to simplify reading and writing alarms. The NSO Java VM starts these centrals at initialization. User implementations can therefore expect them to be set up, without having to handle the start and stop of either the AlarmSinkCentral or the AlarmSourceCentral. For more information on the Alarm API, see Alarm Manager.

    Embedding the NSO Java VM

    As stated above, the NSO Java VM is executed in a thread implemented by NcsMain. This implies that somewhere a Java main() must be implemented that launches this thread. For NSO, this is provided by the NcsJVMLauncher class. In addition, there is a script named ncs-start-java-vm that starts Java with NcsJVMLauncher.main(). This is the recommended way of launching the NSO Java VM and how it is set up in a default installation. If there is a need to run the NSO Java VM as an embedded thread inside another program, this can be done simply by instantiating the class NcsMain and starting this instance in a new thread.

    However, with the embedding of the NSO Java VM comes the responsibility to manage the life cycle of the NSO Java VM thread. This thread cannot be started before NSO has started and is running or else the NSO Java VM control socket connection will fail. Also, running NSO without the NSO Java VM being launched will render runtime errors as soon as NSO needs NSO Java VM functionality.

    To be able to control an embedded NSO Java VM from another supervising Java thread or program an optional JMX interface is provided. The main functionality in this interface is listing, starting, and stopping the NSO Java VM and its Component Managers.

    JMX Interface

    Normal control of the NSO Java engine is performed from NSO, e.g., using the CLI. However, the NcsMain class and all component managers implement JMX interfaces, making it possible to also control the NSO Java VM using standard Java tools like JVisualVM and JConsole.

    The JMX interface is configured via the Java VM YANG model (see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) in the NSO configuration. For JMX connection purposes there are four attributes to configure:

    • jmx-address The hostname or IP for the RMI registry.

    • jmx-port The port for the RMI registry.

    • jndi-address The hostname or IP for the JMX RMI server.
    • jndi-port The port for the JMX RMI server.

    The JMX connection server uses two sockets for communication with a JMX client. The first socket is the JNDI RMI registry, where the JMX MBean objects are looked up. The second socket is the JMX RMI server, from which the JMX connection objects are exported. For all practical purposes, the host/IP for both sockets is the same and only the ports differ.

    An example of a JMX connection URL connecting to localhost is: service:jmx:rmi://localhost:4445/jndi/rmi://localhost:4444/ncs

    In addition to the JMX URL, the JMX user needs to authenticate using a legitimate user/password from the AAA configuration, for example in the login window of the standard Java tool JConsole.

    The defined JMX MBean interfaces are shown in the examples at the end of this section (NcsMainMBean, NedMuxManagerMBean, DpMuxManagerMBean, and ApplicationMuxManagerMBean).

    Logging

    NSO has extensive logging functionality. Log settings are typically very different for a production system compared to a development system. Furthermore, the logging of the NSO daemon and that of the NSO Java VM are controlled by different mechanisms. During development, we typically want to turn on the developer-log. The sample ncs.conf that comes with the NSO release has log settings suitable for development, while the ncs.conf created by a System Install is suitable for production deployment.

    The NSO Java VM uses Log4j for logging and reads its default log settings from the log4j2.xml file provided in ncs.jar. In addition, NSO itself has java-vm log settings that are directly controllable from the NSO CLI. For example, we can do:
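    A session along these lines could be used (a sketch only: the java-logging/logger list and its level leaf come from tailf-ncs-java-vm.yang, but the exact level-trace enum spelling is an assumption):

```
admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-trace
admin@ncs(config)# commit
```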

    This dynamically reconfigures the log level for the package com.tailf.maapi to trace. Where the Java logs end up is controlled by the log4j2.xml file. By default, the NSO Java VM writes to stdout. If the NSO Java VM is started by NSO, as controlled by the ncs.conf parameter /java-vm/auto-start, NSO picks up the stdout of the service manager and writes it to the file configured under /java-vm/stdout-capture (typically ncs-java-vm.log).

    (The details pipe command also displays default values.)

    The NSO Java VM Timeouts

    The section /ncs-config/japi in ncs.conf contains a number of very important timeouts. See $NCS_DIR/src/ncs/ncs_config/tailf-ncs-config.yang and ncs.conf(5) in Manual Pages for details.

    • new-session-timeout controls how long NSO will wait for the NSO Java VM to respond to a new session.

    • query-timeout controls how long NSO will wait for the NSO Java VM to respond to a request to get data.

    • connect-timeout controls how long NSO will wait for the NSO Java VM to initialize a DP connection after the initial socket connect.

    Whenever any of these timeouts trigger, NSO will close the sockets from NSO to the NSO Java VM. The NSO Java VM will detect the socket close and exit. If NSO is configured to start (and restart) the NSO Java VM, it will be automatically restarted. If the NSO Java VM is started by some external entity, e.g., if it runs within an application server, it is up to that entity to restart the NSO Java VM.

    Debugging Startup

    When using the auto-start feature (the default), NSO starts the NSO Java VM as outlined at the beginning of this section. A number of settings in the java-vm YANG model (see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) control what happens when something goes wrong during startup.

    The two timeout configurations connect-time and initialization-time are most relevant during startup. If the Java VM fails during the initial stages (during INIT_JVM, LOAD_SHARED_JARS, or LOAD_PACKAGE), either because of a timeout or because of a crash, NSO will log "The NCS Java VM synchronization failed" in ncs.log.

    The synchronization error message in the log will also have a hint as to what happened:

    • closed usually means that the Java VM crashed (and closed the socket connected to NSO)

    • timeout means that it failed to start (or respond) within the time limit. For example, if the Java VM runs out of memory and crashes, this will be logged as closed.

    After logging, NSO will take action based on the synchronization-timeout-action setting:

    • log: NSO will log the failure, and if auto-restart is set to true, NSO will try to restart the Java VM.

    • log-stop (default): NSO will log the failure, and if the Java VM has not stopped already NSO will also try to stop it. No restart action is taken.

    • exit: NSO will log the failure, and then stop NSO itself.

    If you have problems with the Java VM crashing during startup, a common pitfall is running out of memory (either total memory on the machine or heap in the JVM). If you have a lot of Java code (or a loaded system), perhaps the Java VM did not start in time. Try to determine the root cause by checking ncs.log and ncs-java-vm.log, and if needed, increase the timeout.

    For complex problems, for example with the class loader, try logging the internals of the startup:

    Setting this will result in a lot more detailed information in ncs-java-vm.log during startup.

    When the auto-restart setting is true (the default), NSO will try to restart the Java VM whenever it fails (at any point in time, not just during startup). NSO will attempt at most three restarts within 30 seconds, i.e., if the Java VM crashes more than three times within 30 seconds, NSO gives up. You can check the status of the Java VM using the java-vm YANG model. For example, in the CLI:
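    The give-up policy can be sketched as a sliding-window counter (plain Python; an illustration of the described policy, not NSO's actual code):

```python
import time

class RestartLimiter:
    """Allow at most `limit` restarts within any `window`-second span."""

    def __init__(self, limit=3, window=30.0):
        self.limit = limit
        self.window = window
        self.crashes = []  # timestamps of observed crashes

    def should_restart(self, now=None):
        """Record a crash at time `now`; return False once the VM has
        crashed more than `limit` times within `window` seconds."""
        now = time.monotonic() if now is None else now
        # Drop crashes that have aged out of the window.
        self.crashes = [t for t in self.crashes if now - t < self.window]
        self.crashes.append(now)
        return len(self.crashes) <= self.limit
```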

    The start-status can have the following values:

    • auto-start-not-enabled: Autostart is not enabled.

    • stopped: The Java VM has been stopped or is not yet started.

    • started: The Java VM has been started. See the leaf 'status' to check the status of the Java application code.
    • failed: The Java VM has been terminated. If auto-restart is enabled, the Java VM restart has been disabled due to too frequent restarts.

    The status can have the following values:

    • not-connected: The Java application code is not connected to NSO.

    • initializing: The Java application code is connected to NSO, but not yet initialized.

    • running: The Java application code is connected and initialized.
    • timeout: The Java application connected to NSO, but failed to initialize within the stipulated timeout 'initialization-time'.

               <package-name>/package-meta-data.xml
                        load-dir/
                        shared-jar/
                        private-jar/
                        webui/
                        templates/
                        src/
                        doc/
                        netsim/
    An Example Package
    <ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
      <name>stats</name>
      <package-version>1.0</package-version>
      <description>Aggregating statistics from the network</description>
      <ncs-min-version>3.0</ncs-min-version>
      <required-package>
        <name>router-nc-1.0</name>
      </required-package>
      <component>
        <name>stats</name>
        <callback>
          <java-class-name>com.example.stats.Stats</java-class-name>
        </callback>
      </component>
    </ncs-package>
    |----package-meta-data.xml
    |----private-jar
    |----shared-jar
    |----src
    |    |----Makefile
    |    |----yang
    |    |    |----aggregate.yang
    |    |----java
    |         |----build.xml
    |         |----src
    |              |----com
    |                   |----example
    |                        |----stats
    |                             |----namespaces
    |                             |----Stats.java
    |----doc
    |----load-dir
    $ yanger -f tree tailf-ncs-packages.yang
    submodule: tailf-ncs-packages (belongs-to tailf-ncs)
      +--ro packages
         +--ro package* [name] <-- renamed to "ncs-package" in package-meta-data.xml
            +--ro name                     string
            +--ro package-version          version
            +--ro description?             string
            +--ro ncs-min-version*         version
            +--ro ncs-max-version*         version
            +--ro python-package!
            |  +--ro vm-name?           string
            |  +--ro callpoint-model?   enumeration
            +--ro directory?               string
            +--ro templates*               string
            +--ro template-loading-mode?   enumeration
            +--ro supported-ned-id*        union
            +--ro supported-ned-id-match*  string
            +--ro required-package* [name]
            |  +--ro name           string
            |  +--ro min-version?   version
            |  +--ro max-version?   version
            +--ro component* [name]
               +--ro name                 string
               +--ro description?         string
               +--ro entitlement-tag?     string
               +--ro (type)
                  +--:(ned)
                  |  +--ro ned
                  |     +--ro (ned-type)
                  |     |  +--:(netconf)
                  |     |  |  +--ro netconf
                  |     |  |     +--ro ned-id?   identityref
                  |     |  +--:(snmp)
                  |     |  |  +--ro snmp
                  |     |  |     +--ro ned-id?   identityref
                  |     |  +--:(cli)
                  |     |  |  +--ro cli
                  |     |  |     +--ro ned-id             identityref
                  |     |  |     +--ro java-class-name    string
                  |     |  +--:(generic)
                  |     |     +--ro generic
                  |     |        +--ro ned-id             identityref
                  |     |        +--ro java-class-name    string
                  |     +--ro device
                  |     |  +--ro vendor            string
                  |     |  +--ro product-family?   string
                  |     +--ro option* [name]
                  |        +--ro name     string
                  |        +--ro value?   string
                  +--:(upgrade)
                  |  +--ro upgrade
                  |     +--ro (type)
                  |        +--:(java)
                  |        |  +--ro java-class-name?     string
                  |        +--:(python)
                  |           +--ro python-class-name?   string
                  +--:(callback)
                  |  +--ro callback
                  |     +--ro java-class-name*   string
                  +--:(application)
                    +--ro application
                        +--ro (type)
                        |  +--:(java)
                        |  |  +--ro java-class-name      string
                        |  +--:(python)
                        |     +--ro python-class-name    string
                        +--ro start-phase?               enumeration
    $ ncs_load -o -Fp -p /packages
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <packages xmlns="http://tail-f.com/ns/ncs">
        <package>
          <name>router-nc-1.1</name>
          <package-version>1.1</package-version>
          <description>Generated netconf package</description>
          <ncs-min-version>5.7</ncs-min-version>
          <directory>./state/packages-in-use/1/router</directory>
          <component>
            <name>router</name>
            <ned>
              <netconf>
                <ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
                router-nc-1.1:router-nc-1.1</ned-id>
              </netconf>
              <device>
                <vendor>Acme</vendor>
              </device>
            </ned>
          </component>
          <oper-status>
            <up/>
          </oper-status>
        </package>
        <package>
          <name>vrouter</name>
          <package-version>1.0</package-version>
          <description>Nano services netsim virtual router example</description>
          <ncs-min-version>5.7</ncs-min-version>
          <python-package>
            <vm-name>vrouter</vm-name>
            <callpoint-model>threading</callpoint-model>
          </python-package>
          <directory>./state/packages-in-use/1/vrouter</directory>
          <templates>vrouter-configured</templates>
          <template-loading-mode>strict</template-loading-mode>
          <supported-ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
          router-nc-1.1:router-nc-1.1</supported-ned-id>
          <required-package>
            <name>router-nc-1.1</name>
            <min-version>1.1</min-version>
          </required-package>
          <component>
            <name>nano-app</name>
            <description>Nano service callback and post-actions example</description>
            <application>
              <python-class-name>vrouter.nano_app.NanoApp</python-class-name>
              <start-phase>phase2</start-phase>
            </application>
          </component>
          <oper-status>
            <up/>
          </oper-status>
        </package>
      </packages>
    </config>
    ....
    list component {
      key name;
      leaf name {
        type string;
      }
      ...
      choice type {
        mandatory true;
        case ned {
          ...
        }
        case callback {
          ...
        }
        case application {
          ...
        }
        case upgrade {
          ...
        }
        ....
      }
      ....
      <component>
        <name>stats</name>
        <callback>
          <java-class-name>
            com.example.stats.Stats
          </java-class-name>
        </callback>
      </component>
      $ ncs-make-package --netconf-ned ./acme-router-yang-files acme
      $ cd acme/src; make
     $ ncs-setup --ned-package ./acme --dest ./ncs-project
     $ cd ./ncs-project
     $ ncs
     $ ncs_cli -u admin
     > configure
     > set devices authgroups group southbound-bob umap admin \
            remote-name bob remote-password secret
     > set devices device acme1 authgroup southbound-bob address 10.2.3.4
     > set devices device acme1 device-type netconf
     > commit
     $ ncs-netsim create-network ./acme 5 a --dir ./netsim
     $ ncs-netsim start
    DEVICE a0 OK STARTED
    DEVICE a1 OK STARTED
    DEVICE a2 OK STARTED
    DEVICE a3 OK STARTED
    DEVICE a4 OK STARTED
     $
     $ ncs-setup --netsim-dir ./netsim --dest ncs-project
     $ ncs-make-package --snmp-ned ./mibs acme
     $ cd acme/src; make
    with ncs.maapi.single_write_trans('admin', 'system') as t:
        root = ncs.maagic.get_root(t)
        # Read a value that can change during this transaction
        dns_server = root.mysvc_dns
        # Now perform complex work... or time.sleep(10) for testing
        # Finally, write the result
        root.some_data = 'the result'
        t.apply()
    <INFO> 23-Aug-2022::03:31:17.029 linux-nso ncs[<0.18350.3>]: ncs writeset collector:
       check conflict tid=3347 min=234 seq=237 wait=0ms against=[3346] elapsed=1ms
       -> conflict on: /mysvc-dns read: <<"10.1.2.2">> (op: get_delem tid: 3347)
       write: <<"10.1.1.138">> (op: write tid: 3346 user: admin) phase(s): work
       write tids: 3346
    Conflict detected (70): Transaction 3347 conflicts with transaction 3346 started by
       user admin: /mysvc:mysvc-dns read-op get_delem write-op write in work phase(s)
    admin@ncs# unhide debug
    admin@ncs# show services scheduling conflict | notab
    services scheduling conflict mysvc-servicepoint mysvc-servicepoint
     type           dynamic
     first-seen     2022-08-27T17:15:10+00:00
     inactive-after 2022-08-27T17:15:09+00:00
     expires-after  2022-08-27T18:05:09+00:00
     ttl-multiplier 1
    admin@ncs#
    list mysvc {
      uses ncs:service-data;
      ncs:servicepoint mysvc-servicepoint {
        ncs:conflicts-with "mysvc-servicepoint";
        ncs:conflicts-with "some-other-servicepoint";
      }
      // ...
    }
    <INFO> … check for read-write conflicts: conflict found
    <INFO> … rebase transaction
    …
    <INFO> … rebase transaction: ok
    <INFO> … retrying transaction after rebase
    with ncs.maapi.single_write_trans('admin', 'system') as t:
        if t.get_elem('/mysvc-use-dhcp') == True:
            pass  # do something
        else:
            # do something entirely different that breaks
            # your network if mysvc-use-dhcp happens to be true
            pass
        t.apply()
    with ncs.maapi.single_write_trans('admin', 'python') as t:
        root = ncs.maagic.get_root(t)
        # First read some data, then write some too.
        # Finally, call apply.
        t.apply()
    def do_provisioning(t):
        """Function containing the actual logic"""
        root = ncs.maagic.get_root(t)
        # First read some data, then write some too.
        # ...
        # Finally, return True to signal apply() has to be called.
        return True
    
    # Need to replace single_write_trans() with a Maapi object
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            m.run_with_retry(do_provisioning)
    m.run_with_retry(lambda t: do_provisioning(t, one_param, another_param))
    from ncs.maapi import retry_on_conflict
    
    @retry_on_conflict()
    def do_provisioning():
        # This is the same code as before but in a function
        with ncs.maapi.single_write_trans('admin', 'python') as t:
            root = ncs.maagic.get_root(t)
            # First read some data, then write some too.
            # ...
            # Finally, call apply().
            t.apply()
    
    do_provisioning()
    class MyAction(ncs.dp.Action):
        @ncs.dp.Action.action
        @retry_on_conflict()
        def cb_action(self, uinfo, name, kp, input, output, trans):
            with ncs.maapi.single_write_trans('admin', 'python') as t:
                ...
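    Conceptually, such a decorator just re-runs the wrapped function when it fails with a conflict; each retry opens a fresh transaction inside the function, so the re-read values reflect the other transaction's committed writes. A generic sketch (an illustration only, not NSO's retry_on_conflict, which specifically retries on NSO conflict errors):

```python
import functools

class ConflictError(Exception):
    """Stand-in for an NSO transaction-conflict error."""

def retry_when_conflicting(retries=10):
    """Hypothetical decorator: re-run `fn` on ConflictError,
    up to `retries` attempts in total."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(retries):
                try:
                    return fn(*args, **kwargs)
                except ConflictError:
                    # Last attempt: give up and propagate the conflict.
                    if attempt == retries - 1:
                        raise
        return wrapper
    return decorator
```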
    public class MyProgram {
        public static void main(String[] arg) throws Exception {
            Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
            Maapi maapi = new Maapi(socket);
            maapi.startUserSession("admin", InetAddress.getByName(null),
                                   "system", new String[]{},
                                   MaapiUserSessionFlag.PROTO_TCP);
            NavuContext context = new NavuContext(maapi);
            int tid = context.startRunningTrans(Conf.MODE_READ_WRITE);
    
            // Your code here that reads and writes data.
    
            // Finally, call apply.
            context.applyClearTrans();
            maapi.endUserSession();
            socket.close();
        }
    }
    public class MyProvisioningOp implements MaapiRetryableOp {
        public boolean execute(Maapi maapi, int tid)
            throws IOException, ConfException, MaapiException
        {
            // Create context for the provided, managed transaction;
            // note the extra parameter compared to before and no calling
            // context.startRunningTrans() anymore.
            NavuContext context = new NavuContext(maapi, tid);
    
            // Your code here that reads and writes data.
    
            // Finally, return true to signal apply() has to be called.
            return true;
        }
    }
    public class MyProgram {
        public static void main(String[] arg) throws Exception {
            Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
            Maapi maapi = new Maapi(socket);
            maapi.startUserSession("admin", InetAddress.getByName(null),
                                   "system", new String[]{},
                                   MaapiUserSessionFlag.PROTO_TCP);
            // Delegate work to MyProvisioningOp, with retry.
            maapi.ncsRunWithRetry(new MyProvisioningOp());
            // No more calling applyClearTrans() or friends,
            // ncsRunWithRetry() does that for you.
            maapi.endUserSession();
            socket.close();
        }
    }
    leaf provision-dns {
      type boolean;
    }
    list mysvc {
      container dns {
        when "../../provision-dns";
        // ...
      }
    }
    def cb_create(self, tctx, root, service, proplist):
        device = root.devices.device[service.device]
    
        # Search device interfaces and CDB for mgmt IP
        device_ip = find_device_ip(device)
    
        # Find the best server to use for this device
        ntp_servers = root.my_settings.ntp_servers
        use_ntp_server = find_closest_server(device_ip, ntp_servers)
    
        if service.do_ntp:
            device.ntp.servers.append(use_ntp_server)
    def cb_create(self, tctx, root, service, proplist):
        device = root.devices.device[service.device]
    
        if service.do_ntp:
            # Search device interfaces and CDB for mgmt IP
            device_ip = find_device_ip(device)
    
            # Find the best server to use for this device
            ntp_servers = root.my_settings.ntp_servers
            use_ntp_server = find_closest_server(device_ip, ntp_servers)
    
            device.ntp.servers.append(use_ntp_server)
    for ntp_server in root.my_settings.ntp_servers:
        # Only select active servers
        if ntp_server.is_active:
            pass  # Do something
    callpoint-model - By default, a Python package runs Services, Nano Services, and Actions in the same OS process. If callpoint-model is set to multiprocessing, each gets a separate worker process. Running Services, Nano Services, and Actions in parallel can, depending on the application, improve performance at the cost of complexity. See The Application Component for details.
    request for each deployed NSO package. This request contains the URLs for the jars situated in the private-jar directory in the respective NSO package. These classes and resources will be private to the respective NSO package. In addition, classes that are referenced in a component tag in the respective NSO package's package-meta-data.xml file will be instantiated.
  • NSO will send an INSTANTIATE_COMPONENT request for each component in each deployed NSO package. At this point, the NSO Java VM will register a start method for the respective component. NSO will send these requests in proper start-phase order, which implies that the INSTANTIATE_COMPONENT requests can be sent in an order that mixes components from different NSO packages.

  • Lastly, NSO sends a DONE_LOADING request, which indicates that the initialization process is finished. After this, the NSO Java VM is up and running.

  • The application type. These are user-defined daemons that implement the specific ApplicationComponent Java interface.
  • The upgrade type. This component type is activated when deploying a new version of an NSO package and the NSO automatic CDB data upgrade is not sufficient. See Writing an Upgrade Package Component for more information.

    jconsole Login Window
    <supported-ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
    router-nc-1.1:router-nc-1.1</supported-ned-id>
    <supported-ned-id-match>router-nc-1.\d+:router-nc-1.\d+</supported-ned-id-match>
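    A supported-ned-id-match value like the one above is a regular expression matched against ned-id strings. Outside NSO, candidate ids can be checked with plain Python (full-string matching is assumed here):

```python
import re

# Pattern from the supported-ned-id-match example; note the dots are
# unescaped, so each matches any single character, including '.'.
PATTERN = r"router-nc-1.\d+:router-nc-1.\d+"

def ned_id_matches(ned_id):
    """Return True if the ned-id string matches the whole pattern."""
    return re.fullmatch(PATTERN, ned_id) is not None
```

For example, ned_id_matches("router-nc-1.2:router-nc-1.2") holds, while a 2.x ned-id does not match.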
    <hide-group>
        <name>debug</name>
    </hide-group>
    admin@ncs(config)# unhide debug
    admin@ncs(config)#
    Example: The Java VM YANG Model
    $ yanger -f tree tailf-ncs-java-vm.yang
    submodule: tailf-ncs-java-vm (belongs-to tailf-ncs)
      +--rw java-vm
         +--rw stdout-capture
         |  +--rw enabled?   boolean
         |  +--rw file?      string
         |  +--rw stdout?    empty
         +--rw connect-time?                     uint32
         +--rw initialization-time?              uint32
         +--rw synchronization-timeout-action?   enumeration
         +--rw exception-error-message
         |  +--rw verbosity?   error-verbosity-type
         +--rw java-logging
         |  +--rw logger* [logger-name]
         |     +--rw logger-name    string
         |     +--rw level          log-level-type
         +--rw jmx!
         |  +--rw jndi-address?   inet:ip-address
         |  +--rw jndi-port?      inet:port-number
         |  +--rw jmx-address?    inet:ip-address
         |  +--rw jmx-port?       inet:port-number
         +--ro start-status?                     enumeration
         +--ro status?                           enumeration
         +---x stop
         |  +--ro output
         |     +--ro result?   string
         +---x start
         |  +--ro output
         |     +--ro result?   string
         +---x restart
            +--ro output
               +--ro result?   string
    java -DTAILF_CLASSLOADER=false ...
    Example: ApplicationComponent Interface
    package com.tailf.ncs;
    
    /**
     * User defined Applications should implement this interface that
     * extends Runnable, hence also the run() method has to be implemented.
     * These applications are registered as components of type
     * "application" in a Ncs packages.
     *
     * Ncs Java VM will start this application in a separate thread.
     * The init() method is called before the thread is started.
     * The finish() method is expected to stop the thread. Hence stopping
     * the thread is user responsibility
     *
     */
    public interface ApplicationComponent extends Runnable {
    
        /**
         * This method is called by the Ncs Java vm before the
         * thread is started.
         */
        public void init();
    
        /**
         * This method is called by the Ncs Java vm when the thread
         * should be stopped. Stopping the thread is the responsibility of
         * this method.
         */
        public void finish();
    
    }
    Example: Resource Injection
    @Resource(type=ResourceType.MAAPI, scope=Scope.INSTANCE)
    public Maapi m;
    Example: Resource Types
    package com.tailf.ncs.annotations;
    
    /**
     * ResourceType set by the Ncs ResourceManager
     */
    public enum ResourceType {
    
        MAAPI(1),
        CDB(2);
    }
    Example: Resource Annotation
    package com.tailf.ncs.annotations;
    
    /**
     * Annotation class for Action Callbacks Attributes are callPoint and callType
     */
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    public @interface Resource {
    
        public ResourceType type();
    
        public Scope scope();
    
        public String qualifier() default "DEFAULT";
    
    }
    Example: Scopes
    package com.tailf.ncs.annotations;
    
    /**
     * Scope for resources managed by the Resource Manager
     */
    public enum Scope {
    
        /**
         * Context scope implies that the resource is
         * shared for all fields having the same qualifier in any class.
         * The resource is shared also between components in the package.
         * However sharing scope is confined to the package i.e sharing cannot
         * be extended between packages.
         * If the qualifier is not given it becomes "DEFAULT"
         */
        CONTEXT(1),
        /**
         * Instance scope implies that all instances will
         * get new resource instances. If the instance needs
         * several resources of the same type they need to have
         * separate qualifiers.
         */
        INSTANCE(2);
    }
    Example: Force Resource Injection
    MyClass myclass = new MyClass();
    try {
        ResourceManager.registerResources(myclass);
    } catch (Exception e) {
        LOGGER.error("Error injecting Resources", e);
    }
    Example: Starting NcsMain
    NcsMain ncsMain   = NcsMain.getInstance(host);
    Thread  ncsThread = new Thread(ncsMain);
    
    ncsThread.start();
    Example: NcsMain JMX Bean
    package com.tailf.ncs;
    /**
     * This is the JMX interface for the NcsMain class
     */
    public interface NcsMainMBean {
    
        /**
         * JMX interface - shutdown Ncs java vm main thread
         */
        public void shutdown();
    
        /**
         * JMX interface - hot redeploy all packages
         */
        public void redeployAll();
    
        /**
         * JMX interface - list shared jars
         */
        public String[] listSharedJars();
    }
    Example: NedMuxManager JMX Bean
    package com.tailf.ncs.ctrl;
    /**
     * This interface is the JMX interface for the NedMuxManager class
     */
    public interface NedMuxManagerMBean {
    
        /**
         * JMX interface - list all Application components
         */
        public String[] listPackageComponents();
    }
    Example: DpMuxManager JMX Bean
    package com.tailf.ncs.ctrl;
    /**
     * This interface is the JMX interface for the DpMuxManager class
     */
    public interface DpMuxManagerMBean {
    
        /**
         * JMX interface - list all callback components
         */
        public String[] listPackageComponents();
    }
    Example: ApplicationMuxManager JMX Bean
    package com.tailf.ncs.ctrl;
    /**
     * This interface is the JMX interface for the ApplicationMuxManager class
     */
    public interface ApplicationMuxManagerMBean {
    
        /**
         * JMX interface - list all Application components
         */
        public String[] listPackageComponents();
    }
    Example: AlarmSinkCentral JMX Bean
    package com.tailf.ncs.alarmman.producer;
    /**
     * This is the JMX interface for the AlarmSinkCentral class
     */
    public interface AlarmSinkCentralMBean {
    
        public void start();
    
        public boolean isAlive();
    
        public void stop();
    }
    Example: AlarmSourceCentral JMX Bean
    package com.tailf.ncs.alarmman.consumer;
    /**
     * This is the JMX interface for the AlarmSourceCentral class
     */
    public interface AlarmSourceCentralMBean {
    
        public void start();
    
        public boolean isAlive();
    
        public void stop();
    }
    admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-trace
    admin@ncs(config-logger-com.tailf.maapi)# commit
    Commit complete.
    admin@ncs(config)# show full-configuration java-vm stdout-capture
    java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
    admin@ncs(config)# java-vm java-logging logger com.tailf.ncs level level-all
    admin@ncs(config-logger-com.tailf.ncs)# commit
    Commit complete.
    admin@ncs# show java-vm
    java-vm start-status started
    java-vm status running

    Northbound APIs

    Understand different types of northbound APIs and their working mechanism.

    This section describes the various northbound programmatic APIs in NSO: NETCONF, REST, and SNMP. These APIs are used by external systems that need to communicate with NSO, such as portals, OSS, or BSS systems.

    NSO has two northbound interfaces intended for human usage, the CLI and the WebUI. These interfaces are described in NSO CLI and Web User Interface respectively.

    There are also programmatic Java, Python, and Erlang APIs intended to be used by applications integrated with NSO itself. See Running Application Code for more information about these APIs.

    Integrating an External System with NSO

    There are two APIs to choose from when an external system should communicate with NSO:

    • NETCONF

    • REST

    Which one to choose is mostly a subjective matter. REST may, at first sight, appear simpler to use, but it is not as feature-rich as NETCONF. By using a NETCONF client library, such as the open-source Java library JNC or the Python library ncclient, the integration task is significantly reduced.

    Both NETCONF and REST provide functions for manipulating the configuration (including creating services) and reading the operational state from NSO. NETCONF provides more powerful filtering functions than REST.

    NETCONF and SNMP can be used to receive alarms as notifications from NSO. NETCONF provides a reliable mechanism to receive notifications over SSH, whereas SNMP notifications are sent over UDP.

    Regardless of the protocol you choose for integration, keep in mind that all of them communicate with the NSO server over network sockets, which may be unreliable. Additionally, write transactions in NSO can fail if they conflict with another, concurrent transaction. As a best practice, the client implementation should gracefully handle such errors and be prepared to retry requests. For details on NSO concurrency, refer to the NSO Concurrency Model.
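As an illustration of this practice, the retry logic can be kept independent of the transport library. The sketch below is a generic helper (all names are hypothetical and not part of any NSO or client-library API) that retries a callable on transient errors, much like a client might do around a NETCONF or REST request:

```python
import time

class TransientError(Exception):
    """Stand-in for a transport failure or transaction-conflict error."""

def with_retries(fn, max_attempts=3, delay=0.1, retriable=(TransientError,)):
    """Call fn(), retrying up to max_attempts times on retriable errors."""
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_attempts:
                raise          # give up after the final attempt
            time.sleep(delay)  # simple fixed back-off before retrying

# A request that fails twice with a transient error, then succeeds:
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("transaction conflict, please retry")
    return "ok"

print(with_retries(flaky_request))  # → ok
```

A real client would substitute the actual request call for `flaky_request` and restrict `retriable` to the error classes its library raises for conflicts and socket failures.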

    Templates

    Simplify change management in your network using templates.

    NSO comes with a flexible and powerful built-in templating engine, which is based on XML. The templating system simplifies how you apply configuration changes across devices of different types and provides additional validation against the target data model. Templates are a convenient, declarative way of updating structured configuration data and allow you to avoid lots of boilerplate code.

    You will most often find this type of configuration templates used in services, which is why they are sometimes also called service templates. However, we mostly refer to them simply as XML templates, since they are defined in XML files.

    NSO loads templates as part of a package, looking for XML files in the templates subdirectory. You then apply an XML template through API or by connecting it with a service through a service point, allowing NSO to use it whenever a service instance needs updating.

    XML templates are distinct from so-called “device templates”, which are dynamically created and applied as needed by the operator, for example in the CLI. There are also other types of templates in NSO, unrelated to XML templates described here.


    Structure of a Template

    A template is an XML file with a config-template root element, residing in the http://tail-f.com/ns/config/1.0 namespace. The root contains configuration elements according to the NSO YANG schema, along with XML processing instructions.

    Configuration element structure is very much like the one you would find in a NETCONF message since it uses the same encoding rules defined by YANG. Additionally, each element can specify a tags attribute that refines how the configuration is applied.

    A typical template for configuring an NSO-managed device is:
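A minimal sketch of what such a template can look like follows; the config-template root and the devices path are fixed by NSO, while the router namespace and the dns/server nodes are hypothetical device-model details:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device tags="nocreate">
      <name>{/device}</name>
      <config tags="merge">
        <!-- Device-specific part; namespace and nodes are hypothetical -->
        <sys xmlns="http://example.com/router">
          <dns>
            <server>{/dns-server}</server>
          </dns>
        </sys>
      </config>
    </device>
  </devices>
</config-template>
```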

    The first line defines the root node. It contains elements that follow the same structure as that used by the CDB, in particular, the devices device <name> config path in the CLI. In the printout, two elements, device and config, also have a tags attribute.

    You can write this structure by studying the YANG schema if you wish. However, a more typical approach is to start with manipulating NSO configuration by hand, such as through the NSO CLI or web UI. Then generate the XML structure with the help of NSO output filters. You can use commit dry-run outformat xml or show ... | display xml commands, or even the ncs_load utility. For a worked, step-by-step example, refer to the section A Template is All You Need.

    Having the basic structure in place, you can then fine-tune the template by adding different processing instructions and tags, as well as replacing static values with variable references using the XPath syntax.

    Note that a single template can configure multiple devices of different types, services, or any other configurable data in NSO; basically anything you can change in a single CLI commit. But a single, gigantic template can become a burden to maintain. That is why many developers prefer to split bigger configurations into multiple feature templates, either by functionality or by device type.

    Finally, the name of the file, without the .xml extension is the name of the template. The name allows you to reference the template from the code later on. Since all the template names reside in the same namespace, it is a good practice to use a common naming scheme, preferably <package name>-<feature>.xml to ensure template names are unique.

    Other Ways to Generate the XML Template Structure

    The NSO CLI features a templatize command that allows you to analyze a given configuration and find common configuration patterns. You can use these to, for example, create a configuration template for a service.

    Suppose you have an existing interface configuration on a device:
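For illustration, assume a Cisco-style configuration along these lines (values are hypothetical), with two structurally identical interface entries:

```
interface GigabitEthernet0/1
 ip address 10.0.0.1 255.255.255.0
 no shutdown
!
interface GigabitEthernet0/2
 ip address 10.0.0.5 255.255.255.0
 no shutdown
!
```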

    Using the templatize command, you can search for patterns in this part of the configuration, which produces the following:

    In this case, NSO finds a single pattern (the only one) and creates the corresponding template. In general, NSO might produce a number of templates. As an example, try running the command within the examples.ncs/implement-a-service/dns-v3 environment.

    The algorithm works by searching the data at the specified path. For any list it encounters, it compares every item in the list with its siblings. If the two items have the same structure but not necessarily the same actual values (for leafs), that part of the configuration can be made into a template. If the two list items use the same value for a leaf, the value is used directly in the generated template. Otherwise, a unique variable name is created and used in its place, as shown in the example.

    However, templatize requires you to reference existing configurations in NSO. If such configuration is not readily available to you and you want to avoid manually creating sample configuration in NSO first, you can use the sample-xml-skeleton functionality of the yanger utility to generate sample XML data directly:
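A sketch of such an invocation follows; the module name, target path, and output file are placeholders:

```shell
yanger -f sample-xml-skeleton \
  --sample-xml-skeleton-path /interface \
  my-service.yang > template-skeleton.xml
```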

    You can replace the value of --sample-xml-skeleton-path with the path to the part of the configuration you want to generate.

    In case the target data model contains submodules, or references other non-built-in modules, you must also tell yanger where to find additional modules with the -p parameter, such as adding -p src/yang/ to the invocation.

    Values in a Template

    Some XML elements, notably those that represent leafs or leaf-lists, specify element text content as values that you wish to configure, such as:
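For example, setting a hostname leaf to a fixed value (the element name is illustrative):

```xml
<host-name>rtr01</host-name>
```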

    NSO converts the string value to the actual value type of the YANG model automatically when the template is applied.

    Along with hard-coded, static content (rtr01), the value may also contain curly brackets ({...}), which the templating engine treats as XPath 1.0 expressions.

    The simplest form of an XPath expression is a plain XPath variable:
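For instance (the variable name is illustrative):

```xml
<interface-name>{$PE_INT_NAME}</interface-name>
```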

    A value can contain any number of {...} expressions and strings. The end result is the concatenation of all the strings and XPath expressions. For example, <description>Link to PE: {$PE} - {$PE_INT_NAME}</description> might evaluate to <description>Link to PE: pe0 - GigabitEthernet0/0/0/3</description>.

    This is the result if you set PE to pe0 and PE_INT_NAME to GigabitEthernet0/0/0/3 when applying the template.

    You set the values for variables in the code where you apply the template. NSO also sets some predefined variables, which you can reference:

    • $DEVICE: The name of the current device. Cannot be overridden.

    • $TEMPLATE_NAME: The name of the current template. Cannot be overridden.

    • $SCHEMA_OPAQUE: Defined if the template is registered for a servicepoint (the top node in the template has servicepoint attribute) and the corresponding ncs:servicepoint statement in the YANG model has tailf:opaque substatement. Set to the value of the tailf:opaque statement.

    • $OPERATION: Defined if the template is registered for a servicepoint with the cbtype attribute set to pre-/post-modification. Contains the requested service operation: create, update, or delete.

    The {...} expression can also be any other valid XPath 1.0 expression. To address a reachable node, you might for example use:
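For example, navigating from the service instance into another part of the tree (the paths are hypothetical):

```xml
<remote-address>{/endpoint/pe-device/loopback-ip}</remote-address>
```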

    Or, to select a leaf node such as device:
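Assuming the service model has a device leaf:

```xml
<name>{/device}</name>
```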

    NSO then uses the value of this leaf, say ce5, when constructing the value of the expression.

    However, there are some special cases. If the result of the expression is a node-set (e.g. multiple leafs), and the target is a leaf list or a list's key leaf, the template configures multiple destination nodes. This handling allows you to set multiple values for a leaf list or set multiple list items.

    Similarly, if the result is an empty node set, nothing is set (the set operation is ignored).

    Finally, what nodes are reachable in the XPath expression, and how, depends on the root node and context used in the template. See XPath Context in Templates.

    Conditional Statements

    The if processing instruction, together with the accompanying elif and else, makes it possible to apply parts of the template based on a condition. For example:
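A sketch of such a conditional follows; the qos-class model and the bandwidth/priority nodes are hypothetical, while the processing-instruction syntax is NSO's:

```xml
<?if {qos-class/priority = 'realtime'}?>
  <priority-realtime/>
  <bandwidth-percent>80</bandwidth-percent>
<?elif {qos-class/priority = 'critical'}?>
  <bandwidth-percent>50</bandwidth-percent>
<?else?>
  <bandwidth-percent>20</bandwidth-percent>
<?end?>
```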

    The preceding template shows how to produce different configuration, for network bandwidth management in this case, when different qos-class/priority values are specified.

    In particular, the sub-tree containing the priority-realtime tag will only be evaluated if qos-class/priority in the if processing instruction evaluates to the string 'realtime'.

    The subtree under the elif processing instruction will be executed if the preceding if expression evaluated to false, i.e. qos-class/priority is not equal to the string 'realtime', but 'critical' instead.

    The subtree under the else processing instruction will be executed when both the preceding if and elif expressions evaluated to false, i.e. qos-class/priority is not 'realtime' nor 'critical'.

    In your own templates, you can of course use just a subset of these instructions, such as a simple if - end conditional evaluation. But note that every conditional evaluation must end with the end processing instruction, to allow nesting multiple conditionals.

    The evaluation of the XPath statements used in the if and elif processing instructions follow the XPath standard for computing boolean values. In summary, the conditional expression will evaluate to false when:

    • The argument evaluates to an empty node-set.

    • The value of the argument is either an empty string or numeric zero.

    • The argument is of boolean type and evaluates to false, such as using the not(true()) function.

    Loop Statements

    The foreach and for processing instructions allow you to avoid needless repetition: they iterate over a set of values and apply statements in a sub-tree several times. For example:
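A sketch of such a loop follows; the tunnel list leafs (network, netmask, gateway) and the Cisco-style route list structure are assumptions:

```xml
<?foreach {/tunnel}?>
  <ip-route-forwarding-list>
    <prefix>{network}</prefix>
    <mask>{netmask}</mask>
    <forwarding-address>{gateway}</forwarding-address>
  </ip-route-forwarding-list>
<?end?>
```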

    The printout shows the use of foreach to configure a set of IP routes (the list ip-route-forwarding-list) for a Cisco network router. If there is a tunnel list in the service model, the {/tunnel} expression selects all the items from the list. If this is a non-empty set, then the sub-tree containing ip-route-forwarding-list is evaluated once for every item in that node set.

    For each iteration, the initial context is set to one node, that is, the node being processed in that iteration. The XPath function current() retrieves this initial context if needed. Using the context, you can access the node data with relative XPath paths, e.g. the {network} code in the example refers to /tunnel[...]/network for the current item.

    foreach only supports a single XPath expression as its argument and the result needs to be a node-set, not a simple value. However, you may use XPath union operator to join multiple node sets in a single expression when required: {some-list-1 | some-leaf-list-2}.

    Similarly, for is a processing instruction that uses a variable to control the iteration, in line with traditional programming languages. For example, the following template disables the first four (0-3) interfaces on a Cisco router:
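A sketch of such a loop (the interface model is hypothetical; the for syntax is NSO's):

```xml
<?for i=0; {$i < 4}; i={$i + 1}?>
  <interface>
    <FastEthernet>
      <name>0/{$i}</name>
      <shutdown/>
    </FastEthernet>
  </interface>
<?end?>
```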

    In this example, three semicolon-separated clauses follow the for keyword:

    • The first clause is the initial step executed before the loop is entered the first time. The format of the clause is that of a variable name followed by an equals sign and an expression. The latter may combine literal strings and XPath expressions surrounded by {}. The expression is evaluated in the same way as the XML tag contents in templates. This clause is optional.

    • The second clause is the progress condition. The loop will execute as long as this condition evaluates to true, using the same rules as the if processing instruction. The format of this clause is an XPath expression surrounded by {}. This clause is mandatory.

    • The third clause is executed after each iteration. It has the same format as the first clause (variable assignment) and is optional.

    The foreach and for expressions make the loop explicit, which is why they are the first choice for most programmers. Alternatively, under certain circumstances, the template invokes an implicit loop, as described in XPath Context in Templates.

    Template Operations

    The most common use case for templates is to produce new configuration, but other behaviors are possible too. You accomplish this by setting the tags attribute on XML elements.

    NSO supports the following tags values, colloquially referred to as “tags”:

    • merge: Merge with a node if it exists, otherwise create the node. This is the default operation if no operation is explicitly set.

    • replace: Replace a node if it exists, otherwise create the node.

    • create: Creates a node. The node must not already exist. An error is raised if the node exists.

    • nocreate: Merge with a node if it exists. If it does not exist, it will not be created.

    • delete: Delete the node.

    Tags merge and nocreate are inherited by their sub-nodes until a new tag is introduced.

    Tags create and replace are not inherited and only apply to the node they are specified on. Children of the nodes with create or replace tags have merge behavior.

    Tag delete applies only to the current node; any children (except keys specifying the list/leaf-list entry to delete) are ignored.

    Operations on Ordered Lists and Leaf-lists

    For ordered-by-user lists and leaf lists, where item order is significant, you can use the insert attribute to specify where in the list, or leaf-list, the node should be inserted. You specify whether the node should be inserted first or last in the node-set, or before or after a specific instance.

    For example, if you have a list of rules, such as ACLs, you may need to ensure a particular order:
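For example (the rule list and its leafs are illustrative), inserting a permit rule at the very beginning of the list:

```xml
<rule insert="first">
  <name>{/permit-net}</name>
  <action>permit</action>
</rule>
```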

    However, it is not uncommon that there are multiple services managing the same ordered-by-user list or leaf-list. The relative order of elements inserted by these services might not matter, but there are some constraints on element positions that need to be fulfilled.

    Following the ACL rules example, suppose that initially the list contains only the "deny-all" rule:
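For illustration, assume the list looks like this to begin with (the layout is a hypothetical CLI-style printout):

```
rules rule deny-all
 action deny
!
```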

    There are services that prepend permit rules to the beginning of the list using the insert="first" operation. If there are two services creating one entry each, say 10.0.0.0/8 and 192.168.0.0/24 respectively, then the resulting configuration looks like this:
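With both services deployed, the list might read (hypothetical printout, rules keyed by name):

```
rules rule 192.168.0.0/24
 action permit
!
rules rule 10.0.0.0/8
 action permit
!
rules rule deny-all
 action deny
!
```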

    Note that the rule for the second service comes first because it was configured last and inserted as the first item in the list.

    If you now try to check-sync the first service (10.0.0.0/8), it will report as out-of-sync, and re-deploying it would move the 10.0.0.0/8 rule first. But what you really want is to ensure the deny-all rule comes last. This is when the guard attribute comes in handy.

    If both the insert and guard attributes are specified on a list entry in a template, then the template engine first checks whether the list entry already exists in the resulting configuration between the target position (as indicated by the insert attribute) and the position of an element indicated by the guard attribute:

    • If the element exists and fulfills this constraint, then its position is preserved. If a template list entry results in multiple configuration list entries, then all of them need to exist in the configuration in the same order as calculated by the template, and all of them need to fulfill the guard constraint in order for their position to be preserved.

    • If the list entry/entries do not exist, are not in the same order, or do not fulfill the constraint, then the list is reordered as instructed by the insert statement.

    So, in the ACL example, the template can specify the guard as follows:
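For example, keeping each permit rule anywhere before the deny-all entry (list and leaf names are illustrative):

```xml
<rule insert="first" guard="deny-all">
  <name>{/permit-net}</name>
  <action>permit</action>
</rule>
```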

    A guard can be specified literally (e.g. guard="deny-all" if "name" is the key of the list) or using an XPath expression (e.g. guard="{$LASTRULE}"). If the guard evaluates to a node-set consisting of multiple elements, then only the first element in this node-set is considered as the guard. The constraint defined by the guard is evaluated as follows:

    • If the guard evaluates to an empty node-set (i.e. the node indicated by the guard does not exist in the target configuration), then the constraint is not fulfilled.

    • If insert="first", then the constraint is fulfilled if the element exists in the configuration before the element indicated by the guard.

    • If insert="last", then the constraint is fulfilled if the element exists in the configuration after the element indicated by the guard.

    • If insert="after", then the constraint is fulfilled if the element exists in the configuration before the element indicated by the guard, but after the element indicated by the value attribute.

    • If insert="before", then the constraint is fulfilled if the element exists in the configuration after the element indicated by the guard, but before the element indicated by the value attribute.

    Macros in Templates

    Templates support macros - named XML snippets that facilitate reuse and simplify complex templates. When you call a previously defined macro, the templating engine inserts the macro data, expanded with the values of the supplied arguments. The following example demonstrates the use of a macro.
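A sketch of such a template follows; the GigabitEthernet interface model and the example values are hypothetical, while the macro-related processing instructions are NSO's:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <?macro GbEth name={string(../name)} ip mask='255.255.255.0'?>
  <GigabitEthernet>
    <id>$name</id>
    <ip>
      <address>$ip</address>
      <mask>$mask</mask>
    </ip>
  </GigabitEthernet>
  <?endmacro?>
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device}</name>
      <config>
        <interface xmlns="http://example.com/router">
          <?expand GbEth ip='10.1.1.1'?>
          <?expand GbEth name='0/2' ip='10.1.2.1' mask='255.255.255.252'?>
        </interface>
      </config>
    </device>
  </devices>
</config-template>
```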

    When using macros, be mindful of the following:

    • A macro must be a valid chunk of XML, or a simple string without any XML markup. So, a macro cannot contain only start-tags or only end-tags, for example.

    • Each macro is defined between the <?macro?> and <?endmacro?> processing instructions, immediately following the <config-template> tag in the template.

    • A macro definition takes a name and an optional list of parameters. Each parameter may define a default value.

      In the preceding example, a macro is defined as:
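A sketch of such a definition (the default values are illustrative):

```xml
<?macro GbEth name={string(../name)} ip mask='255.255.255.0'?>
```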

      Here, GbEth is the name of the macro. This macro takes three parameters, name, ip, and mask. The parameters name and mask have default values, and ip does not.

      The default value for mask is a fixed string, while the one for name by default gets its value through an XPath expression.

    • A macro can be expanded in another location in the template using the <?expand?> processing instruction. As shown in the example (line 29), the <?expand?> instruction takes the name of the macro to expand, and an optional list of parameters and their values.

      The parameters in the macro definition are replaced with the values given during expansion. If a parameter is not given any value during expansion, the default value is used. If there is no default value in the definition, not supplying a value causes an error.

    • Macro definitions cannot be nested - that is, a macro definition cannot contain another macro definition. But a macro definition can have <?expand?> instructions to expand another macro within this macro (line 17 in the example).

      The macro expansion and the parameter replacement work on just strings - there is no schema validation or XPath evaluation at this stage. A macro expansion just inserts the macro definition at the expansion site.

    • Macros can be defined in multiple files, and macros defined in the same package are visible to all templates in that package. This means that a template file could have just the definitions of macros, and another file in the same package could use those macros.

    When reporting errors in a template using macros, the line numbers for the macro invocations are also included, so that the actual location of the error can be traced. For example, an error message might resemble service.xml:19:8 Invalid parameters for processing instruction set. - meaning that there was a macro expansion on line 19 in service.xml and an error occurred at line 8 in the file defining that macro.

    XPath Context in Templates

    When the evaluation of a template starts, the XPath context node and root node are both set to either the service instance data node (with a template-only service) or the node specified with the API call to apply the template (usually the service instance data node as well).

    The root node is used as the starting point for evaluating absolute paths starting with / and puts a limit on where you can navigate with ../.

    You can access data outside the current root node subtree by dereferencing a leafref type leaf or by changing the root node from within the template.

    To change the root node within the template, use the set-root-node XML processing instruction. The instruction takes an XPath expression as a parameter and this expression is evaluated in a special context, where the root node is the root of the datastore. This makes it possible to change to a node outside the current evaluation context.

    For example: <?set-root-node {/}?> changes the accessible tree to the whole data store. Note that, as with all processing instructions, the effect of set-root-node only lasts until the closing tag of the parent element.

    The context node refers to the node that is used as the starting point for navigation with relative paths, such as ../device or device.

    You can change the current context node using the set-context-node or other context-related processing instructions. For example: <?set-context-node {..}?> changes the context node to the parent of the current context node.

    There is a special case where NSO automatically changes the evaluation context as it progresses through and applies the template, which makes it easier to work with lists. There are two conditions required to trigger this special case:

    1. The value being set in the template is the key of a list.

    2. The XPath expression used for this key evaluates to a node set, not a value.

    To illustrate, consider the following example.

    Suppose you are using the template to configure interfaces on a device, and the target device's YANG model defines the list of interfaces as:
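For example (a sketch of a hypothetical device model):

```
list interface {
  key name;
  leaf name { type string; }
  leaf address { type inet:ipv4-address; }
}
```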

    You also use a service model that allows configuring multiple links:
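For example (a sketch of such a service model):

```
list link {
  key intf-name;
  leaf intf-name { type string; }
  leaf intf-addr { type inet:ipv4-address; }
}
```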

    The context-changing mechanism allows you to configure the device interface with the specified address using the template:
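A sketch of such a template; the interface element is hypothetical, and the link list is assumed to have intf-name and intf-addr leafs:

```xml
<interface>
  <name>{/links/link[0]/intf-name}</name>
  <address>{intf-addr}</address>
</interface>
```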

    The expression /links/link[0]/intf-name evaluates to a node and, because it is used to set the key leaf name, the evaluation context node is changed to the parent of that node, /links/link[0]. Now you can refer to /links/link[0]/intf-addr with the simple relative path {intf-addr}.

    The true power and usefulness of context changing becomes evident when used together with XPath expressions that produce node sets with multiple nodes. You can create a template that configures multiple interfaces with their corresponding addresses (note the use of link instead of link[0]):
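For example (same hypothetical model as above the sketch assumes a link list with intf-name and intf-addr leafs):

```xml
<interface>
  <name>{/links/link/intf-name}</name>
  <address>{intf-addr}</address>
</interface>
```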

    The first expression returns a node set possibly including multiple leafs. NSO then configures multiple list items (interfaces), based on their name. The context change mechanism triggers as well, making {intf-addr} refer to the corresponding leaf in the same link definition. Alternatively, you can achieve the same outcome with a loop (see Loop Statements).

    However, in some situations, you may not desire to change the context. You can avoid it by making the XPath expression return a value instead of a node/node-set. The simplest way is to use the XPath string() function, for example:
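For example, using string() to set a single value without triggering the context change (paths are illustrative):

```xml
<name>{string(/links/link[0]/intf-name)}</name>
```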

    Namespaces and Multi-NED Support

    When a device makes itself known to NSO, it presents a list of capabilities (see Capabilities, Modules, and Revision Management), which includes what YANG modules that particular device supports. Since each YANG module defines a unique XML namespace, this information can be used in a template.

    Hence, a template may include configuration for many diverse devices. The templating system streamlines this by applying only those pieces of the template that have a namespace matching the one advertised by the device (see Supporting Different Device Types).

    Additionally, the system performs validation of the template against the specified namespace when loading the template as part of the package load sequence, allowing you to detect a lot of the errors at load time instead of at run time.

    In case the namespace matching is insufficient, such as when you want to check for a particular version of a NED, you can use special processing instructions if-ned-id or if-ned-id-match. See Processing Instructions Reference for details and Supporting Different Device Types for an example.

    However, strict validation against the currently loaded schema may become a problem for developing generic, reusable templates that should run in different environments with different sets of NEDs and NED versions loaded. For example, an NSO instance having fewer NED versions than the template is designed for may result in some elements not being recognized, while having more NED versions may introduce ambiguities.

    In order to allow templates to be reusable while at the same time keeping as many errors as possible detectable at load time, NSO has a concept of supported-ned-ids. This is a set of NED IDs the package developer declares in the package-meta-data.xml file, indicating all NEDs the XML templates contained in this package are designed to support. This gives NSO a hint on how to interpret the template.

    Namely, if a package declares a list of supported-ned-ids, the templates in this package are interpreted as if no other ned-ids were loaded in the system. If such a template is applied to a device whose ned-id is outside the supported list, a run-time error is generated, because this ned-id was not considered when the template was loaded. This allows NSO to ignore ambiguities in the data model introduced by additional NEDs that were not considered during template development.

    If a package declares a list of supported-ned-ids and the runtime system does not have one or more declared NEDs loaded, then the template engine uses the so-called relaxed loading mode, which means it ignores any unknown namespaces and <?if-ned-id?> clauses containing exclusively unknown ned-ids, assuming that these parts of the template are not applicable in the current running system.

    Because relaxed loading mode performs less strict validation and potentially prevents some errors from being detected, the package developer should always make sure to test in the system with all the supported ned-ids loaded, i.e. when the loading mode is strict. The loading mode can be verified by looking at the value of template-loading-mode leaf for the corresponding package under /packages/package list.

    If the package does not declare any supported-ned-ids, then the templates are loaded in strict mode, using the full set of currently loaded NED IDs. This may make the package less reusable between different systems, but is usually fine in environments where the package is intended to be used in runtime systems fully under the control of the package developer.

    Passing Deep Structures from API

    When applying the template via the API, you typically pass parameters to it through variables, as described in Templates and Code and Values in a Template. One limitation of this mechanism is that a variable can only hold a single string value. Yet sometimes you need to pass not just a single value but a list, a map, or an even more complex data structure from the API to the template.

    One way to achieve this is to split the work into smaller templates and invoke them repeatedly, for example once for each list item (or perhaps pair-by-pair in the case of a map). However, this approach has certain disadvantages. One is performance: every invocation of a template from the API requires a context switch between the user application process and the NSO core process, which can be costly. Another is that the logic is split between Java or Python code and the template, which makes it harder to understand and maintain.

    An alternative approach, described in this section, involves modeling the required auxiliary data as operational data and populating it in code before applying the template. For a service, the service callback code in Java or Python first populates the auxiliary data and then passes control to the template, which handles the main service configuration logic. The auxiliary data is accessible in the template, by means of XPath, just like any other service input data.

    There are different approaches to modeling the auxiliary data. It can reside in the service tree, as it is private to the service instance, either integrated into the existing data tree or as a separate subtree under the service instance. It can also be located outside the service instance; however, keep in mind that operational data cannot be shared by multiple services, because no reference counters or backpointers are stored for operational data.

    After the service is deployed, the auxiliary leafs remain in the database, which facilitates debugging because they can be seen via all northbound interfaces. If this is not the intention, they can be hidden with the tailf:hidden statement. Because operational data is also part of the FASTMAP diff, these values are deleted when the service is deleted and recomputed when the service is re-deployed. This also means that in most cases there is no need to write any additional code to clean up this data.

    One example of a task that is hard to solve with native XPath functions in a template is converting a network prefix into a network mask or vice versa. Below is a snippet of a data model that is part of the service input data and contains a list of interfaces, along with IP addresses to be configured on those interfaces. If the input IP address contains a prefix length, but the target device expects an IP address with a network mask, you can use an auxiliary operational leaf to pass the mask (calculated from the prefix) to the template.
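    The prefix-to-mask conversion itself is straightforward with Python's standard ipaddress module. A minimal standalone sketch (independent of NSO, function name is illustrative):

    ```python
    import ipaddress

    def prefix_to_mask(prefix_len: int) -> str:
        """Convert a prefix length, e.g. 24, to a dotted netmask, e.g. 255.255.255.0."""
        # IPv4Network accepts an (address, prefix_len) tuple since Python 3.5;
        # the address part is irrelevant for the netmask, so 0 is used.
        return str(ipaddress.IPv4Network((0, prefix_len)).netmask)

    print(prefix_to_mask(24))  # 255.255.255.0
    print(prefix_to_mask(30))  # 255.255.255.252
    ```

    The same expression is what the service callback below uses to populate the auxiliary mask leaf.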

    The code that calls the template needs to populate the mask. For example, using the Python Maagic API in a service:

    The corresponding iface-template might then be as simple as:

    Service Callpoints and Templates

    The archetypical use case for XML templates is service provisioning, and NSO allows you to directly invoke a template for a service, without writing boilerplate code in Python or Java. You can take advantage of this feature by setting the servicepoint attribute on the root config-template element. For example:

    Adding the attribute registers this template for the given servicepoint, defined in the YANG service model. Without any additional attributes, the registration corresponds to the standard create service callback.

    While the template file name is not referred to in this case, it must still be unique within an NSO node.

    In a similar manner, you can register templates for each state of a nano service, using componenttype and state attributes. The section Nano Service Callbacks contains examples.

    Services also have pre- and post-modification callbacks, further described in Service Callbacks, which you can also implement with templates. Simply put, pre- and post-modification templates are applied before and after applying the main service template.

    These pre- and post-modification templates can only be used in classic (non-nano) services when the create callback is implemented as a template. That is, they cannot be used together with create callbacks implemented in Java or Python. If you want to mix the two approaches for the same service, consider using nano services.

    To define a template as pre- or post-modification, set the cbtype attribute accordingly, along with servicepoint. The cbtype attribute supports these three values:

    • pre-modification

    • create

    • post-modification

    NSO supports only a single registration for each servicepoint and callback type. Therefore, you cannot register multiple templates for the same servicepoint/cbtype combination.

    The $OPERATION variable is set internally by NSO in pre- and post-modification templates to contain the service operation, i.e., create, update, or delete, that triggered the callback. The $OPERATION variable can be used together with template conditional statements (see Conditional Statements) to apply different parts of the template depending on the triggering operation. Note that the service data is not available in the pre- or post-modification callbacks when $OPERATION = 'delete' since the service has been deleted already in the transaction context where the template is applied.

    Debugging Templates

    You can request additional information when applying templates in order to understand what is going on. When applying or committing a template in the CLI, the debug pipe command enables debug information:

    The debug xpath option outputs all XPath evaluations for the transaction, and is not limited to the XPath expressions inside templates.

    The debug template option outputs, for every invoked template, the XPath expression results, the context under which expressions are evaluated, which operation is used, and how it affects the configuration. You can narrow it down to show debugging information only for a template of interest:

    Additionally, the template and xpath debugging can be combined:

    For XPath evaluation, you can also inspect the XPath trace log if it is enabled (e.g., with tail -f logs/xpath.trace). XPath tracing is configured in the ncs.conf configuration file and is enabled by default for the examples.

    Another way to get the XPath selections right is to use the NSO CLI show command with the display xpath flag to find the correct path to an instance node. This shows the names of the key elements as well as namespace changes.

    When using more complex expressions, the ncs_cmd utility, run from a command shell, can be used to experiment with and debug them. The command prints the result as keypaths rather than XPath selections, but it is still of great use when debugging XPath expressions. The following example selects FastEthernet interface names on the device c0:

    Example Debug Template Output

    The following text walks through the output of the debug template command for a dns-v3 example service, found in examples.ncs/implement-a-service/dns-v3. To try it out for yourself, start the example with make demo and configure a service instance:

    The XML template used in the service is simple but non-trivial:

    Applying the template produces a substantial amount of output. Let's interpret it piece by piece. The output starts with:

    The templating engine found the foreach in the dns-template.xml file at line 4. In this case, it is the only foreach block in the file but in general, there might be more. The {/target-device} expression is evaluated using the /dns[name='instance1'] context, resulting in the complete /dns[name='instance1']/target-device path. Note that the latter is based on the root node (not shown in the output), not the context node (which happens to be the same as the root node at the start of template evaluation).

    NSO found two nodes in the leaf-list for this expression, which you can verify in the CLI:

    Next comes:

    The template starts the first iteration of the loop with the c1 value. Since the node is an item in a leaf-list, the context refers to the actual value. If it were a list instead, the context would refer to a single item in the list.

    This line signifies that the system “applied” line 6 in the template, selecting the c1 device for further configuration. The line also tells you that the device (the item with this name in the /devices/device list) already exists.

    The template then evaluates the if condition, resulting in processing of the lines 10 and 11 in the template:

    The last line shows how a new value is added to the target leaf-list, that was not there (non-existing) before.

    As the if statement matched, the else part does not apply and a new iteration of the loop starts, this time with the c2 value.

    Now the same steps take place for the other, c2, device:

    Finally, the template processing completes as there are no more nodes in the loop, and NSO outputs the new dry-run configuration:

    Processing Instructions Reference

    The NSO template engine supports a number of XML processing instructions that allow for more dynamic templates:

    <?set v=value?>: Allows you to assign a new variable or manipulate the existing value of a variable v. If used to create a new variable, the scope of visibility of this variable is limited to the parent tag of the processing instruction or the current processing instruction block. In particular, a new variable defined inside a loop is discarded at the end of each iteration.

    <?if {expression}?> ... <?elif {expression}?> ... <?else?> ... <?end?>: Processing instruction block that allows conditional execution based on the boolean result of the expression. For a detailed description, see Conditional Statements.

    <?foreach {expression}?> ... <?end?>: The expression must evaluate to a (possibly empty) XPath node-set. The template engine then iterates over each node in the node-set, making it the current XPath context node and evaluating all child tags within this context. For a detailed description, see Loop Statements.

    <?for v=initial; {condition}; v=next?> ... <?end?>: Allows you to iterate over the same set of template tags by changing a variable value. The variable visibility scope obeys the same rules as for the set processing instruction, except that the variable value is carried over to the next iteration instead of being discarded at the end of each iteration.

    Only the condition expression is mandatory; either or both of the initial and next value assignments can be omitted, e.g.:

    For a detailed description, see Loop Statements.

    <?copy-tree {path}?>: This instruction is analogous to the copy_tree() function in the MAAPI API. The parameter is an XPath expression that must evaluate to exactly one node in the data tree and indicates the source path to copy from. The target path is defined by the position of the copy-tree instruction in the template, within the current context.
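    As an illustration, a hedged sketch of how copy-tree might be used to copy a configuration subtree from a reference device into the same position on a target device. The 'golden' device name and the chosen paths are assumptions for illustration, not from the source:

    ```xml
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <!-- Copy the ios:ip subtree from the 'golden' device;
                 the target position is where this instruction appears -->
            <?copy-tree {/devices/device[name='golden']/config/ios:ip}?>
          </config>
        </device>
      </devices>
    </config-template>
    ```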

    <?set-root-node {expression}?>: Allows you to manipulate the root node of the XPath accessible tree. The expression is evaluated in an XPath context where the accessible tree is the entire datastore, which makes it possible to select a root node outside the currently accessible tree. The current context node remains unchanged. The expression must evaluate to exactly one node in the data tree.

    The variable value in both the set and for processing instructions is evaluated in the same way as values within XML tags in a template (see Values in a Template). It can therefore be a mix of literal values and XPath expressions surrounded by {...}.

    The variable value is always stored as a string, so any XPath expression result is converted to a string as if by the XPath string() function. If the expression results in an integer or a boolean, the value is the string representation of that integer or boolean. If the expression results in a node-set, the value of the variable is a concatenated string of the values of the nodes in this node-set.

    Keep in mind that while XPath sometimes converts a literal to another type implicitly (for example, in the expression {$x < 3}, the value x='1' is implicitly converted to the integer 1), in other cases an explicit conversion is needed. For example, with the expression {$x > $y}, if x='9' and y='11', the result is true, because both variables are strings and are compared in alphabetical order. To compare the values as numbers, explicitly convert at least one argument: {number($x) > $y}.
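    XPath's behavior here mirrors plain string comparison in most languages; a standalone Python illustration of the same pitfall:

    ```python
    x, y = '9', '11'

    # As strings, comparison is lexicographic: '9' sorts after '1', so x > y
    print(x > y)            # True

    # With explicit numeric conversion, the comparison is numeric
    print(int(x) > int(y))  # False
    ```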

    XPath Functions

    This section lists a few useful functions available in XPath expressions. The list is not exhaustive; refer to the XPath standard, the YANG standard, and the NSO-specific extensions in XPATH FUNCTIONS in Manual Pages for a full list.

    Type Conversion
    • bit-is-set()

    • boolean()

    • enum-value()

    • number()

    • string()

    String Handling
    • concat()

    • contains()

    • normalize-space()

    • re-match()

    • starts-with()

    • substring()

    • substring-after()

    • substring-before()

    • translate()

    Model Navigation
    • current()

    • deref()

    • last()

    • sort-by() in Manual Pages

    Other
    • compare() in Manual Pages

    • count()

    • max() in Manual Pages

    • min() in Manual Pages

    • not()

    • sum()
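    For instance, a hypothetical template value combining a few of these functions might look as follows (the node names /peer/name and /peer/port are assumptions for illustration):

    ```xml
    <description>{concat('Uplink to ', /peer/name, ', port ', substring-after(/peer/port, '/'))}</description>
    ```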

    <config tags="merge">
      <interface xmlns="urn:ios">
      ...
        <GigabitEthernet tags="replace">
          <name>{link/interface-number}</name>
          <description tags="merge">Link to PE</description>
          ...
        <GigabitEthernet tags="create">
          <name>{link/interface-number}</name>
          <description tags="merge">Link to PE</description>
          ...
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device tags="nocreate">
          <name>{/name}</name>
          <config tags="merge">
            <!-- ... -->
          </config>
        </device>
      </devices>
    </config-template>
    admin@ncs(config)# devices device rtr01 config ...
    admin@ncs(config-device-rtr01)# commit dry-run outformat xml
    result-xml {
        local-node {
            data <devices xmlns="http://tail-f.com/ns/ncs">
                   <device>
                     <name>rtr01</name>
                     <config>
                       <!-- ... -->
                     </config>
                   </device>
                 </devices>
        }
    }
    admin@ncs(config-device-rtr01)# commit
    admin@ncs# show running-config devices device rtr01 config ... | display xml
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>rtr01</name>
          <config>
            <!-- ... -->
          </config>
        </device>
      </devices>
    </config>
    admin@ncs# show running-config devices device c0 config interface GigabitEthernet
    devices device c0
     config
      interface GigabitEthernet0/0/0/0
       ip address 10.1.2.3 255.255.255.0
      exit
      interface GigabitEthernet0/0/0/1
       ip address 10.1.4.3 255.255.255.0
      exit
      interface GigabitEthernet0/0/0/2
       ip address 10.1.9.3 255.255.255.0
      exit
     !
    !
    admin@ncs# templatize devices device c0 config interface GigabitEthernet
    Found potential templates at:
      devices device c0 \ config \ interface GigabitEthernet {$GigabitEthernet-name}
    
    Template path:
      devices device c0 \ config \ interface GigabitEthernet {$GigabitEthernet-name}
    Variables in template:
      {$GigabitEthernet-name}  {$address}
    
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>c0</name>
          <config>
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>{$GigabitEthernet-name}</name>
                <ip>
                  <address>
                    <primary>
                      <address>{$address}</address>
                      <mask>255.255.255.0</mask>
                    </primary>
                  </address>
                </ip>
              </GigabitEthernet>
            </interface>
          </config>
        </device>
      </devices>
    </config>
    $ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v3
    $ make demo
    admin@ncs# templatize devices device c*
    $ cd $NCS_DIR/packages/neds/cisco-ios-cli-3.8/
    $ yanger -f sample-xml-skeleton \
        --sample-xml-skeleton-doctype=config \
        --sample-xml-skeleton-path='/ip/name-server' \
        --sample-xml-skeleton-defaults \
        src/yang/tailf-ned-cisco-ios.yang
    <?xml version='1.0' encoding='UTF-8'?>
    <config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <ip xmlns="urn:ios">
        <name-server>
          <name-server-list>
            <address/>
          </name-server-list>
          <vrf>
            <name/>
            <name-server-list>
              <address/>
            </name-server-list>
          </vrf>
        </name-server>
      </ip>
    </config>
          <name>rtr01</name>
          <name>{$CE}</name>
    /endpoint/ce/device
    ../ce/device
    <policy-map xmlns="urn:ios" tags="merge">
      <name>{$POLICY_NAME}</name>
      <class>
        <name>{$CLASS_NAME}</name>
        <?if {qos-class/priority = 'realtime'}?>
          <priority-realtime>
            <percent>{$CLASS_BW}</percent>
          </priority-realtime>
        <?elif {qos-class/priority = 'critical'}?>
          <priority-critical>
            <percent>{$CLASS_BW}</percent>
          </priority-critical>
        <?else?>
          <bandwidth>
            <percent>{$CLASS_BW}</percent>
          </bandwidth>
        <?end?>
        <set>
          <ip>
            <dscp>{$CLASS_DSCP}</dscp>
          </ip>
        </set>
      </class>
    </policy-map>
    <ip xmlns="urn:ios">
      <route>
      <?foreach {/tunnel}?>
        <ip-route-forwarding-list>
          <prefix>{network}</prefix>
          <mask>{netmask}</mask>
          <forwarding-address>{tunnel-endpoint}</forwarding-address>
        </ip-route-forwarding-list>
      <?end?>
      </route>
    </ip>
    <interface xmlns="urn:ios">
      <?for i=0; {$i < 4}; i={$i + 1}?>
        <FastEthernet>
          <name>0/{$i}</name>
          <shutdown/>
        </FastEthernet>
      <?end?>
    </interface>
    <rule insert="first">
      <name>{$FIRSTRULE}</name>
    </rule>
    <rule insert="last">
      <name>{$LASTRULE}</name>
    </rule>
    <rule insert="after" value="{$FIRSTRULE}">
      <name>{$SECONDRULE}</name>
    </rule>
    <rule insert="before" value="{$LASTRULE}">
      <name>{$SECONDTOLASTRULE}</name>
    </rule>
    <rule>
      <name>deny-all</name>
      <ip>0.0.0.0</ip>
      <mask>0.0.0.0</mask>
      <action>deny</action>
    </rule>
    <rule>
      <name>service-2</name>
      <ip>192.168.0.0</ip>
      <mask>255.255.255.0</mask>
      <action>permit</action>
    </rule>
    <rule>
      <name>service-1</name>
      <ip>10.0.0.0</ip>
      <mask>255.0.0.0</mask>
      <action>permit</action>
    </rule>
    <rule>
      <ip>0.0.0.0</ip>
      <mask>0.0.0.0</mask>
      <action>deny</action>
    </rule>
    <rule insert="first" guard="deny-all">
      <name>{$NAME}</name>
      <ip>{$IP}</ip>
      <mask>{$MASK}</mask>
      <action>permit</action>
    </rule>
    Example: Template with Macros
      1 <config-template xmlns="http://tail-f.com/ns/config/1.0">
          <?macro GbEth name='{/name}' ip mask='255.255.255.0'?>
            <GigabitEthernet>
              <name>$name</name>
      5       <ip>
                <address>
                  <primary>
                    <address>$ip</address>
                    <mask>$mask</mask>
     10           </primary>
                </address>
              </ip>
            </GigabitEthernet>
          <?endmacro?>
     15 
          <?macro GbEthDesc name='{/name}' ip mask='255.255.255.0' desc?>
            <?expand GbEth name='$name' ip='$ip' mask='$mask'?>
            <GigabitEthernet>
              <name>$name</name>
     20       <description>$desc</description>
            </GigabitEthernet>
          <?endmacro?>
        
          <devices xmlns="http://tail-f.com/ns/ncs">
     25     <device tags="nocreate">
              <name>{/device}</name>
              <config tags="merge">
                <interface xmlns="urn:ios">
                  <?expand GbEthDesc name='0/0/0/0' ip='10.250.1.1'
     30                              desc='Link to core'?>
                </interface>
              </config>
            </device>
          </devices>
     35 </config-template>
      list interface {
        key "name";
        leaf name {
          type string;
        }
        leaf address {
          type inet:ip-address;
        }
      }
      // ...
      container links {
        list link {
          key "intf-name";
          leaf intf-name {
            type string;
          }
          leaf intf-addr {
            type inet:ip-address;
          }
        }
      }
      <interface>
        <name>{/links/link[0]/intf-name}</name>
        <address>{intf-addr}</address>
      </interface>
      <interface>
        <name>{/links/link/intf-name}</name>
        <address>{intf-addr}</address>
      </interface>
      <interface>
        <name>{string(/links-list/intf-name)}</name>
      </interface>
    Example: Package Declaring supported-ned-id
    <ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
      <name>mypackage</name>
      <!-- ... -->
    
      <!-- Exact NED id match, requires namespace -->
      <supported-ned-id xmlns:id="http://tail-f.com/ns/ned-id/cisco-ios-cli-3.0">
        id:cisco-ios-cli-3.0
      </supported-ned-id>
    
      <!-- Regex-based NED id match -->
      <supported-ned-id-match>router-nc-1</supported-ned-id-match>
    </ncs-package>
    list interface {
      key name;
      leaf name {
        type string;
      }
      leaf address {
        type tailf:ipv4-address-and-prefix-length;
        description
          "IP address with prefix in the following format, e.g.: 10.2.3.4/24";
      }
      leaf mask {
        config false;
        type inet:ipv4-address;
        description
          "Auxiliary data populated by service code, represents network mask
           corresponding to the prefix in the address field, e.g.: 255.255.255.0";
      }
    }
        def cb_create(self, tctx, root, service, proplist):
            for intf in service.interface:
                # Derive the network mask from the prefix length,
                # e.g. '10.2.3.4/24' -> '255.255.255.0'
                # (requires 'import ipaddress' at the top of the module)
                prefix_len = int(intf.address.split('/')[1])
                intf.mask = str(ipaddress.IPv4Network((0, prefix_len)).netmask)

            # Template variables don't need to contain the mask,
            # as it is passed via the (operational) database
            template = ncs.template.Template(service)
            template.apply('iface-template')
          <interface>
            <name>{/interface/name}</name>
            <ip-address>{substring-before(address, '/')}</ip-address>
            <ip-mask>{mask}</ip-mask>
          </interface>
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="some-service">
      <!-- ... -->
    </config-template>
    Example: Post-modification Template
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="some-service"
                     cbtype="post-modification">
      <?if {$OPERATION = 'create'}?>
        <devices xmlns="http://tail-f.com/ns/ncs">
          <device>
            <name>{/device}</name>
            <config>
              <!-- ... -->
            </config>
          </device>
        </devices>
      <?elif {$OPERATION = 'update'}?>
        <!-- ... -->
      <?else?>
        <!-- $OPERATION = 'delete' -->
        <!-- ... -->
      <?end?>
    </config-template>
    admin@ncs(config)# commit dry-run | debug template
    admin@ncs(config)# commit dry-run | debug xpath
    admin@ncs(config)# commit dry-run | debug template l3vpn
    admin@ncs(config)# commit dry-run | debug template | debug xpath
    admin@ncs# show running-config devices device c0 config ios:interface | display xpath
    /devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/0']
    /devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/1']
    /devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/2']
    /devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/1']
    /devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/2']
    $ ncs_cmd -c "x /devices/device[name='c0']/config/ios:interface/FastEthernet/name"
    /devices/device{c0}/config/interface/FastEthernet{1/0}/name [1/0]
    /devices/device{c0}/config/interface/FastEthernet{1/1}/name [1/1]
    /devices/device{c0}/config/interface/FastEthernet{1/2}/name [1/2]
    /devices/device{c0}/config/interface/FastEthernet{2/1}/name [2/1]
    /devices/device{c0}/config/interface/FastEthernet{2/2}/name [2/2]
    admin@ncs# config
    admin@ncs(config)# load merge example.cfg
    admin@ncs(config)# commit dry-run | debug template
      1 <config-template xmlns="http://tail-f.com/ns/config/1.0"
                         servicepoint="dns">
          <devices xmlns="http://tail-f.com/ns/ncs">
            <?foreach {/target-device}?>
      5     <device>
              <name>{.}</name>
              <config>
                <ip xmlns="urn:ios">
                  <?if {/dns-server-ip}?>
     10             <!-- If dns-server-ip is set, use that. -->
                    <name-server>{/dns-server-ip}</name-server>
                  <?else?>
                    <!-- Otherwise, use the default one. -->
                    <name-server>192.0.2.1</name-server>
     15           <?end?>
                </ip>
              </config>
            </device>
            <?end?>
     20   </devices>
        </config-template>
    Processing instruction 'foreach': evaluating the node-set \
        (from file "dns-template.xml", line 4)
    Evaluating "/target-device" (from file "dns-template.xml", line 4)
    Context node: /dns[name='instance1']
    Result:
    For /dns[name='instance1']/target-device[.='c1'], it evaluates to []
    For /dns[name='instance1']/target-device[.='c2'], it evaluates to []
    admin@ncs(config)# show full-configuration dns instance1 target-device | display xpath
    /dns[name='instance1']/target-device [ c1 c2 ]
    Processing instruction 'foreach': next iteration: \
        context /dns[name='instance1']/target-device[.='c1'] \
        (from file "dns-template.xml", line 4)
    Evaluating "." (from file "dns-template.xml", line 6)
    Context node: /dns[name='instance1']/target-device[.='c1']
    Result:
    For /dns[name='instance1']/target-device[.='c1'], it evaluates to "c1"
    Operation 'merge' on existing node: /devices/device[name='c1'] \
        (from file "dns-template.xml", line 6)
    Processing instruction 'if': evaluating the condition \
        (from file "dns-template.xml", line 9)
    Evaluating conditional expression "boolean(/dns-server-ip)" \
        (from file "dns-template.xml", line 9)
    Context node: /dns[name='instance1']/target-device[.='c1']
    Result: true - continuing
    Processing instruction 'if': recursing (from file "dns-template.xml", line 9)
    Evaluating "/dns-server-ip" (from file "dns-template.xml", line 11)
    Context node: /dns[name='instance1']/target-device[.='c1']
    Result:
    For /dns[name='instance1'], it evaluates to "192.0.2.110"
    Operation 'merge' on non-existing node: \
        /devices/device[name='c1']/config/ios:ip/name-server[.='192.0.2.110'] \
        (from file "dns-template.xml", line 11)
    Processing instruction 'else': skipping (from file "dns-template.xml", line 12)
    Processing instruction 'foreach': next iteration: \
        context /dns[name='instance1']/target-device[.='c2'] \
        (from file "dns-template.xml", line 4)
    Evaluating "." (from file "dns-template.xml", line 6)
    Context node: /dns[name='instance1']/target-device[.='c2']
    Result:
    For /dns[name='instance1']/target-device[.='c2'], it evaluates to "c2"
    Operation 'merge' on existing node: /devices/device[name='c2'] \
        (from file "dns-template.xml", line 6)
    Processing instruction 'if': evaluating the condition \
        (from file "dns-template.xml", line 9)
    Evaluating conditional expression "boolean(/dns-server-ip)" \
        (from file "dns-template.xml", line 9)
    Context node: /dns[name='instance1']/target-device[.='c2']
    Result: true - continuing
    Processing instruction 'if': recursing (from file "dns-template.xml", line 9)
    Evaluating "/dns-server-ip" (from file "dns-template.xml", line 11)
    Context node: /dns[name='instance1']/target-device[.='c2']
    Result:
    For /dns[name='instance1'], it evaluates to "192.0.2.110"
    Operation 'merge' on non-existing node: \
        /devices/device[name='c2']/config/ios:ip/name-server[.='192.0.2.110'] \
        (from file "dns-template.xml", line 11)
    Processing instruction 'else': skipping (from file "dns-template.xml", line 12)
    cli {
        local-node {
            data  devices {
                      device c1 {
                          config {
                              ip {
                 -                name-server 192.0.2.1;
                 +                name-server 192.0.2.1 192.0.2.110;
                              }
                          }
                      }
                      device c2 {
                          config {
                              ip {
                 +                name-server 192.0.2.110;
                              }
                          }
                      }
                  }
                 +dns instance1 {
                 +    target-device [ c1 c2 ];
                 +    dns-server-ip 192.0.2.110;
                 +}
        }
    }

    <?set-context-node {expression}?>: Allows you to manipulate the current context node used to evaluate XPath expressions in the template. The expression is evaluated within the current XPath context and must evaluate to exactly one node in the data tree.

    <?save-context name?>: Stores both the current context node and the root node of the XPath accessible tree, with name as the key to access them later. You can switch back to this context using switch-context with the same name. Multiple contexts can be stored simultaneously under different names; using save-context with the same name more than once overwrites the stored context.

    <?switch-context name?>: Switches to a context previously stored with save-context under the specified name. Both the current context node and the root node of the XPath accessible tree are changed to the stored values. switch-context does not remove the context from storage and can be used as many times as needed; however, using it with a name that does not exist in storage causes an error.
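    To make the pairing concrete, a hedged sketch of save-context and switch-context working together (the context name and device path are illustrative assumptions):

    ```xml
    <?save-context orig?>
    <?set-root-node {/devices/device[name='pe1']/config}?>
    <!-- ... expressions here are evaluated against the device config tree ... -->
    <?switch-context orig?>
    <!-- both the root node and the context node are now restored -->
    ```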

    <?if-ned-id ned-ids?> ... <?elif-ned-id ned-ids?> ... <?else?> ... <?end?>: If multiple versions of the same NED, defining different versions of the same namespace, are expected to be loaded in the system, this processing instruction helps to resolve the resulting ambiguities in the schema. The part of the template following this instruction, up to the matching elif-ned-id, else, or end instruction, is only applied to devices whose ned-id matches one of the ned-ids given as a parameter. If there are no ambiguities to resolve, this instruction is not required. The ned-ids parameter must contain one or more qualified NED ID identities, separated by spaces. The elif-ned-id instruction is optional and defines a part of the template that applies to devices with another set of ned-ids than previously specified; multiple elif-ned-id instructions are allowed in a single if-ned-id block, and the set of ned-ids given to each must not intersect with the ned-ids specified earlier in the block.

    The else processing instruction should be used with care in this context, as the set of the ned-ids it handles depends on the set of ned-ids loaded in the system, which can be hard to predict at the time of developing the template. To mitigate this problem it is recommended that the package containing this template defines a set of supported-ned-ids as described in .

    The if-ned-id-match and elif-ned-id-match processing instructions work similarly to if-ned-id and elif-ned-id but they accept a regular expression as an argument instead of a list of ned-ids. The regular expression is matched against all of the ned-ids supported by the package. If the if-ned-id-match processing instruction is nested inside of another if-ned-id-match or if-ned-id processing instruction, then the regular expression will only be matched against the subset of ned-ids matched by the encompassing processing instruction. The if-ned-id-match and elif-ned-id-match processing instructions are only allowed inside a device's mounted configuration subtree rooted at /devices/device/config.

    Define a new macro with the specified name and optional parameters. Macro definitions must come at the top of the template, right after the config-template tag. For a detailed description see Macros in Templates.

    Insert and expand the named macro, using the specified values for parameters. For a detailed description, see Macros in Templates.

    Service Callpoints and Templates
    number()
    string()
    re-match()
    starts-with()
    substring()
    substring-after()
    substring-before()
    translate()
    sort-by()
    min()
    not()
    sum()
    Conditional Statements
    Loop Statements
    Loop Statements
        <GigabitEthernet tags="nocreate">
          <name>{link/interface-number}</name>
          <description tags="merge">Link to PE</description>
          ...
        <GigabitEthernet tags="delete">
          <name>{link/interface-number}</name>
          <description tags="merge">Link to PE</description>
          ...
      <?macro GbEth name='{/name}' ip mask='255.255.255.0'?>
        <?set-context-node {expression}?>
        <?save-context name?>
        <?switch-context name?>
        <?if-ned-id ned-ids?>
            ...
        <?elif-ned-id ned-ids?>
            ...
        <?else?>
            ...
        <?end?>
        <?if-ned-id-match regex?>
            ...
        <?elif-ned-id-match regex?>
            ...
        <?else?>
            ...
        <?end?>
        <?macro name params...?>
            ...
        <?endmacro?>
        <?expand name params...?>
        <?set v = value?>
        <?if {expression}?>
            ...
        <?elif {expression}?>
            ...
        <?else?>
            ...
        <?end?>
        <?foreach {expression}?>
            ...
        <?end?>
        <?for v = start_value; {progress condition}; v = next_value?>
            ...
        <?end?>
        <?for ; {condition}; ?>
       <?copy-tree {source}?>
        <?set-root-node {expression}?>
    Namespaces and Multi-NED Support

    Implementing Services

    Explore service development in detail.

    A Template is All You Need

    To demonstrate the simplicity a pure model-to-model service mapping affords, let us consider the most basic approach to providing the mapping: the service XML template. The XML template is an XML-encoded file that tells NSO what configuration to generate when someone requests a new service instance.

    The first thing you need is the relevant device configuration (or configurations if multiple devices are involved). Suppose you must configure 192.0.2.1 as a DNS server on the target device. Using the NSO CLI, you first enter the device configuration, then add the DNS server. For a Cisco IOS-based device:
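    Such a session might look like the following sketch (device name c1 and the exact NED command structure are assumptions):

```
admin@ncs(config)# devices device c1 config
admin@ncs(config-config)# ip name-server 192.0.2.1
```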

    Note here that the configuration is not yet committed and you can use the commit dry-run outformat xml command to produce the configuration in the XML format. This format is an excellent starting point for creating an XML template.

    The interesting portion is the part between <devices> and </devices> tags.

    Another way to get the XML output is to list the existing device configuration in NSO by piping it through the display xml filter:

    If there is a lot of data, it is easy to save the output to a file using the save pipe in the CLI, instead of copying and pasting it by hand:

    The last command saves the configuration for a device in the dns-template.xml file using XML format. To use it in a service, you need a service package.

    You create an empty, skeleton service with the ncs-make-package command, such as:

    The command generates the minimal files necessary for a service package, here named dns. One of the files is dns/templates/dns-template.xml, which is where the configuration in the XML format goes.

    If you look closely, there is one significant difference from the show running-config output: the template uses the config-template XML root tag, instead of config. This tag also has the servicepoint attribute. Other than that, you can use the XML formatted configuration from the CLI as-is.

    Bringing the two XML documents together gives the final dns/templates/dns-template.xml XML template:

    Static DNS Configuration Template Example:
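    A sketch of what such a template might look like, assuming a servicepoint named dns and a Cisco IOS NED using the urn:ios namespace (the exact structure under config depends on the NED version):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 servicepoint="dns">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>c1</name>
      <config>
        <ip xmlns="urn:ios">
          <name-server>192.0.2.1</name-server>
        </ip>
      </config>
    </device>
  </devices>
</config-template>
```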

    The service is now ready to use in NSO. Start the examples.ncs/implement-a-service/dns-v1 example to set up a live NSO system with such a service and inspect how it works. Try configuring two different instances of the dns service.

    The problem with this service is that it always does the same thing because it always generates exactly the same configuration. It would be much better if the service could configure different devices. The updated version, v1.1, uses a slightly modified template:

    The changed part is <name>{/name}</name>, which now uses the {/name} code instead of a hard-coded c1 value. The curly braces indicate that NSO should evaluate the enclosed expression and use the resulting value in its place. The /name expression is an XPath expression, referencing the service YANG model. In the model, name is the name you give each service instance. In this case, the instance name doubles for identifying the target device.

    In the output, the instance name used was c2 and that is why the service performs DNS configuration for the c2 device.

    The template actually allows a decent amount of programmability through XPath and special XML processing instructions. For example:
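    A conditional template fragment might be sketched as follows (the element structure inside config is a placeholder):

```xml
<devices xmlns="http://tail-f.com/ns/ncs">
  <device>
    <name>{/name}</name>
    <config>
      <?if {starts-with(/name, 'c')}?>
        <!-- configuration applied to devices whose name starts with 'c' -->
      <?else?>
        <!-- configuration applied to all other devices -->
      <?end?>
    </config>
  </device>
</devices>
```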

    In the preceding printout, the XPath starts-with() function checks whether the device name starts with a specific prefix. If it does, one set of configuration items is used; otherwise, a different one. For the additional available instructions and the complete set of template features, see the processing instructions reference earlier in this section.

    However, most provisioning tasks require some kind of input to be useful. Fortunately, you can define any number of input parameters in the service model that you can then reference from the template; either to use directly in the configuration or as something to base provisioning decisions on.

    Service Model Captures Inputs

    The YANG service model specifies the input parameters a service in NSO takes. For a specific service model, think of the parameters that a northbound system sends to NSO or that a network engineer needs to enter in the NSO CLI.

    Even a service as simple as the DNS configuration service usually needs some parameters, such as the target device. The service model gives each parameter a name and defines validation rules, ensuring the client-provided values fit what the service expects.

    Suppose you want to add a parameter for the target device to the simple DNS configuration service. You need to construct an appropriate service model, adding a YANG leaf to capture this input.

    This task requires some basic YANG knowledge. For a primer on the main building blocks of the YANG language, review the YANG section of the documentation.

    The service model is located in the src/yang/servicename.yang file in the package. It typically resembles the following structure:
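    A skeleton generated by ncs-make-package typically resembles this sketch (the module and namespace names depend on the package name chosen):

```yang
module servicename {
  namespace "http://example.com/servicename";
  prefix servicename;

  import tailf-ncs { prefix ncs; }

  list servicename {
    key name;

    uses ncs:service-data;
    ncs:servicepoint "servicename";

    leaf name {
      type string;
    }

    // Service-specific input parameters go here.
  }
}
```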

    The list named after the package (servicename in the example) is the interesting part.

    The uses ncs:service-data and ncs:servicepoint statements differentiate this list from any standard YANG list and make it a service. Each list item in NSO represents a service instance of this type.

    The uses ncs:service-data part allows the system to store internal state and provide common service actions, such as re-deploy and get-modifications for each service instance.

    The ncs:servicepoint identifies which part of the system is responsible for the service mapping. For a template-only service, it is the XML template that uses the same service point value in the config-template element.

    The name leaf serves as the key of the list and is primarily used to distinguish service instances from each other.

    The remaining statements describe the functionality and input parameters that are specific to this service. This is where you add the new leaf for the target device parameter of the DNS service:

    Use the examples.ncs/implement-a-service/dns-v2 example to explore how this model works and try to discover what deficiencies it may have.

    In its current form, the model allows you to specify any value for target-device, including none at all! Obviously, this is not good as it breaks the provisioning of the service. But even more importantly, not validating the input may allow someone to use the service in a way you did not intend and perhaps bring down the network.

    You can guard against invalid input with the help of additional YANG statements. For example:
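    Based on the description that follows, the restricted leaf could be sketched like this (a minimal illustration of combining mandatory, pattern, and length):

```yang
leaf target-device {
  mandatory true;
  type string {
    pattern "c[0-2]";
    length "2";
  }
}
```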

    Now this parameter is mandatory for every service instance and must be one of the string literals: c0, c1, or c2. This format is defined by the regular expression in the pattern statement. In this particular case, the length restriction is redundant but demonstrates how you can combine multiple restrictions. You can even add multiple pattern statements to handle more complex cases.

    What if you wanted to make the DNS server address configurable too? You can add another leaf to the service model:
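    Such a leaf might be sketched as follows (the exact pattern expression used to restrict the range is an assumption):

```yang
leaf dns-server {
  type inet:ipv4-address {
    pattern '192\.0\.2\.[0-9]+';
  }
}
```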

    There are three notable things about this leaf:

    • There is no mandatory statement, meaning the value for this leaf is optional. The XML template will be designed to provide some default value if none is given.

    • The type of the leaf is inet:ipv4-address, which restricts the value for this leaf to an IP address.

    • The inet:ipv4-address type is further restricted using a regular expression to only allow IP addresses from the 192.0.2.0/24 range.

    YANG is very powerful and allows you to model all kinds of values and restrictions on the data. In addition to the types defined in the YANG language itself (RFC 7950), predefined types describing common networking concepts, such as those from the inet namespace (RFC 6991), are available to you out of the box. With so many types and restrictions supported, validating the inputs is much easier.

    The one missing piece for the service is the XML template. You can take the static DNS configuration template from earlier as a base and tweak it to reference the defined inputs.

    Using the code {XYZ} or {/XYZ} in the template instructs NSO to look for the value in the service instance data, in the node with the name XYZ. So, you can refer to the target-device input parameter, as defined in YANG, with the {/target-device} code in the XML template.

    The code inside the curly brackets actually contains an XPath 1.0 expression with the service instance data as its root, so an absolute path (with a slash) and a relative one (without it) refer to the same node in this case, and you can use either.

    The final, improved version of the DNS service template that takes into account the new model, is:
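    A sketch of what this template could look like, combining the two parameters (the default handling for a missing dns-server and the urn:ios element structure are assumptions):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 servicepoint="dns">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/target-device}</name>
      <config>
        <ip xmlns="urn:ios">
          <?if {dns-server}?>
            <name-server>{/dns-server}</name-server>
          <?else?>
            <name-server>192.0.2.1</name-server>
          <?end?>
        </ip>
      </config>
    </device>
  </devices>
</config-template>
```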

    The following figure captures the relationship between the YANG model and the XML template that ultimately produces the desired device configuration.

    The complete service is available in the examples.ncs/implement-a-service/dns-v2.1 example. Feel free to investigate on your own how it differs from the initial, no-validation service.

    Extracting the Service Parameters

    When the service is simple, constructing the YANG model and creating the service mapping (the XML template) is straightforward. Since the two components are mostly independent, you can start your service design with either one.

    If you write the YANG model first, you can load it as a service package into NSO (without having any mapping defined) and iterate on it. This way, you can try the model, which is the interface to the service, with network engineers or northbound systems before investing the time to create the mapping. This model-first approach is also sometimes called top-down.

    The alternative is to create the mapping first. Especially for developers new to NSO, the template-first, or bottom-up, approach is often easier to implement. With this approach, you templatize the configuration and extract the required service parameters from the template.

    Experienced NSO developers naturally combine the two approaches, without much thinking. However, if you have trouble modeling your service at first, consider following the template-first approach demonstrated here.

    For the following example, suppose you want the service to configure IP addressing on an ethernet interface. You know what configuration is required to do this manually for a particular ethernet interface. For a Cisco IOS-based device you would use the commands, such as:
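    The manual configuration might resemble the following sketch (the interface name and addresses are placeholders):

```
c1(config)# interface GigabitEthernet0/1
c1(config-if)# ip address 192.168.5.1 255.255.255.0
c1(config-if)# exit
```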

    To transform this configuration into a reusable service, complete the following steps:

    • Create an XML template with hard-coded values.

    • Replace each value specific to this instance with a parameter reference.

    • Add each parameter to the YANG model.

    • Add parameter validation.

    Start by generating the configuration in the XML format, making use of the display xml filter. Note that the XML output will not necessarily be a one-to-one mapping of the CLI commands; the XML reflects the device YANG model, which can be more complex, while the commands on the CLI can hide some of this complexity.

    The transformation to a template also requires you to change the root tag, which produces the resulting XML template:

    However, this template has all the values hard-coded and only configures one specific interface on one specific device.

    Now you must replace all the dynamic parts that vary from service instance to service instance with references to the relevant parameters. In this case, it is data specific to each device: which interface and which IP address to use.

    Suppose you pick the following names for the variable parameters:

    1. device: The network device to configure.

    2. interface: The network interface on the selected device.

    3. ip-address: The IP address to use on the selected interface.

    Generally, you can make up any name for a parameter but it is best to follow the same rules that apply for naming variables in programming languages, such as making the name descriptive but not excessively verbose. It is customary to use a hyphen (minus sign) to concatenate words and use all-lowercase (“kebab-case”), which is the convention used in the YANG language standards.

    The corresponding template then becomes:
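    With the three parameters substituted, the template might be sketched as follows (the servicepoint name iface and the exact IOS XML structure are assumptions):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 servicepoint="iface">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device}</name>
      <config>
        <interface xmlns="urn:ios">
          <GigabitEthernet>
            <name>{/interface}</name>
            <ip>
              <address>
                <primary>
                  <address>{/ip-address}</address>
                  <mask>255.255.255.0</mask>
                </primary>
              </address>
            </ip>
          </GigabitEthernet>
        </interface>
      </config>
    </device>
  </devices>
</config-template>
```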

    Having completed the template, you can add all the parameters, three in this case, to the service model.

    The partially completed model is now:

    Missing are the data type and other validation statements. At this point, you could fill out the model with generic type string statements, akin to the name leaf. This is a useful technique to test out the service in early development. But here you can complete the model directly, as it contains only three parameters.

    You can use a leafref type leaf to refer to a device by its name in NSO. This type uses a dynamic lookup at the specified path to enumerate the available values. For the device leaf, it lists every device name that NSO knows about. If NSO manages two devices, named rtr-sjc-01 and rtr-sto-01, then either “rtr-sjc-01” or “rtr-sto-01” is a valid value for such a leaf. This is a common way to refer to devices in NSO services.
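    In NSO YANG models, such a device reference is conventionally written as a leafref into the device list:

```yang
leaf device {
  type leafref {
    path "/ncs:devices/ncs:device/ncs:name";
  }
}
```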

    In a similar fashion, restrict the valid values of the other two parameters.

    You would typically create the service package skeleton with the ncs-make-package command and update the model in the .yang file. The model in the skeleton might have some additional example leafs that you do not need and should remove to finalize the model. That gives you the final, full-service model:

    The examples.ncs/implement-a-service/iface-v1 example contains the complete YANG module with this service model in the packages/iface-v1/src/yang/iface.yang file, as well as the corresponding service template in packages/iface-v1/templates/iface-template.xml.

    FASTMAP and Service Life Cycle

    The YANG model and the mapping (the XML template) are the two main components required to implement a service in NSO. The hidden part of the system that makes such an approach feasible is called FASTMAP.

    FASTMAP covers the complete service life cycle: creating, changing, and deleting the service. It requires a minimal amount of code for mapping from a service model to a device model.

    FASTMAP is based on generating changes from an initial create operation. When the service instance is created, the reverse of the resulting device configuration is stored together with the service instance. If an NSO user later changes the service instance, NSO first applies (in an isolated transaction) the reverse diff of the service, effectively undoing the previous create operation. Then it runs the logic to create the service again and finally performs a diff against the current configuration. Only the result of the diff is then sent to the affected devices.

    It is therefore very important that the service create code produces the same device changes for a given set of input parameters every time it is executed. Techniques to achieve this are described elsewhere in this guide.

    If the service instance is deleted, NSO applies the reverse diff of the service, effectively removing all configuration changes the service did on the devices.

    Assume we have a service model that defines a service with attributes X, Y, and Z. The mapping logic calculates that attributes A, B, and C must be set on the devices. When the service is instantiated, the previous values of the corresponding device attributes A, B, and C are stored with the service instance in the CDB. This allows NSO to bring the network back to the state before the service was instantiated.

    Now let us see what happens if one service attribute, say Z, is changed. NSO will execute the mapping as if the service were created from scratch. The resulting device configuration is then compared with the actual configuration, and the minimal diff is sent to the devices. Note that this is managed automatically; there is no code to handle the specific "change Z" operation.

    When a user deletes a service instance, NSO retrieves the stored device configuration from the moment before the service was created and reverts to it.
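    The FASTMAP behavior described above can be illustrated with a small, self-contained Python simulation (this is a conceptual sketch, not NSO code; plain dictionaries stand in for the device configuration and the reverse diff stored in the CDB):

```python
def create(inputs):
    """Mapping logic: derive device attributes A, B, C from service inputs X, Y, Z."""
    return {"A": inputs["X"], "B": inputs["Y"], "C": inputs["Z"]}

class FastmapSim:
    def __init__(self, device):
        self.device = device   # live device configuration
        self.reverse = None    # stored reverse diff (pre-service values)

    def deploy(self, inputs):
        desired = create(inputs)
        if self.reverse is None:
            # First create: remember the pre-service values of all touched attributes.
            self.reverse = {k: self.device.get(k) for k in desired}
        else:
            # Change: first undo the previous create by applying the reverse diff ...
            self._apply(self.reverse)
        # ... then apply the freshly computed configuration.
        self._apply(desired)

    def delete(self):
        # Delete: restore the pre-service state and forget the service.
        self._apply(self.reverse)
        self.reverse = None

    def _apply(self, values):
        for k, v in values.items():
            if v is None:
                self.device.pop(k, None)  # attribute did not exist before
            else:
                self.device[k] = v

device = {"A": "old-a"}                # pre-existing configuration
svc = FastmapSim(device)
svc.deploy({"X": 1, "Y": 2, "Z": 3})   # create
svc.deploy({"X": 1, "Y": 2, "Z": 9})   # change only Z; no "change Z" code needed
svc.delete()                           # device returns to {"A": "old-a"}
```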

    Templates and Code

    For a complex service, you may realize that the input parameters alone are not sufficient to render the device configuration. Perhaps the northbound system only provides a subset of the required parameters. For example, the other system wants NSO to pick an IP address and does not pass it as an input parameter. Then, additional logic or API calls may be necessary, but XML templates provide no such functionality on their own.

    The solution is to augment XML templates with custom code. Or, more accurately, create custom provisioning code that leverages XML templates. Alternatively, you can also implement the mapping logic completely in the code and not use templates at all. The latter, forgoing the templates altogether, is less common, since templates have a number of beneficial properties.

    Templates separate the way parameters are applied, which depends on the type of target device, from calculating the parameter values. For example, you would use the same code to find the IP address to apply on a device, but the actual configuration might differ depending on whether it is a Cisco IOS (XE) device, an IOS XR device, or one from another vendor entirely.

    Moreover, if you use templates, NSO can automatically validate that the templates are compatible with the NEDs in use, which allows you to sidestep whole classes of bugs.

    NSO offers multiple programming languages to implement the code. The --service-skeleton option of the ncs-make-package command influences the selection of the programming language and if the generated code should contain sample calls for applying an XML template.

    Suppose you want to extend the template-based ethernet interface addressing service to also allow specifying the netmask. You would like to do this in the more modern, CIDR-based single-number format, as used in 192.168.5.1/24 (the /24 after the address). However, the generated device configuration takes the netmask in the dot-decimal format, such as 255.255.255.0, so the service needs to perform some translation. That requires custom service code.

    Such a service will ultimately contain three parts: the service YANG model, the translation code, and the XML template. The model and the template serve the same purpose as before, while custom code provides fine-grained control over how templates are applied and the data available to them.

    Since the service is based on the previous interface addressing service, you can save yourself a lot of work by starting with the existing YANG model and XML template.

    The service YANG model needs an additional cidr-netmask leaf to hold the user-provided netmask value:
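    Based on the description that follows, the leaf could be sketched as:

```yang
leaf cidr-netmask {
  type uint8 {
    range "0..32";
  }
  default 24;
}
```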

    This leaf stores a small number (of uint8 type), with values between 0 and 32. It also specifies a default of 24, which is used when the client does not supply a value for this parameter.

    The previous XML template also requires only minor tweaks. A small but important change is the removal of the servicepoint attribute on the top element. Since it is gone, NSO does not apply the template directly for each service instance. Instead, your custom code registers itself on this servicepoint and is responsible for applying the template.

    The reason is that the code will supply the value for an additional variable, here called NETMASK. This is the other change necessary in the template: referencing the NETMASK variable for the netmask value:
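    Such a fragment might look like this sketch (nested inside the interface configuration; the surrounding element names are assumptions):

```xml
<address>
  <primary>
    <address>{/ip-address}</address>
    <mask>{$NETMASK}</mask>
  </primary>
</address>
```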

    Unlike references to other parameters, NETMASK does not represent a data path but a variable. It must start with a dollar character ($) to distinguish it from a path. As shown here, variables are often written in all-uppercase, making it easier to quickly tell whether something is a variable or a data path.

    Variables get their values from different sources but the most common one is the service code. You implement the service code using a programming language, such as Java or Python.

    The following two procedures create an equivalent service that acts identically from a user's perspective. They only differ in the language used; they use the same logic and the same concepts. Still, the final code differs quite a bit due to the nature of each programming language. Generally, you should pick one language and stick with it. If you are unsure which one to pick, you may find Python slightly easier to understand because it is less verbose.

    Templates and Python Code

    The usual way to start working on a new service is to first create a service skeleton with the ncs-make-package command. To use Python code for service logic and XML templates for applying configuration, select the python-and-template option. For example:

    To use the prepared YANG model and XML template, save them into the iface/src/yang/iface.yang and iface/templates/iface-template.xml files. This is exactly the same as for the template-only service.

    What is different is the presence of the python/ directory in the package file structure. It contains one or more Python packages (not to be confused with NSO packages) that provide the service code.

    The function of interest is the cb_create() function, located in the main.py file that the package skeleton created. Its purpose is the same as that of the XML template in the template-only service: generate configuration based on the service instance parameters. This code is also called 'the create code'.

    The create code usually performs the following tasks:

    • Read service instance parameters.

    • Prepare configuration variables.

    • Apply one or more XML templates.

    Reading instance parameters is straightforward with the help of the service function parameter, using the Maagic API. For example:

    Note that the hyphen in cidr-netmask is replaced with an underscore in service.cidr_netmask, as documented for the Maagic API.

    The way configuration variables are prepared depends on the type of the service. For the interface addressing service with netmask, the netmask must be converted into dot-decimal format:

    The code makes use of the built-in Python ipaddress package for conversion.
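    The conversion itself can be demonstrated outside of NSO with the standard library alone (a sketch of the same idea; the helper name is illustrative):

```python
import ipaddress

def cidr_to_netmask(prefix: int) -> str:
    """Convert a CIDR prefix length (e.g. 24) to dot-decimal form (e.g. 255.255.255.0)."""
    # Build a dummy network with the given prefix and read back its netmask.
    return str(ipaddress.IPv4Network(f"0.0.0.0/{prefix}").netmask)

print(cidr_to_netmask(24))  # 255.255.255.0
```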

    Finally, the create code applies a template, with only minimal changes to the skeleton-generated sample: the names and values passed to the vars.add() function, which are specific to this service.

    If required, your service code can call vars.add() multiple times, to add as many variables as the template expects.

    The first argument to the template.apply() call is the file name of the XML template, without the .xml suffix. It allows you to apply multiple, different templates for a single service instance. Separating the configuration into multiple templates based on functionality, called feature templates, is a great practice with bigger, complex configurations.

    The complete create code for the service is:

    You can test it out in the examples.ncs/implement-a-service/iface-v2-py example.

    Templates and Java Code

    The usual way to start working on a new service is to first create a service skeleton with the ncs-make-package command. To use Java code for service logic and XML templates for applying the configuration, select the java-and-template option. For example:

    To use the prepared YANG model and XML template, save them into the iface/src/yang/iface.yang and iface/templates/iface-template.xml files. This is exactly the same as for the template-only service.

    What is different is the presence of the src/java directory in the package file structure. It contains a Java package (not to be confused with NSO packages) that provides the service code, and build instructions for the ant tool to compile the Java code.

    The function of interest is the create() function, located in the ifaceRFS.java file that the package skeleton created. Its purpose is the same as that of the XML template in the template-only service: generate configuration based on the service instance parameters. This code is also called 'the create code'.

    The create code usually performs the following tasks:

    • Read service instance parameters.

    • Prepare configuration variables.

    • Apply one or more XML templates.

    Reading instance parameters is done with the help of the service function parameter, using the NAVU API. For example:

    The way configuration variables are prepared depends on the type of the service. For the interface addressing service with netmask, the netmask must be converted into dot-decimal format:

    The create code applies a template, with only minimal changes to the skeleton-generated sample; the names and values for the myVars.putQuoted() function are different since they are specific to this service.

    If required, your service code can call myVars.putQuoted() multiple times, to add as many variables as the template expects.

    The second argument to the Template constructor is the file name of the XML template, without the .xml suffix. It allows you to instantiate and apply multiple, different templates for a single service instance. Separating the configuration into multiple templates based on functionality, called feature templates, is a great practice with bigger, complex configurations.

    Finally, you must also return the opaque object and handle various exceptions in the function. If exceptions propagate out of the create code, you should transform them into NSO-specific ones first, so the UI can present the user with a meaningful error message.

    The complete create code for the service is then:

    You can test it out in the examples.ncs/implement-a-service/iface-v2-java example.

    Configuring Multiple Devices

    A service instance may require configuration on more than just a single device. In fact, it is quite common for a service to configure multiple devices.

    There are a few ways in which you can achieve this for your services:

    • In code: Using an API, such as Python Maagic or Java NAVU, navigate the data model to each device's configuration under the devices device DEVNAME config path and set the required values.

    • In code with templates: Apply the template multiple times with different values, such as the device name.

    • With templates only: use foreach or automatic (implicit) loops.

    The generally recommended approach is to use either code with templates or templates with foreach loops. They are explicit and also work well when you configure devices of different types. Using only code extends less well to the latter case, as it requires additional logic and checks for each device type.

    Automatic, implicit loops in templates are harder to understand since the syntax looks like the one for normal leafs. A common example is a device definition as a leaf-list in the service YANG model, such as:

    Because it is a leaf-list, the following template applies to all the selected devices, using an implicit loop:

    It performs the same as the following template, which loops through the devices explicitly:
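    An explicit loop over a devices leaf-list could be sketched like this (inside the loop, {.} refers to the current value):

```xml
<devices xmlns="http://tail-f.com/ns/ncs">
  <?foreach {/devices}?>
    <device>
      <name>{.}</name>
      <config>
        <!-- per-device configuration goes here -->
      </config>
    </device>
  <?end?>
</devices>
```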

    Being explicit, the latter is usually much easier to understand and maintain for most developers. The examples.ncs/implement-a-service/dns-v3 example demonstrates this syntax in the XML template.

    Supporting Different Device Types

    Applying the same template works fine as long as you have a uniform network with similar devices. What if two different devices can provide the same service but require different configuration? Should you create two different services in NSO? No. Services allow you to abstract and hide the device specifics through a device-independent service model, while still allowing customization of device configuration per device type.

    One way to do this is to apply a different XML template from the service code, depending on the device type. However, the same is also possible through XML templates alone.

    When NSO applies configuration elements in the template, it checks the XML namespaces that are used. If the target device does not support a particular namespace, NSO simply skips that part of the template. Consequently, you can put configuration for different device types in the same XML template and only the relevant parts will be applied.

    Consider the following example:
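    A sketch of such a combined template (the element structure for both device types is an approximation):

```xml
<config>
  <!-- Applied only to devices that support the urn:ios namespace -->
  <interface xmlns="urn:ios">
    <GigabitEthernet>
      <name>{/interface}</name>
    </GigabitEthernet>
  </interface>
  <!-- Applied only to devices that support the example router NED namespace -->
  <sys xmlns="http://example.com/router">
    <interfaces>
      <interface>
        <name>{/interface}</name>
      </interface>
    </interfaces>
  </sys>
</config>
```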

    Due to the xmlns="urn:ios" attribute, the first part of the template (the interface GigabitEthernet) will only apply to Cisco IOS-based devices, while the second part (the sys interfaces interface) will only apply to the netsim-based router-nc-type devices, as defined by the xmlns attribute on the sys element.

    In case you need to further limit what configuration applies where and namespace-based filtering is too broad, you can also use the if-ned-id XML processing instruction. Each NED package in NSO defines a unique NED-ID, which distinguishes between different device types (and possibly firmware versions). Based on the configured ned-id of the device, you can apply different parts of the XML template. For example:
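    The construct could be sketched as follows (the qualified ned-id name follows the usual prefix:identity convention; the element structure is an assumption):

```xml
<config>
  <?if-ned-id cisco-ios-cli-3.0:cisco-ios-cli-3.0?>
    <interface xmlns="urn:ios">
      <GigabitEthernet>
        <name>{/interface}</name>
      </GigabitEthernet>
    </interface>
  <?end?>
</config>
```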

    The preceding template applies configuration for the interface only if the selected device uses the cisco-ios-cli-3.0 NED-ID. You can find the full code as part of the examples.ncs/implement-a-service/iface-v3 example.

    Shared Service Settings and Auxiliary Data

    In the previous sections, we have looked at service mapping when the input parameters are enough to generate the corresponding device configurations. In many situations, this is not the case. The service mapping logic may need to reach out to other data in order to generate the device configuration. This is common in the following scenarios:

    • Policies: Often a set of policies is defined that is shared between service instances. The policies, such as QoS, have data models of their own (not service models) and the mapping code reads data from those.

    • Topology information: the service mapping might need to know how devices are connected, such as which network switches lie between two routers.

    • Resources such as VLAN IDs or IP addresses, which might not be given as input parameters. They may be modeled separately in NSO or fetched from an external system.

    It is important to design the service model considering the above requirements: what is input and what is available from other sources. In the latter case, in terms of implementation, an important distinction is made between accessing the existing data and allocating new resources. You must take special care for resource allocation, such as VLAN or IP address assignment, as discussed later on. For now, let us focus on using pre-existing shared data.

    One example of such use is to define QoS policies "on the side." Only a reference to an existing QoS policy is supplied as input. This is a much better approach than giving all QoS parameters to every service instance. But note that modifying the QoS definitions the services refer to does not immediately change the existing deployed service instances. To have a service implement the changed policies, you need to perform a re-deploy of the service.

    A simpler example is a modified DNS configuration service that allows selecting from a predefined set of DNS servers, instead of supplying the DNS server directly as a service parameter. The main benefit in this case is that clients have no need to be aware of the actual DNS servers (and their IPs). In addition, this approach simplifies the management for the network operator, as all the servers are kept in a single place.

    What is required to implement such a service? There are two parts. The first is the model and data that define the available DNS server options, which are shared (used) across all the DNS service instances. The second is a modification to the service inputs and mapping logic to use this data.

    For the first part, you must create a data model. If the shared data is specific to one service type, such as the DNS configuration, you can define it alongside the service instance model, in the service package. But sometimes this data may be shared between multiple types of service. Then it makes more sense to create a separate package for the shared data models.

    In this case, define a new top-level container in the service's YANG file as:

    Note that the container is defined outside the service list because this data is not specific to individual service instances:
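    A sketch of such a shared-data model, based on the description that follows; the leaf-list name servers is an assumption:

```yang
container dns-options {
  list dns-option {
    key name;

    leaf name {
      type string;
    }

    leaf-list servers {
      type inet:ipv4-address;
    }
  }
}
```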

    The dns-options container includes a list of dns-option items. Each item defines a set of DNS servers (leaf-list) and a name for this set.

    Once the shared data model is compiled and loaded into NSO, you can define the available DNS server sets:
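    For instance (the set names follow the lon/sto/sjc options mentioned later in this section; the addresses are illustrative):

```
admin@ncs(config)# dns-options dns-option lon servers [ 192.0.2.110 192.0.2.111 ]
admin@ncs(config)# dns-options dns-option sto servers [ 192.0.2.120 ]
admin@ncs(config)# commit
Commit complete.
```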

    You must also update the service instance model to allow clients to pick one of these DNS servers:
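    In the service list, the input leaf can then reference the shared data with a leafref; the leaf name and path match the deref() usage discussed later in this section:

```yang
leaf dns-servers {
  type leafref {
    path "/dns-options/dns-option/name";
  }
}
```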

    Different ways exist to model the dns-servers service input. The first option that comes to mind might be a string type with a pattern that limits the inputs to one of lon, sto, or sjc. Another option would be a YANG enum type. But both of these have the drawback that you need to change the YANG model whenever you add or remove available dns-option items.

    Using a leafref allows NSO to validate inputs for this leaf by comparing them to the values returned by the path XPath expression. So, whenever you update the /dns-options/dns-option items, the change is automatically reflected in the valid dns-servers values.

    At the same time, you must also update the mapping to take advantage of this service input parameter. The service XML template is very similar to the previous one. The main difference is the way in which the DNS addresses are read from the CDB, using the special deref() XPath function:
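    Assuming the shared model keeps the DNS addresses in a servers leaf-list under each dns-option (an assumption), the relevant template line might read:

```xml
<name-server>{deref(/dns-servers)/../servers}</name-server>
```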

    The deref() function "jumps" to the item selected by the leafref. Here, the leafref's path points to /dns-options/dns-option/name, so this is where deref(/dns-servers) ends: at the name leaf of the selected dns-option item.

    The following code, which performs the same thing but in a more verbose way, further illustrates how the DNS server value is obtained:
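    A hypothetical equivalent without deref(), selecting the dns-option whose name matches the service input directly (leaf-list name assumed as before):

```xml
<name-server>{/dns-options/dns-option[name=current()/dns-servers]/servers}</name-server>
```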

    The complete service is available in the examples.ncs/implement-a-service/dns-v3 example.

    Service Actions

    NSO provides some service actions out of the box, such as re-deploy or check-sync, and you can also add your own. A typical use case is to implement some kind of self-test action that tries to verify the service is operational. Such an action could use ping or similar network commands, as well as verify device operational data, such as routing table entries.

    This action supplements the built-in check-sync and deep-check-sync actions, which only check for the required device configuration.

    For example, a DNS configuration service might perform a domain lookup to verify the Domain Name System is working correctly. Likewise, an interface configuration service could ping an IP address or check the interface status.

    The action consists of the YANG model for action inputs and outputs, as well as the action code that is executed when a client invokes the action.

    Typically, such actions are defined per service instance, so you model them under the service list:
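    A sketch of such an action model; the action name test-enabled matches the action referenced later in the Operational Data section, while the actionpoint name is an assumption:

```yang
action test-enabled {
  tailf:actionpoint iface-test-enabled;
  output {
    leaf status {
      type enumeration {
        enum up;
        enum down;
        enum unknown;
      }
    }
  }
}
```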

    The action needs no special inputs; because it is defined on the service instance, it can find the relevant interface to query. The output has a single leaf, called status, which uses an enumeration type for explicitly defining all the possible values it can take (up, down, or unknown).

    Note that using the action statement requires you to also use the yang-version 1.1 statement in the YANG module header (see ).

    Action Code in Python

    NSO Python API contains a special-purpose base class, ncs.dp.Action, for implementing actions. In the main.py file, add a new class that inherits from it, and implements an action callback:

    The callback receives a number of arguments, one of them being kp. It contains a keypath value, identifying the data model path to the node on which the action was invoked, in this case the service instance.

    The keypath value uniquely identifies each node in the data model and is similar to an XPath path, but encoded a bit differently. You can use it with the ncs.maagic.cd() function to navigate to the target node.

    The newly defined service variable allows you to access all of the service data, such as device and interface parameters. This allows you to navigate to the configured device and verify the status of the interface. The method likely depends on the device type and is not shown in this example.

    The action class implementation then resembles the following:
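    A sketch of such a class, assuming an iface-test-enabled actionpoint (an assumption) and the current NSO Python API conventions; the device-specific status check is omitted:

```python
import ncs
from ncs.dp import Action

class TestEnabled(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output, trans):
        # Navigate from the keypath to the service instance node
        root = ncs.maagic.get_root(trans)
        service = ncs.maagic.cd(root, kp)
        # A device-specific interface status check would go here,
        # e.g., using service.device and service.interface
        output.status = 'unknown'
```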

    Finally, do not forget to register this class on the action point in the Main application.
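    A sketch, with an assumed actionpoint name:

```python
class Main(ncs.application.Application):
    def setup(self):
        self.log.info('Main RUNNING')
        # Tie the action class to the actionpoint from the YANG model
        self.register_action('iface-test-enabled', TestEnabled)
```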

    You can test the action in the examples.ncs/implement-a-service/iface-v4-py example.

    Action Code in Java

    Using the Java programming language, all callbacks, including service and action callback code, are defined using annotations on a callback class. The class NSO looks for is specified in the package-meta-data.xml file. This class should contain an @ActionCallback() annotated method that ties it back to the action point in the YANG model:
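    A sketch of such a callback class, assuming DP API conventions; the callpoint, class, and method names are illustrative:

```java
public class IfaceActions {
    @ActionCallback(callPoint = "iface-test-enabled",
                    callType = ActionCBType.ACTION)
    public ConfXMLParam[] testEnabled(DpActionTrans trans, ConfTag name,
                                      ConfObject[] kp, ConfXMLParam[] params)
            throws DpCallbackException {
        // Navigate to the service instance using kp and return the status
        ...
    }
}
```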

    The callback receives a number of arguments, one of them being kp. It contains a keypath value, identifying the data model path to the node on which the action was invoked, in this case the service instance.

    The keypath value uniquely identifies each node in the data model and is similar to an XPath path, but encoded a bit differently. You can use it with the com.tailf.navu.KeyPath2NavuNode class to navigate to the target node.

    The newly defined service variable allows you to access all of the service data, such as device and interface parameters. This allows you to navigate to the configured device and verify the status of the interface. The method likely depends on the device type and is not shown in this example.

    The complete implementation requires you to supply your own Maapi read transaction and resembles the following:

    You can test the action in the examples.ncs/implement-a-service/iface-v4-java example.

    Operational Data

    In addition to device configuration, services may also provide operational status or statistics. This is operational data: it is modeled with config false statements in YANG and cannot be directly set by clients. Instead, clients can only read this data, for example, to check service health.

    What kind of data a service exposes depends heavily on what the service does. Perhaps the interface configuration service needs to provide information on whether a network interface was enabled and operational at the time of the last check (because such a check could be expensive).

    Taking the iface service as a base, consider how you can extend the instance model with an operational leaf to hold the interface status data as of the last check.
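    For example, the extension could be a single config false leaf reusing the same enumeration as the action output:

```yang
leaf last-test-result {
  config false;
  type enumeration {
    enum up;
    enum down;
    enum unknown;
  }
}
```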

    The new leaf last-test-result is designed to store the same data as the test-enabled action returns. Importantly, it also contains a config false substatement, making it operational data.

    When faced with duplication of type definitions, as seen in the preceding code, the best practice is to consolidate the definition in a single place and avoid potential discrepancies in the future. You can use a typedef statement to define a custom YANG data type.

    The typedef statements should come before data statements, such as containers and lists in the model.

    Once defined, you can use the new type as you would any other YANG type. For example:
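    A sketch, with an assumed typedef name:

```yang
typedef iface-status-type {
  type enumeration {
    enum up;
    enum down;
    enum unknown;
  }
}

// The action output and the operational leaf can then share the type:
leaf last-test-result {
  config false;
  type iface-status-type;
}
```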

    Users can then view operational data with the help of the show command. The data is also available through other NB interfaces, such as NETCONF and RESTCONF.

    But where does the operational data come from? The service application code provides this data. In this example, the last-test-result leaf captures the result of the enabled check, which is implemented as a custom action. So, here it is the action code that sets the leaf's value.

    This approach works well when operational data is updated based on some event, such as a received notification or a user action, and NSO is used to cache its value.

    For cases where this is insufficient, NSO also allows producing operational data on demand, each time a client requests it, through the Data Provider API. See DP API for this alternative approach.

    Writing Operational Data in Python

    Unlike configuration data, which always requires a transaction, you can write operational data to NSO with or without a transaction. Using a transaction allows you to easily compose multiple writes into a single atomic operation but has some small performance penalty due to transaction overhead.

    If you avoid transactions and write data directly, you must use the low-level CDB API, which requires manual connection management and does not support Maagic API for data model navigation.

    The alternative, transaction-based approach uses high-level MAAPI and Maagic objects:
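    A minimal sketch, assuming the iface service model with a last-test-result leaf and an existing instance named instance1:

```python
import ncs

# Start a write transaction against the operational datastore
with ncs.maapi.single_write_trans('admin', 'system',
                                  db=ncs.OPERATIONAL) as t:
    root = ncs.maagic.get_root(t)
    service = root.iface['instance1']
    service.last_test_result = 'up'
    t.apply()
```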

    When used as part of the action, the action code might be as follows:

    Note that you have to start a new transaction in the action code, even though trans is already supplied, since trans is read-only and cannot be used for writes.

    Another thing to keep in mind with operational data is that NSO by default does not persist it to storage, only keeps it in RAM. One way for the data to survive NSO restarts is to use the tailf:persistent statement, such as:
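    For example (the leaf name and type are illustrative):

```yang
leaf last-test-result {
  config false;
  type iface-status-type;
  tailf:cdb-oper {
    tailf:persistent true;
  }
}
```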

    You can also register a function with the service application class to populate the data on package load, if you are not using tailf:persistent.

    The examples.ncs/implement-a-service/iface-v5-py example implements such code.

    Writing Operational Data in Java

    Unlike configuration data, which always requires a transaction, you can write operational data to NSO with or without a transaction. Using a transaction allows you to easily compose multiple writes into a single atomic operation but has some small performance penalty due to transaction overhead.

    If you avoid transactions and write data directly, you must use the low-level CDB API, which does not support NAVU for data model navigation.

    The alternative, transaction-based approach uses high-level MAAPI and NAVU objects:

    Note the use of the context.startOperationalTrans() function to start a new transaction against the operational data store. In other respects, the code is the same as for writing configuration data.

    Another thing to keep in mind with operational data is that NSO by default does not persist it to storage, only keeps it in RAM. One way for the data to survive NSO restarts is to model the data with the tailf:persistent statement, such as:

    You can also register a custom com.tailf.ncs.ApplicationComponent class with the service application to populate the data on package load, if you are not using tailf:persistent. Please refer to The Application Component Type for details.

    The examples.ncs/implement-a-service/iface-v5-java example implements such code.

    Nano Services for Provisioning with Side Effects

    A FASTMAP service cannot perform explicit function calls with side effects. The only action a service is allowed to take is to modify the configuration of the current transaction. For example, a service may not invoke an action to generate authentication key files or start a virtual machine. All such actions must occur before the service is created, with their results provided as input parameters. This restriction exists because the FASTMAP code may be executed as part of a commit dry-run, or the commit may fail, in which case the side effects would have to be undone.

    Nano services use a technique called reactive FASTMAP (RFM) and provide a framework to safely execute actions with side effects by implementing the service as several smaller (nano) steps or stages. Reactive FASTMAP can also be implemented directly using the CDB subscribers, but nano services offer a more streamlined and robust approach for staged provisioning.

    The services discussed previously in this section were modeled to give all required parameters to the service instance, so the mapping logic code could immediately do its work. Sometimes this is not possible. The following are two examples that require staged provisioning, where a nano service step executing an action is the best-practice solution:

    • Allocating a resource from an external system, such as an IP address, or generating an authentication key file using an external command. It is impossible to do this allocation from within the normal FASTMAP create() code, since there is no way to deallocate the resource on commit, abort, or failure, or when deleting the service. Furthermore, the create() code runs within the transaction lock, so the time spent in the service's create() code should be as short as possible.

    • The service requires the start of one or more virtual machines (VMs) or Virtual Network Functions (VNFs). The VMs do not yet exist, and the create() code needs to trigger something that starts the VMs and then, later, when the VMs are operational, configure them.

    The basic concepts of nano services are covered in detail in Nano Services for Staged Provisioning. The example in examples.ncs/development-guide/nano-services/netsim-sshkey implements SSH public key authentication setup using a nano service. The nano service uses the following steps in a plan that produces the generated, distributed, and configured states:

    1. Generates the NSO SSH client authentication key files using the OpenSSH ssh-keygen utility from a nano service side-effect action implemented in Python.

    2. Distributes the public key to the netsim (ConfD) network elements to be stored as an authorized key using a Python service create() callback.

    3. Configures NSO to use the public key for authentication with the netsim network elements using a Python service create() callback and service template.

    Upon deletion of the service instance, NSO restores the configuration. The only delete step in the plan is the generated state side-effect action that deletes the key files. The example is described in more detail in .

    The basic-vrouter, netsim-vrouter, and mpls-vpn-vrouter examples in the examples.ncs/development-guide/nano-services directory start, configure, and stop virtual devices. In addition, the mpls-vpn-vrouter example manages Layer3 VPNs in a service provider MPLS network consisting of physical and virtual devices. Using a Network Function Virtualization (NFV) setup, the L3VPN nano service instructs a VM manager nano service to start a virtual device in a multi-step process consisting of the following:

    1. When the L3VPN nano service pe-create state step creates or deletes a /vm-manager/start service configuration instance, the VM manager nano service instructs a VNF-M, called ESC, to start or stop the virtual device.

    2. The VM manager service waits for the ESC to start or stop the virtual device by monitoring and handling events, in this case NETCONF notifications.

    3. Mount the device in the NSO device tree.

    See the mpls-vpn-vrouter example for details, in particular the l3vpn-plan pe-created state in the l3vpn.yang YANG model and the vm-plan in vm-manager.yang. The vm-manager plan states with a nano-callback have their callbacks implemented by the escstart class in escstart.java. Nano services are documented in Nano Services for Staged Provisioning.

    Service Troubleshooting

    Service troubleshooting is an inevitable part of any NSO development process and eventually of operational tasks as well. By their nature, NSO services are composed primarily of user-defined code, models, and templates. This gives you plenty of opportunities to make unintended mistakes: errors in mapping code, incorrect indentation, invalid configuration templates, and much more. Not only that, services also rely on southbound communication with devices of many different versions and vendors, which presents yet another domain that can cause issues in your NSO services.

    This is why it is important to have a systematic approach when debugging and troubleshooting your services:

    • Understand the problem - First, you need to make sure that you fully understand the issue you are trying to troubleshoot. Why is this issue happening? When did it first occur? Does it happen only on specific deployments or devices? What is the error message like? Is it consistent and can it be replicated? What do the logs say?

    • Identify the root cause - Once you understand the issue, its triggers and conditions, and any additional insights that NSO allows you to inspect, you can start breaking down the problem to identify its root cause.

    • Form and implement the solution - Once the root cause (or several of them) is found, you can focus on producing a suitable solution. This might be a simple NSO operation, a modification of the service package codebase, a change in the southbound connectivity of managed devices, or any other action or combination required to achieve a working service.

    Common Troubleshooting Steps

    You can use these general steps to give you a high-level idea of how to approach troubleshooting your NSO services:

    1. Ensure that your NSO instance is installed and running properly. You can verify the overall status with the ncs --status shell command. To find out more about installation problems and potential runtime issues, check Troubleshooting in Administration. If you encounter a blank CLI when you connect to NSO, also make sure that your user is added to the correct NACM group (for example, ncsadmin) and that the rules for this group allow the user to view and edit your service through the CLI. You can find out more about groups and authorization rules in AAA Infrastructure in Administration.

    2. Verify that you are using the latest version of your packages. This means copying the latest packages into the load path, recompiling the package YANG models and code with the make command, and reloading the packages. The NSO packages must reload successfully before you proceed with troubleshooting. You can read more about loading packages in Loading Packages. If nothing else, successfully reloading packages will at least make sure that you can use and try to create service instances through NSO. Compiling packages uses the ncsc compiler internally, which means that this part of the process reveals any syntax errors that might exist in YANG models or Java code. You do not need to rely on ncsc for compile-level errors, though; use specialized tools such as pyang or yanger for YANG, and one of the many IDEs and syntax validation tools for Java.

    Additionally, reloading packages can also supply you with some valuable information. For example, it can tell you that the package requires a higher version of NSO than the one it runs on (the required version is specified in the package-meta-data.xml file), or report any Python-related syntax errors.

    Last but not least, package reloading also provides some information on the validity of your XML configuration templates, based on the NED namespace you are using for a specific part of the configuration, as well as any general syntactic errors in your templates.

Next Steps

    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# devices device c1 config
    admin@ncs(config-config)# ip name-server 192.0.2.1
    admin@ncs(config-config)# top
    admin@ncs(config)#

    Consolidate and clean up the YANG model as necessary.

    Test the connection using the public key through a nano service side-effect executed by the NSO built-in connect action.

    Fetch the ssh-keys and perform a sync-from on the newly created device.

  • Examine what the template and XPath expressions evaluate to. If some service instance parameters are missing or are mapped incorrectly, there might be an error in the service template parameter mapping or in the XPath expressions. Use the CLI pipe command debug template to show all the XPath expression results from your service configuration templates, or debug xpath to output all XPath expression results for the current transaction (including those evaluated as part of the YANG model).

    In addition, you can use the xpath eval command in CLI configuration mode to test and evaluate arbitrary XPath expressions. The same can be done with ncs_cmd from the command shell. To see all the XPath expression evaluations in your system, you can also enable and inspect the xpath.trace log. You can read more about debugging templates and XPath in Debugging Templates. If you are using multiple versions of the same NED, make sure that you are using the correct processing instructions as described in Namespaces and Multi-NED Support when applying different bits of configuration to different versions of devices.

  • Validate that your custom service code is performing as intended. Depending on your programming language of choice, there might be different options to do that. If you are using Java, you can find out more on how to configure logging for the internal Java VM Log4j in Logging. You can use a debugger as well, to see the service code execution line by line. To learn how to use Eclipse IDE to debug Java package code, read Using Eclipse to Debug the Package Java Code. The same is true for Python. NSO uses the standard logging module for logging, which can be configured as per instructions in Debugging of Python Packages. Python debugger can be set up as well with debugpy or pydevd-pycharm modules.

  • Inspect NSO logs for hints. NSO features extensive logging functionality for different components, where you can see everything from user interactions with the system to low-level communications with managed devices. For best results, set the logging level to DEBUG or lower. To learn what types of logs there are and how to enable them, consult Logging in Administration.

    Another useful option is to append a custom trace ID to your service commits. The trace ID can be used to follow the request in logs from its creation all the way to the configuration changes that get pushed to the device. In case no trace ID is specified, NSO will generate a random one, but custom trace IDs are useful for focused troubleshooting sessions.

    Trace ID can also be provided as a commit parameter in your service code, or as a RESTCONF query parameter. See examples.ncs/development-guide/commit-parameters for an example.

  • Measuring the time it takes for specific commands to complete can also give you some hints about what is going on. You can do this by using the timecmd, which requires the dev tools to be enabled.

    Another useful tool to examine how long a specific event or command takes is the progress trace. See how it is used in Progress Trace.

  • Double-check your service points in the model, templates, and code. Configuration templates are not applied if their servicepoint attribute doesn't match the one defined in the service model, and service code is only invoked through callbacks registered to specific service points. Make sure they match and that none are missing. Otherwise, you might notice errors such as the following ones.

  • Verify YANG imports and namespaces. If your service depends on NED or other YANG files, make sure their path is added where the compiler can find them. If you are using the standard service package skeleton, you can add to that path by editing your service package Makefile and adding the following line.

    The same applies when you use data types from other YANG namespaces, whether in your service model definition or when referencing them in XPath expressions.

  • Trace the southbound communication. If the service instance creation results in a different configuration than would be expected from the NSO point of view, especially with custom NED packages, you can try enabling the southbound tracing (either per device or globally).

  • Templates
    Data Modeling Basics
    RFC 7950, section 9
    RFC 6991
    Static DNS Configuration Template
    Persistent Opaque Data
    Python API Overview
    NAVU API
    Actions
    DP API
    The Application Component Type
    Nano Services for Staged Provisioning
    Developing and Deploying a Nano Service
    Nano Services for Staged Provisioning
    Troubleshooting
    AAA Infrastructure
    Loading Packages
    Services Deep Dive
    XML Template and Model Relationship
    Making a Configuration Template
    Extracting Service Model from Template in a Bottom-up Approach
    FASTMAP Create a Service
    FASTMAP Change a Service
    FASTMAP Delete a Service
    Code and Template Service Compared to Template-only Service
    Service Provisioning Multiple Devices
    Service Provisioning Multiple Device Types
    admin@ncs# devtools true
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# xpath eval /devices/device
    admin@ncs(config)# xpath eval /devices/device[name='r0']
    admin@ncs(config)# commit trace-id myTrace1
    Commit complete.
    admin@ncs# devtools true
    admin@ncs(config)# timecmd commit
    Commit complete.
    Command executed in 5.31 seconds.
    admin@ncs# packages reload
    reload-result {
        package demo
        result false
        info demo-template.xml:2 Unknown servicepoint: notdemo
    }
    admin@ncs(config-demo-s1)# commit dry-run
    Aborted: no registration found for callpoint demo/service_create of type=external
    YANGPATH += ../../my-dependency/src/yang \
    // Following XPath might trigger an error if there is collision for the 'interfaces' node with other modules
    path "/ncs:devices/ncs:device['r0']/config/interfaces/interface";
    yang/demo.yang:25: error: the node 'interfaces' from module 'demo' (in node 'config' from 'tailf-ncs') is not found
    
    // And the following XPath will not, since it uses namespace prefixes
    path "/ncs:devices/ncs:device['r0']/config/iosxr:interfaces/iosxr:interface";
    admin@ncs(config)# devices global-settings trace pretty
    admin@ncs(config)# devices global-settings trace-dir ./my-trace
    admin@ncs(config)# commit
    admin@ncs(config)# commit dry-run outformat xml
    
    result-xml {
        local-node {
            data <devices xmlns="http://tail-f.com/ns/ncs">
                   <device>
                     <name>c1</name>
                     <config>
                       <ip xmlns="urn:ios">
                         <name-server>192.0.2.1</name-server>
                       </ip>
                     </config>
                   </device>
                 </devices>
        }
    }
    admin@ncs# show running-config devices device c1 config ip name-server | display xml
    
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>c1</name>
          <config>
            <ip xmlns="urn:ios">
              <name-server>192.0.2.1</name-server>
            </ip>
          </config>
        </device>
      </devices>
    </config>
    admin@ncs# show running-config devices device c1 config ip name-server | display xml\
     | save dns-template.xml
    ncs-make-package --build --no-test --service-skeleton template dns
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <!-- ... more statements here ... -->
      </devices>
    </config-template>
    Example: Static DNS Configuration Template
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>c1</name>
          <config>
            <ip xmlns="urn:ios">
              <name-server>192.0.2.1</name-server>
            </ip>
          </config>
        </device>
      </devices>
    </config-template>
    $ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v1
    $ make demo
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/name}</name>
          <config>
            <ip xmlns="urn:ios">
              <name-server>192.0.2.1</name-server>
            </ip>
          </config>
        </device>
      </devices>
    </config-template>
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# dns c2
    admin@ncs(config-dns-c2)# commit dry-run
    
    cli {
        local-node {
            data  devices {
                      device c2 {
                          config {
                              ip {
                 +                name-server 192.0.2.1;
                              }
                          }
                      }
                  }
                 +dns c2 {
                 +}
        }
    }
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/name}</name>
          <config>
            <ip xmlns="urn:ios">
              <?if {starts-with(/name, 'c1')}?>
                <name-server>192.0.2.1</name-server>
              <?else?>
                <name-server>192.0.2.2</name-server>
              <?end?>
            </ip>
          </config>
        </device>
      </devices>
    </config-template>
      list servicename {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "servicename";
    
        leaf name {
          type string;
        }
    
        // ... other statements ...
      }
      list dns {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "dns";
    
        leaf name {
          type string;
        }
    
        leaf target-device {
          type string;
        }
      }
    $ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v2
    $ make demo
        leaf target-device {
          mandatory true;
          type string {
            length "2";
            pattern "c[0-2]";
          }
        }
        leaf dns-server-ip {
          type inet:ipv4-address {
            pattern "192\\.0\\.2\\..*";
          }
        }
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/target-device}</name>
          <config>
            <ip xmlns="urn:ios">
              <?if {/dns-server-ip}?>
                <!-- If dns-server-ip is set, use that. -->
                <name-server>{/dns-server-ip}</name-server>
              <?else?>
                <!-- Otherwise, use the default one. -->
                <name-server>192.0.2.1</name-server>
              <?end?>
            </ip>
          </config>
        </device>
      </devices>
    </config-template>
    $ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v2.1
    $ make demo
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# devices device c1 config
    admin@ncs(config-config)# interface GigabitEthernet 0/0
    admin@ncs(config-if)# ip address 192.168.5.1 255.255.255.0
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="iface-servicepoint">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>c1</name>
          <config>
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>0/0</name>
                <ip>
                  <address>
                    <primary>
                      <address>192.168.5.1</address>
                      <mask>255.255.255.0</mask>
                    </primary>
                  </address>
                </ip>
              </GigabitEthernet>
            </interface>
          </config>
        </device>
      </devices>
    </config-template>
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="iface-servicepoint">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>{/interface}</name>
                <ip>
                  <address>
                    <primary>
                      <address>{/ip-address}</address>
                      <mask>255.255.255.0</mask>
                    </primary>
                  </address>
                </ip>
              </GigabitEthernet>
            </interface>
          </config>
        </device>
      </devices>
    </config-template>
      list iface {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "iface-servicepoint";
    
        leaf name {
          type string;
        }
    
        leaf device { ... }
    
        leaf interface { ... }
    
        leaf ip-address { ... }
      }
        leaf device {
          mandatory true;
          type leafref {
            path "/ncs:devices/ncs:device/ncs:name";
          }
        }
        leaf interface {
          mandatory true;
          type string {
            pattern "[0-9]/[0-9]+";
          }
        }
    
        leaf ip-address {
          mandatory true;
          type inet:ipv4-address;
        }
      }
      list iface {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "iface-servicepoint";
    
        leaf name {
          type string;
        }
    
        leaf device {
          mandatory true;
          type leafref {
            path "/ncs:devices/ncs:device/ncs:name";
          }
        }
    
        leaf interface {
          mandatory true;
          type string {
            pattern "[0-9]/[0-9]+";
          }
        }
    
        leaf ip-address {
          mandatory true;
          type inet:ipv4-address;
        }
      }
      list iface {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "iface-servicepoint";
    
        leaf name {
          type string;
        }
    
        leaf device {
          mandatory true;
          type leafref {
            path "/ncs:devices/ncs:device/ncs:name";
          }
        }
    
        leaf interface {
          mandatory true;
          type string {
            pattern "[0-9]/[0-9]+";
          }
        }
    
        leaf ip-address {
          mandatory true;
          type inet:ipv4-address;
        }
    
        leaf cidr-netmask {
          default 24;
          type uint8 {
            range "0..32";
          }
        }
      }
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>{/interface}</name>
                <ip>
                  <address>
                    <primary>
                      <address>{/ip-address}</address>
                      <mask>{$NETMASK}</mask>
                    </primary>
                  </address>
                </ip>
              </GigabitEthernet>
            </interface>
          </config>
        </device>
      </devices>
    </config-template>
    ncs-make-package --no-test --service-skeleton python-and-template iface
        def cb_create(self, tctx, root, service, proplist):
            cidr_mask = service.cidr_netmask
            quad_mask = ipaddress.IPv4Network((0, cidr_mask)).netmask
            vars = ncs.template.Variables()
            vars.add('NETMASK', quad_mask)
            template = ncs.template.Template(service)
            template.apply('iface-template', vars)
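    The CIDR-to-netmask conversion used in the create callback above relies only on the standard ipaddress module and does not depend on NSO, so it can be tried standalone:

```python
import ipaddress

def cidr_to_netmask(cidr_mask: int) -> str:
    # IPv4Network((0, prefixlen)) constructs the 0.0.0.0/<prefixlen>
    # network; its netmask attribute is the dotted-quad form.
    return str(ipaddress.IPv4Network((0, cidr_mask)).netmask)

print(cidr_to_netmask(24))   # 255.255.255.0
print(cidr_to_netmask(30))   # 255.255.255.252
```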
    ncs-make-package --no-test --service-skeleton java-and-template iface
        public Properties create(ServiceContext context,
                                 NavuNode service,
                                 NavuNode ncsRoot,
                                 Properties opaque)
                                 throws ConfException {
    
            String cidr_mask_str = service.leaf("cidr-netmask").valueAsString();
            int cidr_mask = Integer.parseInt(cidr_mask_str);
            long tmp_mask = 0xffffffffL << (32 - cidr_mask);
            String quad_mask =
                ((tmp_mask >> 24) & 0xff) + "." +
                ((tmp_mask >> 16) & 0xff) + "." +
                ((tmp_mask >> 8) & 0xff) + "." +
                ((tmp_mask >> 0) & 0xff);
            Template myTemplate = new Template(context, "iface-template");
            TemplateVariables myVars = new TemplateVariables();
            myVars.putQuoted("NETMASK", quad_mask);
            myTemplate.apply(service, myVars);
        public Properties create(ServiceContext context,
                                 NavuNode service,
                                 NavuNode ncsRoot,
                                 Properties opaque)
                                 throws ConfException {
    
            try {
                String cidr_mask_str = service.leaf("cidr-netmask").valueAsString();
                int cidr_mask = Integer.parseInt(cidr_mask_str);
    
                long tmp_mask = 0xffffffffL << (32 - cidr_mask);
                String quad_mask = ((tmp_mask >> 24) & 0xff) +
                    "." + ((tmp_mask >> 16) & 0xff) +
                    "." + ((tmp_mask >> 8) & 0xff) +
                    "." + ((tmp_mask) & 0xff);
    
                Template myTemplate = new Template(context, "iface-template");
                TemplateVariables myVars = new TemplateVariables();
                myVars.putQuoted("NETMASK", quad_mask);
                myTemplate.apply(service, myVars);
            } catch (Exception e) {
                throw new DpCallbackException(e.getMessage(), e);
            }
            return opaque;
        }
        leaf-list device {
          type leafref {
            path "/ncs:devices/ncs:device/ncs:name";
          }
        }
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="servicename">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <!-- ... -->
         </config>
        </device>
      </devices>
    </config-template>
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="servicename">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <?foreach {/device}?>
          <device>
            <name>{.}</name>
            <config>
              <!-- ... -->
          </config>
          </device>
        <?end?>
      </devices>
    </config-template>
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <!-- Part for device with the cisco-ios NED -->
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>{/interface}</name>
                <!-- ... -->
              </GigabitEthernet>
            </interface>
    
            <!-- Part for device with the router-nc NED -->
            <sys xmlns="http://example.com/router">
              <interfaces>
                <interface>
                  <name>{/interface}</name>
                  <!-- ... -->
                </interface>
              </interfaces>
            </sys>
         </config>
        </device>
      </devices>
    </config-template>
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/device}</name>
          <config>
            <?if-ned-id cisco-ios-cli-3.0:cisco-ios-cli-3.0?>
            <interface xmlns="urn:ios">
              <GigabitEthernet>
                <name>{/interface}</name>
                <!-- ... -->
              </GigabitEthernet>
            </interface>
            <?end?>
          </config>
        </device>
      </devices>
    </config-template>
      container dns-options {
        list dns-option {
          key name;
    
          leaf name {
            type string;
          }
    
          leaf-list servers {
            type inet:ipv4-address;
          }
        }
      }
      container dns-options {
        // ...
      }
    
      list dns {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "dns";
    
        // ...
      }
    admin@ncs(config)# dns-options dns-option lon servers 192.0.2.3
    admin@ncs(config-dns-option-lon)# top
    admin@ncs(config)# dns-options dns-option sto servers 192.0.2.3
    admin@ncs(config-dns-option-sto)# top
    admin@ncs(config)# dns-options dns-option sjc servers [ 192.0.2.5 192.0.2.6 ]
    admin@ncs(config-dns-option-sjc)# commit
      list dns {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "dns";
    
        leaf name {
          type string;
        }
    
        leaf target-device {
          type string;
        }
    
        // Replace the old, explicit IP with a reference to shared data
        // leaf dns-server-ip {
    //   type inet:ipv4-address {
    //     pattern "192\\.0\\.2\\..*";
        //   }
        // }
        leaf dns-servers {
          mandatory true;
          type leafref {
            path "/dns-options/dns-option/name";
          }
        }
      }
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="dns">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>{/target-device}</name>
          <config>
            <ip xmlns="urn:ios">
              <name-server>{deref(/dns-servers)/../servers}</name-server>
            </ip>
          </config>
        </device>
      </devices>
    </config-template>
            <ip xmlns="urn:ios">
              <?set dns_option = {/dns-servers}?>   <!-- Set $dns_option to e.g. 'lon' -->
              <?set-root-node {/}?>                 <!-- Make '/' point to datastore root,
                                                         instead of service instance   -->
              <name-server>{/dns-options/dns-option[name=$dns_option]/servers}</name-server>
            </ip>
      list iface {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "iface-servicepoint";
    
        leaf name { /* ... */ }
        leaf device { /* ... */ }
        leaf interface { /* ... */ }
        // ... other statements omitted ...
    
        action test-enabled {
          tailf:actionpoint iface-test-enabled;
          output {
            leaf status {
              type enumeration {
                enum up;
                enum down;
                enum unknown;
              }
            }
          }
        }
      }
    class IfaceActions(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output, trans):
            ...
            root = ncs.maagic.get_root(trans)
            service = ncs.maagic.cd(root, kp)
    class IfaceActions(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output, trans):
            root = ncs.maagic.get_root(trans)
            service = ncs.maagic.cd(root, kp)
    
            device = root.devices.device[service.device]
    
            status = 'unknown'    # Replace with your own code that checks
                                  # e.g. operational status of the interface
    
            output.status = status
    class Main(ncs.application.Application):
        def setup(self):
            ...
            self.register_action('iface-test-enabled', IfaceActions)
        @ActionCallback(callPoint="iface-test-enabled",
                        callType=ActionCBType.ACTION)
        public ConfXMLParam[] test_enabled(DpActionTrans trans, ConfTag name,
                                           ConfObject[] kp, ConfXMLParam[] params)
        throws DpCallbackException {
            // ...
        }
                NavuContext context = new NavuContext(maapi);
                NavuContainer service =
                    (NavuContainer)KeyPath2NavuNode.getNode(kp, context);
        @ActionCallback(callPoint="iface-test-enabled",
                        callType=ActionCBType.ACTION)
        public ConfXMLParam[] test_enabled(DpActionTrans trans, ConfTag name,
                                           ConfObject[] kp, ConfXMLParam[] params)
        throws DpCallbackException {
            int port = NcsMain.getInstance().getNcsPort();
    
            // Ensure socket gets closed on errors, also ending any ongoing
            // session and transaction
            try (Socket socket = new Socket("localhost", port)) {
                Maapi maapi = new Maapi(socket);
                maapi.startUserSession("admin", InetAddress.getByName("localhost"),
                    "system", new String[] {}, MaapiUserSessionFlag.PROTO_TCP);
    
                NavuContext context = new NavuContext(maapi);
                context.startRunningTrans(Conf.MODE_READ);
    
                NavuContainer root = new NavuContainer(context);
                NavuContainer service =
                    (NavuContainer)KeyPath2NavuNode.getNode(kp, context);
    
                String status = "unknown";    // Replace with your own code that
                                              // checks e.g. operational status of
                                              // the interface
    
                String nsPrefix = name.getPrefix();
                return new ConfXMLParam[] {
                    new ConfXMLParamValue(nsPrefix, "status", new ConfBuf(status)),
                };
            } catch (Exception e) {
                throw new DpCallbackException(name.toString() + " action failed",
                    e);
            }
        }
      list iface {
        key name;
    
        uses ncs:service-data;
        ncs:servicepoint "iface-servicepoint";
    
        // ... other statements omitted ...
    
        action test-enabled {
          tailf:actionpoint iface-test-enabled;
          output {
            leaf status {
              type enumeration {
                enum up;
                enum down;
                enum unknown;
              }
            }
          }
        }
    
        leaf last-test-result {
          config false;
          type enumeration {
                enum up;
                enum down;
                enum unknown;
          }
        }
      }
      typedef iface-status-type {
        type enumeration {
              enum up;
              enum down;
              enum unknown;
        }
      }
        leaf last-test-status {
          config false;
          type iface-status-type;
        }
    
        action test-enabled {
          tailf:actionpoint iface-test-enabled;
          output {
            leaf status {
              type iface-status-type;
            }
          }
        }
    admin@ncs# show iface test-instance1 last-test-status
    iface test-instance1 last-test-status up
    import contextlib
    import socket

    import _ncs

    with contextlib.closing(socket.socket()) as s:
        _ncs.cdb.connect(s, _ncs.cdb.DATA_SOCKET, ip='127.0.0.1', port=_ncs.PORT)
        _ncs.cdb.start_session(s, _ncs.cdb.OPERATIONAL)
        _ncs.cdb.set_elem(s, 'up', '/iface{test-instance1}/last-test-status')
    with ncs.maapi.single_write_trans('admin', 'python', db=ncs.OPERATIONAL) as t:
        root = ncs.maagic.get_root(t)
        root.iface['test-instance1'].last_test_status = 'up'
        t.apply()
        def cb_action(self, uinfo, name, kp, input, output, trans):
            with ncs.maapi.single_write_trans('admin', 'python',
                                              db=ncs.OPERATIONAL) as t:
                root = ncs.maagic.get_root(t)
                service = ncs.maagic.cd(root, kp)
    
                # ...
                service.last_test_status = status
                t.apply()
    
            output.status = status
        leaf last-test-status {
          config false;
          type iface-status-type;
          tailf:cdb-oper {
            tailf:persistent true;
          }
        }
    class ServiceApp(Application):
        def setup(self):
            ...
            self.register_fun(init_oper_data, lambda _: None)
    
    
    def init_oper_data(state):
        state.log.info('Populating operational data')
        with ncs.maapi.single_write_trans('admin', 'python',
                                          db=ncs.OPERATIONAL) as t:
            root = ncs.maagic.get_root(t)
            # ...
            t.apply()
    
        return state
    int port = NcsMain.getInstance().getNcsPort();
    
    // Ensure socket gets closed on errors, also ending any ongoing session/lock
    try (Socket socket = new Socket("localhost", port)) {
        Cdb cdb = new Cdb("IfaceServiceOperWrite", socket);
        CdbSession session = cdb.startSession(CdbDBType.CDB_OPERATIONAL);
    
        String status = "up";
        ConfPath path = new ConfPath("/iface{%s}/last-test-status",
            "test-instance1");
        session.setElem(ConfEnumeration.getEnumByLabel(path, status), path);
    
        session.endSession();
    }
    int port = NcsMain.getInstance().getNcsPort();
    
    // Ensure socket gets closed on errors, also ending any ongoing
    // session and transaction
    try (Socket socket = new Socket("localhost", port)) {
        Maapi maapi = new Maapi(socket);
        maapi.startUserSession("admin", InetAddress.getByName("localhost"),
            "system", new String[] {}, MaapiUserSessionFlag.PROTO_TCP);
    
        NavuContext context = new NavuContext(maapi);
        context.startOperationalTrans(Conf.MODE_READ_WRITE);
    
        NavuContainer root = new NavuContainer(context);
        NavuContainer service =
            (NavuContainer)KeyPath2NavuNode.getNode(kp, context);
    
        // ...
        service.leaf("last-test-status").set(status);
        context.applyClearTrans();
    }
        leaf last-check-status {
          config false;
          type iface-status-type;
          tailf:cdb-oper {
            tailf:persistent true;
          }
        }
    yang/demo.yang:32: error: expected keyword 'type' as substatement to 'leaf'
    make: *** [Makefile:41: ../load-dir/demo.fxs] Error 1
        [javac] /nso-run/packages/demo/src/java/src/com/example/demo/demoRFS.java:52: error: ';' expected
        [javac]         Template myTemplate = new Template(context, "demo-template")
        [javac]                                                                          ^
        [javac] 1 error
        [javac] 1 warning
    
    BUILD FAILED
    admin@ncs# packages reload
    Error: Failed to load NCS package: demo; requires NCS version 6.3
    admin@ncs# packages reload
    reload-result {
        package demo
        result false
        info SyntaxError: invalid syntax
    }
    admin@ncs# packages reload
    reload-result {
        package demo1
        result false
        info demo-template.xml:87 missing tag: name
    }
    reload-result {
        package demo2
        result false
        info demo-template.xml:11 Unknown namespace: 'ios-xr'
    }
    reload-result {
        package demo3
        result false
        info demo-template.xml:12: The XML stream is broken. Run-away < character found.
    }

    Python API Overview

    Learn about the NSO Python API and its usage.

    The NSO Python library contains a variety of APIs for different purposes. In this section, we introduce these and explain their usage. The NSO Python modules come in two variants: the low-level APIs and the high-level APIs.

    The low-level APIs are a direct mapping of the NSO C APIs, CDB and MAAPI, and will follow the evolution of the C APIs. See the confd_lib_lib man page for further information.

    The high-level APIs are an abstraction layer on top of the low-level APIs, designed to make them easier to use, improve code readability, and speed up development for common use cases, such as service and action callbacks and scripting towards NSO.

    Python API Overview

    MAAPI (Management Agent API): Northbound interface that is transactional and user-session-based. Using this interface, both configuration and operational data can be read; configuration and operational data can be written and committed as one transaction. The API is complete in the sense that it is possible to write a new northbound agent using only this interface. It is also possible to attach to ongoing transactions to read uncommitted changes and/or modify data in those transactions.

    Python low-level CDB API: Southbound interface that provides access to the CDB configuration database. Using this interface, configuration data can be read. In addition, operational data that is stored in CDB can be read and written. The interface has a subscription mechanism for changes: a subscription is specified on a path that points to an element in a YANG model or an instance in the instance tree, and any change under this point triggers the subscription. CDB also has functions to iterate through the configuration changes when a subscription has been triggered.

    Python low-level DP API: Southbound interface that enables callbacks, hooks, and transforms. This API makes it possible to provide the service callbacks that handle service-to-device mapping logic. Other common cases are external data providers for operational data and action callback implementations; there are also transaction and validation callbacks, etc. Hooks are callbacks that fire when certain data is written, and the hook is expected to make additional modifications to the data. Transforms are callbacks used when complete mediation between two different models is necessary.

    Python high-level API: API that resides on top of the MAAPI, CDB, and DP APIs. It provides schema model navigation and instance data handling (read/write), uses a MAAPI context for data access, and incorporates its functionality. It is used in service implementations, action handlers, and Python scripting.

    Python scripting

    Scripting in Python is a very easy and powerful way of accessing NSO. This document has several examples of scripts showing various ways of accessing data and requesting actions in NSO.

    The examples are directly executable with the Python interpreter after sourcing the ncsrc file in the NSO installation directory. This sets up the PYTHONPATH environment variable, which enables access to the NSO Python modules.

    Edit a file and execute it directly on the command line like this:
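    For example, assuming a script file named my-script.py (an illustrative name):

```shell
$ source $NCS_DIR/ncsrc     # sets up PYTHONPATH for the NSO Python modules
$ python3 my-script.py
```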

    High-level MAAPI API

    The Python high-level MAAPI API provides an easy-to-use interface for accessing NSO. Its main goals are to encapsulate the sockets, transaction handles, and data type conversions, and to allow the Python with statement to be used for proper resource cleanup.

    The simplest way to access NSO is to use the single_read_trans and single_write_trans helpers. They create a MAAPI context, a user session, and a transaction in one step.

    The following example connects as user admin, with python as the AAA context:
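    A minimal sketch (the connect-timeout leaf under /devices/global-settings is just an illustrative target):

```python
import ncs

# Creates a MAAPI context, a user session for 'admin' in the 'python'
# AAA context, and a write transaction, all in one step
with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    root.devices.global_settings.connect_timeout = 30
    t.apply()
```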

    The example code here shows how to start a transaction but does not properly handle the case of concurrency conflicts when writing data. See Handling Conflicts for details.

    When only reading data, always start a read transaction; reads then go directly to the CDB datastore and data providers. Write transactions cache repeated reads done by the same transaction.

    A common use case is to create a MAAPI context and reuse it for several transactions. This reduces latency and increases transaction throughput, especially for backend applications. For scripting, the lifetime is shorter and there is no need to keep the MAAPI context alive.

    This example shows how to keep a MAAPI connection alive between transactions:
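    A sketch of the pattern, reusing one MAAPI context and user session across a read and a write transaction (the connect-timeout leaf is illustrative):

```python
import ncs

with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        # The MAAPI context and user session stay open; only the
        # transactions are started and finished
        with m.start_read_trans() as t:
            root = ncs.maagic.get_root(t)
            print(root.devices.global_settings.connect_timeout)

        with m.start_write_trans() as t:
            root = ncs.maagic.get_root(t)
            root.devices.global_settings.connect_timeout = 20
            t.apply()
```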

    Maagic API

    Maagic is a module provided as part of the NSO Python APIs. It reduces the complexity of programming towards NSO, is used on top of the MAAPI high-level API, and addresses areas that otherwise require more programming. First, it helps in navigating the model using standard Python object dot notation, giving very clear and easily readable code. The context handlers remove the need to close sockets, user sessions, and transactions, and avoid the problems that arise when these are forgotten and kept open. Finally, it removes the need to know the data types of the leafs, letting you focus on the data to be set.

    When using Maagic, you still do the same procedure of starting a transaction.

    To use the Maagic functionality, you get access to a Maagic object, for example one pointing to the root of the CDB:
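    For example, a minimal sketch using the single_read_trans helper:

```python
import ncs

with ncs.maapi.single_read_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)   # Maagic object for the datastore root
```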

    In this case, it is an ncs.maagic.Node object with an ncs.maapi.Transaction backend.

    From here, you can navigate in the model. The table below lists examples of Maagic object navigation and the type of object each expression returns.

    Action                                          Returns
    ------                                          -------
    root.devices                                    Container
    root.devices.device                             List
    root.devices.device['ce0']                      ListElement
    root.devices.device['ce0'].device_type.cli      PresenceContainer
    root.devices.device['ce0'].address              str
    root.devices.device['ce0'].port                 int

    You can also get a Maagic object from a keypath:
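    For example, using the same transaction t as above (the keypath is illustrative):

```python
node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}')
```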

    Namespaces

    Maagic handles namespaces by prefixing the names of the elements. Using the prefix is optional but recommended, to avoid future side effects.

    The syntax is to prefix the name with the namespace name followed by two underscores, e.g., ns_name__name.

    Examples of how to use namespaces:
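    For example, both of the following address the same leaf (ncs is the prefix of the built-in tailf-ncs module):

```python
addr = root.devices.device['ce0'].address
addr = root.ncs__devices.ncs__device['ce0'].ncs__address
```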

    In cases where there is a name collision, the namespace prefix is required to access an entity from any module except the one that was loaded first. A namespace prefix is always required for root entities when there is a collision. The module load order can be found in the NCS log file: logs/ncs.log.

    Reading Data

    Reading data using Maagic is straightforward: you just specify the leaf you are interested in, and the data is retrieved. The data is returned as the nearest available Python data type.

    For non-existing leafs, None is returned.
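    A sketch, using the device list from the navigation table above:

```python
address = root.devices.device['ce0'].address   # returned as str
port = root.devices.device['ce0'].port         # returned as int
```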

    Writing Data

    Writing data using Maagic is straightforward: you just specify the leaf you are interested in and assign a value. Any data type can be sent as input, as the str function is called on the value, converting it to a string. The expected format depends on the leaf's data type. If type validation fails, an Error exception is raised.
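    For example, both of these set the same value (a sketch; type validation still applies):

```python
root.devices.device['ce0'].port = 2022     # int
root.devices.device['ce0'].port = '2022'   # str() makes this equivalent
```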

    Deleting Data

    Data is deleted the Python way, using the del statement:
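    For example, deleting a leaf (a sketch using the device list from earlier):

```python
del root.devices.device['ce0'].port
```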

    Some entities also have a delete method; this is explained under the corresponding type.

    Object Deletion

    The delete mechanism in Maagic is implemented using the __delattr__ method on the Node class. This means that executing del on a local or global variable (e.g., del obj) only removes the name from the Python local or global namespace; it does not delete any data in NSO.

    Containers

    Containers are addressed using standard Python dot notation: root.container1.container2.

    Presence Containers

    A presence container is created using the create method:

    Existence is checked with the exists or bool functions:

    A presence container is deleted with the del or delete functions:
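    Taken together, a sketch of these operations, using the cli presence container from the navigation table:

```python
cli = root.devices.device['ce0'].device_type.cli

cli.create()        # create the presence container
if cli.exists():    # check existence; bool(cli) works too
    print('cli device type is set')
cli.delete()        # delete it; del ...device_type.cli also works
```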

    Choices

    The case of a choice is checked by addressing the name of the choice in the model:

    Changing a choice is done by setting a value in any of the other cases:

    Lists and List Elements

    List elements are created using the create method on the List class:
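    A sketch of list-element creation; create() returns the new (or existing) element:

```python
ce5 = root.devices.device.create('ce5')

# create() works the same on any Maagic list; here, an instance of the
# dns service list from the earlier examples
o = root.dns.create('instance1')
```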

    The objects ce5 and o above are of type ListElement, which is actually an ordinary container object with a different name.

    Existence is checked with the exists or bool functions on the List class:

    A list element is deleted with the Python del function:

    To delete the whole list, use the Python del function or delete() on the list.
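    A sketch of both forms of deletion:

```python
del root.devices.device['ce5']   # delete a single list element

root.devices.device.delete()     # delete all elements of the list
```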

    Unions

    Unions are not handled in any specific way - you just read or write to the leaf and the data is validated according to the model.

    Enumeration

    Enumerations are returned as an Enum object, giving access to both the integer and string values.

    Writing values to enumerations accepts both the string and integer values.
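    A sketch, using the iface-status-type enumeration from the earlier examples (the instance name is illustrative, and since last-test-status is operational data, writes to it belong in an operational transaction as shown earlier):

```python
status = root.iface['test-instance1'].last_test_status
status.string   # e.g. 'up'
status.value    # the corresponding integer value

# Both string and integer forms are accepted when writing
root.iface['test-instance1'].last_test_status = 'down'
root.iface['test-instance1'].last_test_status = 1   # 'down', assuming default numbering
```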

    Leafref

    Leafrefs are read as regular leafs and the returned data type corresponds to the referred leaf.

    Leafrefs are set as the leaf they refer to. The data type is validated as it is set. The reference is validated when the transaction is committed.

    Identityref

    Identityrefs are read and written as string values. Writing an identityref without a prefix is possible, but doing so is error-prone and may stop working if another model is added which also has an identity with the same name. The recommendation is to always use a prefix when writing identityrefs. Reading an identityref will always return a prefixed string value.

    Instance Identifier

    Instance identifiers are read as xpath formatted string values.

    Instance identifiers are set as xpath formatted strings. The string is validated as it is set. The reference is validated when the transaction is committed.

    Leaf-list

    A leaf-list is represented by a LeafList object. This object behaves very much like a Python list. You may iterate it, check for the existence of a specific element using in, or remove specific items using the del operator. See examples below.

    From NSO version 4.5 and onwards, a YANG leaf-list is represented differently than before. Reading a leaf-list using Maagic used to result in an ordinary Python list (or None if the leaf-list was non-existent). Now, reading a leaf-list gives back a LeafList object whether it exists or not. The LeafList object may be iterated like a Python list, and you may check for existence using the exists() method or the bool() operator. A Maagic leaf-list node may be assigned using a Python list, just like before, and you may convert it to a Python list using the as_list() method or by doing list(my_leaf_list_node).

    You should update your code to cope with the new behavior. If, for any reason, you are unable to do so, you can instruct Maagic to behave as in previous versions by setting the environment variable DEPRECATED_MAAGIC_WANT_LEAF_LIST_AS_LEAF to true, yes, or 1 before starting your Python process (or NSO).

    Note that this environment variable is deprecated and will be removed in the future.
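    For example, in the shell that starts your Python process (the variable name is from the text above; remember that this compatibility switch is deprecated):

```shell
# Revert to the pre-4.5 behavior where reading a leaf-list
# returns a plain Python list (or None).
export DEPRECATED_MAAGIC_WANT_LEAF_LIST_AS_LEAF=true
python3 -c 'import os; print(os.environ["DEPRECATED_MAAGIC_WANT_LEAF_LIST_AS_LEAF"])'
```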

    Binary

    Binary values are read and written as byte strings.

    Bits

    Reading a bits leaf will give a Bits object back (or None if the bits leaf is non-existent). To get some useful information out of the Bits object, you can either use the bytearray() method to get a Python byte array object in return or the Python str() operator to get a space-separated string containing the bit names.

    There are four ways of setting a bits leaf: using a string of space-separated bit names, using a bytearray, using a Python binary string, or using a Bits object. Note that updating a Bits object does not change anything in the database; for that to happen, you need to assign it to the Maagic node.

    Empty Leaf

    An empty leaf is created using the create method:

    Existence is checked with the exists or bool functions:

    An empty leaf is deleted with the del or delete functions:

    Maagic Examples

    Action Requests

    Requesting an action does not always require an ongoing transaction. This example shows how to use Maapi as a transactionless back-end for Maagic.

    This example shows how to request an action that requires an ongoing transaction. It is also valid to request an action that does not require an ongoing transaction.

    Providing parameters to an action with Maagic is straightforward: you request an input object, with get_input from the Maagic action object, and set the desired (or required) parameters as defined in the model specification.

    If the action input includes a leaf-list, prepare the input parameter as a Python list:

    A common use case is to script the creation of devices. With the Python APIs, this is easily done without the need to generate set commands and execute them in the CLI.

    PlanComponent

    This class is a helper to support service progress reporting using plan-data as part of a Reactive FASTMAP nano service. More information about plan-data can be found in Nano Services for Staged Provisioning.

    The interface of the PlanComponent is identical to the corresponding Java class and supports the setup of plans and setting the transition states.

    See pydoc3 ncs.application.PlanComponent for further information about the Python class.

    The pattern is to add an overall plan (self) for the service and separate plans for each component that builds the service.

    When a new state is appended to a plan, its status is initialized to ncs:not-reached. When a plan completes, the state is set to ncs:ready; in this case, when the service is completely set up:

    Python Packages

    Action Handler

    The Python high-level API provides an easy way to implement an action handler for your modeled actions. The easiest way to create a handler is to use the ncs-make-package command. It creates some ready-to-use skeleton code.

    The generated package skeleton:

    This example action handler takes a number as input, doubles it, and returns the result.

    When debugging Python packages, refer to Debugging of Python Packages.

    Test the action by doing a request from the NSO CLI:

    The input and output parameters are the most commonly used parameters of the action callback method. They provide the access objects to the data provided to the action request and the returning result.

    They are maagic.Node objects, which provide easy access to the modeled parameters.

    The table below lists the action handler callback parameters:

    Parameter
    Type
    Description

    self

    ncs.dp.Action

    The action object.

    uinfo

    ncs.UserInfo

    User information of the requester.

    name

    string

    The tailf:action name.

    kp

    ncs.HKeypathRef

    The keypath of the action.

    Service Handler

    The Python high-level API provides an easy way to implement a service handler for your modeled services. The easiest way to create a handler is to use the ncs-make-package command. It creates some skeleton code.

    The generated package skeleton:

    This example has some code added for the service logic, including a service template.

    When debugging Python packages, refer to Debugging of Python Packages.

    Add some service logic to the cb_create:

    Add a template to packages/pyservice/templates/service.template.xml:

    The table below lists the service handler callback parameters:

    Parameter
    Type
    Description

    self

    ncs.application.Service

    The service object.

    tctx

    ncs.TransCtxRef

    Transaction context.

    root

    ncs.maagic.Node

    An object pointing to the root with the current transaction context, using shared operations (create, set_elem, ...) for configuration modifications.

    service

    ncs.maagic.Node

    An object pointing to the service with the current transaction context, using shared operations (create, set_elem, ...) for configuration modifications.

    Validation Point Handler

    The Python high-level API provides an easy way to implement a validation point handler. The easiest way to create a handler is to use the ncs-make-package command. It creates ready-to-use skeleton code.

    The generated package skeleton:

    This example validation point handler accepts all values except invalid.

    When debugging Python packages, refer to Debugging of Python Packages.

    Test the validation by setting the value to invalid and validating the transaction from the NSO CLI:

    The table below lists the validation point handler callback parameters:

    Parameter
    Type
    Description

    self

    ncs.dp.ValidationPoint

    The validation point object.

    tctx

    ncs.TransCtxRef

    Transaction context.

    kp

    ncs.HKeypathRef

    The keypath of the node being validated.

    value

    ncs.Value

    Current value of the node being validated.

    Low-level APIs

    The Python low-level APIs are a direct mapping of the C-APIs. A C call has a corresponding Python function entry. From a programmer's point of view, it wraps the C data structures into Python objects and handles the related memory management when requested by the Python garbage collector. Any errors are reported as error.Error.

    The low-level APIs will not be described in detail in this document, but you will find a few examples showing their usage in the coming sections.

    See pydoc3 _ncs and man confd_lib_lib for further information.

    Low-level MAAPI API

    This API is a direct mapping of the NSO MAAPI C API. See pydoc3 _ncs.maapi and man confd_lib_maapi for further information.

    Note that additional care must be taken when using this API in service code, as it also exposes functions that do not perform reference counting (see the section called “Reference Counting Overlapping Configuration”).

    In the service code, you should use the shared_* set of functions, such as:

    And, avoid the non-shared variants:

    The following example is a script to read and de-crypt a password using the Python low-level MAAPI API.

    This example is a script to do a check-sync action request using the low-level MAAPI API.

    Low-level CDB API

    This API is a direct mapping of the NSO CDB C API. See pydoc3 _ncs.cdb and man confd_lib_cdb for further information.

    Setting operational data has historically been done using one of the CDB APIs (Python, Java, C). This example shows how to set a value and trigger subscribers for operational data using the Python low-level API.
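    A minimal sketch of the pattern, assuming a running NSO and a hypothetical operational leaf /t:test/stats/counter; the subscription-point and flag arguments to the trigger call may need adjusting for your setup:

```python
import socket
import _ncs
from _ncs import cdb

sock = socket.socket()
cdb.connect(sock, cdb.DATA_SOCKET, '127.0.0.1', _ncs.NCS_PORT)

# Write a value to the operational datastore
cdb.start_session(sock, cdb.OPERATIONAL)
cdb.set_elem(sock, _ncs.Value(42, _ncs.C_UINT32), '/t:test/stats/counter')
cdb.end_session(sock)

# Notify operational-data subscribers (None = all subscription points)
cdb.trigger_oper_subscriptions(sock, None, 0)

sock.close()
```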

    Advanced Topics

    Schema Loading - Internals

    When schemas are loaded, either upon direct request or automatically by methods and classes in the maapi module, they are statically cached inside the Python VM. This fact presents a problem if one wants to connect to several different NSO nodes with diverging schemas from the same Python VM.

    Take, for example, the following program that connects to two different NSO nodes (with diverging schemas) and shows their NED IDs.

    Running this program may produce output like this:

    The output shows identities in string format for the active NEDs on the different nodes. Note that for lsa-2, the last three lines do not show the name of the identity but instead the representation of a _ncs.Value. The reason for this is that lsa-2 has different schemas which do not include these identities. Schemas for this Python VM were loaded and cached during the first call to ncs.maapi.single_read_trans() so no schema loading occurred during the second call.

    The way to make the program above work as expected is to force the reloading of schemas by passing an optional argument to single_read_trans() like so:
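    A minimal sketch of such a call (the port of the second node is hypothetical; the LOAD_SCHEMAS_RELOAD constant from ncs.maapi forces the cached schemas to be refreshed for this connection):

```python
import ncs.maagic
import ncs.maapi

# Connect to the second node and force a schema reload instead of
# reusing the schemas cached in this Python VM.
with ncs.maapi.single_read_trans(
        'admin', 'python', port=4570,
        load_schemas=ncs.maapi.LOAD_SCHEMAS_RELOAD) as t:
    root = ncs.maagic.get_root(t)
    for device in root.devices.device:
        print(device.name)
```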

    Running the program with this change may produce something like this:

    This was just one example of what may happen when the wrong schemas are loaded. The implications may be more severe, especially if Maagic nodes are kept across reloads. In such cases, accessing an "invalid" Maagic object may at best result in undefined behavior, and at worst crash the program. So take care not to reload schemas in a Python VM if other parts of the same VM depend on the previous schemas.

    Functions and methods that accept the load_schemas argument:

    • ncs.maapi.Maapi() constructor

    • ncs.maapi.single_read_trans()

    • ncs.maapi.single_write_trans()

    Using multiprocessing.Process

    When using multiprocessing in NSO, the default start method is now spawn instead of fork. With the spawn method, a new Python interpreter process is started, and all arguments passed to multiprocessing.Process must be picklable.

    If you pass Python objects that reference low-level C structures (for example _ncs.dp.DaemonCtxRef or _ncs.UserInfo), Python will raise an error like:

    This happens because self and uinfo contain low-level C references that cannot be serialized (pickled) and sent to the child process.

    To fix this, avoid passing entire objects such as self or uinfo to the process. Instead, pass only simple or primitive data types (like strings, integers, or dictionaries) that can be pickled.
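    A minimal sketch of the pattern (the function and variable names are illustrative, not from a real package): the primitive values are extracted before the process is created, so only picklable data crosses the process boundary.

```python
import multiprocessing
import pickle

def configure_device(device_name, address):
    # Child-process entry point: receives only picklable primitives.
    print('configuring %s at %s' % (device_name, address))

def launch(device_name, address):
    # Extract plain values up front instead of passing self or uinfo,
    # whose low-level C references cannot be pickled under 'spawn'.
    args = (device_name, address)
    pickle.dumps(args)  # verifies the arguments survive pickling
    return multiprocessing.Process(target=configure_device, args=args)
```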

    $ python3 script.py
    Example: Single Transaction Helper
    import ncs
    
    with ncs.maapi.single_write_trans('admin', 'python') as t:
        t.set_elem2('Kilroy was here', '/ncs:devices/device{ce0}/description')
        t.apply()
    
    with ncs.maapi.single_read_trans('admin', 'python') as t:
        desc = t.get_elem('/ncs:devices/device{ce0}/description')
        print("Description for device ce0 = %s" % desc)
    Example: Reading of Configuration Data using High-level MAAPI
    import ncs
    
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
    
            # The first transaction
            with m.start_read_trans() as t:
                address = t.get_elem('/ncs:devices/device{ce0}/address')
                print("First read: Address = %s" % address)
    
            # The second transaction
            with m.start_read_trans() as t:
                address = t.get_elem('/ncs:devices/device{ce1}/address')
                print("Second read: Address = %s" % address)
    with ncs.maapi.Maapi() as m:
      with ncs.maapi.Session(m, 'admin', 'python'):
        with m.start_write_trans() as t:
          # Read/write/request ...
    root = ncs.maagic.get_root(t)
    node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}')
    # The examples are equal unless there is a namespace collision.
    # For the ncs namespace it would look like this:
    
    root.ncs__devices.ncs__device['ce0'].ncs__address
    # equals
    root.devices.device['ce0'].address
    # This example has three namespaces referring to a leaf, value, with the same
    # name and this load order: /ex/a:value=11, /ex/b:value=22 and /ex/c:value=33
    
    root.ex.value # returns 11
    root.ex.a__value # returns 11
    root.ex.b__value # returns 22
    root.ex.c__value # returns 33
    dev_name = root.devices.device['ce0'].name # 'ce0'
    dev_address = root.devices.device['ce0'].address # '127.0.0.1'
    dev_port = root.devices.device['ce0'].port # 10022
    root.devices.device['ce0'].name  = 'ce0'
    root.devices.device['ce0'].address  = '127.0.0.1'
    root.devices.device['ce0'].port = 10022
    root.devices.device['ce0'].port = '10022' # Also valid
    
    # This will raise an Error exception
    root.devices.device['ce0'].port = 'netconf'
    del root.devices.device['ce0'] # List element
    del root.devices.device['ce0'].name # Leaf
    del root.devices.device['ce0'].device_type.cli # Presence container
    pc = root.container.presence_container.create()
    root.container.presence_container.exists() # Returns True or False
    bool(root.container.presence_container) # Returns True or False
    del root.container.presence_container
    root.container.presence_container.delete()
    ne_type = root.devices.device['ce0'].device_type.ne_type
    if ne_type == 'cli':
        pass  # Handle CLI
    elif ne_type == 'netconf':
        pass  # Handle NETCONF
    elif ne_type == 'generic':
        pass  # Handle generic
    else:
        pass  # Don't handle
    root.devices.device['ce0'].device_type.netconf.create()
    str(root.devices.device['ce0'].device_type.ne_type) # Returns 'netconf'
    # Single value key
    ce5 = root.devices.device.create('ce5')
    
    # Multiple values key
    o = root.container.list.create('foo', 'bar')
    'ce0' in root.devices.device # Returns True or False
    # Single value key
    del root.devices.device['ce5']
    
    # Multiple values key
    del root.container.list['foo', 'bar']
    # use Python's del function
    del root.devices.device
    
    # use List's delete() method
    root.container.list.delete()
    str(root.devices.device['ce0'].state.admin_state) # May return 'unlocked'
    root.devices.device['ce0'].state.admin_state.string # May return 'unlocked'
    root.devices.device['ce0'].state.admin_state.value # May return 1
    root.devices.device['ce0'].state.admin_state = 'locked'
    root.devices.device['ce0'].state.admin_state = 0
    
    # This will raise an Error exception
    root.devices.device['ce0'].state.admin_state = 3 # Not a valid enum
    # /model/device is a leafref to /devices/device/name
    
    dev = root.model.device # May return 'ce0'
    # /model/device is a leafref to /devices/device/name
    
    root.model.device = 'ce0'
    # Read
    root.devices.device['ce0'].device_type.cli.ned_id # May return 'ios-id:cisco-ios'
    
    # Write when identity cisco-ios is unique throughout the system (not recommended)
    root.devices.device['ce0'].device_type.cli.ned_id = 'cisco-ios'
    
    # Write with unique identity
    root.devices.device['ce0'].device_type.cli.ned_id = 'ios-id:cisco-ios'
    # /model/iref is an instance-identifier
    
    root.model.iref # May return "/ncs:devices/ncs:device[ncs:name='ce0']"
    # /model/iref is an instance-identifier
    
    root.devices.device['ce0'].device_type.cli.ned_id = "/ncs:devices/ncs:device[ncs:name='ce0']"
    # /model/ll is a leaf-list with the type string
    
    # read a LeafList object
    ll = root.model.ll
    
    # iteration
    for item in root.model.ll:
        do_stuff(item)
    
    # check if the leaf-list exists (i.e. is non-empty)
    if root.model.ll:
        do_stuff()
    if root.model.ll.exists():
        do_stuff()
    
    # check the leaf-list contains a specific item
    if 'foo' in root.model.ll:
        do_stuff()
    
    # length
    len(root.model.ll)
    
    # create a new item in the leaf-list
    root.model.ll.create('bar')
    
    # set the whole leaf-list in one operation
    root.model.ll = ['foo', 'bar', 'baz']
    
    # remove a specific item from the list
    del root.model.ll['bar']
    root.model.ll.remove('baz')
    
    # delete the whole leaf-list
    del root.model.ll
    root.model.ll.delete()
    
    # get the leaf-list as a Python list
    root.model.ll.as_list()
    # Read
    root.model.bin # May return '\x00foo\x01bar'
    
    # Write
    root.model.bin = b'\x00foo\x01bar'
    # read a bits leaf - a Bits object may be returned (None if non-existent)
    root.model.bits
    
    # get a bytearray
    root.model.bits.bytearray()
    
    # get a space separated string with bit names
    str(root.model.bits)
    # set a bits leaf using a string of space separated bit names
    root.model.bits = 'turboMode enableEncryption'
    
    # set a bits leaf using a Python bytearray
    root.model.bits = bytearray(b'\x11')
    
    # set a bits leaf using a Python binary string
    root.model.bits = b'\x11'
    
    # read a bits leaf, update the Bits object and set it
    b = root.model.bits
    b.clr_bit(0)
    root.model.bits = b
    pc = root.container.empty_leaf.create()
    root.container.empty_leaf.exists() # Returns True or False
    bool(root.container.empty_leaf) # Returns True or False
    del root.container.empty_leaf
    root.container.empty_leaf.delete()
    Example: Action Request without Transaction
    import ncs
    
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            root = ncs.maagic.get_root(m)
    
            output = root.devices.check_sync()
    
            for result in output.sync_result:
                print('sync-result {')
                print('    device %s' % result.device)
                print('    result %s' % result.result)
                print('}')
    Example: Action Request with Transaction
    import ncs
    
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            with m.start_read_trans() as t:
                root = ncs.maagic.get_root(t)
    
                output = root.devices.check_sync()
    
                for result in output.sync_result:
                    print('sync-result {')
                    print('    device %s' % result.device)
                    print('    result %s' % result.result)
                    print('}')
    Example: Action Request with Input Parameters
    import ncs
    
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            root = ncs.maagic.get_root(m)
    
            input = root.action.double.get_input()
            input.number = 21
            output = root.action.double(input)
    
            print(output.result)
    Example: Action Request with leaf-list Input Parameters
    import ncs
    
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            root = ncs.maagic.get_root(m)
    
            input = root.leaf_list_action.llist.get_input()
            input.args = ['testing action']
            output = root.leaf_list_action.llist(input)
    
            print(output.result)
    Example: Create Device, Fetch Host Keys, and Synchronize Configuration
    import argparse
    import ncs
    
    
    def parseArgs():
        parser = argparse.ArgumentParser()
        parser.add_argument('--name', help="device name", required=True)
        parser.add_argument('--address', help="device address", required=True)
        parser.add_argument('--port', help="device port", type=int, default=22)
        parser.add_argument('--desc', help="device description",
                            default="Device created by maagic_create_device.py")
        parser.add_argument('--auth', help="device authgroup", default="default")
        return parser.parse_args()
    
    
    def main(args):
        with ncs.maapi.Maapi() as m:
            with ncs.maapi.Session(m, 'admin', 'python'):
                with m.start_write_trans() as t:
                    root = ncs.maagic.get_root(t)
    
                    print("Setting device '%s' configuration..." % args.name)
    
                    # Get a reference to the device list
                    device_list = root.devices.device
    
                    device = device_list.create(args.name)
                    device.address = args.address
                    device.port = args.port
                    device.description = args.desc
                    device.authgroup = args.auth
                    dev_type = device.device_type.cli
                    dev_type.ned_id = 'cisco-ios-cli-3.0'
                    device.state.admin_state = 'unlocked'
    
                    print('Committing the device configuration...')
                    t.apply()
                    print("Committed")
    
                    # This transaction is no longer valid
    
                #
                # fetch-host-keys and sync-from does not require a transaction
                # continue using the Maapi object
                #
                root = ncs.maagic.get_root(m)
                device = root.devices.device[args.name]
    
                print("Fetching SSH keys...")
                output = device.ssh.fetch_host_keys()
                print("Result: %s" % output.result)
    
                print("Syncing configuration...")
                output = device.sync_from()
                print("Result: %s" % output.result)
                if not output.result:
                    print("Error: %s" % output.info)
    
    
    if __name__ == '__main__':
        main(parseArgs())
    class PlanComponent(object):
        """Service plan component.
    
        The usage of this class is in conjunction with a nano service that
        uses a reactive FASTMAP pattern.
        With a plan the service states can be tracked and controlled.
    
        A service plan can consist of many PlanComponent's.
        This is operational data that is stored together with the service
        configuration.
        """
    
        def __init__(self, service, name, component_type):
            """Initialize a PlanComponent."""
    
        def append_state(self, state_name):
            """Append a new state to this plan component.
    
            The state status will be initialized to 'ncs:not-reached'.
            """
    
        def set_reached(self, state_name):
            """Set state status to 'ncs:reached'."""
    
        def set_failed(self, state_name):
            """Set state status to 'ncs:failed'."""
    
        def set_status(self, state_name, status):
            """Set state status."""
    self_plan = PlanComponent(service, 'self', 'ncs:self')
    self_plan.append_state('ncs:init')
    self_plan.append_state('ncs:ready')
    self_plan.set_reached('ncs:init')
    
    route_plan = PlanComponent(service, 'router', 'myserv:router')
    route_plan.append_state('ncs:init')
    route_plan.append_state('myserv:syslog-initialized')
    route_plan.append_state('myserv:ntp-initialized')
    route_plan.append_state('myserv:dns-initialized')
    route_plan.append_state('ncs:ready')
    route_plan.set_reached('ncs:init')
    self_plan.set_reached('ncs:ready')
    $ cd packages
    $ ncs-make-package --service-skeleton python pyaction \
        --component-class action.Action \
        --action-example
    $ tree pyaction
    pyaction/
    +-- README
    +-- doc/
    +-- load-dir/
    +-- package-meta-data.xml
    +-- python/
    |   +-- pyaction/
    |       +-- __init__.py
    |       +-- action.py
    +-- src/
    |   +-- Makefile
    |   +-- yang/
    |       +-- action.yang
    +-- templates/
    Example: Action Server Implementation
    # -*- mode: python; python-indent: 4 -*-
    
    from ncs.application import Application
    from ncs.dp import Action
    
    # ---------------
    # ACTIONS EXAMPLE
    # ---------------
    class DoubleAction(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output):
            self.log.info('action name: ', name)
            self.log.info('action input.number: ', input.number)
    
            output.result = input.number * 2
    
    class LeafListAction(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output):
            self.log.info('action name: ', name)
            self.log.info('action input.args: ', input.args)
            output.result = [ w.upper() for w in input.args]
    
    # ---------------------------------------------
    # COMPONENT THREAD THAT WILL BE STARTED BY NCS.
    # ---------------------------------------------
    class Action(Application):
        def setup(self):
            self.log.info('Worker RUNNING')
            self.register_action('action-action', DoubleAction)
            self.register_action('llist-action', LeafListAction)
    
        def teardown(self):
            self.log.info('Worker FINISHED')
    admin@ncs> request action double number 21
    result 42
    [ok][2016-04-22 10:30:39]
    $ cd packages
    $ ncs-make-package --service-skeleton python pyservice \
     --component-class service.Service
    $ tree pyservice
    pyservice/
    +-- README
    +-- doc/
    +-- load-dir/
    +-- package-meta-data.xml
    +-- python/
    |   +-- pyservice/
    |       +-- __init__.py
    |       +-- service.py
    +-- src/
    |   +-- Makefile
    |   +-- yang/
    |       +-- service.yang
    +-- templates/
    Example: High-level Python Service Implementation
    # -*- mode: python; python-indent: 4 -*-
    
    from ncs.application import Application
    from ncs.application import Service
    import ncs.template
    
    # ------------------------
    # SERVICE CALLBACK EXAMPLE
    # ------------------------
    class ServiceCallbacks(Service):
        @Service.create
        def cb_create(self, tctx, root, service, proplist):
            self.log.info('Service create(service=', service._path, ')')
    
            # Add this service logic >>>>>>>
            vars = ncs.template.Variables()
            vars.add('MAGIC', '42')
            vars.add('CE', service.device)
            vars.add('INTERFACE', service.unit)
            template = ncs.template.Template(service)
            template.apply('pyservice-template', vars)
    
            self.log.info('Template is applied')
    
            dev = root.devices.device[service.device]
            dev.description = "This device was modified by %s" % service._path
            # <<<<<<<<< service logic
    
        @Service.pre_modification
        def cb_pre_modification(self, tctx, op, kp, root, proplist):
            self.log.info('Service premod(service=', kp, ')')
    
        @Service.post_modification
        def cb_post_modification(self, tctx, op, kp, root, proplist):
            self.log.info('Service postmod(service=', kp, ')')
    
    
    # ---------------------------------------------
    # COMPONENT THREAD THAT WILL BE STARTED BY NCS.
    # ---------------------------------------------
    class Service(Application):
        def setup(self):
            self.log.info('Worker RUNNING')
            self.register_service('service-servicepoint', ServiceCallbacks)
    
        def teardown(self):
            self.log.info('Worker FINISHED')
    <config-template xmlns="http://tail-f.com/ns/config/1.0">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device tags="nocreate">
          <name>{$CE}</name>
          <config tags="merge">
          <interface xmlns="urn:ios">
            <FastEthernet>
              <name>0/{$INTERFACE}</name>
              <description>The maagic: {$MAGIC}</description>
            </FastEthernet>
          </interface>
          </config>
        </device>
      </devices>
    </config-template>
    $ cd packages
    $ ncs-make-package --service-skeleton python pyvalidation \
        --component-class validation.ValidationApplication \
        --disable-service-example --validation-example
    $ tree pyvalidation
    pyvalidation/
    +-- README
    +-- doc/
    +-- load-dir/
    +-- package-meta-data.xml
    +-- python/
    |   +-- pyvalidation/
    |       +-- __init__.py
    |       +-- validation.py
    +-- src/
    |   +-- Makefile
    |   +-- yang/
    |       +-- validation.yang
    +-- templates/
    Example: Validation Implementation
    # -*- mode: python; python-indent: 4 -*-
    import ncs
    from ncs.dp import ValidationError, ValidationPoint
    
    
    # ---------------
    # VALIDATION EXAMPLE
    # ---------------
    class Validation(ValidationPoint):
        @ValidationPoint.validate
        def cb_validate(self, tctx, keypath, value, validationpoint):
            self.log.info('validate: ', str(keypath), '=', str(value))
            if value == 'invalid':
                raise ValidationError('invalid value')
            return ncs.CONFD_OK
    
    
    # ---------------------------------------------
    # COMPONENT THREAD THAT WILL BE STARTED BY NCS.
    # ---------------------------------------------
    class ValidationApplication(ncs.application.Application):
        def setup(self):
            # The application class sets up logging for us. It is accessible
            # through 'self.log' and is a ncs.log.Log instance.
            self.log.info('ValidationApplication RUNNING')
    
            # When using actions, this is how we register them:
            #
            self.register_validation('pyvalidation-valpoint', Validation)
    
            # If we registered any callback(s) above, the Application class
            # took care of creating a daemon (related to the service/action point).
    
            # When this setup method is finished, all registrations are
            # considered done and the application is 'started'.
    
        def teardown(self):
            # When the application is finished (which would happen if NCS went
            # down, packages were reloaded or some error occurred) this teardown
            # method will be called.
    
            self.log.info('ValidationApplication FINISHED')
    admin@ncs% set validation validate-value invalid
    admin@ncs% validate
    Failed: 'validation validate-value': invalid value
    [ok][2016-04-22 10:30:39]
    shared_apply_template()
    shared_copy_tree()
    shared_create()
    shared_insert()
    shared_set_elem()
    shared_set_elem2()
    shared_set_values()
    load_config()
    load_config_cmds()
    load_config_stream()
    apply_template()
    copy_tree()
    create()
    insert()
    set_elem()
    set_elem2()
    set_object()
    set_values()
    Example: Setting of Configuration Data using MAAPI
    import socket
    import _ncs
    from _ncs import maapi
    
    sock_maapi = socket.socket()
    
    maapi.connect(sock_maapi,
                  ip='127.0.0.1',
                  port=_ncs.NCS_PORT)
    
    maapi.load_schemas(sock_maapi)
    
    maapi.start_user_session(
                      sock_maapi,
                      'admin',
                      'python',
                      [],
                      '127.0.0.1',
                      _ncs.PROTO_TCP)
    
    maapi.install_crypto_keys(sock_maapi)
    
    
    th = maapi.start_trans(sock_maapi, _ncs.RUNNING, _ncs.READ)
    
    path = "/devices/authgroups/group{default}/umap{admin}/remote-password"
    encrypted_password = maapi.get_elem(sock_maapi, th, path)
    
    decrypted_password = _ncs.decrypt(str(encrypted_password))
    
    maapi.finish_trans(sock_maapi, th)
    maapi.end_user_session(sock_maapi)
    sock_maapi.close()
    
    print("Default authgroup admin password = %s" % decrypted_password)
    Example: Action Request
    import socket
    import _ncs
    from _ncs import maapi
    
    sock_maapi = socket.socket()
    
    maapi.connect(sock_maapi,
                  ip='127.0.0.1',
                  port=_ncs.NCS_PORT)
    
    maapi.load_schemas(sock_maapi)
    
    maapi.start_user_session(
                      sock_maapi,
                      'admin',
                      'python',
                      [],
                      '127.0.0.1',
                      _ncs.PROTO_TCP)
    
    ns_hash = _ncs.str2hash("http://tail-f.com/ns/ncs")
    
    results = maapi.request_action(sock_maapi, [], ns_hash, "/devices/check-sync")
    for result in results:
        v = result.v
        t = v.confd_type()
        if t == _ncs.C_XMLBEGIN:
            print("sync-result {")
        elif t == _ncs.C_XMLEND:
            print("}")
        elif t == _ncs.C_BUF:
            tag = result.tag
            print("    %s %s" % (_ncs.hash2str(tag), str(v)))
        elif t == _ncs.C_ENUM_HASH:
            tag = result.tag
            text = v.val2str((ns_hash, '/devices/check-sync/sync-result/result'))
            print("    %s %s" % (_ncs.hash2str(tag), text))
    
    maapi.end_user_session(sock_maapi)
    sock_maapi.close()
    Example: Setting of Operational Data using CDB API
    import socket
    import _ncs
    from _ncs import cdb
    
    sock_cdb = socket.socket()
    
    cdb.connect(
        sock_cdb,
        type=cdb.DATA_SOCKET,
        ip='127.0.0.1',
        port=_ncs.NCS_PORT)
    
    cdb.start_session2(sock_cdb, cdb.OPERATIONAL, cdb.LOCK_WAIT | cdb.LOCK_REQUEST)
    
    path = "/operdata/value"
    cdb.set_elem(sock_cdb, _ncs.Value(42, _ncs.C_UINT32), path)
    
    new_value = cdb.get(sock_cdb, path)
    
    cdb.end_session(sock_cdb)
    sock_cdb.close()
    
    print("/operdata/value is now %s" % new_value)
    Example: Reading NED-IDs (read_nedids.py)
    import ncs


    def print_ned_ids(port):
        with ncs.maapi.single_read_trans('admin', 'system',
                                         db=ncs.OPERATIONAL, port=port) as t:
            dev_ned_id = ncs.maagic.get_node(t, '/devices/ned-ids/ned-id')
            for id in dev_ned_id.keys():
                print(id)


    if __name__ == '__main__':
        print('=== lsa-1 ===')
        print_ned_ids(4569)
        print('=== lsa-2 ===')
        print_ned_ids(4570)
    $ python3 read_nedids.py
    === lsa-1 ===
    {ned:lsa-netconf}
    {ned:netconf}
    {ned:snmp}
    {cisco-nso-nc-5.5:cisco-nso-nc-5.5}
    === lsa-2 ===
    {ned:lsa-netconf}
    {ned:netconf}
    {ned:snmp}
    {"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<211668964'...>]"}
    {"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<151824215'>]"}
    {"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<208856485'...>]"}
    with ncs.maapi.single_read_trans('admin', 'system', db=ncs.OPERATIONAL, port=port,
                                     load_schemas=ncs.maapi.LOAD_SCHEMAS_RELOAD) as t:
    === lsa-1 ===
    {ned:lsa-netconf}
    {ned:netconf}
    {ned:snmp}
    {cisco-nso-nc-5.5:cisco-nso-nc-5.5}
    === lsa-2 ===
    {ned:lsa-netconf}
    {ned:netconf}
    {ned:snmp}
    {cisco-asa-cli-6.13:cisco-asa-cli-6.13}
    {cisco-ios-cli-6.72:cisco-ios-cli-6.72}
    {router-nc-1.0:router-nc-1.0}
    TypeError: cannot pickle '<object>' object
    Example: using multiprocessing.Process
    import ncs
    import _ncs
    from ncs.dp import Action
    from multiprocessing import Process
    import multiprocessing
    
    def child(uinfo, self):
        print(f"uinfo: {uinfo}, self: {self}")
    
    class DoAction(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output, trans):
            t1 = multiprocessing.Process(target=child, args=(uinfo, self))
            t1.start()
    
    class Main(ncs.application.Application):
        def setup(self):
            self.log.info('Main RUNNING')
            self.register_action('sleep', DoAction)
    
        def teardown(self):
            self.log.info('Main FINISHED')
    Example: using multiprocessing.Process with primitive data
    import ncs
    import _ncs
    from ncs.dp import Action
    from multiprocessing import Process
    import multiprocessing
    
    def child(usid, th, action_point):
        print(f"uinfo: {usid}, th: {th}, action_point: {action_point}")
    
    class DoAction(Action):
        @Action.action
        def cb_action(self, uinfo, name, kp, input, output, trans):
            usid = uinfo.usid
            th = uinfo.actx_thandle
            action_point = self.actionpoint
            t1 = multiprocessing.Process(target=child, args=(usid, th, action_point))
            t1.start()
    
    class Main(ncs.application.Application):
        def setup(self):
            self.log.info('Main RUNNING')
            self.register_action('sleep', DoAction)
    
        def teardown(self):
            self.log.info('Main FINISHED')

    input (ncs.maagic.Node): An object containing the parameters of the input section of the action YANG model.

    output (ncs.maagic.Node): The object where to put the output parameters, as defined in the output section of the action YANG model.

    proplist (list(tuple(str, str))): The opaque object for the service configuration, used to store hidden state information between invocations. It is updated by returning a modified list.

    validationpoint (string): The validation point that triggered the validation.

    Nano Services

    Implement staged provisioning in your network using nano services.

    Typical NSO services perform the necessary configuration by using the create() callback, within a transaction tracking the changes. This approach greatly simplifies service implementation, but it also introduces some limitations. For example, all provisioning is done at once, which may not be possible or desired in all cases. In particular, network functions implemented by containers or virtual machines often require provisioning in multiple steps.

    Another limitation is that the service mapping code must not produce any side effects. Side effects are not tracked by the transaction and therefore cannot be automatically reverted. For example, imagine that there is an API call to allocate an IP address from an external system as part of the create() code. The same code runs for every service change or a service re-deploy, even during a commit dry-run, unless you take special precautions. So, a new IP address would be allocated every time, resulting in a lot of waste, or worse, provisioning failures.

    Nano services help you overcome these limitations. They implement a service as several smaller (nano) steps or stages, by using a technique called reactive FASTMAP (RFM), and provide a framework to safely execute actions with side effects. Reactive FASTMAP can also be implemented directly, using the CDB subscribers, but nano services offer a more streamlined and robust approach for staged provisioning.

    The section starts by gradually introducing the nano service concepts in a typical use case. To aid readers working with nano services for the first time, some of the finer points are omitted in this part and discussed later, in a dedicated reference section. That section focuses on recapitulating the workings of nano services at the expense of examples, to aid you during implementation. The rest of the chapter covers individual features with associated use cases and complete working examples, which you may find in the examples.ncs folder.

    Basic Concepts

    Services ideally perform the configuration all at once, with all the benefits of a transaction, such as automatic rollback and cleanup on errors. For nano services, this is not possible in the general case. Instead, a nano service performs as much configuration as possible at the moment and leaves the rest for later. When an event occurs that allows more work to be done, the nano service instance restarts provisioning, by using a re-deploy action called reactive-re-deploy. It allows the service to perform additional configuration that was not possible before. The process of automatic re-deploy, called reactive FASTMAP, is repeated until the service is fully provisioned.
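    The re-deploy loop can be sketched in plain Python, with no NSO APIs (state names and the pre-condition are purely illustrative): each re-deploy walks the plan from the start, performs the work for states whose pre-conditions hold, and stops at the first unmet pre-condition.

```python
def redeploy(plan, env, reached):
    """Advance through the plan; stop at the first unmet pre-condition."""
    for state, pre_condition in plan:
        if pre_condition and not pre_condition(env):
            return state  # in NSO, a kicker would now watch this condition
        if state not in reached:
            reached.append(state)  # perform this state's share of the work
    return None

plan = [
    ('vm-requested', None),
    ('vm-configured', lambda env: env['vm-up-and-running']),
]
env = {'vm-up-and-running': False}
reached = []

blocked_on = redeploy(plan, env, reached)   # initial commit:
# only 'vm-requested' is reached; waiting on 'vm-configured'

env['vm-up-and-running'] = True             # the event occurs...
blocked_on = redeploy(plan, env, reached)   # ...triggering reactive-re-deploy
print(reached)
```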

    This is most evident with, for example, virtual machine (VM) provisioning, during virtual network function (VNF) orchestration. Consider a service that deploys and configures a router in a VM. When the service is first instantiated, it starts provisioning a router VM. However, it will likely take some time before the router has booted up and is ready to accept a new configuration. In turn, the service cannot configure the router just yet. The service must wait for the router to become ready. That is the event that triggers a re-deploy and the service can finish configuring the router, as the following figure illustrates:

    While each step of provisioning happens inside a transaction and is still atomic, the whole service is not. Instead of a simple fully-provisioned or not-provisioned-at-all status, a nano service can be in a number of other states, depending on how far in the provisioning process it is.

    The figure shows that the router VM goes through multiple states internally, however, only two states are important for the service. These two are shown as arrows, in the lower part of the figure. When a new service is configured, it requests a new VM deployment. Having completed this first step, it enters the “VM is requested but still provisioning” state. In the following step, the VM is configured and so enters the second state, where the router VM is deployed and fully configured. The states obviously follow individual provisioning steps and are used to report progress. What is more, each state tracks if an error occurred during provisioning.

    For these reasons, service states are central to the design of a nano service. A list of different states, their order, and transitions between them is called a plan outline and governs the service behavior.

    Plan Outline

    The following YANG snippet, also part of the examples.ncs/development-guide/nano-services/basic-vrouter example shows a plan outline with the two VM-provisioning states presented above:
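    A minimal sketch of what such a plan outline may look like (the vr: prefix and the exact identity names are illustrative, not copied from the example package):

```yang
identity vm-requested {
  base ncs:plan-state;
}

identity vm-configured {
  base ncs:plan-state;
}

ncs:plan-outline vrouter-plan {
  description "Plan for staged provisioning of a VM-based router";

  ncs:component-type "ncs:self" {
    ncs:state "vr:vm-requested";
    ncs:state "vr:vm-configured";
  }
}
```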

    The first part contains a definition of states as identities, deriving from the ncs:plan-state base. These identities are then used with the ncs:plan-outline, inside an ncs:component-type statement. The YANG code defines a single ncs:self component that tracks the progress of the service as a whole, but additional components can be used, as described later. Also, note that it is customary to use the past tense for state names, for example, configured-vm or vm-configured instead of configure-vm or configuring-vm.

    At present, the plan contains the two states but no logic. If you wish to do any provisioning for a state, the state must declare a special nano create callback, otherwise, it just acts as a checkpoint. The nano create callback is similar to an ordinary create service callback, allowing service code or templates to perform configuration. To add a callback for a state, extend the definition in the plan outline:
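    Extending each state with a nano create callback could then look as follows (state and prefix names are illustrative):

```yang
ncs:plan-outline vrouter-plan {
  ncs:component-type "ncs:self" {
    ncs:state "vr:vm-requested" {
      ncs:create {
        ncs:nano-callback;
      }
    }
    ncs:state "vr:vm-configured" {
      ncs:create {
        ncs:nano-callback;
      }
    }
  }
}
```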

    The service automatically enters each state one by one when a new service instance is configured. However, for the vm-configured state, the service should wait until the router VM has had the time to boot and is ready to accept a new configuration. An ncs:pre-condition statement in YANG provides this functionality. Until the condition becomes fulfilled, the service will not advance to that state.

    The following YANG code instructs the nano service to check the value of the vm-up-and-running leaf, before entering and performing the configuration for a state.
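    A sketch of such a pre-condition, monitoring the service instance's own vm-up-and-running leaf ($SERVICE refers to the service instance; names are illustrative):

```yang
ncs:state "vr:vm-configured" {
  ncs:create {
    ncs:pre-condition {
      ncs:monitor "$SERVICE" {
        ncs:trigger-expr "vm-up-and-running = 'true'";
      }
    }
    ncs:nano-callback;
  }
}
```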

    Per-State Configuration

    The main reason for defining multiple nano service states is to specify what part of the overall configuration belongs in each state. For the VM-router example, that entails splitting the configuration into a part for deploying a VM on a virtual infrastructure and a part for configuring it. In this case, a router VM is requested simply by adding an entry to a list of VM requests, while making the API calls is left to an external component, such as the VNF Manager.

    If a state defines a nano callback, you can register a configuration template to it. The XML template file is very similar to an ordinary service template but requires additional componenttype and state attributes in the config-template root element. These attributes identify which component and state in the plan outline the template belongs to, for example:
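    A sketch of such a template skeleton (the state name and element contents are illustrative placeholders):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 componenttype="ncs:self"
                 state="vr:vm-requested">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <!-- configuration to apply when this component/state is reached -->
  </devices>
</config-template>
```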

    Likewise, you can implement a callback in the service code. The registration requires you to specify the component and state, as the following Python example demonstrates:
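    A sketch of such a registration in the package's Python application class (the service point, component, and state names are illustrative; the NanoServiceCallbacks class is assumed to be defined in the same module):

```python
import ncs


class Main(ncs.application.Application):
    def setup(self):
        self.log.info('Main RUNNING')
        # Register nano service callbacks for one specific
        # (service point, component type, state) combination.
        self.register_nano_service('vrouter-servicepoint',   # service point
                                   'ncs:self',               # plan component
                                   'vr:vm-configured',       # plan state
                                   NanoServiceCallbacks)

    def teardown(self):
        self.log.info('Main FINISHED')
```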

    The selected NanoServiceCallbacks class then receives callbacks in the cb_nano_create() function:
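    A minimal sketch of the callback class (the logging and any state-specific logic are illustrative):

```python
import ncs


class NanoServiceCallbacks(ncs.application.NanoService):
    @ncs.application.NanoService.create
    def cb_nano_create(self, tctx, root, service, plan,
                       component, state, proplist, component_proplist):
        # Called once for each registered component/state combination
        self.log.info(f'nano create {component}:{state}')
```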

    The component and state parameters allow the function to distinguish calls for different callbacks when registered for more than one.

    For most flexibility, each state defines a separate callback, allowing you to implement some with a template and others with code, all as part of the same service. You may even use Java instead of Python for the code-based callbacks.

    Link Plan Outline to Service

    The set of states used in the plan outline describes the stages that a service instance goes through during provisioning. Naturally, these are service-specific, which presents a problem if you just want to tell whether a service instance is still provisioning or has already finished. It requires the knowledge of which state is the last, final one, making it hard to check in a generic way.

    That is why each service component must have the built-in ncs:init state as the first state and ncs:ready as the last state. Using the two built-in states allows for interoperability with other services and tools. The following is a complete four-state plan outline for the VM-based router service, with the two states added:
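    Putting it together, the complete outline might look like this sketch, with the built-in ncs:init and ncs:ready states bracketing the two service-specific ones (vr: names are illustrative):

```yang
ncs:plan-outline vrouter-plan {
  ncs:component-type "ncs:self" {
    ncs:state "ncs:init";
    ncs:state "vr:vm-requested" {
      ncs:create {
        ncs:nano-callback;
      }
    }
    ncs:state "vr:vm-configured" {
      ncs:create {
        ncs:pre-condition {
          ncs:monitor "$SERVICE" {
            ncs:trigger-expr "vm-up-and-running = 'true'";
          }
        }
        ncs:nano-callback;
      }
    }
    ncs:state "ncs:ready";
  }
}
```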

    For the service to use it, the plan outline must be linked to a service point with the help of a behavior tree. The main purpose of a behavior tree is to allow a service to dynamically instantiate components, based on service parameters. Dynamic instantiation is not always required and the behavior tree for a basic, static, single-component scenario boils down to the following:
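    A sketch of such a minimal behavior tree (the service point and plan names are illustrative):

```yang
ncs:service-behavior-tree vrouter-servicepoint {
  description "Static, single-component behavior tree";
  ncs:plan-outline-ref "vr:vrouter-plan";
  ncs:selector {
    ncs:create-component "'self'" {
      ncs:component-type-ref "ncs:self";
    }
  }
}
```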

    This behavior tree always creates a single self component for the service. The service point is provided as an argument to the ncs:service-behavior-tree statement, while the ncs:plan-outline-ref statement provides the name for the plan outline to use.

    The following figure visualizes the resulting service plan and its states.

    Along with the behavior tree, a nano service also relies on the ncs:nano-plan-data grouping in its service model. It is responsible for storing state and other provisioning details for each service instance. Other than that, the nano service model follows the standard YANG definition of a service:
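    A sketch of such a service model (list and leaf names are illustrative; the inet prefix is assumed to be imported from ietf-inet-types):

```yang
list vrouter {
  key name;

  uses ncs:nano-plan-data;
  uses ncs:service-data;
  ncs:servicepoint vrouter-servicepoint;

  leaf name {
    type string;
  }

  leaf ip-address {
    type inet:ipv4-address;
  }

  // Operational leaf monitored by the plan outline's pre-condition
  leaf vm-up-and-running {
    type boolean;
    config false;
  }
}
```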

    This model includes the operational vm-up-and-running leaf, that the example plan outline depends on. In practice, however, a plan outline is more likely to reference values provided by another part of the system, such as the actual, externally provided, state of the provisioned VM.

    Service Instantiation

    A nano service does not directly use its service point for configuration. Instead, the service point invokes a behavior tree to generate a plan, and the service starts executing according to this plan. As it reaches a certain state, it performs the relevant configuration for that state.

    For example, when you create a new instance of the VM-router service, the vm-up-and-running leaf is not set, so only the first part of the service runs. Inspecting the service instance plan reveals the following:

    Since neither the init nor the vm-requested states have any pre-conditions, they are reached right away. In fact, NSO can optimize it into a single transaction (this behavior can be disabled if you use forced commits, discussed later on).

    But the process has stopped at the vm-configured state, denoted by the not-reached status in the output. It is waiting for the pre-condition to become fulfilled with the help of a kicker. The job of the kicker is to watch the value and perform an action, the reactive re-deploy, when the conditions are satisfied. The kickers are managed by the nano service subsystem: when an unsatisfied precondition is encountered, a kicker is configured, and when the precondition becomes satisfied, the kicker is removed.

    You may also verify, through the get-modifications action, that only the first part, the creation of the VM, was performed:

    At the same time, a kicker was installed under the kickers container, but you may need to use the unhide debug command to inspect it. More information on kickers in general is available in the documentation on kickers.

    At a later point in time, the router VM becomes ready, and the vm-up-and-running leaf is set to a true value. The installed kicker notices the change and automatically calls the reactive-re-deploy action on the service instance. In turn, the service gets fully deployed.

    The get-modifications output confirms this fact. It contains the additional IP address configuration, performed as part of the vm-configured step:

    The ready state has no additional pre-conditions, allowing NSO to reach it along with the vm-configured state. This effectively breaks the provisioning process into two steps. To break it down further, simply add more states with corresponding pre-conditions and create logic.

    Other than staged provisioning, nano services act the same as other services, allowing you to use the service check-sync and similar actions, for example. But please note the un-deploy and re-deploy actions may behave differently than expected, as they deal with provisioning. Chiefly, a re-deploy reevaluates the pre-conditions, possibly generating a different configuration if a pre-condition depends on operational values that have changed. The un-deploy action, on the other hand, removes all of the recorded modifications, along with the generated plan.

    Benefits and Use Cases

    Every service in NSO has a YANG definition of the service parameters, a service point name, and an implementation of the service point create() callback. Normally, when a service is committed, the FASTMAP algorithm removes all previous data changes internally and presents the service data to the create() callback as if this was the initial create. When the create() callback returns, the FASTMAP algorithm compares the result and calculates a reverse diff-set from the data changes. This reverse diff-set contains the operations that are needed to restore the configuration data to the state as it was before the service was created. The reverse diff-set is required, for instance, if the service is deleted or modified.

    This fundamental principle is what makes the implementation of services and the create() callback simple. In turn, a lot of the NSO functionality relies on this mechanism.

    However, in the reactive FASTMAP pattern, the create() callback is re-entered several times by using the subsequent reactive-re-deploy calls. Storing all changes in a single reverse diff-set then becomes an impediment. For instance, if a staged delete is necessary, there is no way to single out which changes each RFM step performed.

    A nano service abandons the single reverse diff-set by introducing nano-plan-data and a new NanoCreate() callback. The nano-plan-data YANG grouping represents an executable plan that the system can follow to provision the service. It has additional storage for reverse diff-set and pre-conditions per state, for each component of the plan.

    This is illustrated in the following figure:

    You can still use the service get-modifications action to visualize all data changes performed by the service as an aggregate. In addition, each state also has its own get-modifications action that visualizes the data changes for that particular state. It allows you to more easily identify the state and, by extension, the code that produced those changes.

    Before nano services became available, RFM services could only be implemented by creating a CDB subscriber. With the subscriber approach, the service can still leverage the plan-data grouping, which nano-plan-data is based on, to report the progress of the service under the resulting plan container. But the create() callback becomes responsible for creating the plan components, their states, and setting the status of the individual states as the service creation progresses.

    Moreover, implementing a staged delete with a subscriber often requires keeping the configuration data outside of the service. The code is then distributed between the service create() callback and the correlated CDB subscriber. This all results in several sources that potentially contain errors that are complicated to track down. Nano services, on the other hand, do not require any use of CDB subscribers or other mechanisms outside of the service code itself to support the full-service life cycle.

    Backtracking and Staged Delete

    Resource de-provisioning is an important part of the service life cycle. The FASTMAP algorithm ensures that no longer needed configuration changes in NSO are removed automatically but that may be insufficient by itself. For example, consider the case of a VM-based router, such as the one described earlier. Perhaps provisioning of the router also involves assigning a license from a central system to the VM and that license must be returned when the VM is decommissioned. If releasing the license must be done by the VM itself, simply destroying it will not work.

    Another example is the management of a web server VM for a web application. Here, each VM is part of a larger pool of servers behind a load balancer that routes client requests to these servers. During de-provisioning, simply stopping the VM interrupts the currently processing requests and results in client timeouts. This can be avoided with a graceful shutdown, which stops the load balancer from sending new connections to the server and waits for the current ones to finish, before removing the VM.

    Both examples require two distinct steps for de-provisioning. Can nano services be of help in this case? Certainly. In addition to the state-by-state provisioning of the defined components, the nano service system in NSO is responsible for back-tracking during their removal. This process traverses all reached states in the reverse order, removing the changes previously done for each state one by one.

    In doing so, the back-tracking process checks for a 'delete pre-condition' of a state. A delete pre-condition is similar to the create pre-condition, but only relevant when back-tracking. If the condition is not fulfilled, the back-tracking process stops and waits until it becomes satisfied. Behind the scenes, a kicker is configured to restart the process when that happens.
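    As a plain-Python illustration (no NSO APIs; state names are made up), back-tracking visits reached states in reverse order and halts at the first unmet delete pre-condition:

```python
def backtrack(reached_states, delete_precondition_met):
    """Undo reached states in reverse order; stop when a delete
    pre-condition is not yet fulfilled (NSO would install a kicker
    and resume back-tracking once it becomes satisfied)."""
    undone = []
    for state in reversed(reached_states):
        if not delete_precondition_met(state):
            break
        undone.append(state)  # revert this state's recorded changes
    return undone

reached = ['init', 'vm-deployed', 'vm-configured']
# Pretend 'vm-deployed' must wait, e.g. for a graceful shutdown:
undone = backtrack(reached, lambda state: state != 'vm-deployed')
print(undone)  # only the most recent state could be undone so far
```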

    If the state's delete pre-condition is fulfilled, back-tracking first removes the state's 'create' changes recorded by FASTMAP and then invokes the nano delete() callback, if defined. The main use of the callback is to override or veto the default status calculation for a back-tracking state. That is why you can't implement the delete() callback with a template, for example. Very importantly, delete() changes are not kept in a service's reverse diff-set and may stay even after the service is completely removed. In general, you are advised to avoid writing any configuration data because this callback is called under a removal phase of a plan component where new configuration is seldom expected.

    Since the 'create' configuration is automatically removed, without the need for a separate delete() callback, these callbacks are used only in specific cases and are not very common. Regardless, the delete() callback may run as part of the commit dry-run command, so it must not invoke further actions or cause side effects.

    Backtracking is invoked when a component of a nano service is removed, such as when deleting a service. It is also invoked when evaluating a plan and a reached state's 'create' pre-condition is no longer satisfied. In this case, the affected component is temporarily set to a back-tracking mode for as long as it contains such nonconforming states. It allows the service to recover and return to a well-defined state.

    To implement the delete pre-condition or the delete() callback, you must add the ncs:delete statement to the relevant state in the plan outline. Applying it to the web server example above, you might have:
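    A sketch of what that may look like for a web-server state (the ws: prefix, state name, and the monitored leaf are illustrative):

```yang
ncs:state "ws:vm-deployed" {
  ncs:create {
    ncs:nano-callback;
  }
  ncs:delete {
    ncs:nano-callback;
    ncs:pre-condition {
      ncs:monitor "$SERVICE" {
        ncs:trigger-expr "active-connections = 0";
      }
    }
  }
}
```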

    While, in general, the delete() callback should not produce any configuration, the graceful shutdown scenario is one of the few exceptional cases where this may be required. Here, the delete() callback allows you to re-configure the load balancer to remove the server from actively accepting new connections, such as marking it 'under maintenance'. The 'delete' pre-condition allows you to further delay the VM removal until the ongoing requests are completed.

    Similar to the create() callback, the ncs:nano-callback statement instructs NSO to also process a delete() callback. A Python class that you have registered for the nano service must then implement the following method:
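    A sketch of the corresponding Python method (the class and its registration are assumed to be set up the same way as for the create callback):

```python
class NanoServiceCallbacks(ncs.application.NanoService):
    @ncs.application.NanoService.delete
    def cb_nano_delete(self, tctx, root, service, plan,
                       component, state, proplist, component_proplist):
        # Invoked while the component is back-tracking this state;
        # avoid writing configuration here unless strictly necessary.
        self.log.info(f'nano delete {component}:{state}')
```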

    As explained, there are some uncommon cases where additional configuration with the delete() callback is required. However, a more frequent use of the ncs:delete statement is in combination with side-effect actions.

    Managing Side Effects

    In some scenarios, side effects are an integral part of the provisioning process and cannot be avoided. The aforementioned example on license management may require calling a specific device action. Even so, the create() or delete() callbacks, nano service or otherwise, are a bad fit for such work. Since these callbacks are invoked during the transaction commit, no RPCs or other access outside of the NSO datastore are allowed. If allowed, they would break the core NSO functionality, such as a dry run, where side effects are not expected.

    A common solution is to perform these actions outside of the configuration transaction. Nano services provide this functionality through the post-actions mechanism, using a post-action-node statement for a state. It is a definition of an action that should be invoked after the state has been reached and the commit performed. To ensure the latter, NSO will commit the current transaction before executing the post-action and advancing to the next state.

    The service's plan state data also carries a post-action status leaf, which reflects whether the action was executed and if it was successful. The leaf will be set to not-reached, create-reached, delete-reached, or failed, depending on the case and result. If the action is still executing, then the leaf will show either a create-init or delete-init status instead.

    Moreover, post actions can be run either asynchronously (default) or synchronously. To run them synchronously, add a sync statement to the post-action statement. When a post action is run asynchronously, further states will not wait for the action to finish, unless you define an explicit post-action-status precondition. While for a synchronous post action, later states in the same component will be invoked only after the post action is run successfully.

    The exception to this setting is when a component switches to a backtracking mode. In that case, the system will not wait for any create post action to complete (synchronous or not) but will start executing backtracking right away. It means a delete callback or a delete post action for a state may run before its synchronous create post action has finished executing.

    The side-effect-queue and a corresponding kicker are responsible for invoking the actions on behalf of the nano service and reporting the result in the respective state's post-action-status leaf. The following figure shows an entry is made in the side-effect-queue (2) after the state is reached (1) and its post-action status is updated (3) once the action finishes executing.

    You can use the show side-effect-queue command to inspect the queue. The queue will run multiple actions in parallel and keep the failed ones for you to inspect. Please note that High Availability (HA) setups require special consideration: the side-effect queue is disabled when High Availability is enabled and the High Availability mode is NONE. See the High Availability documentation for more details.

    In case of a failure, a post-action sets the post-action status accordingly and, if the action is synchronous, the nano service stops progressing. To retry the failed action, execute a (reactive) re-deploy, which will also restart the nano service if it is stopped.

    Using the post-action mechanism, it is possible to define side effects for a nano service in a safe way. A post-action is only executed once: if the post-action-status is already create-reached in the create case, or delete-reached in the delete case, new invocations of the post-action are suppressed. In dry-run operations, post-actions are never called.

    These properties make post actions useful in a number of scenarios. A widely applicable use case is invoking a service self-test as part of initial service provisioning.

    Another example, requiring the use of post-actions, is the IP address allocation scenario from the chapter introduction. By its nature, the allocation or assignment call produces a side effect in an external system: it marks the assigned IP address in use. The same is true for releasing the address. Since NSO doesn't know how to reverse these effects on its own, they can't be part of any create() callback. Instead, the API calls can be implemented as post-actions.

    The following snippet of a plan outline defines a create and delete post-action to handle IP management:
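    A sketch of such a plan outline fragment (the ipa: prefix and state name are illustrative; the allocate-ip and release-ip actions are assumed to be defined on the service):

```yang
ncs:component-type "ncs:self" {
  ncs:state "ncs:init" {
    ncs:create {
      // Runs after this state is reached and committed
      ncs:post-action-node "$SERVICE" {
        ncs:action-name "allocate-ip";
        ncs:sync;
      }
    }
  }
  ncs:state "ipa:ip-allocated" {
    ncs:create {
      ncs:nano-callback;
    }
    ncs:delete {
      // Runs when back-tracking past this state
      ncs:post-action-node "$SERVICE" {
        ncs:action-name "release-ip";
      }
    }
  }
  ncs:state "ncs:ready";
}
```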

    Let's see how this plan manifests during provisioning. After the first (init) state is reached and committed, it fires off an allocation action on the service instance, called allocate-ip. The job of the allocate-ip action is to communicate with the external system, the IP Address Management (IPAM), and allocate an address for the service instance. This process may take a while; however, it does not tie up NSO, since it runs outside of the configuration transaction, and other configuration sessions can proceed in the meantime.

    The $SERVICE XPath variable is automatically populated by the system and allows you to easily reference the service instance. There are other automatic variables defined. You can find the complete list inside the tailf-ncs-plan.yang submodule, in the $NCS_DIR/src/ncs/yang/ folder.

    Due to the ncs:sync statement, service provisioning can continue only after the allocation process (the action) completes. Once that happens, the service resumes processing in the ip-allocated state, with the IP value now available for configuration.

    On service deprovisioning, the back-tracking mechanism works backwards through the states. When it is the ip-allocated state's turn to deprovision, NSO reverts any configuration done as part of this state, and then runs the release-ip action, defined inside the ncs:delete block. Of course, this only happens if the state previously had a reached status. Implemented as a post-action, release-ip can safely use the external IPAM API to deallocate the IP address, without impacting other sessions.

    The actions, as defined in the example, do not take any parameters. When needed, you may pass additional parameters from the service's opaque and component_proplist object. These parameters must be set in advance, for example in some previous create callback. For details, please refer to the YANG definition of post-action-input-params in the tailf-ncs-plan.yang file.

    Multiple and Dynamic Plan Components

    The discussion on basic concepts briefly mentions the role of a nano behavior tree but it does not fully explore its potential. Let's now consider in which situations you may find a non-trivial behavior tree beneficial.

    Suppose you are implementing a service that requires not one but two VMs. While you can always add more states to the main (self) component, these states are processed sequentially. However, you might want to provision the two VMs in parallel, since they take a comparatively long time, and it makes little sense having to wait until the first one is finished before starting with the second one. Nano services provide an elegant solution to this challenge in the form of multiple plan components: provisioning of each VM can be tracked by a separate plan component, allowing the two to advance independently, in parallel.

    If the two VMs go through the same states, you can use a single component type in the plan outline for both. It is the job of the behavior tree to create or synthesize actual components for each service instance. Therefore, you could use a behavior tree similar to the following example:
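    For example, along these lines (the service point and plan names are illustrative):

    ```yang
    ncs:service-behavior-tree my-vms-servicepoint {
      ncs:plan-outline-ref "vr:vm-plan";
      ncs:selector {
        // Synthesize two components of the same type, one per VM.
        ncs:create-component "'vm1'" {
          ncs:component-type-ref "vr:router-vm";
        }
        ncs:create-component "'vm2'" {
          ncs:component-type-ref "vr:router-vm";
        }
      }
    }
    ```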

    The two ncs:create-component statements instruct NSO to create two components, named vm1 and vm2, of the same vr:router-vm type. Note the required use of single quotes around component names, because the value is actually an XPath expression. The quotes ensure the name is used verbatim when the expression is evaluated.

    But this behavior tree has a flaw: it is missing the self component that conveys the overall provisioning status. With multiple components in place, the self component should naturally reflect the cumulative status of the service. You can define the plan outline in the following way:
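    A sketch of such a plan outline; the plan name is illustrative and the VM-specific states are abbreviated:

    ```yang
    ncs:plan-outline vm-plan {
      // Derive the overall service status from the self component.
      ncs:self-as-service-status;
      ncs:component-type "ncs:self" {
        ncs:state "ncs:init";
        ncs:state "ncs:ready";
      }
      ncs:component-type "vr:router-vm" {
        ncs:state "ncs:init";
        // ... VM-specific provisioning states go here ...
        ncs:state "ncs:ready";
      }
    }
    ```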

    With the ncs:self-as-service-status statement present on a plan outline, the ready state of the self component will never have its status set to reached until all other components have the ready state status set to reached and all post actions have been run, too. Likewise, during backtracking, the init state will never be set to “not-reached” until all other components have been fully backtracked and all delete post actions have been run. Additionally, the self ready or init state status will be set to failed if any other state has a failed status or a failed post action, thus signaling that something has failed while executing the service instance.

    To make use of the self component, you must also add it to the behavior tree:
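    Extending the earlier tree with a self component could look as follows (names again illustrative):

    ```yang
    ncs:service-behavior-tree my-vms-servicepoint {
      ncs:plan-outline-ref "vr:vm-plan";
      ncs:selector {
        // Track the cumulative service status in the self component.
        ncs:create-component "'self'" {
          ncs:component-type-ref "ncs:self";
        }
        ncs:create-component "'vm1'" {
          ncs:component-type-ref "vr:router-vm";
        }
        ncs:create-component "'vm2'" {
          ncs:component-type-ref "vr:router-vm";
        }
      }
    }
    ```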

    As you can see, all the ncs:create-component statements are placed inside an ncs:selector block. A selector is a so-called control flow node. It selects a group of components and allows you to decide whether they are created or not, based on a pre-condition. The pre-condition can reference a service parameter, which in turn controls if the relevant components are provisioned for this service instance. The mechanism enables you to dynamically produce just the necessary plan components.

    The pre-condition is not very useful on the top selector node, but selectors can also be nested. For example, having a use-virtual-devices configuration leaf in the service YANG model, you could modify the behavior tree to the following:
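    Assuming a boolean use-virtual-devices leaf in the service model, the nested tree could be sketched as:

    ```yang
    ncs:service-behavior-tree my-vms-servicepoint {
      ncs:plan-outline-ref "vr:vm-plan";
      ncs:selector {
        // The self component is always synthesized.
        ncs:create-component "'self'" {
          ncs:component-type-ref "ncs:self";
        }
        ncs:selector {
          // The VM components are synthesized only when requested.
          ncs:pre-condition {
            ncs:monitor "$SERVICE" {
              ncs:trigger-expr "use-virtual-devices = 'true'";
            }
          }
          ncs:create-component "'vm1'" {
            ncs:component-type-ref "vr:router-vm";
          }
          ncs:create-component "'vm2'" {
            ncs:component-type-ref "vr:router-vm";
          }
        }
      }
    }
    ```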

    The described behavior tree always synthesizes the self component and evaluates the child selector. However, the child selector only synthesizes the two VM components if the service configuration requested so by setting the use-virtual-devices to true.

    What is more, if the pre-condition value changes, the system re-evaluates the behavior tree and starts the backtracking operation for any removed components.

    For even more complex cases, where a variable number of components needs to be synthesized, the ncs:multiplier control flow node becomes useful. Its ncs:foreach statement selects a set of elements and each element is processed in the following way:

    • If the optional when statement is not satisfied, the element is skipped.

    • All variable statements are evaluated as XPath expressions for this element, to produce a unique name for the component and any other element-specific values.

    • All ncs:create-component and other control flow nodes are processed, creating the necessary components for this element.

    The multiplier node is often used to create a component for each item in a list. For example, if the service model contains a list of VMs, with a key name, then the following code creates a component for each of the items:
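    Assuming the service model contains a vm list keyed by name, the multiplier could be sketched as:

    ```yang
    ncs:selector {
      ncs:create-component "'self'" {
        ncs:component-type-ref "ncs:self";
      }
      ncs:multiplier {
        // Iterate over the vm list entries of this service instance.
        ncs:foreach "vm" {
          // A unique component name, derived from the list key.
          ncs:variable "NAME" {
            ncs:value-expr "current()/name";
          }
          ncs:create-component "$NAME" {
            ncs:component-type-ref "vr:router-vm";
          }
        }
      }
    }
    ```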

    In this particular case, it might be possible to avoid the variable altogether, by using the expression for the create-component statement directly. However, defining a variable also makes it available to service create() callbacks.

    This is extremely useful, since you can access these values, as well as the ones from the service opaque object, directly in the nano service XML templates. The opaque, especially, allows you to separate the logic in code from applying the XML templates.

    Netsim Router Provisioning Example

    The examples.ncs/development-guide/nano-services/netsim-vrouter folder contains a complete implementation of a service that provisions a netsim device instance, onboards it to NSO, and pushes a sample interface configuration to the device. Netsim device creation is neither instantaneous nor side-effect-free and thus requires the use of a nano service. It more closely resembles a real-world use case for nano services.

    To see how the service is used through a prearranged scenario, execute the make demo command from the example folder. The scenario provisions and de-provisions multiple netsim devices to show different states and behaviors, characteristic of nano services.

    The service, called vrouter, defines three component types in the src/yang/vrouter.yang file:

    • vr:vrouter: A “day-0” component that creates and initializes a netsim process as a virtual router device.

    • vr:vrouter-day1: A “day-1” component for configuring the created device and tracking NETCONF notifications.

    • ncs:self: The overall status indicator. It summarizes the status of all service components through the self-as-service-status mechanism.

    As the name implies, the day-0 component must be provisioned before the day-1 component. Since the two provision in sequence, a single component would, in general, suffice; however, the components are kept separate to illustrate component dependencies.

    The behavior tree synthesizes each of the components for a service instance using some service-specific names. To do so, the example defines three variables to hold different names:
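    The variable definitions in the behavior tree could be sketched as follows; the variable names and value expressions here are illustrative, so check the example source for the actual ones:

    ```yang
    ncs:service-behavior-tree vrouter-servicepoint {
      ncs:plan-outline-ref "vr:vrouter-plan";
      ncs:selector {
        // Component names derived from the service instance name
        // (illustrative expressions).
        ncs:variable "NAME" {
          ncs:value-expr "current()/name";
        }
        ncs:variable "DAY0-NAME" {
          ncs:value-expr "concat(current()/name, '-day0')";
        }
        ncs:variable "DAY1-NAME" {
          ncs:value-expr "concat(current()/name, '-day1')";
        }
        ncs:create-component "'self'" {
          ncs:component-type-ref "ncs:self";
        }
        ncs:create-component "$DAY0-NAME" {
          ncs:component-type-ref "vr:vrouter";
        }
        ncs:create-component "$DAY1-NAME" {
          ncs:component-type-ref "vr:vrouter-day1";
        }
      }
    }
    ```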

    The vr:vrouter (day-0) component has a number of plan states that it goes through during provisioning:

    • ncs:init

    • vr:requested

    • vr:onboarded

    • ncs:ready

    The init and ready states are required as the first and last state in all components for correct overall state tracking in ncs:self. They have no additional logic tied to them.

    The vr:requested state represents the first step in virtual router provisioning. While it does not perform any configuration itself (no nano-callback statement), it calls a post-action that does all the work. The following is a snippet of the plan outline for this state:
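    A sketch of the vr:requested state in the plan outline; the create-router action name follows the text below, while the delete action name is an illustrative assumption:

    ```yang
    ncs:state "vr:requested" {
      ncs:create {
        // No nano-callback here; the work is done by the post-action,
        // which runs outside the configuration transaction.
        ncs:post-action-node "$SERVICE" {
          ncs:action-name "create-router";
          ncs:sync;
        }
      }
      ncs:delete {
        // Stop and remove the netsim device on de-provisioning.
        ncs:post-action-node "$SERVICE" {
          ncs:action-name "delete-router";
          ncs:sync;
        }
      }
    }
    ```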

    The create-router action calls the Python code inside the python/vrouter/main.py file, which runs a couple of system commands, such as ncs-netsim create-device and ncs-netsim start. These commands perform the same steps you would otherwise carry out manually from the shell.

    The vr:requested state also has a delete post-action, analogous to create, which stops and removes the netsim device during service de-provisioning or backtracking.

    Inspecting the Python code for these post-actions reveals that a semaphore is used to control access to the common netsim resource. It is needed because multiple vrouter instances may run the create and delete action callbacks in parallel. The Python semaphore is shared between the delete and create action processes using a Python multiprocessing manager, as the example configures the NSO Python VM to start the actions in multiprocessing mode. See NSO Virtual Machines for details.

    In vr:onboarded, the nano Python callback function from the main.py file adds the relevant NSO device entry for a newly created netsim device. It also configures NSO to receive notifications from this device through a NETCONF subscription. When the NSO configuration is complete, the state transitions into the reached status, denoting the onboarding has been completed successfully.

    The vr:vrouter component handles so-called day-0 provisioning. Alongside this component, the vr:vrouter-day1 component starts provisioning in parallel. During provisioning, it transitions through the following states:

    • ncs:init

    • vr:configured

    • vr:deployed

    • ncs:ready

    The component reaches the init state right away. However, the vr:configured state has a precondition:
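    A sketch of such a precondition; the exact monitor path and variable names may differ in the example source:

    ```yang
    ncs:state "vr:configured" {
      ncs:create {
        // Wait until the day-0 component has run its onboarded
        // post-action before pushing any configuration.
        ncs:pre-condition {
          ncs:monitor "$SERVICE/plan/component[name=$DAY0-NAME]/state[name='vr:onboarded']" {
            ncs:trigger-expr "post-action-status = 'create-reached'";
          }
        }
        ncs:nano-callback;
      }
    }
    ```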

    Provisioning can continue only after the first component, vr:vrouter, has executed its vr:onboarded post-action. The precondition demonstrates how one component can depend on another component reaching some particular state or successfully executing a post-action.

    The vr:onboarded post-action performs a sync-from command for the new device. After that happens, the vr:configured state can push the device configuration according to the service parameters, by using an XML template, templates/vrouter-configured.xml. The service simply configures an interface with a VLAN ID and a description.

    Similarly, the vr:deployed state has its own precondition, which makes use of the ncs:any statement. It specifies either (any) of the two monitor statements will satisfy the precondition.
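    Sketched with ncs:any, the precondition could look like the following; both monitor paths and trigger expressions are illustrative placeholders, not the exact ones from the example:

    ```yang
    ncs:state "vr:deployed" {
      ncs:create {
        ncs:pre-condition {
          // Either monitor satisfying its trigger-expr is enough.
          ncs:any {
            // The last received NETCONF notification reports link up.
            ncs:monitor "/ncs:devices/device[name=$NAME]/notifications/received-notifications/notification/data/link-status" {
              ncs:trigger-expr ". = 'up'";
            }
            // Or the device live-status shows the link as up.
            ncs:monitor "/ncs:devices/device[name=$NAME]/live-status" {
              ncs:trigger-expr "link-status = 'up'";
            }
          }
        }
      }
    }
    ```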

    One of them checks that the last received NETCONF notification contains a link-status value of up for the configured interface. In other words, it waits for the interface to become operational.

    However, relying solely on notifications in the precondition can be problematic: the received notifications list in NSO can be cleared, which would result in unintentional backtracking on a service re-deploy. For this reason, there is the other monitor statement, checking the device live-status.

    Once either of the conditions is satisfied, it marks the end of provisioning. Perhaps the use of notifications in this case feels a little superficial, but it illustrates a possible approach to waiting for a steady state, such as routing adjacencies forming and the like.

    Altogether, the example shows how to use different nano service mechanisms in a single, complex, multistage service that combines configuration and side effects. The example also includes a Python script that uses the RESTCONF protocol to configure a service instance and monitor its provisioning status. You are encouraged to configure a service instance yourself and explore the provisioning process in detail, including service removal. Regarding removal, have you noticed how nano services can de-provision in stages, but the service instance is gone from the configuration right away?

    Zombie Services

    By removing the service instance configuration from NSO, you start a service de-provisioning process. For an ordinary service, a stored reverse diff-set is applied, ensuring that all of the service-induced configuration is removed in the same transaction. For nano services, with their staged, multistep delete operation, this is not possible: the provisioned states must be backtracked one by one, often across multiple transactions. With the service instance deleted, NSO must track the de-provisioning progress elsewhere.

    For this reason, NSO mutates a nano service instance when it is removed. The instance is transformed into a zombie service, which represents the original service that still requires de-provisioning. Once the de-provisioning is complete, with all the states backtracked, the zombie is automatically removed.

    Zombie service instances are stored with their service data, their plan states, and diff-sets in a /ncs:zombies/services list. When a service mutates to a zombie, all plan components are set to back-tracking mode and all service pre-condition kickers are rewritten to reference the zombie service instead. Also, the nano service subsystem now updates the zombie plan states as de-provisioning progresses. You can use the show zombies service command to inspect the plan.

    Under normal conditions, you should not see any zombies, except for the service instances that are actively de-provisioning. However, if an error occurs, the de-provisioning process will stop with an error status and a zombie will remain. With a zombie present, NSO will not allow creating the same service instance in the configuration tree. The zombie must be removed first.

    After addressing the underlying problem, you can restart the de-provisioning process with the re-deploy or the reactive-re-deploy action. The difference between the two is which user the action runs as: re-deploy uses the current user that initiated the action, while reactive-re-deploy keeps using the user that last modified the zombie service.

    These zombie actions behave a bit differently than their normal service counterparts. In particular, the zombie variants perform the following steps to better serve the de-provisioning process:

    1. Start a temporary transaction in which the service is reinstated (created). The service plan will have the same status as it had when it mutated.

    2. Back-track plan components in a normal fashion, that is, removing device changes for states with delete pre-conditions satisfied.

    3. If all components are completely back-tracked, the zombie is removed from the zombie-list. Otherwise, the service and the current plan states are stored back into the zombie-list, with new kickers waiting to activate the zombie when some delete pre-condition is satisfied.

    In addition, zombie services support the resurrect action. The action reinstates the zombie back in the configuration tree as a real service, with the current plan status, and reverts plan components back from back-tracking to normal mode. It is an “undo” for a nano service delete.

    In some situations, especially during nano service development, a zombie may get stuck because of a misconfigured precondition or similar issues. A re-deploy is unlikely to help in that case, and you may need to forcefully remove the problematic plan component. The force-back-track action performs this job and allows you to backtrack to a specific state, if one is specified. But beware: the action skips calling any post-actions or delete callbacks for the forcefully backtracked states, even though the recorded configuration modifications are reverted. It can and will leave your systems in an inconsistent or broken state if you are not careful.

    Using Notifications to Track the Plan and its Status

    When a service is provisioned in stages, as nano services are, the success of the initial commit no longer indicates the service is provisioned. Provisioning may take a while and may fail later, requiring you to consult the service plan to observe the service status. This makes it harder to tell when a service finishes provisioning, for example. Fortunately, services provide a set of notifications that indicate important events in the service's life-cycle, including a successful completion. These events enable NETCONF and RESTCONF clients to subscribe to events instead of polling the plan and commit queue status.

    The built-in service-state-changes NETCONF/RESTCONF stream is used by NSO to generate northbound notifications for services, including nano services. The event stream is enabled by default in ncs.conf, however, individual notification events must be explicitly configured to be sent.

    The plan-state-change Notification

    When a service's plan component changes state, the plan-state-change notification is generated with the new state of the plan. It includes the status, which indicates one of not-reached, reached, or failed. The notification is sent when the state is created, modified, or deleted, depending on the configuration. For reference on the structure and all the fields present in the notification, please see the YANG model in the tailf-ncs-plan.yang file.

    As a common use case, suppose the self-as-service-status statement has been set on the plan outline. An event with status reached for the self component ready state signifies that all nano service components have reached their ready state and provisioning is complete. A simple example of this scenario is included in the examples.ncs/development-guide/nano-services/netsim-vrouter/demo.py Python script, using RESTCONF.

    To enable the plan-state-change notifications to be sent, you must enable them for a specific service in NSO. For example, you can load the following configuration into the CDB as an XML initialization file:
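    A sketch of such an initialization file; the subscription names and the /vr:vrouter service type are placeholders for your own service, and the exact leaf names should be checked against the subscription model in tailf-ncs-services.yang:

    ```xml
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <services xmlns="http://tail-f.com/ns/ncs">
        <plan-notifications>
          <!-- Notify when the self component's ready state is created. -->
          <subscription>
            <name>vrouter-self-ready-created</name>
            <service-type>/vr:vrouter</service-type>
            <component-type>self</component-type>
            <state>ready</state>
            <operation>created</operation>
          </subscription>
          <!-- Notify when the self component's ready state is modified. -->
          <subscription>
            <name>vrouter-self-ready-modified</name>
            <service-type>/vr:vrouter</service-type>
            <component-type>self</component-type>
            <state>ready</state>
            <operation>modified</operation>
          </subscription>
        </plan-notifications>
      </services>
    </config>
    ```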

    This configuration enables notifications for the self component's ready state when created or modified.

    The service-commit-queue-event Notification

    When a service is committed through the commit queue, this notification reports the state of the service's commit queue item. Notifications are sent when the item is waiting to run, executing, waiting to be unlocked, completed, failed, or deleted. More details on the service-commit-queue-event notification content can be found in the YANG model inside tailf-ncs-services.yang.

    For example, the failed event can be used to detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. Measures to resolve the issue can then be taken and the nano service instance can be re-deployed. A simple example of this scenario is included in the examples.ncs/development-guide/nano-services/netsim-vrouter/demo.py Python script where the service is committed through the commit queue, using RESTCONF. By design, the configuration commit to a device fails, resulting in a commit-queue-notification with the failed event status for the commit queue item.

    To enable the service-commit-queue-event notifications to be sent, you can load the following example configuration into NSO, as an XML initialization file or some other way:
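    A sketch of such a configuration; the subscription name and the /vr:vrouter service type are placeholders for your own service:

    ```xml
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <services xmlns="http://tail-f.com/ns/ncs">
        <commit-queue-notifications>
          <!-- Send commit queue events for vrouter service instances. -->
          <subscription>
            <name>vrouter-commit-queue</name>
            <service-type>/vr:vrouter</service-type>
          </subscription>
        </commit-queue-notifications>
      </services>
    </config>
    ```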

    Examples of service-state-changes Stream Subscriptions

    The following examples demonstrate the usage and sample events for the notification functionality, described in this section, using RESTCONF, NETCONF, and CLI northbound interfaces.

    RESTCONF subscription request using curl:
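    A typical subscription request against a local NSO instance; the address, port, and credentials are assumptions:

    ```bash
    curl -is -u admin:admin \
      -H "Accept: text/event-stream" \
      "http://localhost:8080/restconf/streams/service-state-changes/json"
    ```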

    See Northbound APIs for further reference.

    NETCONF creates subscription using netconf-console:
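    For example, assuming default connection settings for netconf-console:

    ```bash
    netconf-console --create-subscription=service-state-changes
    ```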

    See Northbound APIs for further reference.

    CLI shows received notifications using ncs_cli:
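    For example, from a Cisco-style CLI session (user name is an assumption):

    ```bash
    ncs_cli -C -u admin
    # Then, inside the CLI session:
    show notification stream service-state-changes
    ```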

    The trace-id in the Notification

    You have likely noticed the trace-id field at the end of the example notifications above. The trace id is an optional but very useful parameter when committing the service configuration. It helps you trace the commit in the emitted log messages and the service-state-changes stream notifications. The above notifications, taken from the examples.ncs/development-guide/nano-services/netsim-vrouter example, are emitted after applying a RESTCONF plain patch:
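    Such a request could look as follows; the resource path, payload, and trace id value are illustrative, not the exact ones from the example:

    ```bash
    # Plain patch with a caller-chosen trace-id query parameter.
    curl -is -u admin:admin -X PATCH \
      -H "Content-Type: application/yang-data+xml" \
      "http://localhost:8080/restconf/data?trace-id=my-test-1" \
      -d '<vrouter xmlns="http://example.com/vrouter"><name>vr01</name></vrouter>'
    ```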

    Note that the trace id is specified as part of the URL. If missing, NSO will generate and assign one on its own.

    Developing and Updating a Nano Service

    At times, especially when you use an iterative development approach or simply due to changing requirements, you might need to update (change) an existing nano service and its implementation. In addition to other service update best practices, such as model upgrades, you must carefully consider the nano-service-specific aspects. The following discussion mostly focuses on migrating an already provisioned service instance to a newer version; however, the same concepts also apply while you are initially developing the service.

    In the simple case, updating the model of a nano service and getting the changes to show up in an already created instance is a matter of executing a normal re-deploy. This will synthesize any new components and provision them, along with the new configuration, just like you would expect from a non-nano service.

    A major difference occurs if a service instance is deleted and is in a zombie state when the nano service is updated. You should be aware that no synthetization is done for that service instance. The only goal of a deleted service is to revert any changes made by the service instance. Therefore, in that case, the synthetization is not needed. It means that, if you've made changes to callbacks, post actions, or pre-conditions, those changes will not be applied to zombies of the nano service. If a service instance requires the new changes to be applied, you must re-deploy it before it is deleted.

    When updating nano services, you also need to be aware that any old callbacks, post actions and any other models that the service depends on, need to be available in the new nano service package until all service instances created before the update have either been updated (through a re-deploy) or fully deleted. Therefore, you must take great care with any updates to a service if there are still zombies left in the system.

    Adding Components

    Adding new components to the behavior tree will create the new components during the next re-deploy (synthetization) and execute the states in the new components as is normally done.

    Removing Components

    When removing components from the behavior tree, the components that are removed are set to backtracking and are backtracked fully before they are removed from the plan.

    When you remove a component, do so carefully so that any callbacks, post actions or any other model data that the component depends on are not removed until all instances of the old component are removed.

    If the identity for a component type is removed, then NSO removes the component from the database when upgrading the package. If this happens, the component is not backtracked and the reverse diffsets are not applied.

    Replacing Components

    Replacing components in the behavior tree is the same as having unrelated components that are deleted and added in the same update. The deleted components are backtracked as far as possible, and then the added components are created and their states executed in order.

    In some cases, this is not the desired behavior when replacing a component. For example, if you only want to rename a component, backtracking and then adding the component again might make NSO push unnecessary changes to the network or run delete callbacks and post actions that should not be run. To remedy this, you can add the ncs:deprecates-component statement to the new component, detailing which components it replaces. NSO then skips the backtracking of the old component and just applies all reverse diff-sets of the deprecated component. In the same re-deploy, it then executes the new component as usual. Therefore, if the new component produces the same configuration as the old component, nothing is pushed to the network.

    If any of the deprecated components are backtracking, the backtracking is handled before the component is removed. When multiple components are deprecated in the same update and any one of them is backtracking, none of them are removed, as detailed above, until all of them are done backtracking.

    Adding and Removing States

    When adding or removing states in a component, the component is backtracked before a new component with the new states is added and executed. If the updated component produces the same configuration as the old one (and no preconditions halt the execution), no configuration should be pushed to the network. So, when changing the states of a component, take care when writing its preconditions and post-actions if no new changes should be pushed to the network.

    Any changes to the already present states that are kept in the updated component will not have their configuration updated until the new component is created, which happens after the old one has been fully backtracked.

    Modifying States

    For a component where only the configuration for one or more states has changed, the synthetization process updates the component with the new configuration and makes sure that any new callbacks or similar are called during future execution of the component.

    Implementation Reference

    The text in this section summarizes, and adds detail to, the way nano services operate, which you will hopefully find beneficial during implementation.

    To reiterate, the purpose of a nano service is to break down an RFM service into its isolated steps. It extends the normal ncs:servicepoint YANG mechanism and requires the following:

    • A YANG definition of the service input parameters, with a service point name and the additional nano-plan-data grouping.

    • A YANG definition of the plan component types and their states in a plan outline.

    • A YANG definition of a behavior tree for the service. The behavior tree defines how and when to instantiate components in the plan.

    • Code or templates for individual state transfers in the plan.

    When a nano service is committed, the system evaluates its behavior tree. The result of this evaluation is a set of components that form the current plan for the service. This set of components is compared with the previous plan (before the commit). If there are new components, they are processed one by one.

    Each component in the plan is executed state by state, in the defined order. Before entering a new state, the create pre-condition for the state is evaluated, if it exists. If a create pre-condition exists and is not satisfied, the system stops progressing this component and jumps to the next one. A kicker is then defined for the pre-condition that was not satisfied. Later, when this kicker triggers and the pre-condition is satisfied, it performs a reactive-re-deploy and the kicker is removed. This kicker mechanism creates a self-sustained RFM loop.

    If a state's pre-conditions are met, the callback function or template associated with the state is invoked, if it exists. If the callback is successful, the state is marked as reached, and the next state is executed.

    A component that was in the previous plan but is no longer present goes into back-tracking mode, during which the goal is to remove all reached states and eventually remove the component from the plan. Removing state data changes is performed in strict reverse order, beginning with the last reached state and taking into account a delete pre-condition, if defined.

    A nano service is expected to have an ncs:self component type; all other component types are optional. Any component type, including ncs:self, is expected to have ncs:init as its first state and ncs:ready as its last state, and can have any number of specific states in between.

    Back-Tracking

    Back-tracking is completely automatic and occurs in the following scenarios:

    • State pre-condition not satisfied: A reached state's pre-condition is no longer satisfied, and there are subsequent states that are reached and contain reverse diff-sets.

    • Plan component is removed: When a plan component is removed and has reached states that contain reverse diff-sets.

    • Service is deleted: When a service is deleted, NSO will set all plan components to back-tracking mode before deleting the service.

    For each RFM loop, NSO traverses each component and state in order. For each non-satisfied create pre-condition, a kicker is started that monitors and triggers when the pre-condition becomes satisfied.

    While traversing the states, a create pre-condition that was previously satisfied may become unsatisfied. If there are subsequent reached states that contain reverse diff-sets, then the component must be set to back-tracking mode. The back-tracking mode has as its goal to revert all changes up to the state that originally failed to satisfy its create pre-condition. While back-tracking, the delete pre-condition for each state is evaluated, if it exists. If the delete pre-condition is satisfied, the state's reverse diff-set is applied, and the next state is considered. If the delete pre-condition is not satisfied, a kicker is created to monitor this delete pre-condition. When the kicker triggers, a reactive-re-deploy is called and the back-tracking will continue until the goal is reached.

    When the back-tracking plan component has reached its goal state, the component is set to normal mode again. The state's create pre-condition is then evaluated; if it is satisfied, the state is entered, otherwise a kicker is created as described above.

    In some circumstances, a complete plan component is removed (for example, if the service input parameters are changed). If this happens, the plan component is checked if it contains reached states that contain reverse diff-sets.

    If the removed component contains reached states with reverse diff-sets, the deletion of the component is deferred and the component is set to back-tracking mode.

    In this case, there is no specified goal state for the back-tracking. This means that when all the states have been reverted, the component is automatically deleted.

    If a service is deleted, all of its components are set to back-tracking mode. The service becomes a zombie, storing away its plan states so that the service configuration can be removed.

    When a component becomes completely back-tracked, it is removed. When all components in the plan have been removed, the service itself is removed.

    Behavior Tree

    A nano service behavior tree is a data structure defined for each service type. Without a behavior tree defined for the service point, the nano service cannot execute. It is the behavior tree that defines the currently executing nano-plan with its components.

    This is in stark contrast to plan-data used for logging purposes, where the programmer writes the plan and its components in the create() callback. For nano services, the nano plan may not be defined in any way other than through a behavior tree.

    The purpose of a behavior tree is to have a declarative way to specify how the service's input parameters are mapped to a set of component instances.

    A behavior tree is a directed tree in which the nodes are classified as control flow nodes and execution nodes. For each pair of connected nodes, the outgoing node is called parent and the incoming node is called child. A control flow node has zero or one parent and at least one child and the execution nodes have one parent and no children.

    There is exactly one special control flow node called the root, which is the only control flow node without a parent.

    This definition implies that all interior nodes are control flow nodes, and all leaves are execution nodes. When creating, modifying, or deleting a nano service, NSO evaluates the behavior tree to render the current nano plan for the service. This process is called synthesizing the plan.

    The control flow nodes behave differently, but in the end they all synthesize their children in zero or more instances. When a control flow node is synthesized, the system executes its rules for synthesizing the node's children. Synthesizing an execution node adds the corresponding plan component instance to the nano service's plan.

    All control flow and execution nodes may define pre-conditions, which must be satisfied to synthesize the node. If a pre-condition is not satisfied, a kicker is started to monitor the pre-condition.

    All control flow and execution nodes may define an observe monitor which results in a kicker being started for the monitor when the node is synthesized.

    If an invocation of an RFM loop (for example, a re-deploy) synthesizes the behavior tree and a pre-condition for a child is no longer satisfied, the sub-tree with its plan-components is removed (that is, the plan-components are set to back-tracking mode).

    The following control flow nodes are defined:

    • Selector: A selector node has a set of children which are synthesized as described above.

    • Multiplier: A multiplier has a foreach mechanism that produces a list of elements. For each resulting element, the children are synthesized as described above. This can be used, for example, to create several plan-components of the same type.

    There is just one type of execution node:

    • Create component: The create-component execution node creates an instance of the component type that it refers to in the plan.

    It is recommended to keep the behavior tree as flat as possible. The most trivial case is when the behavior tree creates a static nano-plan, that is, all the plan-components are defined and never removed. The following is an example of such a behavior tree:

    Having a selector on root implies that all plan-components are created if they don't have any pre-conditions, or for which the pre-conditions are satisfied.

    An example of a more elaborated behavior tree is the following:

    This behavior tree has a selector node as the root. It will always synthesize the "self" plan component and then evaluate the pre-condition for the selector child. If that pre-condition is satisfied, it then creates four other plan-components.

    The multiplier control flow node is used when a plan component of a certain type should be cloned into several copies depending on some service input parameters. For this reason, the multiplier node defines a foreach, a when, and a variable. The foreach is evaluated, and for each node in the nodeset that satisfies the when, the variable is evaluated. The resulting value is used for parameter substitution to produce a unique name for each duplicated plan component.

    The value is also added to the nano service opaque which enables the individual state nano service create() callbacks to retrieve the value.

    Variables might also have “when” expressions, which are used to decide if the variable should be added to the list of variables or not.

    Nano Service Pre-Condition

    Pre-conditions are what drive the execution of a nano service. A pre-condition is a prerequisite for a state to be executed or a component to be synthesized. If the pre-condition is not satisfied, it is then turned into a kicker which in turn re-deploys the nano service once the condition is fulfilled.

    When working with pre-conditions, be aware that they work somewhat differently when used as a kicker to re-deploy the service and when used during the execution of the service. When the pre-condition is used in the re-deploy kicker, it works as explained in the kicker documentation: the trigger expression is evaluated before and after the change-set of the commit when the monitored nodeset is changed. When used during the execution of a nano service, it can only be evaluated against the current state of the database, which means NSO only checks that the monitor returns a nodeset of one or more nodes and that the trigger expression (if there is one) is fulfilled for at least one node in the nodeset.

    Pre-conditions that check whether a node has been deleted are handled somewhat differently, due to this difference in evaluation. Kickers always trigger for changed nodes (added, deleted, or modified) and can check that the node was deleted in the commit that triggered the kicker. In the nano service evaluation, however, only the current state of the database is available: the monitor expression returns no nodes for the trigger expression to evaluate, so the pre-condition evaluates to false. To support deletes in both cases, create a pre-condition with a monitor expression and a child node ncs:trigger-on-delete. This creates a kicker that checks for deletion of the monitored node and also evaluates correctly in the nano service evaluation of the pre-condition. For example, you could have the following component:
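    A sketch of such a component (the component type, state, and monitored device path are illustrative; the point is the ncs:trigger-on-delete statement under the monitor):

```
ncs:component-type "mysvc:device-comp" {
  ncs:state "ncs:init" {
    ncs:delete {
      ncs:pre-condition {
        // Satisfied when the monitored device is deleted, both in the
        // kicker evaluation and in the nano service evaluation.
        ncs:monitor "/ncs:devices/device[name='test']" {
          ncs:trigger-on-delete;
        }
      }
    }
  }
  ncs:state "ncs:ready";
}
```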

    The component would only trigger the init state's delete pre-condition when the device named test is deleted.

    It is possible to add multiple monitors to a pre-condition by using the ncs:all or ncs:any extensions. Both extensions take one or more monitors as arguments. A pre-condition using the ncs:all extension is satisfied if all monitors given as arguments evaluate to true. A pre-condition using the ncs:any extension is satisfied if at least one of the monitors given as arguments evaluates to true. The following component uses the ncs:all and ncs:any extensions for its self state's create and delete pre-condition, respectively:
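    A sketch of such a component; the monitor paths, trigger expressions, and leaf names are placeholders, and only the nesting of monitors under ncs:all/ncs:any follows from the description above:

```
ncs:component-type "ncs:self" {
  ncs:state "ncs:init";
  ncs:state "ncs:ready" {
    ncs:create {
      ncs:pre-condition {
        // Satisfied only when every monitor evaluates to true.
        ncs:all {
          ncs:monitor "$SERVICE/link-a" {
            ncs:trigger-expr "status = 'up'";
          }
          ncs:monitor "$SERVICE/link-b" {
            ncs:trigger-expr "status = 'up'";
          }
        }
      }
    }
    ncs:delete {
      ncs:pre-condition {
        // Satisfied when at least one monitor evaluates to true.
        ncs:any {
          ncs:monitor "$SERVICE/link-a" {
            ncs:trigger-expr "status = 'down'";
          }
          ncs:monitor "$SERVICE/link-b" {
            ncs:trigger-expr "status = 'down'";
          }
        }
      }
    }
  }
}
```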

    Nano Service Opaque and Component Properties

    The service opaque is a name-value list that can optionally be created/modified in some of the service callbacks, and then travels the chain of callbacks (pre-modification, create, post-modification). It is returned by the callbacks and stored persistently in the service private data. Hence, the next service invocation has access to the current opaque and can make subsequent read/write operations to the same object. The object is usually called opaque in Java and proplist in Python callbacks.

    The nano services handle the opaque in a similar fashion, where a callback for every state has access to and can modify the opaque. However, the behavior tree can also define variables, which you can use in preconditions or to set component names. These variables are also available in the callbacks, as component properties. The mechanism is similar but separate from the opaque. While the opaque is a single service-instance-wide object set only from the service code, component variables are set in and scoped according to the behavior tree. That is, component properties contain only the behavior tree variables which are in scope when a component is synthesized.

    For example, take the following behavior tree snippet:

    The callbacks for states in the “self” component only see the VAR1 variable, while those in “component1” see both VAR1 and VAR2 as component properties.

    Additionally, both the service opaque and component variables (properties) are used to look up substitutions in nano service XML templates and in the behavior tree. If used in the behavior tree, the same rules apply for the opaque as for component variables. So, a value needs to contain single quotes if you wish to use it verbatim in preconditions and similar constructs, for example:
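    For instance, a hypothetical helper in a Python callback could pre-quote a value before storing it in the opaque (the ENDPOINT name and ce0 value are illustrative):

```python
def quote_for_xpath(value):
    """Wrap a value in single quotes so that, after opaque substitution,
    it appears as a string literal in an XPath pre-condition."""
    return "'" + value + "'"

# Hypothetical use inside an early create() callback (e.g. the "self"
# component's ncs:init state): proplist is the service opaque passed
# to the callback.
proplist = []
proplist.append(("ENDPOINT", quote_for_xpath("ce0")))
```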

    Using this scheme at an early state, such as the “self” component's “ncs:init”, you can have a callback that sets name-value pairs for all other states that are then implemented solely with templates and preconditions.

    Nano Service Callbacks

    The nano service can have several callback registrations, one for each plan component state. Note, however, that some states may have no callbacks at all. Such a state may simply act as a checkpoint that some condition is satisfied, using pre-condition statements. A component's ncs:ready state is a good example of this.

    The drawback of this flexible callback registration is that the NSO Service Manager must have a way of knowing whether all expected nano service callbacks have been registered. For this reason, all nano service plan component states that require callbacks are marked with this information. When the plan is executed and the callback markings in the plan do not match the actual registrations, an error results.

    All callback registrations in NSO require a daemon to be instantiated, such as a Python or Java process. For nano services, many daemons are allowed, each responsible for a subset of the plan state callback registrations. A neat consequence is that it becomes possible to mix different callback types (template/Python/Java) for different plan states.

    The mixed callback feature caters to the case where most of the callbacks are templates and only some are Java or Python. This works well because nano services try to resolve the template parameters using the nano service opaque when applying a template. This is a unique functionality for nano services that makes Java or Python apply-template callbacks unnecessary.

    You can implement nano service callbacks as templates, as well as in Python, Java, Erlang, and C code. The following examples cover the template, Python, and Java implementations.

    A plan state template, if defined, replaces the need for a create() callback. In this case, there are no delete() callbacks, so the status definitions must instead be handled by the state's delete pre-condition. In addition to the servicepoint attribute, the template must have a componenttype and a state attribute to be registered on the plan state:

    Specific to nano services, you can use parameters, such as $SOMEPARAM in the template. The system searches for the parameter value in the service opaque and in the component properties. If it is not defined, applying the template will fail.

    A Python create() callback is very similar to its ordinary service counterpart. The difference is that it has additional arguments. plan refers to the synthesized plan, while component and state specify the component and state for which it is invoked. The proplist argument is the nano service opaque (same naming as for ordinary services) and component_proplist contains component variables, along with their values.

    In the majority of cases, you should not need to manage the status of nano states yourself. However, should you need to override the default behavior, you can set the status explicitly, in the callback, using code similar to the following:
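    A sketch of such an explicit status override in a Python create() callback, using the plan, component, and state callback arguments described above (the status value shown is illustrative):

```python
# Inside cb_nano_create(...): keep this state from being considered
# reached until some external condition is met.
plan.component[component].state[state].status = 'not-reached'
```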

    The Python nano service callback needs a registration call for the specific service point, componentType, and state that it should be invoked for.

    For Java, annotations are used to define the callbacks for the component states. The registration of these callbacks is performed by the ncs-java-vm. The NanoServiceContext argument contains methods for retrieving the component and state for the invoked callback as well as methods for setting the resulting plan state status.

    Several componentType and state callbacks can be defined in the same Java class and are then registered by the same daemon.

    Generic Service Callbacks

    In some scenarios, there is a need to be able to register a callback for a certain state in several components with different component types. For this reason, it is possible to register a callback with a wildcard, using “*” as the component type. The invoked state sends the actual component name to the callback, allowing the callback to still distinguish component types if required.

    In Python, the component type is provided as an argument to the callback (component) and a generic callback is registered with an asterisk for a component, such as:
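    A sketch of such a generic registration, modeled on the register_nano_service call used for ordinary nano service registrations (the service point, state, and class names are placeholders):

```python
class GenericApp(ncs.application.Application):
    def setup(self):
        self.register_nano_service('vrouter-servicepoint',  # Service point
                                   '*',                     # Any component type
                                   'ncs:init',              # State
                                   GenericCallbacks)
```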

    In Java, you can perform the registration in the method annotation, as before. To retrieve the calling component type, use the NanoServiceContext.getComponent() method. For example:

    The generic callback can then act for the registered state in any component type.

    Nano Service Pre/Post Modifications

    The ordinary service pre/post-modification callbacks still exist for nano services. They are registered as for an ordinary service and are invoked before the behavior tree synthesis and after the last component/state invocation.

    Registering an ordinary create() callback will not fail for a nano service, but it will never be invoked.

    Forced Commits

    When implementing a nano service, you might end up in a situation where a commit is needed between states in a component to make sure that something has happened before the service can continue executing. One example of such behavior is if the service is dependent on the notifications from a device. In such a case, you can set up a notification kicker in the first state and then trigger a forced commit before any later states can proceed, therefore making sure that all future notifications are seen by the later states of the component.

    To force a commit in between two states of a component, add the ncs:force-commit tag in a ncs:create or ncs:delete tag. See the following example:
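    A sketch, reusing the vm-requested state from the vrouter plan examples:

```
ncs:state "vr:vm-requested" {
  ncs:create {
    ncs:nano-callback;
    // Force a commit after this state, so that, for example, a
    // notification kicker set up here sees all later notifications
    // before subsequent states run.
    ncs:force-commit;
  }
}
```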

    Plan Location

    When defining a nano service, it is assumed that the plan is stored under the service path, as ncs:plan-data is added to the service definition. When the service instance is deleted, the plan is moved to the zombie instead, since the instance has been removed and the plan cannot be stored under it anymore. When writing other services or when working with a nano service in general, you need to be aware that the plan for a service might be in one of these two places depending on if the service instance has been deleted or not.

    To make it easier to work with a service, you can define a custom location for the plan and its history. In the ncs:service-behavior-tree, you can specify that the plan should be stored outside of the service by setting the ncs:plan-location tag to a custom location. The location where the plan is stored must be either a list or a container and include the ncs:plan-data tag. The plan data is then created in this location, whether or not the service instance has been deleted (turned into a zombie), making it easy to base decisions on the state of the service, as all plan queries can query the same plan.

    You can use XPath with the ncs:plan-location statement. The XPath is evaluated based on the nano service context. When the list or container that contains the plan is nested under another list, the outer list instance must exist before the nano service is created. The outer list instance of the plan location must also remain intact for the service's further life-cycle management, such as redeployment and deletion. Otherwise, an error is returned and logged, and no service interaction (create, re-deploy, delete, and so on) will succeed.
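    A sketch of a custom plan location; the vrouter-plans list and its path are hypothetical:

```
// A separate list, outside the service, that holds the plan data.
list vrouter-plans {
  key name;
  leaf name {
    type string;
  }
  uses ncs:nano-plan-data;
}

// In the behavior tree, point the plan at that list.
ncs:service-behavior-tree vrouter-servicepoint {
  ncs:plan-outline-ref "vr:vrouter-plan";
  ncs:plan-location "/vr:vrouter-plans[vr:name=current()/name]";
  ncs:selector {
    ncs:create-component "'self'" {
      ncs:component-type-ref "ncs:self";
    }
  }
}
```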

    Nano Services and Commit Queue

    The commit queue feature allows for increased overall throughput of NSO by committing configuration changes into an outbound queue item instead of directly to the affected devices. Nano services are aware of the commit queue and will make use of it; however, this interaction requires additional consideration.

    When the commit queue is enabled and there are outstanding commit queue items, the network is lagging behind the CDB. The CDB is forward-looking and shows the desired state of the network. Hence, the nano plan shows the desired state as well, since changes to reach this state may not have been pushed to the devices yet.

    To keep the convergence of the nano service in sync with the commit queue, nano services behave more asynchronously:

    • A nano service state does not make any progression while the service has an outstanding commit queue item. The outstanding item is listed under plan/commit-queue for the service, in normal or in zombie mode.

    • On completion of the commit queue item, the nano plan comes in sync with the network. The outstanding commit queue item is removed from the list above and the system issues a reactive-re-deploy action to resume the progression of the nano service.

    • Post-actions are delayed, while there is an outstanding commit queue item.

    The reason for such behavior is that commit queue items can fail. In case of a failure, the CDB and the network have diverged. In turn, the nano plan may have diverged and not reflect the actual network state if the failed commit queue item contained changes related to the nano service.

    What is worse, the network may be left in an inconsistent state. To counter that, NSO supports multiple recovery options for the commit queue. Since NSO release 5.7, using rollback-on-error is the recommended option, as it undoes all the changes that are part of the same transaction. If the transaction includes the initial service instance creation, the instance is removed as well. That is usually not desired for nano services. A nano service avoids such removal by committing only the service intent (the instance configuration) in the initial transaction. The service thereby avoids a potential rollback, as it does not perform any device configuration in the same transaction but progresses solely through (reactive) re-deploy.

    While error recovery helps keep the network consistent, the end result remains that the requested change was not deployed. If a commit queue item with nano service-related changes fails, that signifies a failure for the nano service, and NSO does the following:

    • Service progression stops.

    • The nano plan is marked as failed by creating the failed leaf under the plan.

    • The scheduled post-actions are canceled. Canceled post actions stay in the side-effect-queue with status canceled and are not going to be executed.

    After such an event, manual intervention is required. If you are not using the rollback-on-error option, or the rollback transaction fails, consult the commit queue documentation for the correct procedure to follow. Once the cause of the commit queue failure is resolved, you can manually resume the service progression by invoking the reactive-re-deploy action on the nano service or zombie.

    The service-commit-queue-event notification helps detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. See the section on the service-commit-queue-event notification for details.

    Graceful Link Migration Example

    You can find another nano service example under examples.ncs/getting-started/developing-with-ncs/20-nano-services. The example illustrates a situation with a simple VPN link that should be set up between two devices. The link is considered established only after it is tested and a test-passed leaf is set to true. If the VPN link changes, the new endpoints must be set up before removing the old endpoints, to avoid disturbing customer traffic during the operation.

    The package named link contains the nano service definition. The service has a list containing at most one element, which constitutes the VPN link and is keyed on a-device a-interface b-device b-interface. The list element corresponds to a component type link:vlan-link in the nano service plan.

    In the plan definition, note that there is only one nano service callback registered for the service. This callback is defined for the link:dev-setup state in the link:vlan-link component type. In the plan, it is represented as follows:
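    A sketch of that component type in the plan definition (state names as described above):

```
ncs:component-type "link:vlan-link" {
  ncs:state "ncs:init";
  ncs:state "link:dev-setup" {
    ncs:create {
      // The only registered callback in the example: a template that
      // configures the link endpoints on the two devices.
      ncs:nano-callback;
    }
  }
  ncs:state "ncs:ready";
}
```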

    The callback is a template. You can find it under packages/link/templates as link-template.xml.

    For the state ncs:ready in the link:vlan-link component type there are both a create and a delete pre-condition. The create pre-condition for this state is as follows:
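    A sketch of that create pre-condition (the exact monitor path depends on the service model):

```
ncs:state "ncs:ready" {
  ncs:create {
    ncs:pre-condition {
      // Not satisfied until an operator sets test-passed to true.
      ncs:monitor "$SERVICE/endpoints" {
        ncs:trigger-expr "test-passed = 'true'";
      }
    }
  }
}
```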

    This pre-condition implies that the components based on this component type are not considered finished until the test-passed leaf is set to a true value. The pre-condition implements the requirement that after the initial setup of a link configured by the link:dev-setup state, a manual test and setting of the test-passed leaf is performed before the link is considered finished.

    The delete pre-condition for the same state is as follows:
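    One way to express such a delete pre-condition (a sketch; the actual XPath in the example may differ):

```
ncs:delete {
  ncs:pre-condition {
    ncs:monitor "$PLAN" {
      // Either the new, non-back-tracking link component has reached
      // ncs:ready, or no non-back-tracking component remains (that is,
      // the whole service is being deleted).
      ncs:trigger-expr
        "component[type != 'ncs:self'][back-track = 'false']
           /state[name = 'ncs:ready']/status = 'reached'
         or not(component[back-track = 'false'])";
    }
  }
}
```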

    This pre-condition implies that before you start deleting (back-tracking) an old component, the new component must have reached the ncs:ready state, that is, after being successfully tested. The first part of the pre-condition checks the status of the non-self components. Since there can be at most one link configured in the service instance, the only non-backtracking component, other than self, is the new link component. However, that condition on its own prevents the component from being deleted when deleting the service. So, the second part, after the or statement, checks if all components are back-tracking, which signifies service deletion. This approach illustrates a create-before-break scenario where the new link is created first, and only when it is set up, the old one is removed.

    The ncs:service-behavior-tree is registered on the service point link-servicepoint that is defined by the nano service. It refers to the plan definition named link:link-plan. The behavior tree has a selector on top, which chooses to synthesize its children depending on their pre-conditions. In this tree, there are no pre-conditions, so all children will be synthesized.

    First, there is a component self based on the ncs:self component type in the plan that is always synthesized.

    Second, there is a multiplier control node that chooses a node set. A variable named VALUE is created with a unique value for each node in that node set and creates a component of the link:vlan-link type for each node in the chosen node set. The name for each individual component is the value of the variable VALUE.

    Since the chosen node-set is the "endpoints" list that can contain at most one element, it produces only one component. However, if the link in the service is changed, that is, the old list entry is deleted and a new one is created, then the multiplier creates a component with a new name.

    This forces the old component (which is no longer synthesized) to be back-tracked and the plan definition above handles the create-before-break behavior of the back-tracking.

    To run the example, do the following:

    Build the example:

    Start the example:

    Run the example:
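    Assuming the standard layout of NSO examples, the three steps above typically look as follows (the exact make targets may differ; consult the example's README):

```
cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/20-nano-services
make all          # build the link package and the simulated network
make start        # start the netsim devices and NSO
ncs_cli -u admin  # connect to the NSO CLI to run the example
```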

    Now, create a service that sets up a VPN link between devices ex1 and ex2; it completes immediately since the test-passed leaf is set to true.

    You can inspect the result of the commit:

    The service sets up the link between the devices. Inspect the plan:

    All components in the plan have reached their ready state.

    Now, change the link by changing the interface on one of the devices. To do this, you must remove the old list entry in endpoints and create a new one.

    Commit dry-run to inspect what happens:

    Upon committing, the service just adds the new interface and does not remove anything at this point. The reason is that the test-passed leaf is not set to true for the new component. Commit this change and inspect the plan:

    Notice that the new component ex1-eth0-ex2-eth1 has not reached its ready state yet. Therefore, the old component ex1-eth0-ex2-eth0 still exists in back-track mode, waiting for the new component to finish.

    If you check what the service has configured at this point, you get the following:

    Both the old and the new links exist at this point. Now, set the test-passed leaf to true to force the new component to reach its ready state.

    If you now check the service plan, you see the following:

    The old component has been completely backtracked and is removed because the new component is finished. You should also check the service modifications. You should see that the old link endpoint is removed:

    Deleting a nano service always (even without a commit queue) creates a zombie and schedules its re-deploy to perform backtracking. Again, the re-deploy and, consequently, removal will not take place while there is an outstanding commit queue item.

    Virtual Router Provisioning Steps
    Virtual Router Provisioning Plan
    Per-state FASTMAP with nano services
    Staged Delete with Backtracking
    Backtracking on no longer satisfied pre-condition
    Post-action Execution Through side-effect-queue
    Behavior Tree with a Static nano-plan
    Elaborated Behavior Tree
    module vrouter {
      // Hypothetical namespace; a namespace statement is required
      // for a complete YANG module.
      namespace "http://example.com/vrouter";
      prefix vr;
    
      import tailf-ncs {
        prefix ncs;
      }
    
      identity vm-requested {
        base ncs:plan-state;
      }
    
      identity vm-configured {
        base ncs:plan-state;
      }
    
      ncs:plan-outline vrouter-plan {
        description "Plan for configuring a VM-based router";
    
        ncs:component-type "ncs:self" {
          ncs:state "vr:vm-requested";
          ncs:state "vr:vm-configured";
        }
      }
    }
    ncs:state "vr:vm-requested" {
      ncs:create {
        ncs:nano-callback;
      }
    }
    ncs:state "vr:vm-configured" {
      ncs:create {
        ncs:nano-callback;
        ncs:pre-condition {
          ncs:monitor "$SERVICE" {
            ncs:trigger-expr "vm-up-and-running = 'true'";
          }
        }
      }
    }
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="vrouter-servicepoint"
                     componenttype="ncs:self"
                     state="vr:vm-configured">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <!-- ... -->
      </devices>
    </config-template>
    class NanoApp(ncs.application.Application):
        def setup(self):
            self.register_nano_service('vrouter-servicepoint',  # Service point
                                       'ncs:self',              # Component
                                       'vr:vm-requested',       # State
                                       NanoServiceCallbacks)
    class NanoServiceCallbacks(ncs.application.NanoService):
        @ncs.application.NanoService.create
        def cb_nano_create(self, tctx, root, service, plan, component, state,
                           proplist, component_proplist):
            ...
    ncs:plan-outline vrouter-plan {
      description "Plan for configuring a VM-based router";
    
      ncs:component-type "ncs:self" {
        ncs:state "ncs:init";
        ncs:state "vr:vm-requested" {
          ncs:create {
            ncs:nano-callback;
          }
        }
        ncs:state "vr:vm-configured" {
          ncs:create {
            ncs:nano-callback;
            ncs:pre-condition {
              ncs:monitor "$SERVICE" {
                ncs:trigger-expr "vm-up-and-running = 'true'";
              }
            }
          }
        }
        ncs:state "ncs:ready";
      }
    }
    ncs:service-behavior-tree vrouter-servicepoint {
      description "A static, single component behavior tree";
      ncs:plan-outline-ref "vr:vrouter-plan";
      ncs:selector {
        ncs:create-component "'self'" {
          ncs:component-type-ref "ncs:self";
        }
      }
    }
    list vrouter {
      description "Trivial VM-based router nano service";
    
      uses ncs:nano-plan-data;
      uses ncs:service-data;
      ncs:servicepoint vrouter-servicepoint;
    
      key name;
      leaf name {
        type string;
      }
    
      leaf vm-up-and-running {
        type boolean;
        config false;
      }
    }
    admin@ncs# show vrouter vr-01 plan
                                                                                   POST
                BACK                                                               ACTION
    TYPE  NAME  TRACK  GOAL  STATE          STATUS       WHEN                 ref  STATUS
    ---------------------------------------------------------------------------------------
    self  self  false  -     init           reached      2021-09-16T14:04:38  -    -
                             vm-requested   reached      2021-09-16T14:04:38  -    -
                             vm-configured  not-reached  -                    -    -
                             ready          not-reached  -                    -    -
    admin@ncs# vrouter vr-01 get-modifications
    cli {
        local-node {
            data +vm-instance vr-01 {
                 +    type csr-small;
                 +}
        }
    }
    admin@ncs# show vrouter vr-01 plan
                                                                               POST
                BACK                                                           ACTION
    TYPE  NAME  TRACK  GOAL  STATE          STATUS   WHEN                 ref  STATUS
    -----------------------------------------------------------------------------------
    self  self  false  -     init           reached  2021-09-16T14:26:30  -    -
                             vm-requested   reached  2021-09-16T14:26:30  -    -
                             vm-configured  reached  2021-09-16T14:28:40  -    -
                             ready          reached  2021-09-16T14:28:40  -    -
    admin@ncs# vrouter vr-01 get-modifications
    cli {
        local-node {
            data +vm-instance vr-01 {
                 +    type    csr-small;
                 +    address 198.51.100.1;
                 +}
        }
    }
        ncs:state "vr:vm-requested" {
          ncs:create { ... }
          ncs:delete {
            ncs:pre-condition {
              ncs:monitor "$SERVICE" {
                ncs:trigger-expr "requests-in-processing = '0'";
              }
            }
          }
        }
        ncs:state "vr:vm-configured" {
          ncs:create { ... }
          ncs:delete {
            ncs:nano-callback;
          }
        }
        @NanoService.delete
        def cb_nano_delete(self, tctx, root, service, plan, component, state,
                           proplist, component_proplist):
            ...
          ncs:state "ncs:init" {
            ncs:create {
              ncs:post-action-node "$SERVICE" {
                ncs:action-name "allocate-ip";
                ncs:sync;
              }
            }
          }
          ncs:state "vr:ip-allocated" {
            ncs:delete {
              ncs:post-action-node "$SERVICE" {
                ncs:action-name "release-ip";
              }
            }
          }
    ncs:service-behavior-tree multirouter-servicepoint {
      description "A 2-VM behavior tree";
      ncs:plan-outline-ref "vr:multirouter-plan";
      ncs:selector {
        ncs:create-component "'vm1'" {
          ncs:component-type-ref "vr:router-vm";
        }
        ncs:create-component "'vm2'" {
          ncs:component-type-ref "vr:router-vm";
        }
      }
    }
    ncs:plan-outline multirouter-plan {
      description "Plan for configuring VM-based routers";
      ncs:self-as-service-status;
    
      ncs:component-type "ncs:self" {
        ncs:state "ncs:init";
        ncs:state "ncs:ready";
      }
    
      ncs:component-type "vr:router-vm" {
        ncs:state "ncs:init";
        // additional states
        ncs:state "ncs:ready";
      }
    }
    ncs:service-behavior-tree multirouter-servicepoint {
      description "A 2-VM behavior tree";
      ncs:plan-outline-ref "vr:multirouter-plan";
      ncs:selector {
        ncs:create-component "'self'" {
          ncs:component-type-ref "ncs:self";
        }
        ncs:create-component "'vm1'" {
          ncs:component-type-ref "vr:router-vm";
        }
        ncs:create-component "'vm2'" {
          ncs:component-type-ref "vr:router-vm";
        }
      }
    }
    ncs:service-behavior-tree multirouter-servicepoint {
      description "A conditional 2-VM behavior tree";
      ncs:plan-outline-ref "vr:multirouter-plan";
      ncs:selector {
        ncs:create-component "'self'" { ... }
        ncs:selector {
          ncs:pre-condition {
            ncs:monitor "$SERVICE" {
              ncs:trigger-expr "use-virtual-devices = 'true'";
            }
          }
          ncs:create-component "'vm1'" { ... }
          ncs:create-component "'vm2'" { ... }
        }
      }
    }
    ncs:multiplier {
      ncs:foreach "vms" {
        ncs:variable "NAME" {
          ncs:value-expr "concat('vm-', name)";
        }
        ncs:create-component "$NAME" { ... }
      }
    }
          // vrouter name
          ncs:variable "NAME" {
            ncs:value-expr "current()/name";
          }
          // vrouter component name
          ncs:variable "D0NAME" {
            ncs:value-expr "concat(current()/name, '-day0')";
          }
          // vrouter day1 component name
          ncs:variable "D1NAME" {
            ncs:value-expr "concat(current()/name, '-day1')";
          }
          ncs:state "vr:requested" {
            ncs:create {
              // Call a Python action to create and start a netsim vrouter
              ncs:post-action-node "$SERVICE" {
                ncs:action-name "create-vrouter";
                ncs:result-expr "result = 'true'";
                ncs:sync;
              }
            }
          }
          ncs:state "vr:configured" {
            ncs:create {
              // Wait for the onboarding to complete
              ncs:pre-condition {
                ncs:monitor  "$SERVICE/plan/component[type='vr:vrouter']" +
                             "[name=$D0NAME]/state[name='vr:onboarded']" {
                  ncs:trigger-expr "post-action-status = 'create-reached'";
                }
              }
              // Invoke a service template to configure the vrouter
              ncs:nano-callback;
            }
          }
    <services xmlns="http://tail-f.com/ns/ncs">
      <plan-notifications>
        <subscription>
          <name>nano1</name>
          <service-type>/vr:vrouter</service-type>
          <component-type>self</component-type>
          <state>ready</state>
          <operation>modified</operation>
        </subscription>
        <subscription>
          <name>nano2</name>
          <service-type>/vr:vrouter</service-type>
          <component-type>self</component-type>
          <state>ready</state>
          <operation>created</operation>
        </subscription>
      </plan-notifications>
    </services>
    <services xmlns="http://tail-f.com/ns/ncs">
      <commit-queue-notifications>
        <subscription>
          <name>nano1</name>
          <service-type>/vr:vrouter</service-type>
        </subscription>
      </commit-queue-notifications>
    </services>
    $ curl -isu admin:admin -X GET -H "Accept: text/event-stream" \
        http://localhost:8080/restconf/streams/service-state-changes/json
    
    data: {
    data:   "ietf-restconf:notification": {
    data:     "eventTime": "2021-11-16T20:36:06.324322+00:00",
    data:     "tailf-ncs:service-commit-queue-event": {
    data:       "service": "/vrouter:vrouter[name='vr7']",
    data:       "id": 1637135519125,
    data:       "status": "completed",
    data:       "trace-id": "vr7-1"
    data:     }
    data:   }
    data: }
    
    data: {
    data:   "ietf-restconf:notification": {
    data:     "eventTime": "2021-11-16T20:36:06.728911+00:00",
    data:     "tailf-ncs:plan-state-change": {
    data:       "service": "/vrouter:vrouter[name='vr7']",
    data:       "component": "self",
    data:       "state": "tailf-ncs:ready",
    data:       "operation": "modified",
    data:       "status": "reached",
    data:       "trace-id": "vr7-1"
    data:     }
    data:   }
    data: }
    $ netconf-console create-subscription=service-state-changes
    
    <?xml version="1.0" encoding="UTF-8"?>
    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2021-11-16T20:36:06.324322+00:00</eventTime>
      <service-commit-queue-event xmlns="http://tail-f.com/ns/ncs">
        <service xmlns:vr="http://com/example/vrouter">/vr:vrouter[vr:name='vr7']</service>
        <id>1637135519125</id>
        <status>completed</status>
        <trace-id>vr7-1</trace-id>
      </service-commit-queue-event>
    </notification>
    <?xml version="1.0" encoding="UTF-8"?>
    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2021-11-16T20:36:06.728911+00:00</eventTime>
      <plan-state-change xmlns="http://tail-f.com/ns/ncs">
        <service xmlns:vr="http://com/example/vrouter">/vr:vrouter[vr:name='vr7']</service>
        <component>self</component>
        <state>ready</state>
        <operation>modified</operation>
        <status>reached</status>
        <trace-id>vr7-1</trace-id>
      </plan-state-change>
    </notification>
    $ ncs_cli -u admin -C <<<'show notification stream service-state-changes'
    
    notification
     eventTime 2021-11-16T20:36:06.324322+00:00
     service-commit-queue-event
      service /vrouter[name='vr7']
      id 1637135519125
      status completed
      trace-id vr7-1
     !
    !
    notification
     eventTime 2021-11-16T20:36:06.728911+00:00
     plan-state-change
      service /vrouter[name='vr7']
      component self
      state ready
      operation modified
      status reached
      trace-id vr7-1
     !
    !
    $ curl -isu admin:admin -X PATCH \
      -H "Content-type: application/yang-data+json" \
      'http://localhost:8080/restconf/data?commit-queue=sync&trace-id=vr7-1' \
      -d '{ "vrouter:vrouter": [ { "name": "vr7" } ] }'
              
                ncs:component "self" {
                  ncs:state "init" {
                    ncs:delete {
                      ncs:pre-condition {
                        ncs:monitor "/devices/device[name='test']" {
                          ncs:trigger-on-delete;
                        }
                      }
                    }
                  }
                  ncs:state "ready";
                }
              ncs:component "self" {
                ncs:state "init" {
                  ncs:create {
                    ncs:pre-condition {
                      ncs:all {
                        ncs:monitor "$SERVICE/syslog" {
                          ncs:trigger-expr "current() = 'true'";
                        }
                        ncs:monitor "$SERVICE/dns" {
                          ncs:trigger-expr "current() = 'true'";
                        }
                      }
                    }
                  }
                  ncs:delete {
                    ncs:pre-condition {
                      ncs:any {
                        ncs:monitor "$SERVICE/syslog" {
                          ncs:trigger-expr "current() = 'false'";
                        }
                        ncs:monitor "$SERVICE/dns" {
                          ncs:trigger-expr "current() = 'false'";
                        }
                      }
                    }
                  }
                }
                ncs:state "ready";
              }
        ncs:selector {
          ncs:variable "VAR1" {
            ncs:value-expr "'value1'";
          }
          ncs:create-component "'self'" {
            ncs:component-type-ref "ncs:self";
          }
          ncs:selector {
            ncs:variable "VAR2" {
              ncs:value-expr "'value2'";
            }
            ncs:create-component "'component1'" {
              ncs:component-type-ref "t:my-component";
            }
          }
        }
        proplist.append(('VARX', "'some value'"))
    <config-template xmlns="http://tail-f.com/ns/config/1.0"
                     servicepoint="my-servicepoint"
                     componenttype="my:some-component"
                     state="my:some-state">
      <devices xmlns="http://tail-f.com/ns/ncs">
        <!-- ... -->
      </devices>
    </config-template>
    class NanoServiceCallbacks(ncs.application.NanoService):
    
        @ncs.application.NanoService.create
        def cb_nano_create(self, tctx, root, service, plan, component, state,
                           proplist, component_proplist):
            ...
    
        @ncs.application.NanoService.delete
        def cb_nano_delete(self, tctx, root, service, plan, component, state,
                           proplist, component_proplist):
            ...
            plan.component[component].state[state].status = 'failed'
    class Main(ncs.application.Application):
    
        def setup(self):
            ...
            self.register_nano_service('my-servicepoint',
                                       'my:some-component',
                                       'my:some-state',
                                       NanoServiceCallbacks)
    public class myRFS {
    
        @NanoServiceCallback(servicePoint="my-servicepoint",
                             componentType="my:some-component",
                             state="my:some-state",
                             callType=NanoServiceCBType.CREATE)
        public Properties createSomeComponentSomeState(
                                        NanoServiceContext context,
                                        NavuNode service,
                                        NavuNode ncsRoot,
                                        Properties opaque,
                                        Properties componentProperties)
                                        throws DpCallbackException {
            // ...
        }
    
        @NanoServiceCallback(servicePoint="my-servicepoint",
                             componentType="my:some-component",
                             state="my:some-state",
                             callType=NanoServiceCBType.DELETE)
        public Properties deleteSomeComponentSomeState(
                                        NanoServiceContext context,
                                        NavuNode service,
                                        NavuNode ncsRoot,
                                        Properties opaque,
                                        Properties componentProperties)
                                        throws DpCallbackException {
            // ...
        }
    self.register_nano_service('my-servicepoint', '*', state, ServiceCallbacks)
        @NanoServiceCallback(servicePoint="my-servicepoint",
                             componentType="*", state="my:some-state",
                             callType=NanoServiceCBType.CREATE)
        public Properties genericNanoCreate(NanoServiceContext context,
                                            NavuNode service,
                                            NavuNode ncsRoot,
                                            Properties opaque,
                                            Properties componentProperties)
                                            throws DpCallbackException {
    
            String currentComponent = context.getComponent();
            // ...
        }
                  ncs:component "self" {
                    ncs:state "init" {
                      ncs:create {
                        ncs:force-commit;
                      }
                    }
                    ncs:state "ready" {
                      ncs:delete {
                        ncs:force-commit;
                      }
                    }
                  }
    Nano services custom plan location example
      list custom {
        description "Custom plan location example service.";
    
        key name;
        leaf name {
          tailf:info "Unique service id";
          tailf:cli-allow-range;
          type string;
        }
    
        uses ncs:service-data;
        ncs:servicepoint custom-plan-servicepoint;
      }
    
      list custom-plan {
        description "Custom plan location example plan.";
    
        key name;
        leaf name {
          tailf:info "Unique service id";
          tailf:cli-allow-range;
          type string;
        }
    
        uses ncs:nano-plan-data;
      }
    
      ncs:plan-outline custom-plan {
        description
          "Custom plan location example outline";
    
        ncs:component-type "ncs:self" {
          ncs:state "ncs:init";
          ncs:state "ncs:ready";
        }
      }
    
      ncs:service-behavior-tree custom-plan-servicepoint {
        description
          "Custom plan location example service behavior tree.";
    
        ncs:plan-outline-ref custom:custom-plan;
        ncs:plan-location "/custom-plan";
    
        ncs:selector {
          ncs:create-component "'self'" {
            ncs:component-type-ref "ncs:self";
          }
        }
      }
    20-nano-services link example plan
      identity vlan-link {
        base ncs:plan-component-type;
      }
    
      identity dev-setup {
        base ncs:plan-state;
      }
    
      ncs:plan-outline link:link-plan {
        description
          "Make before break vlan plan";
    
        ncs:component-type "ncs:self" {
          ncs:state "ncs:init";
          ncs:state "ncs:ready";
        }
    
        ncs:component-type "link:vlan-link" {
          ncs:state "ncs:init";
          ncs:state "link:dev-setup" {
            ncs:create {
              ncs:nano-callback;
            }
          }
          ncs:state "ncs:ready" {
            ncs:create {
              ncs:pre-condition {
                ncs:monitor "$SERVICE/endpoints" {
                  ncs:trigger-expr "test-passed = 'true'";
                }
              }
            }
            ncs:delete {
              ncs:pre-condition {
                ncs:monitor "$SERVICE/plan" {
                  ncs:trigger-expr
                    "component[name != 'self'][back-track = 'false']"
                  + "/state[name = 'ncs:ready'][status = 'reached']"
                  + " or not(component[back-track = 'false'])";
                }
              }
            }
          }
        }
      }
            ncs:state "link:dev-setup" {
              ncs:create {
                ncs:nano-callback;
              }
            }
            ncs:create {
              ncs:pre-condition {
                ncs:monitor "$SERVICE/endpoints" {
                  ncs:trigger-expr "test-passed = 'true'";
                }
              }
            }
            ncs:delete {
              ncs:pre-condition {
                ncs:monitor "$SERVICE/plan" {
                  ncs:trigger-expr
                    "component[name != 'self'][back-track = 'false']"
                  + "/state[name = 'ncs:ready'][status = 'reached']"
                  + " or not(component[back-track = 'false'])";
                }
              }
            }
    20-nano-services link example behavior tree
      ncs:service-behavior-tree link-servicepoint {
        description
          "Make before break vlan example";
    
        ncs:plan-outline-ref "link:link-plan";
    
        ncs:selector {
          ncs:create-component "'self'" {
            ncs:component-type-ref "ncs:self";
          }
    
      ncs:multiplier {
        ncs:foreach "endpoints" {
          ncs:variable "VALUE" {
            ncs:value-expr "concat(a-device, '-', a-interface,
                                   '-', b-device, '-', b-interface)";
          }
          ncs:create-component "$VALUE" {
            ncs:component-type-ref "link:vlan-link";
          }
        }
      }
    }
  }
    $ cd examples.ncs/getting-started/developing-with-ncs/20-nano-services
    $ make all
    $ ncs-netsim restart
    $ ncs
    $ ncs_cli -C -u admin
    admin@ncs# devices sync-from
    sync-result {
        device ex0
        result true
    }
    sync-result {
        device ex1
        result true
    }
    sync-result {
        device ex2
        result true
    }
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# link t2 unit 17 vlan-id 1
    admin@ncs(config-link-t2)# link t2 endpoints ex1 eth0 ex2 eth0 test-passed true
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# commit
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# top
    admin@ncs(config)# exit
    admin@ncs# link t2 get-modifications
    cli  devices {
              device ex1 {
                  config {
                      r:sys {
                          interfaces {
                              interface eth0 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
                              }
                          }
                      }
                  }
              }
              device ex2 {
                  config {
                      r:sys {
                          interfaces {
                              interface eth0 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
                              }
                          }
                      }
                  }
              }
          }
    admin@ncs# show link t2 plan component * state * status
    NAME               STATE      STATUS
    ---------------------------------------
    self               init       reached
                       ready      reached
    ex1-eth0-ex2-eth0  init       reached
                       dev-setup  reached
                       ready      reached
    admin@ncs# config
    Entering configuration mode terminal
    admin@ncs(config)# no link t2 endpoints ex1 eth0 ex2 eth0
    admin@ncs(config)# link t2 endpoints ex1 eth0 ex2 eth1
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit dry-run
    cli  devices {
             device ex1 {
                 config {
                     r:sys {
                         interfaces {
                             interface eth0 {
                             }
                         }
                     }
                 }
             }
             device ex2 {
                 config {
                     r:sys {
                         interfaces {
        +                    interface eth1 {
        +                        unit 17 {
        +                            vlan-id 1;
        +                        }
        +                    }
                         }
                     }
                 }
             }
         }
         link t2 {
        -    endpoints ex1 eth0 ex2 eth0 {
        -        test-passed true;
        -    }
        +    endpoints ex1 eth0 ex2 eth1 {
        +    }
         }
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# top
    admin@ncs(config)# exit
    admin@ncs# show link t2 plan
                                                                       ...
                                  BACK                                 ...
    NAME               TYPE       TRACK  GOAL  STATE      STATUS       ...
    -------------------------------------------------------------------...
    self               self       false  -     init       reached      ...
                                               ready      reached      ...
    ex1-eth0-ex2-eth1  vlan-link  false  -     init       reached      ...
                                               dev-setup  reached      ...
                                               ready      not-reached  ...
    ex1-eth0-ex2-eth0  vlan-link  true   -     init       reached      ...
                                               dev-setup  reached      ...
                                               ready      reached      ...
    admin@ncs# link t2 get-modifications
    cli  devices {
              device ex1 {
                  config {
                      r:sys {
                          interfaces {
                              interface eth0 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
                              }
                          }
                      }
                  }
              }
              device ex2 {
                  config {
                      r:sys {
                          interfaces {
                              interface eth0 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
                              }
         +                    interface eth1 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
         +                    }
                          }
                      }
                  }
              }
          }
    admin@ncs(config)# link t2 endpoints ex1 eth0 ex2 eth1 test-passed true
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit
    admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# top
    admin@ncs(config)# exit
    admin@ncs# show link t2 plan
                                                                   ...
                                  BACK                             ...
    NAME               TYPE       TRACK  GOAL  STATE      STATUS   ...
    ---------------------------------------------------------------...
    self               self       false  -     init       reached  ...
                                               ready      reached  ...
    ex1-eth0-ex2-eth1  vlan-link  false  -     init       reached  ...
                                               dev-setup  reached  ...
                                               ready      reached  ...
    admin@ncs# link t2 get-modifications
    cli  devices {
              device ex1 {
                  config {
                      r:sys {
                          interfaces {
                              interface eth0 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
                              }
                          }
                      }
                  }
              }
              device ex2 {
                  config {
                      r:sys {
                          interfaces {
         +                    interface eth1 {
         +                        unit 17 {
         +                            vlan-id 1;
         +                        }
         +                    }
                          }
                      }
                  }
              }
          }

    YANG

    Learn the working aspects of YANG data modeling language in NSO.

    YANG is a data modeling language used to model configuration and state data manipulated by a NETCONF agent. The YANG modeling language is defined in RFC 6020 (version 1) and RFC 7950 (version 1.1). YANG as a language is not described in its entirety here; instead, we refer to the IETF texts of RFC 6020 and RFC 7950.

    YANG in NSO

    In NSO, YANG is not only used for NETCONF data. Rather, YANG is used to describe the data model as a whole and is used by all northbound interfaces.

    NSO uses YANG for service models as well as for specifying device interfaces. Where do these models come from? For services, the YANG service model is specified as part of the service design activity; NSO ships several example service models that can be used as a starting point. For devices, how the YANG model is derived depends on the underlying device interface. For native NETCONF/YANG devices, the YANG model is given by the device itself. For SNMP devices, the NSO tool-chain generates the corresponding YANG modules (SNMP NED). For CLI devices, the device package contains the YANG data model; it is shipped as text and can be modified to cater for upgrades, and customers can also write their own YANG data models to render the CLI integration (CLI NED). The situation for other interfaces is similar to CLI: a YANG model that corresponds to the device interface data model is written and bundled in the NED package.

    NSO also relies on the revision statement in YANG modules for revision management of different versions of the same type of managed device, but running different software versions.

    A YANG module can be directly transformed into a final schema (.fxs) file that can be loaded into NSO. Currently, all features of the YANG language are supported; anyxml statement data is treated as a string.

    The data models, including the .fxs file, along with any code, are bundled into packages that can be loaded into NSO. This is true for service applications as well as for NEDs and other packages. The corresponding YANG can be found in the src/yang directory of the package.

    YANG Introduction

    This section is a brief introduction to YANG. The exact details of all language constructs are fully described in RFC 6020 and RFC 7950.

    The NSO programmer must know YANG well, since all APIs use paths derived from the YANG data model.

    Modules and Submodules

    A module contains three types of statements: module-header statements, revision statements, and definition statements. The module header statements describe the module and give information about the module itself, the revision statements give information about the history of the module, and the definition statements are the body of the module where the data model is defined.

    A module may be divided into submodules, based on the needs of the module owner. The external view remains that of a single module, regardless of the presence or size of its submodules.

    The include statement allows a module or submodule to reference material in submodules, and the import statement allows references to material defined in other modules.

    Data Modeling Basics

    YANG defines four types of nodes for data modeling. In each of the following subsections, the example shows the YANG syntax as well as a corresponding NETCONF XML representation.

    Leaf Nodes

    A leaf node contains simple data like an integer or a string. It has exactly one value of a particular type and no child nodes.

    With XML value representation for example:
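
    A minimal sketch, in the style of the RFC examples (the host-name leaf is an illustrative name):

```yang
leaf host-name {
    type string;
    description "Hostname for this system";
}
```

    Its XML encoding is a single element:

```xml
<host-name>my.example.com</host-name>
```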

    An interesting variant of leaf nodes is typeless leafs.

    With XML value representation for example:
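
    A typeless leaf is declared with the empty type; only its presence or absence carries meaning. A sketch (the enable-nat leaf is an illustrative name):

```yang
leaf enable-nat {
    type empty;
}
```

    Since it carries no value, it is encoded as an empty element:

```xml
<enable-nat/>
```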

    Leaf-list Nodes

    A leaf-list is a sequence of leaf nodes with exactly one value of a particular type per leaf.

    With XML value representation for example:
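
    A sketch, in the style of the RFC examples:

```yang
leaf-list domain-search {
    type string;
    description "List of domain names to search";
}
```

    Each value becomes its own element:

```xml
<domain-search>high.example.com</domain-search>
<domain-search>low.example.com</domain-search>
```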

    Container Nodes

    A container node is used to group related nodes in a subtree. It has only child nodes and no value and may contain any number of child nodes of any type (including leafs, lists, containers, and leaf-lists).

    With XML value representation for example:
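
    A sketch, in the style of the RFC examples:

```yang
container system {
    container login {
        leaf message {
            type string;
            description "Message given at start of login session";
        }
    }
}
```

    The container itself has no value; it only groups its children:

```xml
<system>
  <login>
    <message>Good morning</message>
  </login>
</system>
```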

    List Nodes

    A list defines a sequence of list entries. Each entry is like a structure or a record instance and is uniquely identified by the values of its key leafs. A list can define multiple keys and may contain any number of child nodes of any type (including leafs, lists, containers, etc.).

    With XML value representation for example:
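
    A sketch, in the style of the RFC examples:

```yang
list user {
    key "name";
    leaf name {
        type string;
    }
    leaf full-name {
        type string;
    }
    leaf class {
        type string;
    }
}
```

    Each list entry is encoded as a separate element, with the key leaf identifying the entry:

```xml
<user>
  <name>glocks</name>
  <full-name>Goldie Locks</full-name>
  <class>intruder</class>
</user>
<user>
  <name>snowey</name>
  <full-name>Snow White</full-name>
  <class>free-loader</class>
</user>
```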

    Example Module

    These statements are combined to define the module:
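
    A sketch of a complete module combining the node types above (module name, namespace, and prefix are illustrative):

```yang
module example-system {
    namespace "urn:example:system";
    prefix "sys";

    container system {
        leaf host-name {
            type string;
        }
        leaf-list domain-search {
            type string;
        }
        container login {
            leaf message {
                type string;
            }
        }
        list user {
            key "name";
            leaf name {
                type string;
            }
            leaf full-name {
                type string;
            }
            leaf class {
                type string;
            }
        }
    }
}
```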

    State Data

    YANG can model state data, as well as configuration data, based on the config statement. When a node is tagged with config false, its sub-hierarchy is flagged as state data, to be reported using NETCONF's get operation but not the get-config operation. Parent containers, lists, and key leafs are also reported, giving the context for the state data.

    In this example, two leafs are defined for each interface: a configured speed and an observed speed. The observed speed is not configuration data, so it can be returned by NETCONF get operations but not by get-config operations, and it cannot be manipulated using edit-config.
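
    The interface described above can be sketched like this; note the config false statement on the observed speed:

```yang
list interface {
    key "name";

    leaf name {
        type string;
    }
    leaf speed {
        type enumeration {
            enum 10m;
            enum 100m;
            enum auto;
        }
    }
    leaf observed-speed {
        type uint32;
        config false;   // state data, not configuration
    }
}
```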

    Built-in Types

    YANG has a set of built-in types, similar to those of many programming languages, but with some differences due to special requirements of the management domain. The built-in types, as defined by RFC 7950, are:

    • binary - Any binary data

    • bits - A set of bits or flags

    • boolean - "true" or "false"

    • decimal64 - 64-bit signed decimal number

    • empty - A leaf that does not have any value

    • enumeration - One of an enumerated set of strings

    • identityref - A reference to an abstract identity

    • instance-identifier - A reference to a data tree node

    • int8 / int16 / int32 / int64 - 8-, 16-, 32-, and 64-bit signed integers

    • leafref - A reference to a leaf instance

    • string - A character string

    • uint8 / uint16 / uint32 / uint64 - 8-, 16-, 32-, and 64-bit unsigned integers

    • union - A choice of member types

    Derived Types (typedef)

    YANG can define derived types from base types using the typedef statement. A base type can be either a built-in type or a derived type, allowing a hierarchy of derived types. A derived type can be used as the argument for the type statement.

    With XML value representation for example:
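
    A sketch, in the style of the RFC examples:

```yang
typedef percent {
    type uint8 {
        range "0 .. 100";
    }
    description "Percentage";
}

leaf completed {
    type percent;
}
```

    The derived type does not change the XML encoding of the value:

```xml
<completed>20</completed>
```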

    User-defined typedefs are useful when we want to name and reuse a type several times. It is also possible to restrict leafs inline in the data model as in:
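
    An inline restriction equivalent to the typedef above could look like:

```yang
leaf completed {
    type uint8 {
        range "0 .. 100";
    }
}
```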

    Reusable Node Groups (grouping)

    Groups of nodes can be assembled into the equivalent of complex types using the grouping statement. grouping defines a set of nodes that are instantiated with the uses statement:

    With XML value representation for example:
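
    A sketch, in the style of the RFC examples (this assumes ietf-inet-types is imported with the prefix inet):

```yang
grouping target {
    leaf address {
        type inet:ip-address;
        description "Target IP address";
    }
    leaf port {
        type inet:port-number;
        description "Target port number";
    }
}

container peer {
    container destination {
        uses target;
    }
}
```

    The grouping itself does not appear in the data; the instantiated nodes do:

```xml
<peer>
  <destination>
    <address>192.0.2.1</address>
    <port>830</port>
  </destination>
</peer>
```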

    The grouping can be refined as it is used, allowing certain statements to be overridden. In this example, the description is refined:
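
    A sketch, reusing the target grouping above:

```yang
container connection {
    container source {
        uses target {
            refine "address" {
                description "Source IP address";
            }
            refine "port" {
                description "Source port number";
            }
        }
    }
    container destination {
        uses target {
            refine "address" {
                description "Destination IP address";
            }
            refine "port" {
                description "Destination port number";
            }
        }
    }
}
```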

    Choices (choice)

    YANG allows the data model to segregate incompatible nodes into distinct choices using the choice and case statements. The choice statement contains a set of case statements that define sets of schema nodes that cannot appear together. Each case may contain multiple nodes, but each node may appear in only one case under a choice.

    When the nodes from one case are created, all nodes from all other cases are implicitly deleted. The device handles the enforcement of the constraint, preventing incompatibilities from existing in the configuration.

    The choice and case nodes appear only in the schema tree, not in the data tree or XML encoding. The additional levels of hierarchy are not needed beyond the conceptual schema.

    With XML value representation for example:
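
    A sketch, in the style of the RFC examples:

```yang
container food {
    choice snack {
        case sports-arena {
            leaf pretzel {
                type empty;
            }
            leaf beer {
                type empty;
            }
        }
        case late-night {
            leaf chocolate {
                type enumeration {
                    enum dark;
                    enum milk;
                }
            }
        }
    }
}
```

    Only the nodes from the chosen case appear in the data; the choice and case nodes themselves are not encoded:

```xml
<food>
  <pretzel/>
  <beer/>
</food>
```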

    Extending Data Models (augment)

    YANG allows a module to insert additional nodes into data models, including either the current module (and its submodules) or an external module. This is useful, for example, for vendors who want to add vendor-specific parameters to standard data models in an interoperable way.

    The augment statement defines the location in the data model hierarchy where new nodes are inserted, and the when statement defines the conditions when the new nodes are valid.

    This example defines a uid node that is only valid when the user's class is not wheel.
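
    A sketch of such an augmentation (the target path assumes the user list from the earlier example module):

```yang
augment "/system/login/user" {
    when "class != 'wheel'";
    leaf uid {
        type uint16 {
            range "1000 .. 30000";
        }
    }
}
```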

    If a module augments another model, the XML representation of the data will reflect the prefix of the augmenting model. For example, if the above augmentation were in a module with the prefix other, the XML would look like:
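
    Continuing the sketch above, with the augmenting module's prefix on the added node:

```xml
<user>
  <name>alicew</name>
  <full-name>Alice N. Wonderland</full-name>
  <class>drop-out</class>
  <other:uid>1024</other:uid>
</user>
```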

    RPC Definitions

    YANG allows the definition of NETCONF RPCs. The method names, input parameters, and output parameters are modeled using YANG data definition statements.
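
    A sketch, in the style of the RFC examples:

```yang
rpc activate-software-image {
    input {
        leaf image-name {
            type string;
        }
    }
    output {
        leaf status {
            type string;
        }
    }
}
```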

    Notification Definitions

    YANG allows the definition of notifications suitable for NETCONF. YANG data definition statements are used to model the content of the notification.
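
    A sketch, in the style of the RFC examples (simplified to use a plain string for the admin status; the leafref path assumes an interface list at the top level):

```yang
notification link-failure {
    description "A link failure has been detected";
    leaf if-name {
        type leafref {
            path "/interface/name";
        }
    }
    leaf admin-status {
        type string;
    }
}
```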

    Working With YANG Modules

    Assume we have a small trivial YANG file test.yang:

    There is an Emacs mode suitable for YANG file editing in the system distribution. It is called yang-mode.el.

    We can use the ncsc compiler to compile the YANG module.

    The above command creates an output file test.fxs, a compiled schema that can be loaded into the system. The ncsc compiler with all its flags is fully described in Manual Pages.

    There exist several standards-based auxiliary YANG modules defining various useful data types. These modules, as well as their accompanying .fxs files can be found in the ${NCS_DIR}/src/confd/yang directory in the distribution.

    The modules are:

    • ietf-yang-types: Defining some basic data types such as counters, dates, and times.

    • ietf-inet-types: Defining several useful types related to IP addresses.

    Whenever we wish to use any of these predefined modules, we must not only import the module into our YANG module but also load the corresponding .fxs file for the imported module into the system.

    So, if we extend our test module so that it looks like:

    Normally when importing other YANG modules we must indicate through the --yangpath flag to ncsc where to search for the imported module. In the special case of the standard modules, this is not required.

    We compile the above as:

    We see that the generated .fxs file has a dependency on the standard urn:ietf:params:xml:ns:yang:inet-types namespace. Thus if we try to start NSO we must also ensure that the fxs file for that namespace is loaded.

    Failing to do so gives:

    The remedy is to modify ncs.conf so that it contains the proper load path, or alternatively to provide the directory containing the fxs file on the command line. The directory ${NCS_DIR}/etc/ncs contains pre-compiled versions of the standard YANG modules.

    ncs.conf is the configuration file for NSO itself. It is described in Manual Pages.

    Integrity Constraints

    The YANG language has built-in declarative constructs for common integrity constraints. These constructs are conveniently specified as must statements.

    A must statement is an XPath expression that must evaluate to true or a non-empty node-set.

    An example is:

    XPath is a very powerful tool here. It is often possible to express even the most intricate real-world validation constraints using XPath expressions. Note that for performance reasons, it is recommended to add a tailf:dependency statement to the must statement. The compiler gives a warning if a must statement lacks a tailf:dependency statement and the dependency cannot be derived from the expression. The options --fail-on-warnings or -E TAILF_MUST_NEED_DEPENDENCY can be given to force this warning to be treated as an error. See tailf:dependency in Manual Pages for details.

    Another useful built-in constraint checker is the unique statement.

    With the YANG code:
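    A sketch of such a list, assuming ietf-inet-types is imported with the prefix inet (list and leaf names are illustrative):

    ```yang
    container servers {
        list server {
            key "name";
            // ip and port must form a unique combination across all entries
            unique "ip port";
            leaf name {
                type string;
            }
            leaf ip {
                type inet:ip-address;
            }
            leaf port {
                type inet:port-number;
            }
        }
    }
    ```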

    We specify that the combination of IP and port must be unique. Thus, the following configuration is not valid:
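    For instance, a configuration along these lines would be rejected, since both entries share the same IP and port pair (values illustrative):

    ```xml
    <servers>
      <server>
        <name>smtp</name>
        <ip>192.0.2.1</ip>
        <port>2000</port>
      </server>
      <server>
        <name>http</name>
        <ip>192.0.2.1</ip>
        <port>2000</port>
      </server>
    </servers>
    ```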

    The usage of leafrefs (see the YANG specification) ensures that we do not end up with configurations containing dangling pointers. Leafrefs are also especially useful since the CLI and Web UI can render a better interface for them.

    If other constraints are necessary, validation callback functions can be programmed in Java, Python, or Erlang. See tailf:validate in Manual Pages for details.

    The when statement

    The when statement is used to make its parent statement conditional. If the XPath expression specified as the argument to this statement evaluates to false, the parent node cannot be configured. Furthermore, if the parent node exists and some other node is changed so that the XPath expression becomes false, the parent node is automatically deleted. For example:
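    A minimal sketch of such a dependency, matching the description below (leaf names a and b as in the text):

    ```yang
    container top {
        leaf a {
            type boolean;
        }
        leaf b {
            type string;
            // b can only exist while a is true
            when "../a = 'true'";
        }
    }
    ```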

    This data model snippet says that b can only exist if a is true. If a is true, and b has a value, and a is set to false, b will automatically be deleted.

    Since the XPath expression can in theory refer to any node in the data tree, it would have to be re-evaluated whenever any node in the tree is modified. This would have a disastrous performance impact, so to avoid it, NSO keeps track of dependencies for each when expression. In some simple cases, ncsc can figure out these dependencies by itself. In the example above, NSO will detect that b depends on a, and evaluate b's XPath expression only if a is modified. If ncsc cannot detect the dependencies by itself, it requires a tailf:dependency statement in the when statement. See tailf:dependency in Manual Pages for details.

    Using the Tail-f Extensions with YANG

    Tail-f has an extensive set of extensions to the YANG language that integrate YANG models in NSO. For example, when we have config false data, we may wish to invoke user C code to deliver the statistics data at runtime. To do this, we annotate the YANG model with a Tail-f extension called tailf:callpoint.

    Alternatively, we may wish to invoke user code to validate the configuration; this is controlled through another extension called tailf:validate.

    All these extensions are handled as normal YANG extensions (YANG is designed to be extended). The Tail-f proprietary extensions are defined in the file ${NCS_DIR}/src/ncs/yang/tailf-common.yang.

    Continuing with our previous example, by adding a callpoint and a validation point, we get:
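    A sketch of the test module annotated this way; the callpoint and validation point names (stats-cp, vp1) are illustrative:

    ```yang
    module test {
        namespace "http://tail-f.com/test";
        prefix "t";

        import tailf-common {
            prefix tailf;
        }

        container top {
            // validation point: user code validates this subtree
            tailf:validate "vp1" {
                tailf:dependency ".";
            }
            leaf a {
                type int32;
            }
            leaf b {
                type string;
            }
            container stats {
                config false;
                // callpoint: user code delivers this operational data
                tailf:callpoint "stats-cp";
                leaf counter {
                    type uint64;
                }
            }
        }
    }
    ```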

    The above module contains a callpoint and a validation point. The exact syntax for all Tail-f extensions is defined in the tailf-common.yang file.

    Note the import statement where we import tailf-common.

    When we use YANG specifications to generate Java classes for ConfM, these extensions are ignored; they only make sense on the device side. It is worth mentioning them, though, since EMS developers will certainly get the YANG specifications from the device developers, and thus the YANG specifications may contain extensions.

    The tailf_yang_extensions(5) man page in Manual Pages describes all the Tail-f YANG extensions.

    Using a YANG Annotation File

    Sometimes it is convenient to specify all Tail-f extension statements in-line in the original YANG module. But in some cases, e.g. when implementing a standard YANG module, it is better to keep the Tail-f extension statements in a separate annotation file. When the YANG module is compiled to an fxs file, the compiler is given the original YANG module and any number of annotation files.

    A YANG annotation file is a normal YANG module that imports the module to annotate. Then the tailf:annotate statement is used to annotate nodes in the original module. For example, the module test above can be annotated like this:
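    An annotation module for test could be sketched as follows; the module name and the callpoint and validation point names are illustrative:

    ```yang
    module test-ann {
        namespace "http://tail-f.com/test-ann";
        prefix "ta";

        import tailf-common {
            prefix tailf;
        }
        import test {
            prefix t;
        }

        // attach a validation point to the top container
        tailf:annotate "/t:top" {
            tailf:validate "vp1" {
                tailf:dependency ".";
            }
        }

        // attach a callpoint to the leaf a
        tailf:annotate "/t:top/t:a" {
            tailf:callpoint "acp";
        }
    }
    ```

    The annotation file is then passed alongside the original module at compile time, e.g. ncsc -c -a test-ann.yang test.yang.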

    To compile the module with annotations, use the -a parameter to ncsc:

    Custom Help Texts and Error Messages

    Certain parts of a YANG model are used by northbound agents, e.g. CLI and Web UI, to provide the end-user with custom help texts and error messages.

    Custom Help Texts

    A YANG statement can be annotated with a description statement which is used to describe the definition for a reader of the module. This text is often too long and too detailed to be useful as help text in a CLI. For this reason, NSO by default does not use the text in the description for this purpose. Instead, a tail-f-specific statement, tailf:info is used. It is recommended that the standard description statement contains a detailed description suitable for a module reader (e.g. NETCONF client or server implementor), and tailf:info contains a CLI help text.

    As an alternative, NSO can be instructed to use the text in the description statement also for the CLI help text. See the --use-description option in Manual Pages.

    For example, the CLI uses the help text to prompt for a value of this particular type. The CLI shows this information during tab/command completion or if the end-user explicitly asks for help using the ?-character. The behavior depends on the mode the CLI is running in.

    The Web UI uses this information likewise to help the end-user.

    The mtu definition below has been annotated to enrich the end-user experience:
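    A sketch of such an annotated mtu leaf, assuming tailf-common is imported with the prefix tailf (range and wording illustrative):

    ```yang
    leaf mtu {
        type uint32 {
            range "1 .. 1500";
        }
        // detailed text for a module reader
        description
          "The maximum transmission unit (MTU) of the interface,
           i.e. the largest amount of data that can be sent in one
           link-layer frame.";
        // short text shown as CLI/Web UI help
        tailf:info "Interface MTU";
    }
    ```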

    Custom Help Text in a typedef

    Alternatively, we could have provided the help text in a typedef statement as in:
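    Equivalently, as a typedef (type name illustrative):

    ```yang
    typedef mtuType {
        type uint32 {
            range "1 .. 1500";
        }
        // help text attached to the type; inherited by leafs of this type
        tailf:info "Interface MTU";
    }

    leaf mtu {
        type mtuType;
    }
    ```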

    If there is an explicit help text attached to a leaf, it overrides the help text attached to the type.

    Custom Error Messages

    A statement can have an optional error-message substatement. The northbound agents, for example the CLI, use this to inform the end-user that a provided value is not of the correct type. If no custom error-message statement is available, NSO generates a built-in error message, e.g. "1505 is too large".

    All northbound agents use the extra information provided by an error-message statement.

    The typedef statement below has been annotated to enrich the end-user experience when it comes to error information:
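    A sketch, attaching an error-message to the range restriction (type name and range illustrative):

    ```yang
    typedef mtuType {
        type uint32 {
            range "1 .. 1500" {
                // shown to the end-user instead of the built-in message
                error-message
                  "The MTU must be a value between 1 and 1500";
            }
        }
    }
    ```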

    Example: Modeling a List of Interfaces

    Say, for example, that we want to model the interface list on a Linux-based device. Running the ip link list command reveals the type of information we have to model:

    And, this is how we want to represent the above in XML:

    An interface or a link has data associated with it. It also has a name, an obvious choice to use as the key - the data item that uniquely identifies an individual interface.

    The structure of a YANG model is always a header, followed by type definitions, followed by the actual structure of the data. A YANG model for the interface list starts with a header:
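    A sketch of such a header; the namespace follows the http://example.com/ns/link namespace mentioned later in this section, and ietf-yang-types is imported for the mac-address type used further down:

    ```yang
    module links {
        namespace "http://example.com/ns/link";
        prefix "link";

        import ietf-yang-types {
            prefix yang;
        }

        revision 2007-06-09 {
            description "Initial revision.";
        }
    }
    ```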

    A number of datatype definitions may follow the YANG module header. Looking at the output from /sbin/ip we see that each interface has a number of boolean flags associated with it, e.g. UP, and NOARP.

    One way to model a sequence of boolean flags is as a sequence of statements:
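    For instance, as individual boolean leafs (flag names taken from typical ip output; illustrative):

    ```yang
    leaf UP {
        type boolean;
    }
    leaf NOARP {
        type boolean;
    }
    leaf BROADCAST {
        type boolean;
    }
    leaf MULTICAST {
        type boolean;
    }
    ```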

    A better way is to model this as:
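    Using leafs of type empty, where the mere presence of the leaf represents the flag being set (flag names illustrative):

    ```yang
    leaf UP {
        type empty;
    }
    leaf NOARP {
        type empty;
    }
    ```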

    We could choose to group these leafs together into a grouping. This makes sense if we wish to use the same set of boolean flags in more than one place. We could thus create a named grouping such as:
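    Such a grouping could be sketched as (grouping and flag names illustrative):

    ```yang
    grouping LinkFlags {
        leaf UP {
            type empty;
        }
        leaf NOARP {
            type empty;
        }
        leaf BROADCAST {
            type empty;
        }
        leaf MULTICAST {
            type empty;
        }
    }
    ```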

    The output from /sbin/ip also contains Ethernet MAC addresses. These are best represented by the mac-address type defined in the ietf-yang-types.yang file. The mac-address type is defined as:
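    The definition in ietf-yang-types is essentially the following (description abbreviated):

    ```yang
    typedef mac-address {
        type string {
            pattern "[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}";
        }
        description
          "The mac-address type represents an IEEE 802 MAC address.";
    }
    ```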

    This defines a restriction on the string type, restricting values of the defined type mac-address to strings adhering to the regular expression [0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}. Thus, strings such as a6:17:b9:86:2c:04 will be accepted.

    Queue disciplines are associated with each device. They are typically used for bandwidth management. Another string restriction we could do is to define an enumeration of the different queue disciplines that can be attached to an interface.

    We could write this as:
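    For example (type name illustrative; only a few of the Linux queue disciplines are listed):

    ```yang
    typedef QueueDisciplineType {
        type enumeration {
            enum pfifo_fast;
            enum sfq;
            enum tbf;
            enum htb;
        }
    }
    ```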

    There are a large number of queue disciplines, and we only list a few here. The example serves to show that by using enumerations we can restrict the values of the data in a way that ensures that the data entered is always valid from a syntactical point of view.

    Now that we have a number of usable datatypes, we continue with the actual data structure describing a list of interface entries:
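    A sketch of the structure, assuming ietf-yang-types is imported with the prefix yang and reusing the flag grouping and queue discipline enumeration discussed above (names illustrative):

    ```yang
    container links {
        list link {
            key "name";
            leaf name {
                type string;
            }
            uses LinkFlags;
            leaf addr {
                type yang:mac-address;
            }
            leaf mtu {
                type uint32;
            }
            leaf queueDiscipline {
                type QueueDisciplineType;
            }
        }
    }
    ```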

    The key attribute on the leaf named name is important. It indicates that the leaf is the instance key for the list entry named link. All link entries are guaranteed to have unique values for their name leafs due to the key declaration.

    If one leaf alone does not uniquely identify an object, we can define multiple keys. At least one leaf must be an instance key - we cannot have lists without a key.

    List entries are ordered and indexed according to the value of the key(s).

    Modeling Relationships

    A very common situation when modeling a device configuration is that we wish to model a relationship between two objects. This is achieved by means of the leafref statement. A leafref points to a child of a list entry, identified either by a key or by a unique statement.

    The leafref statement can be used to express three flavors of relationships: extensions, specializations, and associations. Below we exemplify this by extending the link example from above.

    Firstly, assume we want to store the queue disciplines from the previous section in a separate container, not embedded inside the links container.

    We then specify a separate container, containing all the queue disciplines which each refers to a specific link entry. This is written as:
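    This could be sketched as follows, where QueueDisciplineType stands for the enumeration discussed earlier (names illustrative):

    ```yang
    container queueDisciplines {
        list queueDiscipline {
            key "linkName";
            leaf linkName {
                // key and, at the same time, a reference to a link entry
                type leafref {
                    path "/links/link/name";
                }
            }
            leaf type {
                type QueueDisciplineType;
            }
        }
    }
    ```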

    The linkName leaf is both an instance key of the queueDiscipline list and a reference to a specific link entry. This way we can extend the amount of configuration data associated with a specific link entry.

    Secondly, assume we want to express a restriction or specialization on Ethernet link entries, e.g. it should be possible to restrict interface characteristics such as 10Mbps and half duplex.

    We then specify a separate container, containing all the specializations which each refers to a specific link:
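    A sketch of such a specialization container (leaf names illustrative):

    ```yang
    container linkLimitations {
        list linkLimitation {
            key "linkName";
            leaf linkName {
                type leafref {
                    path "/links/link/name";
                }
            }
            container limitations {
                leaf bps {
                    type uint32;
                }
                leaf duplex {
                    type enumeration {
                        enum full;
                        enum half;
                    }
                }
            }
        }
    }
    ```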

    The linkName leaf is both an instance key to the linkLimitation list, and at the same time refers to a specific link leaf. This way we can restrict or specialize a specific link.

    Thirdly, assume we want to express that one of the link entries should be the default link. In that case, we enforce an association between a non-dynamic defaultLink and a certain link entry:
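    This association can be sketched as a single leafref leaf:

    ```yang
    leaf defaultLink {
        // must refer to the name of an existing link entry
        type leafref {
            path "/links/link/name";
        }
    }
    ```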

    Ensuring Uniqueness

    Key leafs are always unique. Sometimes we may wish to impose further restrictions on objects. For example, we can ensure that all link entries have a unique MAC address. This is achieved through the use of the unique statement:
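    A sketch with two unique statements, matching the description below (list and leaf names illustrative):

    ```yang
    container servers {
        list server {
            key "name";
            // each server must have a unique index number
            unique "index";
            // and a unique combination of ip and port
            unique "ip port";
            leaf name {
                type string;
            }
            leaf index {
                type uint32;
            }
            leaf ip {
                type inet:ip-address;
            }
            leaf port {
                type inet:port-number;
            }
        }
    }
    ```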

    In this example, we have two unique statements. These two groups ensure that each server has a unique index number as well as a unique IP and port pair.

    Default Values

    A leaf can have a static or dynamic default value. Static default values are defined with the default statement in the data model. For example:
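    For instance, a static default for an mtu leaf (value illustrative):

    ```yang
    leaf mtu {
        type uint32;
        default 1500;
    }
    ```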

    and:
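    Or for a boolean leaf:

    ```yang
    leaf enabled {
        type boolean;
        default true;
    }
    ```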

    A dynamic default value means that the default value for the leaf is the value of some other leaf in the data model. This can be used to make the default values configurable by the user. Dynamic default values are defined using the tailf:default-ref statement. For example, suppose we want to make the MTU default value configurable:
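    A sketch, assuming tailf-common is imported with the prefix tailf; the per-link mtu defaults to the value of the global mtu leaf:

    ```yang
    container links {
        leaf mtu {
            type uint32;
        }
        list link {
            key "name";
            leaf name {
                type string;
            }
            leaf mtu {
                type uint32;
                // if unset, defaults to the value of /links/mtu
                tailf:default-ref "../../mtu";
            }
        }
    }
    ```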

    Now suppose we have the following data:
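    For example, data along these lines (matching the description below):

    ```xml
    <links xmlns="http://example.com/ns/link">
      <mtu>1000</mtu>
      <link>
        <name>eth0</name>
        <mtu>1500</mtu>
      </link>
      <link>
        <name>eth1</name>
      </link>
    </links>
    ```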

    In the example above, the link eth0 has the mtu 1500, and the link eth1 has the mtu 1000. Since eth1 does not have an mtu value set, it defaults to the value of ../../mtu, which is 1000 in this case.

    Whenever a leaf has a default value, it implies that the leaf can be left out from the XML document, i.e. mandatory = false.

    With the default value mechanism an old configuration can be used even after having added new settings.

    Another example where default values are used is when a new instance is created. If all leafs within the instance have default values, these need not be specified in, for example, a NETCONF create operation.

    The Final Interface YANG Model

    Here is the final interface YANG model with all constructs described above:

    If the above YANG file is saved on disk as links.yang, we can compile and link it using the ncsc compiler:

    We now have a ready-to-use schema file named links.fxs on disk. To run this example, we need to copy the compiled links.fxs to a directory where NSO can find it.

    More on leafrefs

    A leafref is used to model relationships in the data model, as described in Modeling Relationships. In the simplest case, the leafref is a single leaf that references a single key in a list:
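    A minimal sketch (names illustrative):

    ```yang
    list host {
        key "name";
        leaf name {
            type string;
        }
    }

    leaf host-ref {
        type leafref {
            path "/host/name";
        }
    }
    ```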

    But sometimes a list has more than one key, or we need to refer to a list entry within another list. Consider this example:
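    A sketch of such a nested list, assuming ietf-inet-types is imported with the prefix inet:

    ```yang
    list host {
        key "name";
        leaf name {
            type string;
        }
        // each host has a list of servers, keyed by ip and port together
        list server {
            key "ip port";
            leaf ip {
                type inet:ip-address;
            }
            leaf port {
                type inet:port-number;
            }
        }
    }
    ```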

    If we want to refer to a specific server on a host, we must provide three values; the host name, the server IP, and the server port. Using leafrefs, we can accomplish this by using three connected leafs:
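    The three connected leafs can be sketched as:

    ```yang
    leaf server-host {
        type leafref {
            path "/host/name";
        }
    }
    leaf server-ip {
        type leafref {
            path "/host[name = current()/../server-host]/server/ip";
        }
    }
    leaf server-port {
        type leafref {
            path "/host[name = current()/../server-host]"
               + "/server[ip = current()/../server-ip]/port";
        }
    }
    ```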

    The path specification for server-ip means the IP address of the server under the host with the same name as specified in server-host.

    The path specification for server-port means the port number of the server with the same IP as specified in server-ip, under the host with the same name as specified in server-host.

    This syntax quickly gets awkward and error-prone. NSO supports a shorthand syntax through the XPath function deref() (see XPATH FUNCTIONS in Manual Pages). Technically, this function follows a leafref value and returns all nodes that the leafref refers to (typically just one). The example above can be written like this:
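    With deref(), the same three leafs can be sketched as:

    ```yang
    leaf server-host {
        type leafref {
            path "/host/name";
        }
    }
    leaf server-ip {
        type leafref {
            // follow server-host to its host entry, then pick a server ip
            path "deref(../server-host)/../server/ip";
        }
    }
    leaf server-port {
        type leafref {
            // follow server-ip to its server entry, then pick its port
            path "deref(../server-ip)/../port";
        }
    }
    ```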

    Note that using the deref function is syntactic sugar for the basic syntax. The translation between the two formats is trivial. Also note that deref() is an extension to YANG, and third-party tools might not understand this syntax. To make sure that only plain YANG constructs are used in a module, the parameter --strict-yang can be given to ncsc -c.

    Using Multiple Namespaces

    There are several reasons for supporting multiple configuration namespaces. Multiple namespaces can be used to group common datatypes and hierarchies to be used by other YANG models. Separate namespaces can be used to describe the configuration of unrelated sub-systems, i.e. to achieve strict configuration data model boundaries between these sub-systems.

    As an example, datatypes.yang is a YANG module that defines a reusable data type.
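    A sketch of such a module, using the countersType and the http://example.com/ns/dt namespace mentioned below:

    ```yang
    module datatypes {
        namespace "http://example.com/ns/dt";
        prefix "dt";

        typedef countersType {
            type uint32;
            description "A counter type reusable by other modules";
        }
    }
    ```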

    We compile and link datatypes.yang into a final schema file representing the http://example.com/ns/dt namespace:

    To reuse our user-defined countersType, we must import the datatypes module.
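    A sketch of a module importing it (container and leaf names illustrative):

    ```yang
    module test {
        namespace "http://tail-f.com/test";
        prefix "t";

        import datatypes {
            prefix dt;
        }

        container stats {
            leaf counter {
                type dt:countersType;
            }
        }
    }
    ```

    It can then be compiled with the --yangpath flag pointing at the directory containing datatypes.yang, e.g. ncsc -c test.yang --yangpath .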

    When compiling this new module that refers to another module, we must indicate to ncsc where to search for the imported module:

    ncsc also searches for referred modules in the colon (:) separated path defined by the environment variable YANG_MODPATH; . (dot) is implicitly included.

    Module Names, Namespaces, and Revisions

    We have three different entities that define our configuration data.

    • The module name. A system typically consists of several modules. In the future, we also expect to see standard modules in a manner similar to how we have standard SNMP modules.

      It is highly recommended to have the vendor name embedded in the module name, similar to how vendors have their names in proprietary MIBs today.

    • The XML namespace. A module defines a namespace. This is an important part of the module header. For example, we have:
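    A sketch of the relevant header statements (module body elided):

    ```yang
    module acme-system {
        namespace "http://acme.example.com/system";
        prefix "acme";
        // ...
    }
    ```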

      The namespace string must uniquely define the namespace. It is very important that once we have settled on a namespace we never change it. The namespace string should remain the same between revisions of a product. Do not embed revision information in the namespace string since that breaks manager-side NETCONF scripts.

    Hash Values and the id-value Statement

    Internally and in the programming APIs, NSO uses integer values to represent YANG node names and the namespace URI. This conserves space and allows for more efficient comparisons (including switch statements) in the user application code. By default, ncsc automatically computes a hash value for the namespace URI and for each string that is used as a node name.

    Conflicts can occur in the mapping between strings and integer values - i.e. the initial assignment of integers to strings is unable to provide a unique, bi-directional mapping. Such conflicts are extremely rare (but possible) when the default hashing mechanism is used.

    The conflicts are detected either by ncsc or by the NSO daemon when it loads the .fxs files.

    If any conflicts are reported, they will pertain to XML tags or the namespace URI.

    There are two different cases:

    • Two different strings mapped to the same integer. This is the classical hash conflict - extremely rare due to the high quality of the hash function used. The resolution is to manually assign a unique value to one of the conflicting strings. The value should be greater than 2^31+2 but less than 2^32-1. This way it will be out of the range of the automatic hash values, which are between 0 and 2^31-1. The best way to choose a value is by using a random number generator, as in 2147483649 + rand:uniform(2147483645). The tailf:id-value should be placed as a substatement to the statement where the conflict occurs, or in the module statement in case of namespace URI conflict.

    • One string mapped to two different integers. This is even more rare than the previous case; it can only happen if a hash conflict was detected and avoided through the use of tailf:id-value on one of the strings, and that string also occurs somewhere else. The resolution is to add the same tailf:id-value to the second occurrence of the string.

    NSO Caveats

    The union Type and Value Conversion

    When converting a string to a union value, the order of the member types is important when the types overlap. The first matching type will be used, so we recommend having the narrower (or more specific) types first.

    Consider the example below:
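    A sketch of such a union, with string (the widest type) placed first (leaf name illustrative):

    ```yang
    leaf threshold {
        type union {
            // string matches everything, so the members below are unreachable
            type string;
            type int32;
            type enumeration {
                enum unbounded;
            }
        }
    }
    ```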

    Converting the string 42 to a typed value using the YANG model above, will always result in a string value even though it is the string representation of an int32. Trying to convert the string unbounded will also result in a string value instead of the enumeration because the enumeration is placed after the string.

    Instead, consider the example below where the string (being a wider type) is placed last:
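    The same union with string placed last:

    ```yang
    leaf threshold {
        type union {
            type int32;
            type enumeration {
                enum unbounded;
            }
            // only values that match neither member above end up as strings
            type string;
        }
    }
    ```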

    Converting the string 42 to the corresponding union value will result in an int32. Trying to convert the string unbounded will result in the enumeration value, as expected. The relative order of int32 and the enumeration does not matter, as they do not overlap.

    Using the C and Python APIs to convert a string to a given value is further limited by the lack of restriction matching on the types. Consider the following example:
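    A sketch of a union whose string member carries a pattern restriction (leaf name illustrative):

    ```yang
    leaf value {
        type union {
            type string {
                // must begin with a lowercase letter
                pattern "[a-z].*";
            }
            type int32;
        }
    }
    ```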

    Converting the string 42 will result in a string value, even though the pattern requires the string to begin with a character in the "a" to "z" range. This value will be considered invalid by NSO if used in any calls handled by NSO.

    To avoid issues when working with unions, place wider types at the end; for example, put string last and int8 before int16.

    User-defined Types

    When using user-defined types together with NSO, the compiled schema does not contain the original type as specified in the YANG file. This imposes some limitations on the running system.

    High-level APIs are unable to infer the correct type of a value as this information is left out when the schema is compiled. It is possible to work around this issue by specifying the type explicitly whenever setting values of a user-defined type.

    The YANG built-in types and the representation of their values are summarized in the following table:

    Type                   Representation   Description
    binary                 Text             Any binary data
    bits                   Text/Number      A set of bits or flags
    boolean                Text             true or false
    decimal64              Number           64-bit fixed point real number
    empty                  Empty            A leaf that does not have any value
    enumeration            Text/Number      Enumerated strings with associated numeric values
    identityref            Text             A reference to an abstract identity
    instance-identifier    Text             References a data tree node
    int8                   Number           8-bit signed integer
    int16                  Number           16-bit signed integer
    int32                  Number           32-bit signed integer
    int64                  Number           64-bit signed integer
    leafref                Text/Number      A reference to a leaf instance
    string                 Text             Human readable string
    uint8                  Number           8-bit unsigned integer
    uint16                 Number           16-bit unsigned integer
    uint32                 Number           32-bit unsigned integer
    uint64                 Number           64-bit unsigned integer
    union                  Text/Number      Choice of member types

    The revision statement

    A module header also contains a revision statement. The revision is exposed to a NETCONF manager in the capabilities sent from the agent to the manager in the initial hello message. The fine details of revision management are being worked on in the IETF NETMOD working group and are not finalized at the time of this writing.

    What is clear, though, is that a manager should base its version decisions on the information in the revision string.

    In a capabilities reply from a NETCONF agent to the manager, the revision information for, e.g., the http://example.com/ns/link namespace is encoded as ?revision=2007-06-09 using standard URI notation.

    When we change the data model for a namespace, it is recommended to change the revision statement and never make any changes to the data model that are backward incompatible. This means that all leafs that are added must be either optional or have a default value. That way, it is ensured that old NETCONF client code will continue to function on the new data model. Section 10 of RFC 6020 and Section 11 of RFC 7950 define exactly what changes can be made to a data model without breaking old NETCONF clients.

    leaf host-name {
        type string;
        description "Hostname for this system";
    }
    <host-name>my.example.com</host-name>
    leaf enabled {
        type empty;
        description "Enable the interface";
    }
    <enabled/>
    leaf-list domain-search {
             type string;
             description "List of domain names to search";
         }
    <domain-search>high.example.com</domain-search>
    <domain-search>low.example.com</domain-search>
    <domain-search>everywhere.example.com</domain-search>
    container system {
        container login {
            leaf message {
                type string;
                description
                    "Message given at start of login session";
            }
        }
    }
    <system>
      <login>
        <message>Good morning, Dave</message>
      </login>
    </system>
    list user {
        key "name";
        leaf name {
            type string;
        }
        leaf full-name {
            type string;
        }
        leaf class {
            type string;
        }
    }
    <user>
      <name>glocks</name>
      <full-name>Goldie Locks</full-name>
      <class>intruder</class>
    </user>
    <user>
      <name>snowey</name>
      <full-name>Snow White</full-name>
      <class>free-loader</class>
    </user>
    <user>
      <name>rzull</name>
      <full-name>Repun Zell</full-name>
      <class>tower</class>
    </user>
    // Contents of "acme-system.yang"
    module acme-system {
        namespace "http://acme.example.com/system";
        prefix "acme";
    
        organization "ACME Inc.";
        contact "[email protected]";
        description
            "The module for entities implementing the ACME system.";
    
        revision 2007-06-09 {
            description "Initial revision.";
        }
    
        container system {
            leaf host-name {
                type string;
                description "Hostname for this system";
            }
    
            leaf-list domain-search {
                type string;
                description "List of domain names to search";
            }
    
            container login {
                leaf message {
                    type string;
                    description
                        "Message given at start of login session";
                }
    
                list user {
                    key "name";
                    leaf name {
                        type string;
                    }
                    leaf full-name {
                        type string;
                    }
                    leaf class {
                        type string;
                    }
                }
            }
        }
    }
    list interface {
        key "name";
        config true;
    
        leaf name {
            type string;
        }
        leaf speed {
            type enumeration {
                enum 10m;
                enum 100m;
                enum auto;
            }
        }
        leaf observed-speed {
            type uint32;
            config false;
        }
    }
    typedef percent {
        type uint16 {
            range "0 .. 100";
        }
        description "Percentage";
    }
    
    leaf completed {
        type percent;
    }
    <completed>20</completed>
    leaf completed {
        type uint16 {
            range "0 .. 100";
        }
        description "Percentage";
    }
    grouping target {
        leaf address {
            type inet:ip-address;
            description "Target IP address";
        }
        leaf port {
            type inet:port-number;
            description "Target port number";
        }
    }
    
    container peer {
        container destination {
            uses target;
        }
    }
    <peer>
      <destination>
        <address>192.0.2.1</address>
        <port>830</port>
      </destination>
    </peer>
    container connection {
        container source {
            uses target {
                refine "address" {
                    description "Source IP address";
                }
                refine "port" {
                    description "Source port number";
                }
            }
        }
        container destination {
            uses target {
                refine "address" {
                    description "Destination IP address";
                }
                refine "port" {
                    description "Destination port number";
                }
            }
        }
    }
    container food {
       choice snack {
           mandatory true;
           case sports-arena {
               leaf pretzel {
                   type empty;
               }
               leaf beer {
                   type empty;
               }
           }
           case late-night {
               leaf chocolate {
                   type enumeration {
                       enum dark;
                       enum milk;
                       enum first-available;
                   }
               }
           }
       }
    }
    <food>
      <chocolate>first-available</chocolate>
    </food>
    augment /system/login/user {
        when "class != 'wheel'";
        leaf uid {
            type uint16 {
                range "1000 .. 30000";
            }
        }
    }
    <user>
      <name>alicew</name>
      <full-name>Alice N. Wonderland</full-name>
      <class>drop-out</class>
      <other:uid>1024</other:uid>
    </user>
    rpc activate-software-image {
        input {
            leaf image-name {
                type string;
            }
        }
        output {
            leaf status {
                type string;
            }
        }
    }
    <rpc message-id="101"
         xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <activate-software-image xmlns="http://acme.example.com/system">
        <image-name>acmefw-2.3</image-name>
      </activate-software-image>
    </rpc>
    
    <rpc-reply message-id="101"
               xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <status xmlns="http://acme.example.com/system">
        The image acmefw-2.3 is being installed.
      </status>
    </rpc-reply>
    notification link-failure {
        description "A link failure has been detected";
        leaf if-name {
            type leafref {
                path "/interfaces/interface/name";
            }
        }
        leaf if-admin-status {
            type ifAdminStatus;
        }
    }
    <notification xmlns="urn:ietf:params:netconf:capability:notification:1.0">
      <eventTime>2007-09-01T10:00:00Z</eventTime>
      <link-failure xmlns="http://acme.example.com/system">
        <if-name>so-1/2/3.0</if-name>
        <if-admin-status>up</if-admin-status>
      </link-failure>
    </notification>
    module test {
      namespace "http://tail-f.com/test";
      prefix "t";
    
      container top {
          leaf a {
              type int32;
          }
          leaf b {
              type string;
          }
      }
    }
    $ ncsc -c test.yang
    module test {
        namespace "http://tail-f.com/test";
        prefix "t";
    
        import ietf-inet-types {
            prefix inet;
        }
    
        container top {
            leaf a {
                type int32;
            }
            leaf b {
                type string;
            }
            leaf ip {
                type inet:ipv4-address;
            }
        }
    }
    $ ncsc -c test.yang
    $ ncsc --get-info test.fxs
    fxs file
    Ncsc version:           "3.0_2"
    uri:                    http://tail-f.com/test
    id:                     http://tail-f.com/test
    prefix:                 "t"
    flags:                  6
    type:                   cs
    mountpoint:             undefined
    exported agents:        all
    dependencies:           ['http://www.w3.org/2001/XMLSchema',
                             'urn:ietf:params:xml:ns:yang:inet-types']
    source:                 ["test.yang"]
    $ ncs -c ncs.conf --foreground --verbose
    The namespace urn:ietf:params:xml:ns:yang:inet-types (referenced by http://tail-f.com/test) could not be found in the loadPath.
    Daemon died status=21
    $ ncs -c ncs.conf --addloadpath ${NCS_DIR}/etc/ncs --foreground --verbose
     container interface {
        leaf ifType {
            type enumeration {
                enum ethernet;
                enum atm;
            }
        }
        leaf ifMTU {
            type uint32;
        }
        must "ifType != 'ethernet' or "
          +  "(ifType = 'ethernet' and ifMTU = 1500)" {
            error-message "An ethernet MTU must be 1500";
        }
        must "ifType != 'atm' or "
           + "(ifType = 'atm' and ifMTU <= 17966 and ifMTU >= 64)" {
            error-message "An atm MTU must be 64 .. 17966";
        }
    }
    list server {
          key "name";
          unique "ip port";
          leaf name {
              type string;
          }
          leaf ip {
              type inet:ip-address;
          }
          leaf port {
              type inet:port-number;
          }
      }
    <server>
      <name>smtp</name>
      <ip>192.0.2.1</ip>
      <port>25</port>
    </server>
    
    <server>
      <name>http</name>
      <ip>192.0.2.1</ip>
      <port>25</port>
    </server>
    leaf a {
        type boolean;
    }
    leaf b {
        type string;
        when "../a = 'true'";
    }
    module test {
       namespace "http://tail-f.com/test";
       prefix "t";
    
       import ietf-inet-types {
          prefix inet;
       }
       import tailf-common {
          prefix tailf;
       }
    
       container top {
          leaf a {
              type int32;
              config false;
              tailf:callpoint mycp;
          }
          leaf b {
             tailf:validate myvalcp {
                tailf:dependency "../a";
             }
             type string;
          }
          leaf ip {
             type inet:ipv4-address;
          }
       }
    }
    module test {
       namespace "http://tail-f.com/test";
       prefix "t";
    
       import ietf-inet-types {
          prefix inet;
       }
    
       container top {
          leaf a {
              type int32;
              config false;
          }
          leaf b {
             type string;
          }
          leaf ip {
             type inet:ipv4-address;
          }
       }
    }
    module test-ann {
       namespace "http://tail-f.com/test-ann";
       prefix "ta";
    
       import test {
          prefix t;
       }
       import tailf-common {
          prefix tailf;
       }
    
       tailf:annotate "/t:top/t:a" {
           tailf:callpoint mycp;
       }
    
       tailf:annotate "/t:top" {
           tailf:annotate "t:b" {  // recursive annotation
               tailf:validate myvalcp {
                   tailf:dependency "../t:a";
               }
           }
       }
    }
    confdc -c -a test-ann.yang test.yang
    leaf mtu {
        type uint16 {
            range "1 .. 1500";
        }
        description
           "MTU is the largest frame size that can be transmitted
            over the network. For example, an Ethernet MTU is 1,500
            bytes. Messages longer than the MTU must be divided
            into smaller frames.";
        tailf:info
           "largest frame size";
    }
     typedef mtuType {
        type uint16 {
            range "1 .. 1500";
        }
        description
            "MTU is the largest frame size that can be transmitted over the
             network. For example, an Ethernet MTU is 1,500
             bytes. Messages longer than the MTU must be
             divided into smaller frames.";
        tailf:info
           "largest frame size";
    }
    
    leaf mtu {
        type mtuType;
    }
    typedef mtuType {
       type uint32 {
           range "1..1500" {
               error-message
                  "The MTU must be a positive number not "
                + "larger than 1500";
           }
       }
    }
    $ /sbin/ip link list
     1: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
         link/ether 00:12:3f:7d:b0:32 brd ff:ff:ff:ff:ff:ff
     2: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
        link/ether a6:17:b9:86:2c:04 brd ff:ff:ff:ff:ff:ff
    <?xml version="1.0"?>
    <config xmlns="http://example.com/ns/link">
      <links>
        <link>
          <name>eth0</name>
          <flags>
            <UP/>
            <BROADCAST/>
            <MULTICAST/>
          </flags>
          <addr>00:12:3f:7d:b0:32</addr>
          <brd>ff:ff:ff:ff:ff:ff</brd>
          <mtu>1500</mtu>
        </link>
    
        <link>
          <name>lo</name>
          <flags>
            <UP/>
            <LOOPBACK/>
          </flags>
          <addr>00:00:00:00:00:00</addr>
          <brd>00:00:00:00:00:00</brd>
          <mtu>16436</mtu>
        </link>
      </links>
    </config>
    module links {
         namespace "http://example.com/ns/link";
        prefix link;
    
        revision 2007-06-09 {
          description "Initial revision.";
        }
        ...
    leaf UP {
        type boolean;
        default false;
    }
    leaf NOARP {
        type boolean;
        default false;
    }
    leaf UP {
        type empty;
    }
    leaf NOARP {
        type empty;
    }
    grouping LinkFlags {
        leaf UP {
            type empty;
        }
        leaf NOARP {
            type empty;
        }
        leaf BROADCAST {
            type empty;
        }
        leaf MULTICAST {
            type empty;
        }
        leaf LOOPBACK {
            type empty;
        }
        leaf NOTRAILERS {
            type empty;
        }
    }
    typedef mac-address {
        type string {
            pattern '[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}';
        }
        description
           "The mac-address type represents an IEEE 802 MAC address.
    
           This type is in the value set and its semantics equivalent to
           the MacAddress textual convention of the SMIv2.";
        reference
          "IEEE 802: IEEE Standard for Local and Metropolitan Area
                     Networks: Overview and Architecture
           RFC 2579: Textual Conventions for SMIv2";
    }
    typedef QueueDisciplineType {
       type enumeration {
          enum pfifo_fast;
          enum noqueue;
          enum noop;
          enum htb;
       }
    }
    container links {
        list link {
            key name;
            unique addr;
            max-elements 1024;
            leaf name {
                type string;
            }
            container flags {
                uses LinkFlags;
            }
            leaf addr {
                type yang:mac-address;
                mandatory true;
            }
            leaf brd {
                type yang:mac-address;
                mandatory true;
            }
            leaf qdisc {
                type QueueDisciplineType;
                mandatory true;
            }
            leaf qlen {
                type uint32;
                mandatory true;
            }
            leaf mtu {
                type uint32;
                mandatory true;
            }
        }
    }
    container queueDisciplines {
        list queueDiscipline {
            key linkName;
            max-elements 1024;
            leaf linkName {
                type leafref {
                    path "/config/links/link/name";
                }
            }
    
            leaf type {
                type QueueDisciplineType;
                mandatory true;
            }
            leaf length {
                type uint32;
            }
        }
    }
    container linkLimitations {
        list LinkLimitation {
            key linkName;
            max-elements 1024;
            leaf linkName {
                type leafref {
                    path "/config/links/link/name";
                }
            }
            container limitations {
                leaf only10Mbs { type boolean;}
                leaf onlyHalfDuplex { type boolean;}
            }
        }
    }
    leaf defaultLink {
        type leafref {
            path "/config/links/link/name";
        }
    }
    container servers {
        list server {
            key name;
            unique "ip port";
            unique "index";
            max-elements 64;
            leaf name {
                type string;
            }
            leaf index {
                type uint32;
                mandatory true;
            }
            leaf ip {
                type inet:ip-address;
                mandatory true;
            }
            leaf port {
                type inet:port-number;
                mandatory true;
            }
        }
    }
    leaf mtu {
        type int32;
        default 1500;
    }
    leaf UP {
        type boolean;
        default true;
    }
    container links {
        leaf mtu {
            type uint32;
        }
        list link {
            key name;
            leaf name {
                type string;
            }
            leaf mtu {
                type uint32;
                tailf:default-ref '../../mtu';
            }
        }
    }
    <links>
      <mtu>1000</mtu>
      <link>
        <name>eth0</name>
        <mtu>1500</mtu>
      </link>
      <link>
        <name>eth1</name>
      </link>
    </links>
    module links {
        namespace "http://example.com/ns/link";
        prefix link;
    
        import ietf-yang-types {
            prefix yang;
        }
    
    
        grouping LinkFlagsType {
            leaf UP {
                type empty;
            }
            leaf NOARP {
                type empty;
            }
            leaf BROADCAST {
                type empty;
            }
            leaf MULTICAST {
                type empty;
            }
            leaf LOOPBACK {
                type empty;
            }
            leaf NOTRAILERS {
                type empty;
            }
        }
    
        typedef QueueDisciplineType {
            type enumeration {
                enum pfifo_fast;
                enum noqueue;
                enum noop;
                enum htb;
            }
        }
        container config {
            container links {
                list link {
                    key name;
                    unique addr;
                    max-elements 1024;
                    leaf name {
                        type string;
                    }
                    container flags {
                        uses LinkFlagsType;
                    }
                    leaf addr {
                        type yang:mac-address;
                        mandatory true;
                    }
                    leaf brd {
                        type yang:mac-address;
                        mandatory true;
                    }
                    leaf mtu {
                        type uint32;
                        default 1500;
                    }
                }
            }
            container queueDisciplines {
                list queueDiscipline {
                    key linkName;
                    max-elements 1024;
                    leaf linkName {
                        type leafref {
                            path "/config/links/link/name";
                        }
                    }
                    leaf type {
                        type QueueDisciplineType;
                        mandatory true;
                    }
                    leaf length {
                        type uint32;
                    }
                }
            }
            container linkLimitations {
                list linkLimitation {
                    key linkName;
                    leaf linkName {
                        type leafref {
                            path "/config/links/link/name";
                        }
                    }
                    container limitations {
                        leaf only10Mbps {
                            type boolean;
                            default false;
                        }
                        leaf onlyHalfDuplex {
                            type boolean;
                            default false;
                        }
                    }
                }
            }
            container defaultLink {
                leaf linkName {
                    type leafref {
                        path "/config/links/link/name";
                    }
                }
            }
        }
    }
    $ confdc -c links.yang
    list host {
        key "name";
        leaf name {
            type string;
        }
        ...
    }
    
    leaf host-ref {
        type leafref {
            path "../host/name";
        }
    }
    list host {
        key "name";
        leaf name {
            type string;
        }
    
        list server {
            key "ip port";
            leaf ip {
                type inet:ip-address;
            }
            leaf port {
                type inet:port-number;
            }
            ...
        }
    }
    leaf server-host {
        type leafref {
            path "/host/name";
        }
    }
    leaf server-ip {
        type leafref {
            path "/host[name=current()/../server-host]/server/ip";
        }
    }
    leaf server-port {
        type leafref {
            path "/host[name=current()/../server-host]"
               + "/server[ip=current()/../server-ip]/port";
        }
    }
    leaf server-host {
        type leafref {
            path "/host/name";
        }
    }
    leaf server-ip {
        type leafref {
            path "deref(../server-host)/../server/ip";
        }
    }
    leaf server-port {
        type leafref {
            path "deref(../server-ip)/../port";
        }
    }
    module datatypes {
      namespace "http://example.com/ns/dt";
      prefix dt;
    
      grouping countersType {
         leaf recvBytes {
            type uint64;
            mandatory true;
         }
         leaf sentBytes {
            type uint64;
            mandatory true;
         }
      }
    }
    $ confdc -c datatypes.yang
    module test {
        namespace "http://tail-f.com/test";
        prefix "t";
    
        import datatypes {
            prefix dt;
        }
    
        container stats {
            uses dt:countersType;
        }
    }
    $ confdc -c test.yang --yangpath /path/to/dt
     module acme-system {
         namespace "http://acme.example.com/system";
         .....
    leaf example {
      type union {
        type string; // NOTE: widest type first
        type int32;
        type enumeration {
          enum "unbounded";
        }
      }
    }
    leaf example {
      type union {
        type enumeration {
          enum "unbounded";
        }
        type int32;
        type string; // NOTE: widest type last
      }
    }
    leaf example {
      type union {
        type string {
          pattern "[a-z]+[0-9]+";
        }
        type int32;
      }
    }
     module acme-system {
         namespace "http://acme.example.com/system";
         prefix "acme";
    
         revision 2007-06-09;
         .....
    <?xml version="1.0" encoding="UTF-8"?>
    <hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
    <capabilities>
      <capability>urn:ietf:params:netconf:base:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:writable-running:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:confirmed-commit:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:xpath:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:validate:1.0</capability>
      <capability>urn:ietf:params:netconf:capability:rollback-on-error:1.0</capability>
      <capability>http://example.com/ns/link?revision=2007-06-09</capability>
      ....

    Using CDB

    Concepts in usage of the Configuration Database (CDB).

    When using CDB to store the configuration data, the applications need to be able to:

    1. Read configuration data from the database.

    2. React to changes to the database. There are several possible writers to the database, such as the CLI, NETCONF sessions, the Web UI, either of the NSO sync commands, alarms that get written into the alarm table, NETCONF notifications that arrive at NSO or the NETCONF agent.

    The figure below illustrates the architecture when CDB is used. The application components read configuration data and subscribe to changes to the database using a simple RPC-based API. The API is part of the Java library and is fully documented in the Javadoc for CDB.

    NSO CDB Architecture Scenario

    While CDB is the default data store for configuration data in NSO, it is possible to use an external database, if needed. See the example examples.ncs/getting-started/developing-with-ncs/6-extern-db for details.

    In the following, we will use the files in examples.ncs/service-provider/mpls-vpn as a source for our examples. Refer to the README in that directory for additional details.

    The NSO Data Model

    NSO is designed to manage devices and services. NSO uses YANG as the overall modeling language. YANG models describe the NSO configuration, the device configurations, and the configuration of services. It is therefore vital to understand the NSO data model, including all of these aspects. The YANG models are available in $NCS_DIR/src/ncs/yang and are structured as follows.

    tailf-ncs.yang is the top module that includes the following sub-modules:

    • tailf-ncs-common.yang: common definitions

    • tailf-ncs-packages.yang: this sub-module defines the management of packages that are run by NSO. A package contains custom code, models, and documentation for any function added to the NSO platform. It can for example be a service application or a southbound integration to a device.

    • tailf-ncs-devices.yang: This is a core model of NSO. The device model defines everything a user can do with a device that NSO communicates with via a Network Element Driver (NED).

    These models will be illustrated and briefly explained below. Note that the figures only contain some relevant aspects of the model and are far from complete. The details of the model are explained in the respective sections.

    A good way to learn the model is to start the NSO CLI and use tab completion to navigate the model. Note that depending on whether you are in operational mode or configuration mode, different parts of the model will show up. Also try using TAB to get a list of actions at the level you want, for example, devices TAB.

    Another way to learn and explore the NSO model is to use the Yanger tool to render a tree output from the NSO model: yanger -f tree --tree-depth=3 tailf-ncs.yang. This renders a tree for the complete model.

    Addressing Data Using Keypaths

    As CDB stores hierarchical data as specified by a YANG model, data is addressed by a path through the configuration data tree; we call this a keypath. A keypath can be either absolute or relative. An absolute keypath starts from the root of the tree, while a relative path starts from the "current position" in the tree; they are differentiated by the presence or absence of a leading /. Navigating the configuration data tree is thus done in the same way as navigating a directory structure. It is possible to change the current position with, for example, the CdbSession.cd() method. Several of the API methods take a keypath as a parameter.

    YANG elements that are lists of other YANG elements can be traversed using two different path notations. Consider the following YANG model fragment:

    We can use the method CdbSession.getNumberOfInstances() to find the number of elements a list has, and then traverse them using a standard index notation, i.e., <path to list>[integer]. The children of a list are numbered starting from 0. Looking at the example above (L3 VPN YANG Extract), the path /l3vpn:topology/connection[2]/endpoint-1 refers to the endpoint-1 leaf of the third connection. This numbering is only valid during the current CDB session, since CDB is always locked for writing during a read session.

    We can also refer to list instances using the values of the keys of the list. In a YANG model, you specify which leafs (there can be several) are to be used for keys by using the key <name> statement at the beginning of the list. In our case a connection has the name leaf as the key. So the path /l3vpn:topology/connection{c1}/endpoint-2 refers to the endpoint-2 leaf of the connection whose name is “c1”.

    A YANG list may have more than one key. The syntax for the keys is a space-separated list of key values enclosed within curly brackets: {Key1 Key2 ...}

    Which form of list element referencing to use depends on the situation. Indexing with an integer is convenient when looping through all elements. As a convenience, all methods expecting keypaths accept formatting characters and accompanying data items. For example, you can use CdbSession.getElem("server[%d]/ifc{%s}/mtu", 2, "eth0") to fetch the MTU of the third server instance's interface named "eth0". Using relative paths and CdbSession.pushd(), it is possible to write code that can be reused for common sub-trees.
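To make the two notations concrete, consider a minimal keyed list (a hypothetical fragment for illustration, not the actual L3 VPN model referenced above):

```yang
container topology {
    list connection {
        key name;                  // 'name' is the list key
        leaf name { type string; }
        leaf endpoint-1 { type string; }
        leaf endpoint-2 { type string; }
    }
}
```

Assuming the module prefix l3vpn, the keypath /l3vpn:topology/connection{c1}/endpoint-1 addresses an instance by key value, while /l3vpn:topology/connection[0]/endpoint-1 addresses the first instance by position, valid only within the current session.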

    The current position also includes the namespace. To read elements from a different namespace, use the prefix-qualified tag for that element, as in l3vpn:topology.

    Subscriptions

    The CDB subscription mechanism allows an external program to be notified when some part of the configuration changes. When receiving a notification, it is also possible to iterate through the changes written to CDB. Subscriptions are always towards the running data store (it is not possible to subscribe to changes to the startup data store). Subscriptions towards operational data kept in CDB are also possible, but the mechanism is slightly different.

    The first thing to do is to inform CDB which paths we want to subscribe to. Registering a path returns a subscription point identifier. This is done by acquiring a subscriber (CdbSubscription) instance through the Cdb.newSubscription() method, and then registering each path with CdbSubscription.subscribe(), which returns the actual subscription point identifier. A subscriber can have multiple subscription points, and there can be many different subscribers. Every point is defined through a path - similar to the paths we use for read operations, except that instead of fully instantiated paths to list instances, we can selectively use tagpaths.

    When a client is done defining subscriptions it should inform NSO that it is ready to receive notifications by calling CdbSubscription.subscribeDone(), after which the subscription socket is ready to be polled.
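As a sketch of this flow (illustrative only; error handling is omitted, an open Cdb instance and the generated Ncs namespace class are assumed, as is the subscribe() variant taking a priority, a namespace, and a path):

```java
// Create a subscriber on an existing Cdb instance
CdbSubscription sub = cdb.newSubscription();

// Register a subscription point; 1 is the priority
int subId = sub.subscribe(1, new Ncs(), "/ncs:devices/device{ex0}/config");

// Tell CDB we are done registering; the socket can now be polled
sub.subscribeDone();
```

The returned subId is later used to match the notifications delivered by CdbSubscription.read().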

    We can subscribe either to specific leaves or to entire subtrees. For example:

    • /ncs:devices/global-settings/trace: Subscription to a leaf. Only changes to this leaf will generate a notification.

    • /ncs:devices: Subscription to the subtree rooted at /ncs:devices. Any changes to this subtree will generate a notification. This includes additions or removals of device instances, as well as changes to already existing device instances.

    When adding a subscription point, the client must also provide a priority, which is an integer (a smaller number means a higher priority). When data in CDB is changed, this change is part of a transaction. A transaction can be initiated by a commit operation from the CLI or an edit-config operation in NETCONF, resulting in the running database being modified. As the last part of the transaction, CDB generates notifications in lock-step priority order. First, all subscribers at the lowest numbered priority are handled; once they all have replied and synchronized by calling CdbSubscription.sync(), the next set - at the next priority level - is handled by CDB. Not until all subscription points have been acknowledged is the transaction complete. This implies that if the initiator of the transaction was, for example, a commit command in the CLI, the command hangs until all notifications have been acknowledged.

    Note that even though the notifications are delivered within the transaction, a subscriber can't reject the changes (since this would break the two-phase commit protocol used by the NSO backplane towards all data providers).

    Once a subscriber has read its subscription notifications using CdbSubscription.read(), it can iterate through the changes that caused the particular subscription notification using the CdbSubscription.diffIterate() method. It is also possible to start a new read session towards the CdbDBType.CDB_PRE_COMMIT_RUNNING database to read the running database as it was before the pending transaction.
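A diff iterator is a class implementing the CdbDiffIterate interface; below is a minimal sketch (method signature as in the NSO Java API; it only prints the changes):

```java
import java.util.Arrays;

class ChangeLogger implements CdbDiffIterate {
    public DiffIterateResultFlag iterate(ConfObject[] kp,
                                         DiffIterateOperFlag op,
                                         ConfObject oldValue,
                                         ConfObject newValue,
                                         Object initstate) {
        // kp is the keypath of the changed element; op is the operation,
        // e.g. MOP_CREATED, MOP_DELETED, MOP_VALUE_SET
        System.out.println(op + " " + Arrays.toString(kp)
                + (newValue != null ? " -> " + newValue : ""));
        return DiffIterateResultFlag.ITER_RECURSE;  // descend into subtrees
    }
}
```

It is then passed to the subscription, e.g. sub.diffIterate(subId, new ChangeLogger()).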

    To view registered subscribers use the ncs --status command.

    Sessions

    It is important to note that CDB is locked for writing during a read session using the Java API. A session starts with a call to Cdb.startSession(), which returns a CdbSession, and the lock is not released until the CdbSession.endSession() (or Cdb.close()) call. CDB will also automatically release the lock if the socket is closed for some other reason, such as program termination.
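In code, this pattern typically looks as follows (illustrative sketch; the path is hypothetical and value handling is elided):

```java
// Start a read session towards the running data store; this takes the lock
CdbSession session = cdb.startSession(CdbDBType.CDB_RUNNING);
try {
    ConfValue mtu = session.getElem("/links/link{eth0}/mtu");
    // ... use the value ...
} finally {
    session.endSession();   // always release the CDB lock
}
```

Keeping sessions short minimizes the time other writers are blocked.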

    Loading Initial Data into CDB

    When NSO starts for the first time, the CDB database is empty. The location of the database files used by CDB is given in ncs.conf. At first startup, when CDB is empty, i.e., no database files are found in the directory specified by <db-dir> (./ncs-cdb as given by the example below (CDB Init)), CDB will try to initialize the database from all XML documents found in the same directory.

    This feature can be used to reset the configuration to factory settings.

    Given the YANG model in the example above (L3 VPN YANG Extract), the initial data for topology can be found in topology.xml as seen in the example below (Initial Data for Topology).

    Another example of using these features is when initializing the AAA database.

    All files ending in .xml will be loaded (in an undefined order) and committed in a single transaction when CDB enters start phase 1. The format of the init files is rather lax, in that a complete instance document following the data model is not required, much like with the NETCONF edit-config operation. It is also possible to wrap multiple top-level tags in the file with a surrounding config tag, as shown in the example below (Wrapper for Multiple Top-Level Tags).
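A hypothetical init file of this form, using the tail-f config namespace to wrap two unrelated top-level elements (element content elided):

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <topology xmlns="http://example.com/l3vpn">
    <!-- ... -->
  </topology>
  <devices xmlns="http://tail-f.com/ns/ncs">
    <!-- ... -->
  </devices>
</config>
```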

    The actual names of the XML files do not matter, i.e., they do not need to correspond to the part of the YANG model being initialized.

    Operational Data in CDB

    In addition to handling configuration data, CDB can also take care of operational data such as alarms and traffic statistics. By default, operational data is not persistent and thus not kept between restarts. In the YANG model, annotating a node with config false marks the subtree rooted at that node as operational data. Reading and writing operational data is done similarly to ordinary configuration data, with the main difference being that you have to specify that you are working against operational data. Also, the subscription model is different.
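For example, a node can be marked as operational data like this (hypothetical fragment, reusing the counter leafs from the datatypes example above):

```yang
container stats {
    config false;            // everything below is operational data
    leaf recvBytes { type uint64; }
    leaf sentBytes { type uint64; }
}
```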

    Subscriptions

    Subscriptions towards the operational data in CDB are similar to the above, but because the operational data store is designed for lightweight access, does not have transactions, and normally avoids the use of any locks, there are several differences - in particular:

    • Subscription notifications are only generated if the writer obtains the “subscription lock”, by using the Cdb.startSession() method with the CdbLockType.LOCK_REQUEST flag.

    • Subscriptions are registered with the CdbSubscription.subscribe() method with the flag CdbSubscriptionType.SUB_OPERATIONAL rather than CdbSubscriptionType.SUB_RUNNING.

    • No priorities are used.

    Essentially, a write operation towards the operational data store, combined with the subscription lock, takes on the role that a transaction has for configuration data as far as subscription notifications are concerned. This means that if operational data updates are done with many single-element write operations, this can potentially result in a lot of subscription notifications. Thus, it is a good idea to use the multi-element methods such as CdbSession.setObject() for updating operational data that applications subscribe to.
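For instance, rather than setting each counter leaf individually, a writer can push a whole object in one call (sketch; assumes a session opened towards the operational data store and a stats container shaped like the fragment above):

```java
// One write, hence at most one subscription notification
ConfValue[] counters = {
    new ConfUInt64(recvBytes),   // recvBytes leaf
    new ConfUInt64(sentBytes)    // sentBytes leaf
};
opSession.setObject(counters, "/t:stats");
```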

    Since write operations that do not attempt to obtain the subscription lock are allowed to proceed even during notification delivery, it is the responsibility of the applications using the operational data store to obtain the lock as needed when writing. If subscribers should be able to reliably read the exact data that resulted from the write that triggered their subscription, the subscription lock must always be obtained when writing that particular set of data elements. One possibility is of course to obtain the lock for all writes to operational data, but this may have an unacceptable performance impact.

    Example

    We will take a first look at the examples.ncs/getting-started/developing-with-ncs/1-cdb example. This example is an NSO project with two packages: cdb and router.

    Example packages

    • router: A NED package with a simple but still realistic model of a network device. The only component in this package is the NED component that uses NETCONF to communicate with the device. This package is used in many NSO examples including examples.ncs/getting-started/developing-with-ncs/0-router-network which is an introduction to NSO device manager, NSO netsim, and this router package.

    • cdb: This package has an even simpler YANG model to illustrate some aspects of CDB data retrieval. The package consists of five application components.

    The cdb package includes the YANG shown in the example below (1-cdb Simple Config Data).

    Let us now populate the database and look at the Plain CDB Subscriber and how it can use the Java API to react to changes to the data. This component subscribes to changes under the path /devices/device{ex0}/config, i.e., configuration changes for the device named ex0, which is connected to NSO via the router NED.

    Being an application component in the cdb package implies that this component is realized by a Java class that implements the com.tailf.ncs.ApplicationComponent Java interface. This interface inherits the standard Java Runnable interface, which requires the run() method to be implemented. In addition to this method, there are init() and finish() methods that have to be implemented. When the NSO Java-VM starts, this class is started in a separate thread, with an initial call to init() before the thread starts. When the package is requested to stop execution, a call to finish() is performed, and this method is expected to end thread execution.

    We will walk through the code and highlight different aspects. We start with how the Cdb instance is retrieved in this example. It is always possible to open a socket to NSO and create the Cdb instance with this socket. But with this comes the responsibility to manage that socket. In NSO, there is a resource manager that can take over this responsibility. In the code, the field that should contain the Cdb instance is simply annotated with a @Resource annotation. The resource manager will find this annotation and create the Cdb instance as specified. In this example below (Resource Annotation) Scope.INSTANCE implies that new instances of this example class should have unique Cdb instances (see more in ).

    The init() method (shown in the example below (Plain Subscriber Init)) is called before this application component thread is started. For this subscriber, this is the place to set up the subscription. First, a CdbSubscription instance is created, and in this instance the subscription points are registered (one in this case). When all subscription points are registered, a call to CdbSubscription.subscribeDone() indicates that the registration is finished and the subscriber is ready to start.

    The run() method comes from the standard Java Runnable interface and is executed when the application component thread is started. For this subscriber (see the example below (Plain CDB Subscriber)), a loop over the CdbSubscription.read() method drives the subscription. This call blocks until data has changed for some of the registered subscription points, and the IDs for these subscription points are then returned. In our example, since we only have one subscription point, we know that this is the one stored as subId. This subscriber chooses to find the changes by calling the CdbSubscription.diffIterate() method. It is important to acknowledge the subscription by calling CdbSubscription.sync(), or else this subscription will block the ongoing transaction.

    The call to CdbSubscription.diffIterate() requires an object instance implementing an iterate() method. To this end, the CdbDiffIterate interface is implemented by a suitable class. In our example, this is done by a private inner class called Iter (see the example below (Plain Subscriber Iterator Implementation)). The iterate() method is called for each change, and the path, type of change, and data are provided as arguments. Finally, iterate() returns a flag that controls whether the iteration should continue or stop. Our example iterate() method just logs the changes.

    The finish() method (see the example below (Plain Subscriber finish)) is called when the NSO Java-VM wants the application component thread to stop execution. An orderly stop of the thread is expected. Here, the subscription will stop when the subscription socket and the underlying Cdb instance are closed. This is done by the ResourceManager when we tell it that the resources retrieved for this Java object instance can be unregistered and closed, via a call to the ResourceManager.unregisterResources() method.
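    Put together, a plain configuration subscriber follows the shape sketched below. This is a condensed illustration assembled from the API calls named above; the class name, logging, and error handling are illustrative and not the literal PlainCdbSub source:

```java
package com.example.cdb;

import org.apache.log4j.Logger;

import com.tailf.cdb.Cdb;
import com.tailf.cdb.CdbDiffIterate;
import com.tailf.cdb.CdbSubscription;
import com.tailf.cdb.CdbSubscriptionSyncType;
import com.tailf.conf.ConfObject;
import com.tailf.conf.ConfPath;
import com.tailf.conf.DiffIterateOperFlag;
import com.tailf.conf.DiffIterateResultFlag;
import com.tailf.ncs.ApplicationComponent;
import com.tailf.ncs.annotations.Resource;
import com.tailf.ncs.annotations.ResourceType;
import com.tailf.ncs.annotations.Scope;
import com.tailf.ncs.ns.Ncs;

public class PlainSub implements ApplicationComponent {
    private static final Logger log = Logger.getLogger(PlainSub.class);

    // The resource manager injects a Cdb instance here.
    @Resource(type = ResourceType.CDB, scope = Scope.INSTANCE)
    private Cdb cdb;

    private CdbSubscription sub;
    private int subId;

    public void init() {
        try {
            sub = cdb.newSubscription();
            // Register the subscription point, then signal that
            // registration is complete.
            subId = sub.subscribe(1, new Ncs(),
                                  "/devices/device{ex0}/config");
            sub.subscribeDone();
        } catch (Exception e) {
            throw new RuntimeException("subscription setup failed", e);
        }
    }

    public void run() {
        try {
            while (true) {
                int[] points = sub.read();   // blocks until a change
                sub.diffIterate(points[0], new Iter());
                // Acknowledge, or the transaction stays blocked.
                sub.sync(CdbSubscriptionSyncType.DONE_PRIORITY);
            }
        } catch (Exception e) {
            log.info("subscriber stopped", e);
        }
    }

    public void finish() {
        // Closing the socket/Cdb via the resource manager ends run().
    }

    private class Iter implements CdbDiffIterate {
        public DiffIterateResultFlag iterate(ConfObject[] kp,
                                             DiffIterateOperFlag op,
                                             ConfObject oldValue,
                                             ConfObject newValue,
                                             Object state) {
            // Just log the change and keep iterating into subtrees.
            log.info(op + " " + new ConfPath(kp));
            return DiffIterateResultFlag.ITER_RECURSE;
        }
    }
}
```

    Note how all three lifecycle methods cooperate: init() registers the subscription, run() drives the read/iterate/sync loop, and finish() relies on the resource manager closing the socket to break run() out of its blocking read.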

    We will now compile and start the 1-cdb example, populate some config data, and look at the result. The example below (Plain Subscriber Startup) shows how to do this.

    By far, the easiest way to populate the database with some actual data is to run the CLI (see the example below (Populate Data using CLI)).

    We have now added a syslog server. What remains is to check what our 'Plain CDB Subscriber' ApplicationComponent got as a result of this update. In the logs directory of the 1-cdb example, there is a file named PlainCdbSub.out which contains the log data from this application component. At the beginning of this file, a lot of logging is performed, which emanates from the sync-from of the device. At the end of the file, we can find the three log rows that come from our update. See the extract in the example below (Plain Subscriber Output) (with each row split over several lines to fit on the page).

    We now turn to another subscriber, which has a more elaborate diff iteration method. In our example cdb package, we have an application component named CdbCfgSubscriber. This component consists of a subscriber for the subscription point /ncs:devices/device/config/r:sys/interfaces/interface. The iterate() method is here implemented as an inner class called DiffIterateImpl.

    The code for this subscriber is left out but can be found in the file ConfigCdbSub.java.

    The example below (Run CdbCfgSubscriber Example) shows how to build and run the example.

    If we look at the file logs/ConfigCdbSub.out, we will find log records from the subscriber (see the example below (Subscriber Output)). At the end of this file the last DUMP DB will show only one remaining interface.

    Operational Data

    We will look once again at the YANG model for the cdb package in the examples.ncs/getting-started/developing-with-ncs/1-cdb example. Inside the test.yang YANG model, there is a test container. As a child of this container, there is a list stats-item (see the example below (1-cdb Simple Operational Data)).

    Note that the list stats-item has the substatement config false; and below it, we find a tailf:cdb-oper; statement. A standard way to implement operational data is to define a callpoint in the YANG model and write instrumentation callback methods for retrieval of the operational data (see more on data callbacks in ). Here, on the other hand, we use the tailf:cdb-oper; statement, which implies that these instrumentation callbacks are automatically provided internally by NSO. The downside is that we must populate this operational data in CDB from the outside.
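    The relevant part of test.yang has roughly this shape; the leaf names below are illustrative, so consult the referenced example for the exact model:

```yang
container test {
  list stats-item {
    config false;    // operational data
    tailf:cdb-oper;  // stored in CDB, no instrumentation callbacks needed
    key skey;
    leaf skey { type string; }
    leaf i    { type int32; }
  }
}
```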

    An example of Java code that creates operational data using the Navu API is shown in the example below (Creating Operational Data using Navu API).

    An example of Java code that deletes operational data using the CDB API is shown in the example below (Deleting Operational Data using CDB API).

    In the 1-cdb example in the CDB package, there is also an application component with an operational data subscriber that subscribes to data from the path "/t:test/stats-item" (see the example below (CDB Operational Subscriber Java code)).

    Notice that the CdbOperSubscriber is very similar to the CdbCfgSubscriber described earlier.

    In the 1-cdb example, in the cdb package, there are two shell scripts, setoper and deloper, that execute the above CreateEntry() and DeleteEntry() respectively. We can use these to populate the operational data in CDB for the test.yang YANG model (see the example below (Populating Operational Data)).

    And if we look at the output from the 'CDB Operational Subscriber' that is found in the logs/OperCdbSub.out, we will see output similar to the example below (Operational subscription Output).

    Automatic Schema Upgrades and Downgrades

    Software upgrades and downgrades represent one of the main problems in managing the configuration data of network devices. Each software release for a network device is typically associated with a certain version of configuration data layout, i.e., a schema. In NSO the schema is the data model stored in the .fxs files. Once CDB has initialized, it also stores a copy of the schema associated with the data it holds.

    Every time NSO starts, CDB checks the current contents of the .fxs files against its own copy of the schema files. If CDB detects any changes in the schema, it initiates an upgrade transaction. In the simplest case, CDB automatically resolves the changes and commits the new data before NSO reaches start phase one.

    The CDB upgrade can be followed by checking the devel.log. The development log is meant to be used as support while the application is being developed. It is enabled in ncs.conf as shown in the example below (Enabling Developer Logging).
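    A minimal ncs.conf fragment enabling the developer log might look like this (the file path is illustrative):

```xml
<logs>
  <developer-log>
    <enabled>true</enabled>
    <file>
      <name>./logs/devel.log</name>
      <enabled>true</enabled>
    </file>
  </developer-log>
  <!-- Raise the level to trace to see the individual upgrade steps. -->
  <developer-log-level>trace</developer-log-level>
</logs>
```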

    CDB can automatically handle the following changes to the schema:

    • Deleted elements When an element is deleted from the schema, CDB simply deletes it (and any children) from the database.

    • Added elements If a new element is added to the schema it needs to either be optional, dynamic, or have a default value. New elements with a default are added and set to their default value. New dynamic or optional elements are simply noted as a schema change.

    • Re-ordering elements An element with the same name, but in a different position on the same level, is considered to be the same element. If its type hasn't changed it will retain its value, but if the type has changed it will be upgraded as described below.

    Should the automatic upgrade fail, exit codes and log entries will indicate the reason (see ).

    Using Initialization Files for Upgrade

    As described earlier, when NSO starts with an empty CDB database, CDB loads all instantiated XML documents found in the CDB directory and uses these to initialize the database. We can also use this mechanism for CDB upgrade, since CDB will again look for files ending in .xml in the CDB directory when doing an upgrade.

    This allows for handling many of the cases that the automatic upgrade cannot do by itself, e.g., the addition of mandatory leaves (without default statements) or multiple instances of new dynamic containers. Most of the time, we can simply use the XML init file that is appropriate for a fresh install of the new version also for the upgrade from a previous version.

    When using XML files for the initialization of CDB, the complete contents of the files are used. On upgrade, however, doing this could lead to modification of the user's existing configuration - e.g., we could end up resetting data that the user has modified since CDB was first initialized. For this reason, two restrictions are applied when loading the XML files on upgrade:

    • Only data for elements that are new as of the upgrade, i.e., elements that did not exist in the previous schema, will be considered.

    • The data will only be loaded if all old, i.e., previously existing, optional/dynamic parent elements and instances exist in the current configuration.

    To clarify this, let's make up the following example. Some ServerManager package was developed and delivered. It was realized that the data model had a serious shortcoming in that there was no way to specify the protocol to use, TCP or UDP. To fix this, in a new version of the package, another leaf was added to the /servers/server list, and the new YANG module can be seen in the example below (New YANG module for the ServerManager Package).

    The differences from the earlier version of the YANG module can be seen in the example below (Difference between YANG Modules).
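    In diff form, the change amounts to adding one mandatory leaf to each /servers/server entry; this is a sketch, and the actual module may differ in details:

```diff
   list server {
     key name;
     ...
+    leaf protocol {
+      type enumeration {
+        enum tcp;
+        enum udp;
+      }
+      mandatory true;
+    }
   }
```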

    Since it was considered important that the user explicitly specify the protocol, the new leaf was made mandatory. The XML init file must include this leaf, and the result can be seen in the example below (Protocol Upgrade Init File).
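    An init file along these lines would satisfy the new mandatory leaf; the namespace and server data here are made up for illustration:

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <servers xmlns="http://example.com/ns/servers">
    <server>
      <name>www</name>
      <ip>192.0.2.80</ip>
      <port>80</port>
      <protocol>tcp</protocol>
    </server>
    <server>
      <name>smtp</name>
      <ip>192.0.2.25</ip>
      <port>25</port>
      <protocol>tcp</protocol>
    </server>
  </servers>
</config>
```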

    We can then just use this new init file for the upgrade, and the existing server instances in the user's configuration will get the new /servers/server/protocol leaf filled in as expected. However, some users may have deleted some of the original servers from their configuration, and in those cases, we do not want those servers to get re-created during the upgrade just because they are present in the XML file - the above restrictions make sure that this does not happen. The configuration after the upgrade can be seen in the example below (Configuration After Upgrade).

    Here is what the configuration looks like after the upgrade if the smtp server has been deleted before the upgrade:

    This example also implicitly shows a limitation of this method. If the user has created additional servers, the new XML file will not specify what protocol to use for those servers, and the upgrade cannot succeed unless the package upgrade component method is used, see below. However, the example is a bit contrived. In practice, this limitation is rarely a problem. It does not occur for new lists or optional elements, nor for new mandatory elements that are not children of old lists. In fact, correctly adding this protocol leaf for user-created servers would require user input; it cannot be done by any fully automated procedure.

    Since CDB will attempt to load all *.xml files in the CDB directory at the time of upgrade, it is important to not leave XML init files from a previous version that are no longer valid there.

    It is always possible to write a package-specific upgrade component to change the data belonging to a package before the upgrade transaction is committed. This will be explained in the following section.

    New Validation Points

    One case the system does not handle directly is the addition of new custom validation points, using the tailf:validate statement, during an upgrade. The issue is that the schema upgrade is performed before the (new) user code gets deployed, and the code required for validation is therefore not yet available. This results in an error similar to no registration found for callpoint NEW-VALIDATION/validate, or simply application communication failure.

    One way to solve this problem is to first redeploy the package with the custom validation code and then perform the schema upgrade through the full packages reload action. For example, suppose you are upgrading the package test-svc. Then you first perform packages package test-svc redeploy, followed by packages reload. The main downside to this approach is that the new code must work with the old data model, which may require extra effort when there are major data model changes.

    An alternative is to temporarily disable the validation by starting the NSO with the --ignore-initial-validation option. In this case, you should stop the ncs process and start it using --ignore-initial-validation and --with-package-reload options to perform the schema upgrade without custom validation. However, this may result in data in the CDB that would otherwise not pass custom validation. If you still want to validate the data, you can write an upgrade component to do this one-time validation.

    Writing an Upgrade Package Component

    In previous sections, we showed how automatic upgrades and XML initialization files can help in upgrading CDB when YANG models have changed. In some situations, this is not sufficient. For instance, if a YANG model is changed and new mandatory leaves are introduced that need calculations to set the values then a programmatic upgrade is needed. This is when the upgrade component of a package comes into play.

    An upgrade component is a Java class with a standard main() method that becomes a standalone program that is run as part of the package reload action.

    As with any package component type, the upgrade component has to be defined in the package-meta-data.xml file for the package (see the example below (Upgrade Package Components)).
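    The component entry in package-meta-data.xml could look like this, where the Java class name is a placeholder:

```xml
<component>
  <name>upgrade</name>
  <upgrade>
    <!-- Standalone class with a main() method, run at package reload. -->
    <java-class-name>com.example.vlan.UpgradeService</java-class-name>
  </upgrade>
</component>
```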

    Let's recapitulate how packages are loaded and reloaded. NSO searches the /ncs-config/load-path for packages to run and copies these to a private directory tree under /ncs-config/state-dir, with root directory packages-in-use.cur. However, NSO only does this search when packages-in-use.cur is empty or when a reload is requested. This scheme makes package upgrades controlled and predictable; for more on this, see .

    So in preparation for a package upgrade, the new packages replace the old ones in the load path. In our scenario, the YANG model changes are such that the automatic schema upgrade that CDB performs is not sufficient, therefore the new packages also contain upgrade components. At this point, NSO is still running with the old package definitions.

    When the package reload is requested, the packages in the load path are copied to the state directory. The old state directory is scratched, so packages that no longer exist in the load path are removed and new packages are added; unchanged packages are left as they are. Automatic CDB schema upgrades are performed, and afterward, for all packages that have an upgrade component and for which at least one YANG model was changed, this upgrade component is executed. The upgrade component is also executed for newly added packages that have one. Hence, the upgrade component needs to be programmed in such a way that it handles both the new-package and the upgraded-package scenarios.

    So how should an upgrade component be implemented? In the previous section, we described how CDB can perform an automatic upgrade. But this means that CDB has deleted all values that are no longer part of the schema. Well, not quite yet. At the initial phase of the NSO startup procedure (called start-phase0), it is possible to use all the CDB Java API calls to access the data using the schema from the database as it looked before the automatic upgrade. That is, the complete database as it stood before the upgrade is still available to the application. It is under this condition that the upgrade components are executed, and this is the reason why they are standalone programs and not executed by the NSO Java-VM as all other component Java code is.

    So, the CDB Java API can be used to read data defined by the old YANG models. To write new config data, Maapi has a specific method, Maapi.attachInit(). This method attaches a Maapi instance to the upgrade transaction (or init transaction) during phase0. This special upgrade transaction is only available during phase0. NSO commits this transaction when phase0 ends, so the user should only write config data (not attempt to commit, etc.).

    We take a look at the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service to see how an upgrade component can be implemented. Here, the vlan package has an original version that is replaced with a version vlan_v2. See the README and play with the example to get acquainted.

    The 14-upgrade-service is a service package, but the upgrade components described here work equally well and in the same way for any package type. The only requirement is that the package contains at least one YANG model; otherwise, the upgrade component will never be executed.

    The complete YANG model for the version 2 of the VLAN service looks as follows:

    If we diff the changes between the two YANG models for the service, we see that in version 2, a new mandatory leaf has been added (see the example below (YANG Service diff)).

    We need to create a Java class with a main() method that connects to CDB and MAAPI. This main will be executed as a separate program and all private and shared jars defined by the package will be in the classpath. To upgrade the VLAN service, the following Java code is needed:

    Let's go through the code and point out the different aspects of writing an upgrade component. First (see the example below (Upgrade Init)), we open a socket and connect to NSO. We pass this socket to a Java API Cdb instance and call Cdb.setUseForCdbUpgrade(). This method prepares CDB sessions for reading old data from the CDB database, and it should only be called in this context. At the end of this first code fragment, we start the CDB upgrade session:

    We then open and connect a second socket to NSO and pass this to a Java API Maapi instance. We call the Maapi.attachInit() method to get the init transaction (see the example below (Upgrade Get Transaction)).

    Using the CdbSession instance, we read the number of service instances that exist in the CDB database. We will work on all these instances. Also, if the number of instances is zero, the loop is not entered. This is a simple way to prevent the upgrade component from doing any harm in case this is a new package added to NSO for the first time:

    Via the CdbUpgradeSession, the old service data is retrieved:

    The value for the new leaf introduced in the new version of the YANG model is calculated, and the value is set using Maapi and the init transaction:

    At the end of the program, the sockets are closed. It is important to note that no commit or other handling of the init transaction is done; this is NSO's responsibility:
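    Condensed, the upgrade component's main() follows the shape below. This is a sketch assembled from the steps above; the paths, prefixes, and the UpgradeService class name are placeholders, not the literal 14-upgrade-service source:

```java
package com.example.vlan;

import java.net.Socket;
import java.util.EnumSet;

import com.tailf.cdb.Cdb;
import com.tailf.cdb.CdbDBType;
import com.tailf.cdb.CdbLockType;
import com.tailf.cdb.CdbUpgradeSession;
import com.tailf.conf.Conf;
import com.tailf.maapi.Maapi;

public class UpgradeService {
    public static void main(String[] args) throws Exception {
        // CDB socket, prepared for reading data in the *old* schema.
        Socket s1 = new Socket("localhost", Conf.NCS_PORT);
        Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
        cdb.setUseForCdbUpgrade();
        CdbUpgradeSession cdbsess =
            cdb.startUpgradeSession(CdbDBType.CDB_RUNNING,
                EnumSet.of(CdbLockType.LOCK_SESSION,
                           CdbLockType.LOCK_WAIT));

        // MAAPI socket attached to the phase0 init transaction.
        Socket s2 = new Socket("localhost", Conf.NCS_PORT);
        Maapi maapi = new Maapi(s2);
        int th = maapi.attachInit();

        // Zero instances => loop body never runs, so a freshly added
        // package passes through harmlessly.
        int n = cdbsess.getNumberOfInstances("/vl:vlan");
        for (int i = 0; i < n; i++) {
            // Read old data via cdbsess, compute the value for the
            // new mandatory leaf, and write it via maapi, e.g.:
            // maapi.setElem(th, value, "/vl:vlan{%s}/...", key);
        }

        // No commit here: NSO commits the init transaction itself.
        s1.close();
        s2.close();
    }
}
```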

    More complicated service package upgrade scenarios occur when a YANG model containing a service point is renamed, or moved and augmented to a new place in the NSO model. This is because not only does the complete config data set need to be recreated at the new position, but a service also has hidden private data that is part of the FASTMAP algorithm and necessary for the service to be valid. For this reason, a specific MAAPI method, Maapi.ncsMovePrivateData(), exists that takes both the old and the new positions for the service point and moves the service data between these positions.

    In the 14-upgrade-service example, this more complicated scenario is illustrated with the tunnel package. The tunnel package YANG model maps the vlan_v2 package one-to-one but is a complete rename of the model containers and all leafs:

    To upgrade from the vlan_v2 to the tunnel package, a new upgrade component for the tunnel package has to be implemented:

    We will walk through this code as well and point out the aspects that differ from the earlier, simpler scenario. First, we want to create the Cdb instance and get the CdbSession. However, in this scenario, the old namespace is removed and the Java API cannot retrieve it from NSO. To be able to use CDB to read and interpret the old YANG model, the old generated (and since removed) Java namespace classes have to be temporarily reinstalled. This is solved by adding a jar (Java archive) containing these removed namespaces to the private-jar directory of the tunnel package. The removed namespace can then be instantiated and passed to Cdb via an overloaded version of the Cdb.setUseForCdbUpgrade() method:

    As an alternative to including the old namespace file in the package, a ConfNamespaceStub can be constructed for each old model that is to be accessed:

    Since the old YANG model with the service point is removed, the new service container with the new service has to be created before any config data can be written to this position:

    The complete config for the old service is read via the CdbUpgradeSession. Note in particular that the path oldPath is constructed as a ConfCdbUpgradePath. These are the paths that allow access to nodes that are not available in the current schema (i.e., nodes in deleted models).

    The new data structure with the service data is created and written to NSO via Maapi and the init transaction:

    Lastly, the service private data is moved from the old position to the new position via the method Maapi.ncsMovePrivateData():
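    Condensed, the final steps of the tunnel upgrade look roughly like this. The paths, key handling, and vl/tu prefixes are assumptions for illustration, and the exact Maapi.ncsMovePrivateData() overload should be checked against the Maapi javadoc:

```java
// Create the new service list entry before writing its config.
maapi.create(th, "/tu:tunnels/tunnel{%s}", key);

// ... copy the old config data, read via the CdbUpgradeSession
// using ConfCdbUpgradePath, into the new location via maapi ...

// Finally, move the hidden FASTMAP private data of the service
// from the old service point position to the new one.
maapi.ncsMovePrivateData(th,
    "/vl:vlan{" + key + "}",
    "/tu:tunnels/tunnel{" + key + "}");
```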

    NSO NETCONF Server

    Description of northbound NETCONF implementation in NSO.

    This section describes the northbound NETCONF implementation in NSO. As of this writing, the server supports the following specifications:

    • : NETCONF Configuration Protocol

    • : Using the NETCONF Configuration Protocol over Secure Shell (SSH)

    • : NETCONF Event Notifications

  • tailf-ncs-services.yang: Services represent anything that spans across devices. This can, for example, be an MPLS VPN, a MEF e-line, a BGP peer, or a website. NSO provides several mechanisms to handle services in general, which are specified by this model. It also defines placeholder containers under which developers, as an option, can augment their specific services.

  • tailf-ncs-snmp-notification-receiver.yang: NSO can subscribe to SNMP notifications from the devices. The subscription is specified by this model.

  • tailf-ncs-java-vm.yang: Custom code that is part of a package is loaded and executed by the NSO Java VM. This is managed by this model.

  Further, when browsing $NCS_DIR/src/ncs/yang, you will find models for all aspects of NSO functionality, for example:

  • tailf-ncs-alarms.yang: This model defines how NSO manages alarms. The source of an alarm can be anything like an NSO state change, SNMP, or NETCONF notification.

  • tailf-ncs-snmp.yang: This model defines how to configure the NSO northbound SNMP agent.

  • tailf-ncs-config.yang: This model describes the layout of the NSO config file, usually called ncs.conf

  • tailf-ncs-packages.yang: This model describes the layout of the file package-meta-data.xml. All user code, data models MIBS, and Java code are always contained in an NSO package. The package-meta-data.xml file must always exist in a package and describe the package.

  • /ncs:devices/device{"ex0"}/address: Subscription to a specific element in a list. A notification will be generated when the device ex0 changes its IP address.
  • /ncs:devices/device/address: Subscription to a leaf in a list. A notification will be generated when the leaf address is changed in any device instance.

  • Neither the writer that generated the subscription notifications nor other writes to the same data are blocked while notifications are being delivered. However, the subscription lock remains in effect until notification delivery is complete.

  • The previous value for the modified leaf is not available when using the CdbSubscriber.diffIterate() method.

  • Plain CDB Subscriber: This CDB subscriber subscribes to changes under the path /devices/device{ex0}/config. Whenever a change occurs there, the code iterates through the change and prints the values.
  • CdbCfgSubscriber: A more advanced CDB subscriber that subscribes to changes under the path /devices/device/config/sys/interfaces/interface.

  • OperSubscriber: An operational data subscriber that subscribes to changes under the path /t:test/stats-item.

  • Type changes If a leaf is still present but its type has changed, automatic coercions are performed, so for example integers may be transformed to their string representation if the type changed from e.g. int32 to string. Automatic type conversion succeeds as long as the string representation of the current value can be parsed into its new type. (This also implies that a change from a smaller integer type, e.g., int8, to a larger type, e.g., int32, succeeds for any value, while the opposite is not guaranteed; it succeeds only when the current value fits in the smaller type.) If the coercion fails, any supplied default value will be used. If no default value is present in the new schema, the automatic upgrade will fail and the leaf will be deleted after the CDB upgrade. Type changes when user-defined types are used are also handled automatically, provided that some straightforward rules are followed for the type definitions. Read more about user-defined types in the confd_types(3) manual page, which also describes these rules.

  • Hash changes When a hash value of a particular element has changed (due to an addition of, or a change to, a tailf:id-value statement) CDB will update that element.

  • Key changes When a key of a list is modified, CDB tries to upgrade the key using the same rules as explained above for adding, deleting, re-ordering, change of type, and change of hash value. If an automatic upgrade of a key fails the entire list entry will be deleted. When individual entries upgrade successfully but result in an invalid list, all list entries will be deleted. This can happen, e.g., when an upgrade removes a leaf from the key, resulting in several entries having the same key.

  • Default values If a leaf has a default value, that has not been changed from its default, then the automatic upgrade will use the new default value (if any). If the leaf value has been changed from the old default, then that value will be kept.

  • Adding / Removing namespaces If a namespace is no longer present after an upgrade, CDB removes all data in that namespace. When CDB detects a new namespace, it is initialized with default values.

  • Changing to/from operational Elements that previously had config false set that are changed into database elements will be treated as added elements. In the opposite case, where data elements in the new data model are tagged with config false, the elements will be deleted from the database.

  • Callpoint changes CDB only considers the part of the data model in YANG modules that do not have external data callpoints. But while upgrading, CDB handles moving subtrees into CDB from a callpoint and vice versa. CDB simply considers these as added and deleted schema elements. Thus an application can be developed using CDB in the first development cycle. When the external database component is ready it can easily replace CDB without changing the schema.

    Example: Using yanger
    $ yanger -f tree --tree-depth=3 tailf-ncs.yang
    module: tailf-ncs
       +--rw ssh
       |  +--rw host-key-verification?   ssh-host-key-verification-level
       |  +--rw private-key* [name]
       |     +--rw name          string
       |     +--rw key-data      ssh-private-key
       |     +--rw passphrase?   tailf:aes-256-cfb-128-encrypted-string
       +--rw cluster
       |  +--rw remote-node* [name]
       |  |  +--rw name             node-name
       |  |  +--rw address?         inet:host
       |  |  +--rw port?            inet:port-number
       |  |  +--rw ssh
       |  |  +--rw authgroup        -> /cluster/authgroup/name
       |  |  +--rw trace?           trace-flag
       |  |  +--rw username?        string
       |  |  +--rw notifications
       |  |  +--ro device* [name]
       |  +--rw authgroup* [name]
       |  |  +--rw name           string
       |  |  +--rw default-map!
       |  |  +--rw umap* [local-user]
       |  +--rw commit-queue
       |  |  +--rw enabled?   boolean
       |  +--ro enabled?        boolean
       |  +--ro connection*
       |     +--ro remote-node?   -> /cluster/remote-node/name
       |     +--ro address?       inet:ip-address
       |     +--ro port?          inet:port-number
       |     +--ro channels?      uint32
       |     +--ro local-user?    string
       |     +--ro remote-user?   string
       |     +--ro status?        enumeration
       |     +--ro trace?         enumeration
    ...
    Example: L3 VPN YANG Extract
    module l3vpn {
    
      namespace "http://com/example/l3vpn";
      prefix l3vpn;
    
    
            ...
    
      container topology {
        list role {
          key "role";
          tailf:cli-compact-syntax;
          leaf role {
            type enumeration {
              enum ce;
              enum pe;
              enum p;
            }
          }
    
          leaf-list device {
            type leafref {
              path "/ncs:devices/ncs:device/ncs:name";
            }
          }
        }
    
        list connection {
          key "name";
          leaf name {
            type string;
          }
          container endpoint-1 {
            tailf:cli-compact-syntax;
            uses connection-grouping;
          }
          container endpoint-2 {
            tailf:cli-compact-syntax;
            uses connection-grouping;
          }
          leaf link-vlan {
            type uint32;
          }
        }
      }
    Example: CDB Init
    <!-- Where the database (and init XML) files are kept -->
    <cdb>
        <db-dir>./ncs-cdb</db-dir>
    </cdb>
    Example: Initial Data for Topology
    <config xmlns="http://tail-f.com/ns/config/1.0">
      <topology xmlns="http://com/example/l3vpn">
        <role>
          <role>ce</role>
          <device>ce0</device>
          <device>ce1</device>
          <device>ce2</device>
        ...
        </role>
        <role>
          <role>pe</role>
          <device>pe0</device>
          <device>pe1</device>
          <device>pe2</device>
          <device>pe3</device>
        </role>
        ...
        <connection>
          <name>c0</name>
          <endpoint-1>
            <device>ce0</device>
            <interface>GigabitEthernet0/8</interface>
            <ip-address>192.168.1.1/30</ip-address>
          </endpoint-1>
          <endpoint-2>
            <device>pe0</device>
            <interface>GigabitEthernet0/0/0/3</interface>
            <ip-address>192.168.1.2/30</ip-address>
          </endpoint-2>
          <link-vlan>88</link-vlan>
        </connection>
        <connection>
          <name>c1</name>
        ...
    Example: Wrapper for Multiple Top-Level Tags
    <config xmlns="http://tail-f.com/ns/config/1.0">
      ...
    </config>
    Example: 1-cdb Simple Config Data
    module test {
      namespace "http://example.com/test";
      prefix t;
    
      import tailf-common {
        prefix tailf;
      }
    
      description "This model is used as a simple example model
                   illustrating some aspects of CDB subscriptions
                   and CDB operational data";
    
      revision 2012-06-26 {
        description "Initial revision.";
      }
    
      container test {
        list config-item {
          key ckey;
          leaf ckey {
            type string;
          }
          leaf i {
            type int32;
          }
        }
        list stats-item {
          config false;
          tailf:cdb-oper;
          key skey;
          leaf skey {
            type string;
          }
          leaf i {
            type int32;
          }
          container inner {
            leaf  l {
              type string;
            }
          }
        }
      }
    }
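Like the topology data earlier, the `config-item` list in this model can be pre-populated with an XML init file placed in the CDB directory (`./ncs-cdb`). A minimal sketch, with hypothetical values:

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <test xmlns="http://example.com/test">
    <config-item>
      <ckey>item1</ckey>
      <i>42</i>
    </config-item>
  </test>
</config>
```

Note that only the `config-item` list can be seeded this way; `stats-item` is `config false` operational data and is written through the CDB or NAVU APIs instead.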
    Example: Plain CDB Subscriber Java Code
    public class PlainCdbSub implements ApplicationComponent  {
        private static final Logger LOGGER
                = LogManager.getLogger(PlainCdbSub.class);
    
        @Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
                  qualifier = "plain")
        private Cdb cdb;
    
        private CdbSubscription sub;
        private int subId;
        private boolean requestStop;
    
        public PlainCdbSub() {
        }
    
        public void init() {
            try {
                LOGGER.info(" init cdb subscriber ");
                sub = new CdbSubscription(cdb);
                String str = "/devices/device{ex0}/config";
                subId = sub.subscribe(1, new Ncs(), str);
                sub.subscribeDone();
                LOGGER.info("subscribeDone");
                requestStop = false;
            } catch (Exception e) {
                throw new RuntimeException("FAIL in init", e);
            }
        }
    
        public void run() {
            try {
                while (!requestStop) {
                    try {
                        sub.read();
                        sub.diffIterate(subId, new Iter());
                    } finally {
                        sub.sync(CdbSubscriptionSyncType.DONE_SOCKET);
                    }
                }
            } catch (ConfException e) {
                if (e.getErrorCode() == ErrorCode.ERR_EOF) {
                    // Triggered by finish method
                    // if we throw further NCS JVM will try to restart
                    // the package
                    LOGGER.warn(" Socket Closed!");
                } else {
                    throw new RuntimeException("FAIL in run", e);
                }
            } catch (Exception e) {
                LOGGER.warn("Exception:" + e.getMessage());
                throw new RuntimeException("FAIL in run", e);
            } finally {
                requestStop = false;
                LOGGER.warn(" run end ");
            }
        }
    
        public void finish() {
            requestStop = true;
            LOGGER.warn(" PlainSub in finish () =>");
            try {
                // ResourceManager will close the resource (cdb) used by this
                // instance that triggers ConfException with ErrorCode.ERR_EOF
                // in run method
                ResourceManager.unregisterResources(this);
            } catch (Exception e) {
                throw new RuntimeException("FAIL in finish", e);
            }
            LOGGER.warn(" PlainSub in finish () => ok");
        }
    
        private class Iter implements CdbDiffIterate  {
            public DiffIterateResultFlag iterate(ConfObject[] kp,
                                                 DiffIterateOperFlag op,
                                                 ConfObject oldValue,
                                                 ConfObject newValue,
                                                 Object state) {
                try {
                    String kpString = Conf.kpToString(kp);
                    LOGGER.info("diffIterate: kp= " + kpString + ", OP=" + op
                                + ", old_value=" + oldValue + ", new_value="
                                + newValue);
                    return DiffIterateResultFlag.ITER_RECURSE;
                } catch (Exception e) {
                    return DiffIterateResultFlag.ITER_CONTINUE;
                }
            }
        }
    }
    Example: Resource Annotation
        @Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
                  qualifier = "plain")
        private Cdb cdb;
    Example: Plain Subscriber Init
        public void init() {
            try {
                LOGGER.info(" init cdb subscriber ");
                sub = new CdbSubscription(cdb);
                String str = "/devices/device{ex0}/config";
                subId = sub.subscribe(1, new Ncs(), str);
                sub.subscribeDone();
                LOGGER.info("subscribeDone");
                requestStop = false;
            } catch (Exception e) {
                throw new RuntimeException("FAIL in init", e);
            }
        }
    Example: Plain CDB Subscriber
        public void run() {
            try {
                while (!requestStop) {
                    try {
                        sub.read();
                        sub.diffIterate(subId, new Iter());
                    } finally {
                        sub.sync(CdbSubscriptionSyncType.DONE_SOCKET);
                    }
                }
            } catch (ConfException e) {
                if (e.getErrorCode() == ErrorCode.ERR_EOF) {
                    // Triggered by finish method
                    // if we throw further NCS JVM will try to restart
                    // the package
                    LOGGER.warn(" Socket Closed!");
                } else {
                    throw new RuntimeException("FAIL in run", e);
                }
            } catch (Exception e) {
                LOGGER.warn("Exception:" + e.getMessage());
                throw new RuntimeException("FAIL in run", e);
            } finally {
                requestStop = false;
                LOGGER.warn(" run end ");
            }
        }
    Example: Plain Subscriber Iterator Implementation
        private class Iter implements CdbDiffIterate  {
            public DiffIterateResultFlag iterate(ConfObject[] kp,
                                                 DiffIterateOperFlag op,
                                                 ConfObject oldValue,
                                                 ConfObject newValue,
                                                 Object state) {
                try {
                    String kpString = Conf.kpToString(kp);
                    LOGGER.info("diffIterate: kp= " + kpString + ", OP=" + op
                                + ", old_value=" + oldValue + ", new_value="
                                + newValue);
                    return DiffIterateResultFlag.ITER_RECURSE;
                } catch (Exception e) {
                    return DiffIterateResultFlag.ITER_CONTINUE;
                }
            }
        }
    Example: Plain Subscriber finish
        public void finish() {
            requestStop = true;
            LOGGER.warn(" PlainSub in finish () =>");
            try {
                // ResourceManager will close the resource (cdb) used by this
                // instance that triggers ConfException with ErrorCode.ERR_EOF
                // in run method
                ResourceManager.unregisterResources(this);
            } catch (Exception e) {
                throw new RuntimeException("FAIL in finish", e);
            }
            LOGGER.warn(" PlainSub in finish () => ok");
        }
    Example: Plain Subscriber Startup
    $ make clean all
    $ ncs-netsim start
    DEVICE ex0 OK STARTED
    DEVICE ex1 OK STARTED
    DEVICE ex2 OK STARTED
    
    $ ncs
    Example: Populate Data using CLI
    $ ncs_cli -u admin
    admin connected from 127.0.0.1 using console on ncs
    admin@ncs# config exclusive
    Entering configuration mode exclusive
    Warning: uncommitted changes will be discarded on exit
    admin@ncs(config)# devices sync-from
    sync-result {
        device ex0
        result true
    }
    sync-result {
        device ex1
        result true
    }
    sync-result {
        device ex2
        result true
    }
    
    admin@ncs(config)# devices device ex0 config r:sys syslog server 4.5.6.7 enabled
    admin@ncs(config-server-4.5.6.7)# commit
    Commit complete.
    admin@ncs(config-server-4.5.6.7)# top
    admin@ncs(config)# exit
    admin@ncs# show devices device ex0 config r:sys syslog
    NAME
    ----------
    4.5.6.7
    10.3.4.5
    Example: Plain Subscriber Output
    <INFO> 05-Feb-2015::13:24:55,760  PlainCdbSub$Iter
      (cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
      kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7},
      OP=MOP_CREATED, old_value=null, new_value=null
    <INFO> 05-Feb-2015::13:24:55,761  PlainCdbSub$Iter
      (cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
      kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7}/name,
      OP=MOP_VALUE_SET, old_value=null, new_value=4.5.6.7
    <INFO> 05-Feb-2015::13:24:55,762  PlainCdbSub$Iter
      (cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
      kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7}/enabled,
      OP=MOP_VALUE_SET, old_value=null, new_value=true
    Example: Run CdbCfgSubscriber Example
    $ make clean all
    $ ncs-netsim start
    DEVICE ex0 OK STARTED
    DEVICE ex1 OK STARTED
    DEVICE ex2 OK STARTED
    
    $ ncs
    
    $ ncs_cli -u admin
    admin@ncs# devices sync-from suppress-positive-result
    admin@ncs# config
    admin@ncs(config)# no devices device ex* config r:sys interfaces
    admin@ncs(config)# devices device ex0 config r:sys interfaces \
    > interface en0 mac 3c:07:54:71:13:09 mtu 1500 duplex half unit 0 family inet \
    > address 192.168.1.115 broadcast 192.168.1.255 prefix-length 32
    admin@ncs(config-address-192.168.1.115)# commit
    Commit complete.
    admin@ncs(config-address-192.168.1.115)# top
    admin@ncs(config)# exit
    Example: Subscriber Output
    ...
    <INFO> 05-Feb-2015::16:10:23,346  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -  Device {ex0}
    <INFO> 05-Feb-2015::16:10:23,346  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -     INTERFACE
    <INFO> 05-Feb-2015::16:10:23,346  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       name: {en0}
    <INFO> 05-Feb-2015::16:10:23,346  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       description:null
    <INFO> 05-Feb-2015::16:10:23,350  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       speed:null
    <INFO> 05-Feb-2015::16:10:23,354  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       duplex:half
    <INFO> 05-Feb-2015::16:10:23,354  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       mtu:1500
    <INFO> 05-Feb-2015::16:10:23,354  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       mac:<<60,7,84,113,19,9>>
    <INFO> 05-Feb-2015::16:10:23,354  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -       UNIT
    <INFO> 05-Feb-2015::16:10:23,354  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -        name: {0}
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -        descripton: null
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -        vlan-id:null
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -         ADDRESS-FAMILY
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -           key: {192.168.1.115}
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -           prefixLength: 32
    <INFO> 05-Feb-2015::16:10:23,355  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -           broadCast:192.168.1.255
    <INFO> 05-Feb-2015::16:10:23,356  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -  Device {ex1}
    <INFO> 05-Feb-2015::16:10:23,356  ConfigCdbSub
     (cdb-examples:CdbCfgSubscriber)-Run-1: -  Device {ex2}
    Example: 1-cdb Simple Operational Data
        list stats-item {
          config false;
          tailf:cdb-oper;
          key skey;
          leaf skey {
            type string;
          }
          leaf i {
            type int32;
          }
          container inner {
            leaf  l {
              type string;
            }
          }
        }
    Example: Creating Operational Data using Navu API
        public static void createEntry(String key)
                throws  IOException, ConfException {
    
            Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
            Maapi maapi = new Maapi(socket);
            maapi.startUserSession("system", InetAddress.getByName(null),
                                   "system", new String[]{},
                                   MaapiUserSessionFlag.PROTO_TCP);
            NavuContext operContext = new NavuContext(maapi);
            int th = operContext.startOperationalTrans(Conf.MODE_READ_WRITE);
            NavuContainer mroot = new NavuContainer(operContext);
            LOGGER.debug("ROOT --> " + mroot);
    
            ConfNamespace ns = new test();
            NavuContainer testModule = mroot.container(ns.hash());
            NavuList list =  testModule.container("test").list("stats-item");
            LOGGER.debug("LIST: --> " + list);
    
            List<ConfXMLParam> param = new ArrayList<>();
            param.add(new ConfXMLParamValue(ns, "skey", new ConfBuf(key)));
            param.add(new ConfXMLParamValue(ns, "i",
                    new ConfInt32(key.hashCode())));
            param.add(new ConfXMLParamStart(ns, "inner"));
            param.add(new ConfXMLParamValue(ns, "l", new ConfBuf("test-" + key)));
            param.add(new ConfXMLParamStop(ns, "inner"));
            list.setValues(param.toArray(new ConfXMLParam[0]));
            maapi.applyTrans(th, false);
            maapi.finishTrans(th);
            maapi.endUserSession();
            socket.close();
        }
    Example: Deleting Operational Data using CDB API
        public static void deleteEntry(String key)
                throws IOException, ConfException {
            Socket s = new Socket("127.0.0.1", Conf.NCS_PORT);
            Cdb c = new Cdb("writer", s);
    
            CdbSession sess = c.startSession(CdbDBType.CDB_OPERATIONAL,
                                             EnumSet.of(CdbLockType.LOCK_REQUEST,
                                                        CdbLockType.LOCK_WAIT));
            ConfPath path = new ConfPath("/t:test/stats-item{%x}",
                                         new ConfKey(new ConfBuf(key)));
            sess.delete(path);
            sess.endSession();
            s.close();
        }
    Example: CDB Operational Subscriber Java code
    public class OperCdbSub implements ApplicationComponent, CdbDiffIterate {
        private static final Logger LOGGER = LogManager.getLogger(OperCdbSub.class);
    
        // let our ResourceManager inject Cdb sockets to us
        // no explicit creation of creating and opening sockets needed
        @Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
                  qualifier = "sub-sock")
        private Cdb cdbSub;
        @Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
                  qualifier = "data-sock")
        private Cdb cdbData;
    
        private boolean requestStop;
        private int point;
        private CdbSubscription cdbSubscription;
    
        public OperCdbSub() {
        }
    
        public void init() {
            LOGGER.info(" init oper subscriber ");
            try {
                cdbSubscription = cdbSub.newSubscription();
                String path = "/t:test/stats-item";
                point = cdbSubscription.subscribe(
                        CdbSubscriptionType.SUB_OPERATIONAL,
                        1, test.hash, path);
                cdbSubscription.subscribeDone();
                LOGGER.info("subscribeDone");
                requestStop = false;
            } catch (Exception e) {
                LOGGER.error("Fail in init", e);
            }
        }
    
        public void run() {
            try {
                while (!requestStop) {
                    try {
                        int[] points = cdbSubscription.read();
                        CdbSession cdbSession
                                = cdbData.startSession(CdbDBType.CDB_OPERATIONAL);
                        EnumSet<DiffIterateFlags> diffFlags
                                = EnumSet.of(DiffIterateFlags.ITER_WANT_PREV);
                        cdbSubscription.diffIterate(points[0], this, diffFlags,
                                                    cdbSession);
                        cdbSession.endSession();
                    } finally {
                        cdbSubscription.sync(
                                        CdbSubscriptionSyncType.DONE_OPERATIONAL);
                    }
                }
            } catch (Exception e) {
            LOGGER.error("Fail in run", e);
            }
            requestStop = false;
        }
    
        public void finish() {
            requestStop = true;
            try {
                ResourceManager.unregisterResources(this);
            } catch (Exception e) {
                LOGGER.error("Fail in finish", e);
            }
        }
    
        @Override
        public DiffIterateResultFlag iterate(ConfObject[] kp,
                                             DiffIterateOperFlag op,
                                             ConfObject oldValue,
                                             ConfObject newValue,
                                             Object initstate) {
            LOGGER.info(op + " " + Arrays.toString(kp) + " value: " + newValue);
            switch (op) {
                case MOP_DELETED:
                    break;
                case MOP_CREATED:
                case MOP_MODIFIED: {
                    break;
                }
                default:
                    break;
            }
            return DiffIterateResultFlag.ITER_RECURSE;
        }
    }
    Example: Populating Operational Data
    $ make clean all
    $ ncs
    $ ./setoper eth0
    $ ./setoper ethX
    $ ./deloper ethX
    $ ncs_cli -u admin
    
    admin@ncs# show test
    SKEY  I        L
    --------------------------
    eth0  3123639  test-eth0
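The `i` value shown above is populated with `key.hashCode()` in the `createEntry` method, so `eth0` maps to `3123639`. This standalone check reproduces the values seen in the CLI and subscriber output:

```java
// The stats-item leaf `i` is set from key.hashCode() in createEntry.
// String.hashCode() is deterministic (s[0]*31^(n-1) + ... + s[n-1]),
// so the values in the example output can be verified directly.
public class KeyHash {
    public static void main(String[] args) {
        System.out.println("eth0".hashCode()); // 3123639
        System.out.println("ethX".hashCode()); // 3123679
    }
}
```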
    Example: Operational Subscription Output
    <INFO> 05-Feb-2015::16:27:46,583  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_CREATED [{eth0}, t:stats-item, t:test] value: null
    <INFO> 05-Feb-2015::16:27:46,584  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:skey, {eth0}, t:stats-item, t:test] value: eth0
    <INFO> 05-Feb-2015::16:27:46,584  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:l, t:inner, {eth0}, t:stats-item, t:test] value: test-eth0
    <INFO> 05-Feb-2015::16:27:46,585  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:i, {eth0}, t:stats-item, t:test] value: 3123639
    <INFO> 05-Feb-2015::16:27:52,429  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_CREATED [{ethX}, t:stats-item, t:test] value: null
    <INFO> 05-Feb-2015::16:27:52,430  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:skey, {ethX}, t:stats-item, t:test] value: ethX
    <INFO> 05-Feb-2015::16:27:52,430  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:l, t:inner, {ethX}, t:stats-item, t:test] value: test-ethX
    <INFO> 05-Feb-2015::16:27:52,431  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_VALUE_SET [t:i, {ethX}, t:stats-item, t:test] value: 3123679
    <INFO> 05-Feb-2015::16:28:00,669  OperCdbSub
     (cdb-examples:OperSubscriber)-Run-0:
     - MOP_DELETED [{ethX}, t:stats-item, t:test] value: null
    Example: Enabling Developer Logging
        <developer-log>
          <enabled>true</enabled>
          <file>
            <name>./logs/devel.log</name>
            <enabled>true</enabled>
          </file>
          <syslog>
            <enabled>true</enabled>
          </syslog>
        </developer-log>
        <developer-log-level>trace</developer-log-level>
    Example: New YANG Module for the ServerManager Package
    module servers {
      namespace "http://example.com/ns/servers";
      prefix servers;
    
      import ietf-inet-types {
        prefix inet;
      }
    
      revision "2007-06-01" {
          description "added protocol.";
      }
    
      revision "2006-09-01" {
          description "Initial servers data model";
      }
    
      /*  A set of server structures  */
      container servers {
        list server {
          key name;
          max-elements 64;
          leaf name {
            type string;
          }
          leaf ip {
            type inet:ip-address;
            mandatory true;
          }
          leaf port {
            type inet:port-number;
            mandatory true;
          }
          leaf protocol {
            type enumeration {
                enum tcp;
                enum udp;
            }
            mandatory true;
          }
        }
      }
    }
    Example: Difference between YANG Modules
$ diff ../servers1.5.yang ../servers1.4.yang
    
    9,12d8
    <   revision "2007-06-01" {
    <       description "added protocol.";
    <   }
    <
    31,37d26
    <         mandatory true;
    <       }
    <       leaf protocol {
    <         type enumeration {
    <             enum tcp;
    <             enum udp;
    <         }
    Example: Protocol Upgrade Init File
    <servers:servers xmlns:servers="http://example.com/ns/servers">
      <servers:server>
        <servers:name>www</servers:name>
        <servers:ip>192.168.3.4</servers:ip>
        <servers:port>88</servers:port>
        <servers:protocol>tcp</servers:protocol>
      </servers:server>
      <servers:server>
        <servers:name>www2</servers:name>
        <servers:ip>192.168.3.5</servers:ip>
        <servers:port>80</servers:port>
        <servers:protocol>tcp</servers:protocol>
      </servers:server>
      <servers:server>
        <servers:name>smtp</servers:name>
        <servers:ip>192.168.3.4</servers:ip>
        <servers:port>25</servers:port>
        <servers:protocol>tcp</servers:protocol>
      </servers:server>
      <servers:server>
        <servers:name>dns</servers:name>
        <servers:ip>192.168.3.5</servers:ip>
        <servers:port>53</servers:port>
        <servers:protocol>udp</servers:protocol>
      </servers:server>
    </servers:servers>
    Example: Configuration After Upgrade
        <servers xmlns="http://example.com/ns/servers">
          <server>
            <name>dns</name>
            <ip>192.168.3.5</ip>
            <port>53</port>
            <protocol>udp</protocol>
          </server>
          <server>
            <name>www</name>
            <ip>192.168.3.4</ip>
            <port>88</port>
            <protocol>tcp</protocol>
          </server>
          <server>
            <name>www2</name>
            <ip>192.168.3.5</ip>
            <port>80</port>
            <protocol>tcp</protocol>
          </server>
        </servers>
    Example: Upgrade Package Components
    <ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
        ....
      <component>
        <name>do-upgrade</name>
        <upgrade>
          <java-class-name>com.example.DoUpgrade</java-class-name>
        </upgrade>
      </component>
    </ncs-package>
    Example: VLAN Service v2 YANG Model
    module vlan-service {
      namespace "http://example.com/vlan-service";
      prefix vl;
    
      import tailf-common {
        prefix tailf;
      }
      import tailf-ncs {
        prefix ncs;
      }
    
      description
        "This service creates a vlan iface/unit on all routers in our network. ";
    
      revision 2013-08-30 {
        description
          "Added mandatory leaf global-id.";
      }
      revision 2013-01-08 {
        description
          "Initial revision.";
      }
    
      augment /ncs:services {
        list vlan {
          key name;
          leaf name {
            tailf:info "Unique service id";
            tailf:cli-allow-range;
            type string;
          }
    
          uses ncs:service-data;
          ncs:servicepoint vlanspnt_v2;
    
          tailf:action self-test {
            tailf:info "Perform self-test of the service";
            tailf:actionpoint vlanselftest;
            output {
              leaf success {
                type boolean;
              }
              leaf message {
                type string;
                description
                  "Free format message.";
              }
            }
          }
    
          leaf global-id {
            type string;
            mandatory true;
          }
          leaf iface {
            type string;
            mandatory true;
          }
          leaf unit {
            type int32;
            mandatory true;
          }
          leaf vid {
            type uint16;
            mandatory true;
          }
          leaf description {
            type string;
            mandatory true;
          }
        }
      }
    }
    Example: YANG Service diff
    $ diff vlan/src/yang/vlan-service.yang \
                         vlan_v2/src/yang/vlan-service.yang
    16a18,22
    >   revision 2013-08-30 {
    >     description
    >       "Added mandatory leaf global-id.";
    >   }
    >
    48a55,58
    >       leaf global-id {
    >         type string;
    >         mandatory true;
    >       }
    68c78
    Example: VLAN Service Upgrade Component Java Class
    public class UpgradeService {
    
        public UpgradeService() {
        }
    
        public static void main(String[] args) throws Exception {
            Socket s1 = new Socket("localhost", Conf.NCS_PORT);
            Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
            cdb.setUseForCdbUpgrade();
            CdbUpgradeSession cdbsess =
                cdb.startUpgradeSession(
                        CdbDBType.CDB_RUNNING,
                        EnumSet.of(CdbLockType.LOCK_SESSION,
                                   CdbLockType.LOCK_WAIT));
    
    
            Socket s2 = new Socket("localhost", Conf.NCS_PORT);
            Maapi maapi = new Maapi(s2);
            int th = maapi.attachInit();
    
            int no = cdbsess.getNumberOfInstances("/services/vlan");
            for(int i = 0; i < no; i++) {
                Integer offset = Integer.valueOf(i);
                ConfBuf name = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
                                                        offset);
                ConfBuf iface = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/iface",
                                                        offset);
                ConfInt32 unit =
                    (ConfInt32)cdbsess.getElem("/services/vlan[%d]/unit",
                                               offset);
                ConfUInt16 vid =
                    (ConfUInt16)cdbsess.getElem("/services/vlan[%d]/vid",
                                                offset);
    
                String nameStr = name.toString();
                System.out.println("SERVICENAME = " + nameStr);
    
                String globId = String.format("%1$s-%2$s-%3$s", iface.toString(),
                                              unit.toString(), vid.toString());
                ConfPath gidpath = new ConfPath("/services/vlan{%s}/global-id",
                                                name.toString());
                maapi.setElem(th, new ConfBuf(globId), gidpath);
            }
    
            s1.close();
            s2.close();
        }
    }
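The derivation of `global-id` in the upgrade class above is a plain dash-joined concatenation of `iface`, `unit`, and `vid`. A standalone sketch of just that step (class and method names here are illustrative only):

```java
// Sketch of the global-id derivation used by the upgrade class:
// iface, unit and vid are joined with dashes via positional
// format arguments, mirroring String.format("%1$s-%2$s-%3$s", ...).
public class GlobalId {
    public static String globalId(String iface, String unit, String vid) {
        return String.format("%1$s-%2$s-%3$s", iface, unit, vid);
    }

    public static void main(String[] args) {
        // A vlan instance with iface "eth0", unit 1, vid 77 gets:
        System.out.println(globalId("eth0", "1", "77")); // eth0-1-77
    }
}
```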
    Example: Upgrade Init
            Socket s1 = new Socket("localhost", Conf.NCS_PORT);
            Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
            cdb.setUseForCdbUpgrade();
            CdbUpgradeSession cdbsess =
                cdb.startUpgradeSession(
                        CdbDBType.CDB_RUNNING,
                        EnumSet.of(CdbLockType.LOCK_SESSION,
                                   CdbLockType.LOCK_WAIT));
    Example: Upgrade Get Transaction
            Socket s2 = new Socket("localhost", Conf.NCS_PORT);
            Maapi maapi = new Maapi(s2);
            int th = maapi.attachInit();
            int no = cdbsess.getNumberOfInstances("/services/vlan");
        for(int i = 0; i < no; i++) {
            Integer offset = Integer.valueOf(i);
                ConfBuf name = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
                                                        offset);
                ConfBuf iface = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/iface",
                                                        offset);
                ConfInt32 unit =
                    (ConfInt32)cdbsess.getElem("/services/vlan[%d]/unit",
                                               offset);
                ConfUInt16 vid =
                    (ConfUInt16)cdbsess.getElem("/services/vlan[%d]/vid",
                                                offset);
                String globId = String.format("%1$s-%2$s-%3$s", iface.toString(),
                                              unit.toString(), vid.toString());
                ConfPath gidpath = new ConfPath("/services/vlan{%s}/global-id",
                                                name.toString());
            maapi.setElem(th, new ConfBuf(globId), gidpath);
        }
        s1.close();
        s2.close();
    Example: Tunnel Service YANG Model
    module tunnel-service {
      namespace "http://example.com/tunnel-service";
      prefix tl;
    
      import tailf-common {
        prefix tailf;
      }
      import tailf-ncs {
        prefix ncs;
      }
    
      description
        "This service creates a tunnel assembly on all routers in our network. ";
    
      revision 2013-01-08 {
        description
          "Initial revision.";
      }
    
      augment /ncs:services {
        list tunnel {
          key tunnel-name;
          leaf tunnel-name {
            tailf:info "Unique service id";
            tailf:cli-allow-range;
            type string;
          }
    
          uses ncs:service-data;
          ncs:servicepoint tunnelspnt;
    
          tailf:action self-test {
            tailf:info "Perform self-test of the service";
            tailf:actionpoint tunnelselftest;
            output {
              leaf success {
                type boolean;
              }
              leaf message {
                type string;
                description
                  "Free format message.";
              }
            }
          }
    
          leaf gid {
            type string;
            mandatory true;
          }
          leaf interface {
            type string;
            mandatory true;
          }
          leaf assembly {
            type int32;
            mandatory true;
          }
          leaf tunnel-id {
            type uint16;
            mandatory true;
          }
          leaf descr {
            type string;
            mandatory true;
          }
        }
      }
    }
    Example: Tunnel Service Upgrade Java Class
    public class UpgradeService {
    
        public UpgradeService() {
        }
    
        public static void main(String[] args) throws Exception {
            ArrayList<ConfNamespace> nsList = new ArrayList<ConfNamespace>();
            nsList.add(new vlanService());
            Socket s1 = new Socket("localhost", Conf.NCS_PORT);
            Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
            cdb.setUseForCdbUpgrade(nsList);
            CdbUpgradeSession cdbsess =
                cdb.startUpgradeSession(
                        CdbDBType.CDB_RUNNING,
                        EnumSet.of(CdbLockType.LOCK_SESSION,
                                   CdbLockType.LOCK_WAIT));
    
    
            Socket s2 = new Socket("localhost", Conf.NCS_PORT);
            Maapi maapi = new Maapi(s2);
            int th = maapi.attachInit();
    
            int no = cdbsess.getNumberOfInstances("/services/vlan");
            for(int i = 0; i < no; i++) {
                ConfBuf name =(ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
                                                       Integer.valueOf(i));
                String nameStr = name.toString();
                System.out.println("SERVICENAME = " + nameStr);
    
                ConfCdbUpgradePath oldPath =
                    new ConfCdbUpgradePath("/ncs:services/vl:vlan{%s}",
                                           name.toString());
                ConfPath newPath = new ConfPath("/services/tunnel{%x}", name);
                maapi.create(th, newPath);
    
                ConfXMLParam[] oldparams = new ConfXMLParam[] {
                    new ConfXMLParamLeaf("vl", "global-id"),
                    new ConfXMLParamLeaf("vl", "iface"),
                    new ConfXMLParamLeaf("vl", "unit"),
                    new ConfXMLParamLeaf("vl", "vid"),
                    new ConfXMLParamLeaf("vl", "description"),
                };
                ConfXMLParam[] data =
                    cdbsess.getValues(oldparams, oldPath);
    
                ConfXMLParam[] newparams = new ConfXMLParam[] {
                    new ConfXMLParamValue("tl", "gid",       data[0].getValue()),
                    new ConfXMLParamValue("tl", "interface", data[1].getValue()),
                    new ConfXMLParamValue("tl", "assembly",  data[2].getValue()),
                    new ConfXMLParamValue("tl", "tunnel-id", data[3].getValue()),
                    new ConfXMLParamValue("tl", "descr",     data[4].getValue()),
                };
                maapi.setValues(th, newparams, newPath);
    
                maapi.ncsMovePrivateData(th, oldPath, newPath);
            }
    
            s1.close();
            s2.close();
        }
    }
    If the generated namespace class for the old model is not available, a ConfNamespaceStub carrying the namespace hash, URI, and prefix can be registered instead of the vlanService class:

        nsList.add(new ConfNamespaceStub(500805321,
                                         "http://example.com/vlan-service",
                                         "http://example.com/vlan-service",
                                         "vl"));
  • RFC 5717: Partial Lock Remote Procedure Call (RPC) for NETCONF

  • RFC 6020: YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)

  • RFC 6021: Common YANG Data Types

  • RFC 6022: YANG Module for NETCONF Monitoring

  • RFC 6241: Network Configuration Protocol (NETCONF)

  • RFC 6242: Using the NETCONF Configuration Protocol over Secure Shell (SSH)

  • RFC 6243: With-defaults capability for NETCONF

  • RFC 6470: NETCONF Base Notifications

  • RFC 6536: NETCONF Access Control Model

  • RFC 6991: Common YANG Data Types

  • RFC 7895: YANG Module Library

  • RFC 7950: The YANG 1.1 Data Modeling Language

  • RFC 8071: NETCONF Call Home and RESTCONF Call Home

  • RFC 8342: Network Management Datastore Architecture (NMDA)

  • RFC 8525: YANG Library

  • RFC 8528: YANG Schema Mount

  • RFC 8526: NETCONF Extensions to Support the Network Management Datastore Architecture

  • RFC 8639: Subscription to YANG Notifications

  • RFC 8640: Dynamic Subscription to YANG Events and Datastores over NETCONF

  • RFC 8641: Subscription to YANG Notifications for Datastore Updates

  • For the <delete-config> operation specified in RFC 4741 / RFC 6241, only <url> with scheme file is supported for the <target> parameter - i.e. no data stores can be deleted. The concept of deleting a data store is not well defined and is at odds with the transaction-based configuration management of NSO. To delete the entire contents of a data store, with full transactional support, a <copy-config> with an empty <config/> element for the <source> parameter can be used.
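    Such a <copy-config> request could look like the following sketch (the running datastore is assumed as the target for illustration):

    ```xml
    <rpc message-id="101"
         xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <copy-config>
        <target><running/></target>
        <!-- an empty <config/> element as source deletes the entire
             contents of the target datastore, transactionally -->
        <source><config/></source>
      </copy-config>
    </rpc>
    ```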

    For the <partial-lock> operation, RFC 5717, section 2.4.1 says that if a node in the scope of the lock is deleted by the session owning the lock, it is removed from the scope of the lock. In NSO this is not true; the deleted node is kept in the scope of the lock.

    NSO NETCONF northbound API can be used by arbitrary NETCONF clients. A simple Python-based NETCONF client called netconf-console is shipped as source code in the distribution. See Using netconf-console for details. Other NETCONF clients will work too, as long as they adhere to the NETCONF protocol. If you need a Java client, the open-source client JNC can be used.

    When integrating NSO into larger OSS/NMS environments, the NETCONF API is a good choice of integration point.

    Protocol Capabilities

    The NETCONF server in NSO supports the following capabilities in both NETCONF 1.0 (RFC 4741) and NETCONF 1.1 (RFC 6241).

    Capability
    Description

    :writable-running

    This capability is always advertised.

    :candidate

    Not supported by NSO.

    :confirmed-commit

    Not supported by NSO.

    :rollback-on-error

    This capability allows the client to set the <error-option> parameter to rollback-on-error. The other permitted values are stop-on-error (the default) and continue-on-error. Note that the meaning of the word "error" in this context is not defined in the specification; instead, it must be defined by the data model. Also note that if stop-on-error or continue-on-error is triggered by the server, some parts of the edit operation succeeded and some parts did not. The error partial-operation must be returned in this case; however, partial-operation is obsolete and should not be returned by a server. If some other error occurs (i.e., an error not covered by the meaning of "error" above), the server generates an appropriate error message, and the data store is unaffected by the operation. The NSO server never allows partial configuration changes, since they might result in inconsistent configurations, and recovery from such a state can be very difficult for a client. This means that regardless of the value of the <error-option> parameter, NSO behaves as if it was set to rollback-on-error.

    :validate

    NSO supports both version 1.0 and 1.1 of this capability.

    :startup

    Not supported by NSO.

    The following list of optional standard capabilities is also supported:

    Capability
    Description

    :notification

    NSO implements the urn:ietf:params:netconf:capability:notification:1.0 capability, including support for the optional replay feature. See Notification Capability for details.

    :with-defaults

    NSO implements the urn:ietf:params:netconf:capability:with-defaults:1.0 capability, which is used by the server to inform the client how default values are handled by the server, and by the client to control whether default values should be included in replies or not.

    If the capability is enabled, NSO also implements the urn:ietf:params:netconf:capability:with-operational-defaults:1.0 capability, which targets the operational state datastore while the :with-defaults capability targets configuration data stores.

    :yang-library:1.0

    NSO implements the urn:ietf:params:netconf:capability:yang-library:1.0 capability, which informs the client that the server implements the YANG module library (RFC 7895), and informs the client about the current module-set-id.

    :yang-library:1.1

    NSO implements the urn:ietf:params:netconf:capability:yang-library:1.1 capability, which informs the client that the server implements the YANG library (RFC 8525), and informs the client about the current content-id.

    Protocol YANG Modules

    In addition to the protocol capabilities listed above, NSO also implements a set of YANG modules that are closely related to the protocol.

    • ietf-netconf-nmda: This module from RFC 8526 defines the NMDA extension to NETCONF. It defines the following features:

    • origin: Indicates that the server supports the origin annotation. It is not advertised by default. The support for origin can be enabled in ncs.conf (see ncs.conf(5) in Manual Pages ). If it is enabled, the origin feature is advertised.

    • with-defaults: Advertised if the server supports the :with-defaults capability, which NSO does.

    • ietf-subscribed-notifications: This module from RFC 8639 defines operations, configuration data nodes, and operational state data nodes related to notification subscriptions. It defines the following features:

    • configured: Indicates that the server supports configured subscriptions. This feature is not advertised.

    • dscp: Indicates that the server supports the ability to set the Differentiated Services Code Point (DSCP) value in outgoing packets. This feature is not advertised.

    • encode-json: Indicates that the server supports JSON encoding of notifications. This is not applicable to NETCONF, and this feature is not advertised.

    • encode-xml: Indicates that the server supports XML encoding of notifications. This feature is advertised by NSO.

    • interface-designation: Indicates that a configured subscription can be configured to send notifications over a specific interface. This feature is not advertised.

    • qos: Indicates that a publisher supports absolute dependencies of one subscription's traffic over another as well as weighted bandwidth sharing between subscriptions. This feature is not advertised.

    • replay: Indicates that historical event record replay is supported. This feature is advertised by NSO.

    • subtree: Indicates that the server supports subtree filtering of notifications. This feature is advertised by NSO.

    • supports-vrf: Indicates that a configured subscription can be configured to send notifications from a specific VRF. This feature is not advertised.

    • xpath: Indicates that the server supports XPath filtering of notifications. This feature is advertised by NSO.

    In addition to this, NSO does not support pre-configuration or monitoring of subtree filters, and thus advertises a deviation module that deviates /filters/stream-filter/filter-spec/stream-subtree-filter and /subscriptions/subscription/target/stream/stream-filter/within-subscription/filter-spec/stream-subtree-filter as "not-supported".

    NSO does not generate subscription-modified notifications when the parameters of a subscription change, and there is currently no mechanism to suspend notifications so subscription-suspended and subscription-resumed notifications are never generated.

    There is basic support for monitoring subscriptions via the /subscriptions container. Currently, it is possible to view dynamic subscriptions' attributes: subscription-id, stream, encoding, receiver, stop-time, and stream-xpath-filter. Unsupported attributes are: stream-subtree-filter, receiver/sent-event-records, receiver/excluded-event-records, and receiver/state.

    • ietf-yang-push: This module from RFC 8641 extends operations, data nodes, and operational state defined in ietf-subscribed-notifications; and also introduces continuous and customizable notification subscriptions for updates from running and operational datastores. It defines the same features as ietf-subscribed-notifications and also the following feature:

      • on-change: Indicates that on-change triggered notifications are supported. This feature is advertised by NSO but only supported on the running datastore.

    In addition to this, NSO does not support pre-configuration or monitoring of subtree filters and thus advertises a deviation module that deviates /filters/selection-filter/filter-spec/datastore-subtree-filter and /subscriptions/subscription/target/datastore/selection-filter/within-subscription/filter-spec/datastore-subtree-filter as "not-supported".

    The monitoring of subscriptions via the subscriptions container currently does not support the following attributes: periodic/period, periodic/state, on-change/dampening-period, on-change/sync-on-start, and on-change/excluded-change.

    Advertising Capabilities and YANG Modules

    All enabled NETCONF capabilities are advertised in the hello message that the server sends to the client.

    A YANG module is supported by the NETCONF server if its fxs file is found in NSO's loadPath, and if the fxs file is exported to NETCONF.

    The following YANG modules are built-in, which means that their fxs files need not be present in the loadPath. If they are found in the loadPath they are skipped.

    • ietf-netconf

    • ietf-netconf-with-defaults

    • ietf-yang-library

    • ietf-yang-types

    • ietf-inet-types

    • ietf-restconf

    • ietf-datastores

    • ietf-yang-patch

    All built-in modules are always supported by the server.

    All YANG version 1 modules supported by the server are advertised in the hello message, according to the rules defined in RFC 6020.

    All YANG version 1 and version 1.1 modules supported by the server are advertised in the YANG library.

    If a YANG module (any version) is supported by the server, and its .yang or .yin file is found in the fxs file or in the loadPath, then the module is also advertised in the schema list defined in ietf-netconf-monitoring, made available for download with the RPC operation get-schema, and if RESTCONF is enabled, also advertised in the schema leaf in ietf-yang-library. See Monitoring of the NETCONF Server.

    Advertising Device YANG Modules

    NSO uses YANG Schema Mount to mount the data models for the devices. There are two mount points, one for the configuration (in /devices/device/config), and one for operational state data (in /devices/device/live-status). As defined in YANG Schema Mount, a client can read the module list from the YANG library in each of these mount points to learn which YANG models each device supports via NSO.

    For example, to get the YANG library data for the device x0, we can do:
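    For instance, using the netconf-console client described later in this section, an XPath-filtered <get> along these lines could be used (the exact path and flag syntax is an assumption):

    ```
    $ netconf-console --get -x '/devices/device[name="x0"]/config/yang-library'
    ```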

    The set of modules reported for a device is the set of modules that NSO knows, i.e., the set of modules compiled for the specific device type. This means that all devices of the same device type will report the same set of modules. Also, note that the device may support other modules that are not known to NSO. Such modules are not reported here.

    NETCONF Transport Protocols

    The NETCONF server natively supports the mandatory SSH transport, i.e., SSH is supported without the need for an external SSH daemon (such as sshd). It also supports integration with OpenSSH.

    Using OpenSSH

    NSO is delivered with a program netconf-subsys which is an OpenSSH subsystem program. It is invoked by the OpenSSH daemon after successful authentication. It functions as a relay between the ssh daemon and NSO; it reads data from the ssh daemon from standard input and writes the data to NSO over a loopback socket, and vice versa. This program is delivered as source code in $NCS_DIR/src/ncs/netconf/netconf-subsys.c. It can be modified to fit the needs of the application. For example, it could be modified to read the group names for a user from an external LDAP server.

    When using OpenSSH, the users are authenticated by OpenSSH, i.e., the user names are not stored in NSO. To use OpenSSH, compile the netconf-subsys program, and put the executable in e.g. /usr/local/bin. Then add the following line to the ssh daemon's config file, sshd_config:
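    Assuming the executable was placed in /usr/local/bin as suggested above, the sshd_config line would look like this:

    ```
    Subsystem netconf /usr/local/bin/netconf-subsys
    ```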

    The connection from netconf-subsys to NSO can be arranged in one of two different ways:

    1. Make sure NSO is configured to listen to TCP traffic on localhost, port 2023, and disable SSH in ncs.conf (see ncs.conf(5) in Manual Pages ). (Re)start sshd and NSO. Or:

    2. Compile netconf-subsys to use a connection to the IPC port instead of the NETCONF TCP transport (see the netconf-subsys.c source for details), and disable both TCP and SSH in ncs.conf. (Re)start sshd and NSO. This method may be preferable since it makes it possible to use the IPC Access Check (see Restricting Access to the IPC Port) to restrict the unauthenticated access to NSO that is needed by netconf-subsys.

    By default, the netconf-subsys program sends the names of the UNIX groups the authenticated user belongs to. To test this, make sure that NSO is configured to give access to the group(s) the user belongs to. The easiest way to test is to give access to all groups.

    Configuring the NETCONF Server

    NSO itself is configured through a configuration file called ncs.conf. For a description of the parameters in this file, please see the ncs.conf(5) in Manual Pages man page.

    Error Handling

    When NSO processes <get>, <get-config>, and <copy-config> requests, the resulting data set can be very large. To avoid buffering huge amounts of data, NSO streams the reply to the client as it traverses the data tree and calls data provider functions to retrieve the data.

    If a data provider fails to return the data it is supposed to return, NSO can take one of two actions. Either it simply closes the NETCONF transport (default), or it can reply with an inline RPC error and continue to process the next data element. This behavior can be controlled with the /ncs-config/netconf/rpc-errors configuration parameter (see ncs.conf(5) in Manual Pages).

    An inline error is always generated as a child element to the parent of the faulty element. For example, if an error occurs when retrieving the leaf element mac-address of an interface the error might be:
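    As an illustration (not a verbatim server reply), such an inline error could appear as follows, assuming the mac-address leaf could not be retrieved:

    ```xml
    <interface>
      <name>eth0</name>
      <rpc-error>
        <error-type>application</error-type>
        <error-tag>operation-failed</error-tag>
        <error-severity>error</error-severity>
        <error-message xml:lang="en">Failed to read mac-address</error-message>
      </rpc-error>
    </interface>
    ```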

    If a get_next call fails in the processing of a list, a reply might look like this:
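    Again as an illustration only, the inline error is generated as a child of the list's parent element, after the entries retrieved so far:

    ```xml
    <interfaces>
      <interface>
        <name>eth0</name>
      </interface>
      <rpc-error>
        <error-type>application</error-type>
        <error-tag>operation-failed</error-tag>
        <error-severity>error</error-severity>
        <error-message xml:lang="en">Failed to retrieve next list entry</error-message>
      </rpc-error>
    </interfaces>
    ```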

    Using netconf-console

    The netconf-console program is a simple NETCONF client. It is delivered as Python source code and can be used as-is or modified.

    When NSO has been started, we can use netconf-console to query the configuration of the NETCONF Access Control groups:
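    A command along these lines retrieves the group configuration (flags assumed from the netconf-console source; see the -x discussion below):

    ```
    $ netconf-console --get-config -x /nacm/groups
    ```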

    With the -x flag an XPath expression can be specified, to retrieve only data matching that expression. This is a very convenient way to extract portions of the configuration from the shell or from shell scripts.

    Monitoring the NETCONF Server

    RFC 6022 - YANG Module for NETCONF Monitoring defines a YANG module, ietf-netconf-monitoring, for monitoring of the NETCONF server. It contains statistics objects such as the number of RPCs received, status objects such as user sessions, and an operation to retrieve data models from the NETCONF server.

    This data model defines an RPC operation, get-schema, which is used to retrieve YANG modules from the NETCONF server. NSO will report the YANG modules for all fxs files that are reported as capabilities, and for which the corresponding YANG or YIN file is stored in the fxs file or found in the loadPath. If a file is found in the loadPath, it has priority over a file stored in the fxs file. Note that by default, the module and its submodules are stored in the fxs file by the compiler.

    If the YANG (or YIN) files are copied into the loadPath, they can be stored as is or compressed with gzip. The filename extension MUST be .yang, .yin, .yang.gz, or .yin.gz.

    Also available is a Tail-f-specific data model, tailf-netconf-monitoring, which augments ietf-netconf-monitoring with additional data about files available for usage with the <copy-config> command with a file <url> source or target. /ncs-config/netconf-north-bound/capabilities/url/enabled and /ncs-config/netconf-north-bound/capabilities/url/file/enabled must both be set to true. If rollbacks are enabled, those files are listed as well, and they can be loaded using <copy-config>.

    This data model also adds data about which notification streams are present in the system and data about sessions that subscribe to the streams.

    Notification Capability

    This section describes how NETCONF notifications are implemented within NSO, and how the applications generate these events.

    Central to NETCONF notifications is the concept of a stream. The stream serves two purposes. It works like a high-level filtering mechanism for the client. For example, if the client subscribes to notifications on the security stream, it can expect to get security-related notifications only. Second, each stream may have its own log mechanism. For example, by keeping all debug notifications in a debug stream, they can be logged separately from the security stream.

    Built-in Notification Streams

    NSO has built-in support for the well-known stream NETCONF, defined in RFC 5277 and RFC 8639. NSO supports the notifications defined in RFC 6470 - NETCONF Base Notifications on this stream. If the application needs to send any additional notifications on this stream, it can do so.

    NSO can be configured to listen to notifications from devices and send those notifications to northbound NETCONF clients. The stream device-notifications is used for this purpose. To enable this, the stream device-notifications must be configured in ncs.conf, and additionally, subscriptions must be created in /ncs:devices/device/notifications.

    Defining Notification Streams

    It is up to the application to define which streams it supports. In NSO, this is done in ncs.conf (see ncs.conf(5) in Manual Pages). Each stream must be listed, and whether it supports replay or not. The following example enables the built-in stream device-notifications with replay support, and an additional, application-specific stream debug without replay support:
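    A sketch of such an ncs.conf fragment (element names follow the notifications/event-streams schema of ncs.conf; the directory and size limits are placeholders):

    ```xml
    <notifications>
      <event-streams>
        <stream>
          <name>device-notifications</name>
          <description>Notifications forwarded from managed devices</description>
          <replay-support>true</replay-support>
          <builtin-replay-store>
            <enabled>true</enabled>
            <dir>./state</dir>
            <max-size>S1M</max-size>
            <max-files>5</max-files>
          </builtin-replay-store>
        </stream>
        <stream>
          <name>debug</name>
          <description>Application debug notifications</description>
          <replay-support>false</replay-support>
        </stream>
      </event-streams>
    </notifications>
    ```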

    The well-known stream NETCONF does not have to be listed, but if it isn't listed, it will not support replay.

    Automatic Replay

    NSO has built-in support for logging of notifications, i.e., if replay support has been enabled for a stream, NSO automatically stores all notifications on disk ready to be replayed should a NETCONF client ask for logged notifications. In the ncs.conf fragment above, the device-notifications stream has been set up to use the built-in notification log/replay store. The replay store uses a set of wrapping log files on disk (of a certain number and size) to store the notifications of that stream.

    The reason for using a wrap log is to improve replay performance whenever a NETCONF client asks for notifications in a certain time range. Any problems with log files not being properly closed due to hard power failures etc. are also kept to a minimum, i.e., automatically taken care of by NSO.

    Subscribed Notifications

    This section describes how Subscribed Notifications are implemented for NETCONF within NSO.

    Subscribed Notifications is defined in RFC 8639 and the NETCONF transport binding is defined in RFC 8640. Subscribed Notifications build upon NETCONF notifications defined in RFC 5277 and have a number of key improvements:

    • Multiple subscriptions on a single transport session

    • Support for dynamic and configured subscriptions

    • Modification of an existing subscription in progress

    • Per-subscription operational counters

    • Negotiation of subscription parameters (through the use of hints returned as part of declined subscription requests)

    • Subscription state change notifications (e.g., publisher-driven suspension, parameter modification)

    • Independence from transport

    Compatibility with NETCONF Notifications

    Both NETCONF notifications and Subscribed Notifications can be used at the same time and are configured the same way in ncs.conf. However, there are some differences and limitations.

    For Subscribed Notifications, a new subscription is requested by invoking the RPC establish-subscription. For NETCONF notifications, the corresponding RPC is create-subscription.

    A NETCONF session can have subscriptions started with create-subscription or subscriptions established with establish-subscription, but not both at the same time.

    • If a session has subscribers established with establish-subscription and receives a request to create subscriptions with create-subscription, an <rpc-error> is sent containing <error-tag> operation-not-supported.

    • If a session has subscribers created with create-subscription and receives a request to establish subscriptions with establish-subscription, an <rpc-error> is sent containing <error-tag> operation-not-supported.

    Dynamic subscriptions send all notifications on the transport session where they were established.

    Monitoring Subscriptions

    Existing subscriptions and their configuration can be found in the /subscriptions container.

    For example, for viewing all established subscriptions, we can do:
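    With the netconf-console client, for example, this could be done with an XPath-filtered <get> (command syntax assumed):

    ```
    $ netconf-console --get -x /subscriptions
    ```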

    Limitations

    It is not possible to establish a subscription with a stored filter from /filters.

    The support for monitoring subscriptions has basic functionality. It is possible to read subscription-id, stream, stream-xpath-filter, replay-start-time, stop-time, encoding, receivers/receiver/name, and receivers/receiver/state.

    The leaf stream-subtree-filter is deviated as "not-supported", hence can not be read.

    The unsupported leafs in the subscriptions container are the following: stream-subtree-filter, receiver/sent-event-records, and receiver/excluded-event-records.

    YANG-Push

    This section describes how YANG-Push is implemented for NETCONF within NSO.

    YANG-Push is defined in RFC 8641 and the NETCONF transport binding is defined in RFC 8640. YANG-Push implementation in NSO introduces a subscription service that provides updates from a datastore. This implementation supports dynamic subscriptions on updates of datastore nodes. A subscribed receiver is provided with update notifications according to the terms of the subscription. There are two types of notification messages defined to provide updates and these are used according to subscription terms.

    • push-update notification is a complete, filtered update that reflects the data of the subscribed datastore. It is the type of notification used for periodic subscriptions. A push-update notification can also be used for on-change subscriptions when a receiver asks for synchronization, either at the start of a new subscription or by sending a resync request for an established subscription.

      An example push-update notification:
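    A sketch of such a notification (timestamp, subscription id, and datastore contents are placeholders):

    ```xml
    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2024-01-01T10:00:00Z</eventTime>
      <push-update xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
        <id>1</id>
        <datastore-contents>
          <!-- complete, filtered view of the subscribed data -->
        </datastore-contents>
      </push-update>
    </notification>
    ```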

    • push-change-update notification is the most common type of notification used for on-change subscriptions. It provides a set of filtered changes that happened on the subscribed datastore since the last update notification. The update records are constructed in the form of the YANG Patch media type defined in RFC 8072.

      An example push-change-update notification:
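    A sketch of such a notification, carrying the changes as a YANG Patch (ids, timestamp, and the edited subtree are placeholders):

    ```xml
    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2024-01-01T10:00:05Z</eventTime>
      <push-change-update xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
        <id>1</id>
        <datastore-changes>
          <yang-patch>
            <patch-id>s1-p2</patch-id>
            <edit>
              <edit-id>edit1</edit-id>
              <operation>merge</operation>
              <target>/</target>
              <value>
                <!-- changed subtree since the last update -->
              </value>
            </edit>
          </yang-patch>
        </datastore-changes>
      </push-change-update>
    </notification>
    ```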

    Periodic Subscriptions

    For periodic subscriptions, updates are triggered periodically according to a specified time interval. Optionally, a reference anchor-time can be provided for the specified period.

    On-Change Subscriptions

    For on-change subscriptions, updates are triggered whenever a change is detected on the subscribed information. In the case of rapidly changing data, instead of receiving frequent notifications for every change, a receiver may specify a dampening-period to receive update notifications in a lower frequency. A receiver may request for synchronization at the start of a subscription by using sync-on-start option. A receiver may filter out specific types of changes by providing a list of excluded-change parameters.

    To provide updates for on-change subscriptions on the operational datastore, data provider applications are required to implement push-on-change callbacks. For more details, see PUSH ON-CHANGE CALLBACKS in confd_lib_dp(3) in Manual Pages.

    YANG-Push Operations

    In addition to RPCs defined in subscribed notifications, YANG-Push defines resync-subscription RPC. Upon receipt of resync-subscription, if the subscription is an on-change triggered type, a push-update notification is sent to the receiver according to the terms of the subscription. Otherwise, an appropriate error response is sent.

    • resync-subscription

    Monitoring the YANG-Push Subscriptions

    YANG-Push subscriptions can be monitored in a similar way to Subscribed Notifications through /subscriptions container. For more information, see Monitoring Subscriptions.

    YANG-Push filters differ from the filters of Subscribed Notifications and they are specified as datastore-xpath-filter and datastore-subtree-filter. The leaf datastore-subtree-filter is deviated as "not-supported", and hence can not be monitored. Also, YANG-Push specific update trigger parameters periodic/period, periodic/anchor-time, on-change/dampening-period, on-change/sync-on-start and on-change/excluded-change are not supported for monitoring.

    Limitations

    • modify-subscriptions operation does not support changing a subscription's update trigger type from periodic to on-change or vice versa.

    • on-change subscriptions do not work for changes that are made through the CDB-API.

    • on-change subscriptions do not work on internal callpoints such as ncs-state, ncs-high-availability, and live-status.

    Actions Capability

    This capability is deprecated since actions are now supported in standard YANG 1.1. It is recommended to use standard YANG 1.1 for actions.

    This capability introduces a new RPC operation that is used to invoke actions defined in the data model. When an action is invoked, the instance on which the action is invoked is explicitly identified by a hierarchy of configuration or state data.

    Here is a simple example that invokes the action sync-from on the device ce1. It uses the netconf-console command:

    Capability Identifier

    The action capability is identified by the following capability string:

    Transactions Capability

    This capability introduces four new RPC operations that are used to control a two-phase commit transaction on the NETCONF server. The normal <edit-config> operation is used to write data in the transaction, but the modifications are not applied until an explicit <commit-transaction> is sent.

    This capability is formally defined in the YANG module tailf-netconf-transactions. It is recommended that this module be enabled.

    A typical sequence of operations looks like this:

    Dependencies

    None.

    Capability Identifier

    The transactions capability is identified by the following capability string:

    New Operation: <start-transaction>

    Description

    Starts a transaction towards a configuration datastore. There can be a single ongoing transaction per session at any time.

    When a transaction has been started, the client can send any NETCONF operation, but any <edit-config> or <copy-config> operation sent from the client must specify the same <target> as the <start-transaction>, and any <get-config> must specify the same <source> as <start-transaction>.

    If the server receives an <edit-config> or <copy-config> with another <target>, or a <get-config> with another <source>, an error must be returned with an <error-tag> set to invalid-value.

    The modifications sent in the <edit-config> operations are not immediately applied to the configuration datastore. Instead, they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.

    The client sends a <prepare-transaction> when all modifications have been sent.

    Parameters

    • target: Name of the configuration datastore towards which the transaction is started.

    • with-inactive: If this parameter is given, the transaction will handle the inactive and active attributes. If given, it must also be given in the <edit-config> and <get-config> invocations in the transaction.
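    A minimal sketch of a <start-transaction> that enables handling of inactive data (both namespaces are the ones defined by the respective capabilities in this section):

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <start-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0">
    <target>
      <running/>
    </target>
    <with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
  </start-transaction>
</rpc>
```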

    Positive Response

    If the device can satisfy the request, an <rpc-reply> is sent that contains an <ok> element.

    Negative Response

    An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.

    If there is an ongoing transaction for this session already, an error must be returned with <error-app-tag> set to bad-state.

    Example

    New Operation: <prepare-transaction>

    Description

    Prepares the transaction state for commit. The server may reject the prepare request for any reason, for example, due to lack of resources or if the combined changes would result in an invalid configuration datastore.

    After a successful <prepare-transaction>, the next transaction-related RPC operation must be <commit-transaction> or <abort-transaction>. Note that an <edit-config> cannot be sent before the transaction is either committed or aborted.

    Care must be taken by the server to make sure that if <prepare-transaction> succeeds, then the <commit-transaction> does not fail, since a failure might result in an inconsistent distributed state. Thus, <prepare-transaction> should allocate any resources needed to make sure the <commit-transaction> will succeed.

    Parameters

    None.

    Positive Response

    If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.

    Negative Response

    An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.

    If there is no ongoing transaction in this session, or if the ongoing transaction already has been prepared, an error must be returned with <error-app-tag> set to bad-state.

    Example

    New Operation: <commit-transaction>

    Description

    Applies the changes made in the transaction to the configuration datastore. The transaction is closed after a <commit-transaction>.

    Parameters

    None.

    Positive Response

    If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.

    Negative Response

    An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.

    If there is no ongoing transaction in this session, or if the ongoing transaction has not yet been prepared, an error must be returned with <error-app-tag> set to bad-state.

    Example

    New Operation: <abort-transaction>

    Description

    Aborts the ongoing transaction, and all pending changes are discarded. <abort-transaction> can be given at any time during an ongoing transaction.

    Parameters

    None.

    Positive Response

    If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.

    Negative Response

    An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.

    If there is no ongoing transaction in this session, an error must be returned with <error-app-tag> set to bad-state.

    Example

    Modifications to Existing Operations

    The <edit-config> operation is modified so that if it is received during an ongoing transaction, the modifications are not immediately applied to the configuration target. Instead, they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.

    Note that it doesn't matter if the <test-option> is 'set' or 'test-then-set' in the <edit-config>, since nothing is actually set when the <edit-config> is received.

    Inactive Capability

    This capability is used by the NETCONF server to indicate that it supports marking nodes as being inactive. A node that is marked as inactive exists in the data store but is not used by the server. Any node can be marked as inactive.

    To avoid confusing clients that do not understand this attribute, the client has to instruct the server to display and handle the inactive nodes. An inactive node is marked with an inactive XML attribute, and to make it active, the active XML attribute is used.

    This capability is formally defined in the YANG module tailf-netconf-inactive.

    Dependencies

    None.

    Capability Identifier

    The inactive capability is identified by the following capability string:

    New Operations

    None.

    Modifications to Existing Operations

    A new parameter, <with-inactive>, is added to the <get>, <get-config>, <edit-config>, <copy-config>, and <start-transaction> operations.

    The <with-inactive> element is defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace, and takes no value.

    If this parameter is present in <get>, <get-config>, or <copy-config>, the NETCONF server will mark inactive nodes with the inactive attribute.

    If this parameter is present in <edit-config> or <copy-config>, the NETCONF server will treat inactive nodes as existing so that an attempt to create a node that is inactive will fail, and an attempt to delete a node that is inactive will succeed. Further, the NETCONF server accepts the inactive and active attributes in the data hierarchy, to make nodes inactive or active, respectively.

    If the parameter is present in <start-transaction>, it must also be present in any <edit-config>, <copy-config>, <get>, or <get-config> operations within the transaction. If it is not present in <start-transaction>, it must not be present in any <edit-config> operation within the transaction.

    The inactive and active attributes are defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace. The inactive attribute's value is the string inactive, and the active attribute's value is the string active.

    Example

    This request creates an inactive interface:

    This request shows the inactive interface:

    This request shows that inactive data is not returned unless the client asks for it:

    This request activates the interface:


    Rollback ID Capability

    This module extends existing operations with a with-rollback-id parameter. When set, the result is extended with information about the rollback, if any, that was generated for the operation.

    The rollback ID returned is the ID from within the rollback file, which is stable with regard to new rollbacks being created.

    Dependencies

    None.

    Capability Identifier

    The rollback ID capability is identified by the following capability string:

    Modifications to Existing Operations

    This module adds a parameter with-rollback-id to the following RPCs:

    If with-rollback-id is given, rollbacks are enabled, and the operation results in a rollback file being created, the response will contain a rollback reference.
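    A minimal sketch of requesting the rollback reference on a commit (the namespace is the capability identifier above; the exact placement of the parameter inside <commit> is an assumption based on the module description):

```xml
<rpc message-id="1" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <commit>
    <with-rollback-id xmlns="http://tail-f.com/ns/netconf/with-rollback-id"/>
  </commit>
</rpc>
```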

    NETCONF Extensions in NSO

    The YANG module tailf-netconf-ncs augments some NETCONF operations with additional parameters to control the behavior in NSO over NETCONF. See that YANG module for all the details. In this section, the options are summarized.

    To control the commit behavior of NSO the following input parameters are available:

    • no-revision-drop NSO will not run its data model revision algorithm, which requires all participating managed devices to have all parts of the data models for all data contained in this transaction. Thus, this flag forces NSO to never silently drop any data set operations towards a device.

    • no-overwrite NSO will check that the data that should be modified has not changed on the device compared to NSO's view of the data.

    • no-networking Do not send any data to the devices. This is a way to manipulate CDB in NSO without generating any southbound traffic.

    • no-out-of-sync-check Continue with the transaction even if NSO detects that a device's configuration is out of sync.

    • no-deploy Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be redeployed to write the changes of the service(s) to the network.

    • reconcile/keep-non-service-config Reconcile the service data. All data that existed before the service was created will now be owned by the service. When the service is removed, that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree, that data is kept.

    • reconcile/discard-non-service-config Reconcile the service data but do not keep manually configured data that exists below in the configuration tree.

    • use-lsa Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are dry-run, no-networking, no-out-of-sync-check, no-overwrite and no-revision-drop.

    • no-lsa Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.

    • commit-queue/async Commit the transaction data to the commit queue. The operation returns successfully if the transaction data has been successfully placed in the queue.

    • commit-queue/sync/timeout Commit the transaction data to the commit queue. The operation does not return until the transaction data has been sent to all devices, or a timeout occurs. The timeout value specifies a maximum number of seconds to wait for the completion.

    • commit-queue/sync/infinity Commit the transaction data to the commit queue. The operation does not return until the transaction data has been sent to all devices.

    • commit-queue/bypass If /devices/global-settings/commit-queue/enabled-by-default is true, the data in this transaction will bypass the commit queue. The data will be written directly to the devices.

    • commit-queue/atomic Sets the atomic behavior of the resulting queue item. Possible values are: true and false. If this is set to false, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.

    • commit-queue/block-others The resulting queue item will block subsequent queue items, which use any of the devices in this queue item, from being queued.

    • commit-queue/lock Place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked, see the actions unlock and lock in /devices/commit-queue/queue-item. No following queue items, using the same devices, will be allowed to execute as long as the lock is in place.

    • commit-queue/tag The value is a user-defined opaque tag. The tag is present in all notifications and events sent referencing the specific queue item.

    • commit-queue/error-option The error option to use. Depending on the selected error option, NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the /devices/commit-queue/completed tree, from where it can be viewed and invoked with the rollback action. When invoked, the data will be removed. Possible values are: continue-on-error, rollback-on-error, and stop-on-error. The continue-on-error value means that the commit queue will continue on errors; no rollback data will be created. The rollback-on-error value means that the commit queue item will roll back on errors; the commit queue will place a lock with block-others on the devices and services in the failed queue item.

    • trace-id Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign a trace ID to the processing.

    These optional input parameters are augmented into the following NETCONF operations:

    • commit

    • edit-config

    • copy-config

    • prepare-transaction

    The operation prepare-transaction is also augmented with an optional parameter dry-run, which can be used to show the effects that would have taken place, without actually committing anything to the datastore or to the devices. dry-run takes an optional parameter outformat, which selects the format in which the result is returned. Possible formats are xml (default), cli, and native. The optional reverse parameter can be used together with the native format to display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are made later to the same data, the reverse device commands returned are invalid.
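    As a sketch, assuming the augmenting namespace of tailf-netconf-ncs is http://tail-f.com/ns/netconf/ncs (see that YANG module for the authoritative names), a dry-run prepare might look like:

```xml
<rpc message-id="103" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <prepare-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0">
    <dry-run xmlns="http://tail-f.com/ns/netconf/ncs">
      <outformat>native</outformat>
    </dry-run>
  </prepare-transaction>
</rpc>
```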

    FASTMAP attributes such as back pointers and reference counters are typically internal to NSO and are not shown by default. The optional parameter with-service-meta-data can be used to include these in the NETCONF reply. The parameter is augmented into the following NETCONF operations:

    • get

    • get-config

    • get-data

    The Query API

    The Query API consists of several RPC operations to start queries, fetch chunks of the result from a query, restart a query, and stop a query.

    In the installed release, there are two YANG files named tailf-netconf-query.yang and tailf-common-query.yang that define these operations. An easy way to find the files is to run the following command from the top directory of the release installation:

    The API consists of the following operations:

    • start-query: Start a query and return a query handle.

    • fetch-query-result: Use a query handle to repeatedly fetch chunks of the result.

    • immediate-query: Start a query and return the entire result immediately.

    • reset-query: (Re)set where the next fetched result will begin from.

    • stop-query: Stop (and close) the query.

    In the following examples, the following data model is used:

    Here is an example of a start-query operation:

    An informal interpretation of this query is:

    For each /x/host where enabled is true, select its name and address, and return the result sorted by name, in chunks of 100 results at a time.

    Let us discuss the various pieces of this request.

    The actual XPath query to run is specified by the foreach element. The example below will search for all /x/host nodes that have the enabled node set to true:

    Now we need to define what we want to have returned from the node set by using one or more select sections. What to actually return is defined by the XPath expression.

    We must also choose how the result should be represented. Basically, it can be the actual value or the path leading to the value. This is specified per select chunk. The possible result types are: string, path, leaf-value, and inline.

    The difference between string and leaf-value is somewhat subtle. In the case of string, the result will be processed by the XPath function string() (which, if the result is a node-set, concatenates all the values). The leaf-value type will return the value of the first node in the result. As long as the result is a leaf node, string and leaf-value will return the same result. In the example above, we are using string as shown below. At least one result-type must be specified.

    The result-type inline makes it possible to return the full sub-tree of data in XML format. The data will be enclosed in a data tag.

    Finally, we can specify an optional label for a convenient way of labeling the returned data. In the example we have the following:

    The returned result can be sorted. This is expressed as XPath expressions, which in most cases are very simple and refer to the found node-set. In this example, we sort the result by the content of the name node:

    To limit the maximum number of results in each chunk that fetch-query-result will return, we can set the limit element. The default is to get all results in one chunk.

    With the offset element, we can specify at which node we should start to receive the result. The default is 1, i.e., the first node in the resulting node set.

    Now, if we continue by putting the operation above in a file query.xml, we can send a request using the netconf-console command, like this:

    The result would look something like this:

    The query handle (in this example 12345) must be used in all subsequent calls. To retrieve the result, we can now send:

    Which will result in something like the following:

    If we try to get more data with fetch-query-result, we might get more result entries in return, until no more data exists and we get an empty query result back:

    If we want to send the query and get the entire result with only one request, we can do this by using immediate-query. This function takes similar arguments as start-query and returns the entire result, analogous to fetch-query-result. Note that it is not possible to paginate or set an offset start node for the result list; i.e., the limit and offset options are ignored.

    An example request and response:

    If we want to go back in the "stream" of received data chunks and have them repeated, we can do that with the reset-query operation. In the example below, we ask to get results from the 42nd result entry:

    Finally, when we are done we stop the query:

    Meta-data in Attributes

    NSO supports three pieces of meta-data on data nodes: tags, annotations, and inactive.

    An annotation is a string that acts as a comment. Any data node present in the configuration can have an annotation. An annotation does not affect the underlying configuration but can be set by a user to comment on what the configuration does.

    An annotation is encoded as an XML attribute annotation on any data node. To remove an annotation, set the annotation attribute to an empty string.

    Any configuration data node can have a set of tags. Tags are set by the user for data organization and filtering purposes. A tag does not affect the underlying configuration.

    All tags on a data node are encoded as a space-separated string in an XML attribute tags. To remove all tags, set the tags attribute to an empty string.

    Annotation, tags, and inactive attributes can be present in <edit-config>, <copy-config>, <get-config>, and <get>. For example:

    Namespace for Additional Error Information

    NSO adds an additional namespace which is used to define elements that are included in the <error-info> element. This namespace also describes which <error-app-tag/> elements the server might generate, as part of an <rpc-error/>.

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2020-06-10T10:00:00.00Z</eventTime>
      <push-update xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
        <id>1</id>
        <datastore-contents>
          <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
            <interface>
              <name>eth0</name>
              <oper-status>up</oper-status>
            </interface>
          </interfaces>
        </datastore-contents>
      </push-update>
    </notification>
    $ netconf-console --get -x '/devices/device[name="x0"]/config/yang-library'
    <?xml version="1.0" encoding="UTF-8"?>
    <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
      <data>
        <devices xmlns="http://tail-f.com/ns/ncs">
          <device>
            <name>x0</name>
            <config>
              <yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library">
                <module-set>
                  <name>common</name>
                  <module>
                    <name>a</name>
                    <namespace>urn:a</namespace>
                  </module>
                  <module>
                    <name>b</name>
                    <namespace>urn:b</namespace>
                  </module>
                </module-set>
                <schema>
                  <name>common</name>
                  <module-set>common</module-set>
                </schema>
                <datastore>
                  <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
                     ds:running\
                  </name>
                  <schema>common</schema>
                </datastore>
                <datastore>
                  <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
                    ds:intended\
                  </name>
                  <schema>common</schema>
                </datastore>
                <datastore>
                  <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
                    ds:operational\
                  </name>
                  <schema>common</schema>
                </datastore>
                <content-id>f0071b28c1e586f2e8609da036379a58</content-id>
              </yang-library>
            </config>
          </device>
        </devices>
      </data>
    </rpc-reply>
    Subsystem     netconf   /usr/local/bin/netconf-subsys
    <interface>
      <name>atm1</name>
      <rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <error-type>application</error-type>
        <error-tag>operation-failed</error-tag>
        <error-severity>error</error-severity>
        <error-message xml:lang="en">Failed to talk to hardware</error-message>
        <error-info>
          <bad-element>mac-address</bad-element>
        </error-info>
      </rpc-error>
      ...
    </interface>
    <interface>
      <!-- successfully retrieved list entry -->
      <name>eth0</name>
      <mtu>1500</mtu>
      <!-- more leafs here -->
    </interface>
    <rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <error-type>application</error-type>
      <error-tag>operation-failed</error-tag>
      <error-severity>error</error-severity>
      <error-message xml:lang="en">Failed to talk to hardware</error-message>
      <error-info>
        <bad-element>interface</bad-element>
      </error-info>
    </rpc-error>
    $ netconf-console --get-config -x /nacm/groups
    <?xml version="1.0" encoding="UTF-8"?>
    <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
      <data>
        <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
          <groups>
            <group>
              <name>admin</name>
              <user-name>admin</user-name>
              <user-name>private</user-name>
            </group>
            <group>
              <name>oper</name>
              <user-name>oper</user-name>
              <user-name>public</user-name>
            </group>
          </groups>
        </nacm>
      </data>
    </rpc-reply>
    <notifications>
      <event-streams>
        <stream>
          <name>device-notifications</name>
          <description>Notifications received from devices</description>
          <replay-support>true</replay-support>
          <builtin-replay-store>
            <enabled>true</enabled>
            <dir>/var/log</dir>
            <max-size>S10M</max-size>
            <max-files>50</max-files>
          </builtin-replay-store>
        </stream>
        <stream>
          <name>debug</name>
          <description>Debug notifications</description>
          <replay-support>false</replay-support>
        </stream>
      </event-streams>
    </notifications>
    $ netconf-console --get -x /subscriptions
    <?xml version="1.0" encoding="UTF-8"?>
    <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
      <data>
    <subscriptions xmlns="urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications">
      <subscription>
       <id>3</id>
       <stream-xpath-filter>/if:interfaces/interface[name='eth0']/enabled</stream-xpath-filter>
       <stream>interface</stream>
       <stop-time>2030-10-04T14:00:00+02:00</stop-time>
       <encoding>encode-xml</encoding>
       <receivers>
         <receiver>
           <name>127.0.0.1:57432</name>
           <state>active</state>
         </receiver>
       </receivers>
      </subscription>
    </subscriptions>
      </data>
    </rpc-reply>
    $ cat ./sync-from-ce1.xml
    <action xmlns="http://tail-f.com/ns/netconf/actions/1.0">
      <data>
        <devices xmlns="http://tail-f.com/ns/ncs">
          <device>
            <name>ce1</name>
            <sync-from/>
          </device>
        </devices>
      </data>
    </action>
    $ netconf-console --rpc sync-from-ce1.xml
    <?xml version="1.0" encoding="UTF-8"?>
    <rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
      <data>
        <devices xmlns="http://tail-f.com/ns/ncs">
          <device>
            <name>ce1</name>
            <sync-from>
              <result>true</result>
            </sync-from>
          </device>
        </devices>
      </data>
    </rpc-reply>
      http://tail-f.com/ns/netconf/actions/1.0
                   C                           S
                   |                           |
                   |  capability exchange      |
                   |-------------------------->|
                   |<------------------------->|
                   |                           |
                   |   <start-transaction>     |
                   |-------------------------->|
                   |<--------------------------|
                   |         <ok/>             |
                   |                           |
                   |     <edit-config>         |
                   |-------------------------->|
                   |<--------------------------|
                   |         <ok/>             |
                   |                           |
                   |  <prepare-transaction>    |
                   |-------------------------->|
                   |<--------------------------|
                   |         <ok/>             |
                   |                           |
                   |   <commit-transaction>    |
                   |-------------------------->|
                   |<--------------------------|
                   |         <ok/>             |
                   |                           |
      http://tail-f.com/ns/netconf/transactions/1.0
      <rpc message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <start-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0">
          <target>
           <running/>
          </target>
        </start-transaction>
      </rpc>
    
      <rpc-reply message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
      <rpc message-id="103"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <prepare-transaction
           xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
      </rpc>
    
      <rpc-reply message-id="103"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
      <rpc message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <commit-transaction
           xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
      </rpc>
    
      <rpc-reply message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
      <rpc message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <abort-transaction
           xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
      </rpc>
    
      <rpc-reply message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
      http://tail-f.com/ns/netconf/inactive/1.0
      <rpc message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <edit-config>
          <target>
            <running/>
          </target>
          <with-inactive
             xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
          <config>
            <top xmlns="http://example.com/schema/1.2/config">
              <interface inactive="inactive">
                <name>Ethernet0/0</name>
                <mtu>1500</mtu>
              </interface>
            </top>
          </config>
        </edit-config>
      </rpc>
    
      <rpc-reply message-id="101"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
      <rpc message-id="102"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <get-config>
          <source>
            <running/>
          </source>
          <with-inactive
             xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
        </get-config>
      </rpc>
    
      <rpc-reply message-id="102"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <data>
          <top xmlns="http://example.com/schema/1.2/config">
            <interface inactive="inactive">
              <name>Ethernet0/0</name>
              <mtu>1500</mtu>
            </interface>
          </top>
        </data>
      </rpc-reply>
      <rpc message-id="103"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <get-config>
          <source>
            <running/>
          </source>
        </get-config>
      </rpc>
    
      <rpc-reply message-id="103"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <data>
        </data>
      </rpc-reply>
      <rpc message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <edit-config>
          <target>
            <running/>
          </target>
          <with-inactive
             xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
          <config>
            <top xmlns="http://example.com/schema/1.2/config">
              <interface active="active">
                <name>Ethernet0/0</name>
              </interface>
            </top>
          </config>
        </edit-config>
      </rpc>
    
      <rpc-reply message-id="104"
           xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
        <ok/>
      </rpc-reply>
    http://tail-f.com/ns/netconf/with-rollback-id
    • edit-config
    • copy-config
    • commit
    • commit-transaction
    $ find . -name tailf-netconf-query.yang
    container x {
      list host {
        key number;
        leaf number {
          type int32;
        }
        leaf enabled {
          type boolean;
        }
        leaf name {
          type string;
        }
        leaf address {
          type inet:ip-address;
        }
      }
    }
    <start-query xmlns="http://tail-f.com/ns/netconf/query">
      <foreach>
        /x/host[enabled = 'true']
      </foreach>
      <select>
        <label>Host name</label>
        <expression>name</expression>
        <result-type>string</result-type>
      </select>
      <select>
        <expression>address</expression>
        <result-type>string</result-type>
      </select>
      <sort-by>name</sort-by>
      <limit>100</limit>
      <offset>1</offset>
    </start-query>
    $ netconf-console --rpc query.xml
    <start-query-result>
      <query-handle>12345</query-handle>
    </start-query-result>
    <fetch-query-result xmlns="http://tail-f.com/ns/netconf/query">
      <query-handle>12345</query-handle>
    </fetch-query-result>
    <query-result xmlns="http://tail-f.com/ns/netconf/query">
      <result>
        <select>
          <label>Host name</label>
          <value>One</value>
        </select>
        <select>
          <value>10.0.0.1</value>
        </select>
      </result>
      <result>
        <select>
          <label>Host name</label>
          <value>Three</value>
        </select>
        <select>
          <value>10.0.0.3</value>
        </select>
      </result>
    </query-result>
    <query-result xmlns="http://tail-f.com/ns/netconf/query">
    </query-result>
    <immediate-query xmlns="http://tail-f.com/ns/netconf/query">
      <foreach>
        /x/host[enabled = 'true']
      </foreach>
      <select>
        <label>Host name</label>
        <expression>name</expression>
        <result-type>string</result-type>
      </select>
      <select>
        <expression>address</expression>
        <result-type>string</result-type>
      </select>
      <sort-by>name</sort-by>
      <timeout>600</timeout>
    </immediate-query>
    <query-result xmlns="http://tail-f.com/ns/netconf/query">
      <result>
        <select>
          <label>Host name</label>
          <value>One</value>
        </select>
        <select>
          <value>10.0.0.1</value>
        </select>
      </result>
      <result>
        <select>
          <label>Host name</label>
          <value>Three</value>
        </select>
        <select>
          <value>10.0.0.3</value>
        </select>
      </result>
    </query-result>
    <reset-query xmlns="http://tail-f.com/ns/netconf/query">
      <query-handle>12345</query-handle>
      <offset>42</offset>
    </reset-query>
    <stop-query xmlns="http://tail-f.com/ns/netconf/query">
      <query-handle>12345</query-handle>
    </stop-query>
    <rpc message-id="101"
         xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
      <edit-config>
        <target>
          <running/>
        </target>
        <config>
          <interfaces xmlns="http://example.com/ns/if">
            <interface annotation="this is the management interface"
                       tags=" important ethernet ">
              <name>eth0</name>
              ...
            </interface>
          </interfaces>
        </config>
      </edit-config>
    </rpc>
    <?xml version="1.0" encoding="UTF-8"?>
    <xs:schema targetNamespace="http://tail-f.com/ns/netconf/params/1.1"
               xmlns:xs="http://www.w3.org/2001/XMLSchema"
               xml:lang="en">
    
      <xs:annotation>
        <xs:documentation>
          Tail-f's namespace for additional error information.
          This namespace is used to define elements which are included
          in the 'error-info' element.
    
          The following are the app-tags used by the NETCONF agent:
    
            o  not-writable
    
              Means that an edit-config or copy-config operation was
              attempted on an element which is read-only
              (i.e. non-configuration data).
    
            o  missing-element-in-choice
    
              Like the standard error missing-element, but generated when
              one of a set of elements in a choice is missing.
    
            o  pending-changes
    
              Means that a lock operation was attempted on the candidate
              database, and the candidate database has uncommitted
              changes. This is not allowed according to the protocol
              specification.
    
            o  url-open-failed
    
              Means that the URL given was correct, but that it could not
              be opened. This can e.g. be due to a missing local file, or
              bad ftp credentials. An error message string is provided in
              the &lt;error-message&gt; element.
    
            o  url-write-failed
    
              Means that the URL given was opened, but write failed. This
              could e.g. be due to lack of disk space. An error message
              string is provided in the &lt;error-message&gt; element.
    
            o  bad-state
    
          Means that an rpc is received when the session is in a state
          which does not accept this rpc.  An example is
          &lt;prepare-transaction&gt; before &lt;start-transaction&gt;
    
        </xs:documentation>
      </xs:annotation>
    
      <xs:element name="bad-keyref">
        <xs:annotation>
          <xs:documentation>
            This element will be present in the 'error-info' container when
            'error-app-tag' is "instance-required".
          </xs:documentation>
        </xs:annotation>
        <xs:complexType>
          <xs:sequence>
            <xs:element name="bad-element" type="xs:string">
              <xs:annotation>
                <xs:documentation>
                  Contains an absolute XPath expression pointing to the element
                  whose value refers to a non-existing instance.
                </xs:documentation>
              </xs:annotation>
            </xs:element>
            <xs:element name="missing-element" type="xs:string">
              <xs:annotation>
                <xs:documentation>
                  Contains an absolute XPath expression pointing to the missing
                  element referred to by 'bad-element'.
                </xs:documentation>
              </xs:annotation>
            </xs:element>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    
      <xs:element name="bad-instance-count">
        <xs:annotation>
          <xs:documentation>
            This element will be present in the 'error-info' container when
            'error-app-tag' is "too-few-elements" or "too-many-elements".
          </xs:documentation>
        </xs:annotation>
        <xs:complexType>
          <xs:sequence>
            <xs:element name="bad-element" type="xs:string">
              <xs:annotation>
                <xs:documentation>
                  Contains an absolute XPath expression pointing to an
                  element which exists in too few or too many instances.
                </xs:documentation>
              </xs:annotation>
            </xs:element>
            <xs:element name="instances" type="xs:unsignedInt">
              <xs:annotation>
                <xs:documentation>
                  Contains the number of existing instances of the element
                  referred to by 'bad-element'.
                </xs:documentation>
              </xs:annotation>
            </xs:element>
            <xs:choice>
              <xs:element name="min-instances" type="xs:unsignedInt">
                <xs:annotation>
                  <xs:documentation>
                    Contains the minimum number of instances that must
                    exist in order for the configuration to be consistent.
                    This element is present only if 'app-tag' is
                    'too-few-elements'.
                  </xs:documentation>
                </xs:annotation>
              </xs:element>
              <xs:element name="max-instances" type="xs:unsignedInt">
                <xs:annotation>
                  <xs:documentation>
                    Contains the maximum number of instances that can
                    exist in order for the configuration to be consistent.
                    This element is present only if 'app-tag' is
                    'too-many-elements'.
                  </xs:documentation>
                </xs:annotation>
              </xs:element>
            </xs:choice>
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    
      <xs:attribute name="annotation" type="xs:string">
        <xs:annotation>
          <xs:documentation>
            This attribute can be present on any configuration data node.  It
            acts as a comment for the node.  The annotation does not affect the
            underlying configuration data.
          </xs:documentation>
        </xs:annotation>
      </xs:attribute>
    
      <xs:attribute name="tags" type="xs:string">
        <xs:annotation>
          <xs:documentation>
            This attribute can be present on any configuration data node.  It
            is a space-separated string of tags for the node.  The tags of a
            node do not affect the underlying configuration data, but can
            be used by a user for data organization and data filtering.
          </xs:documentation>
        </xs:annotation>
      </xs:attribute>
    
    </xs:schema>
    The rollback action will then automatically be invoked when the queue item has finished its execution. The lock will be removed as part of the rollback. The stop-on-error option means that the commit queue will place a lock with block-others on the devices and services in the failed queue item. The lock must then either be released manually when the error is fixed, or the rollback action under /devices/commit-queue/completed be invoked. Read about error recovery in Commit Queue for a more detailed explanation.

    :error-option

    Regardless of the value of the error-option parameter, NSO will always behave as if it had the value rollback-on-error. So in NSO, the meaning of the word "error" in stop-on-error and continue-on-error is something that can never happen. It is possible to configure the NETCONF server to generate an operation-not-supported error if the client asks for the error-option continue-on-error. See ncs.conf(5) in Manual Pages.

    :url

    The URL schemes supported are file, ftp, and sftp (SSH File Transfer Protocol). There is no standard URL syntax for the sftp scheme, but NSO supports the syntax used by curl:

    sftp://<user>:<password>@<host>/<path>

    Note that user name and password must be given for sftp URLs. NSO does not support validate from a URL.

    :xpath

    The NETCONF server supports XPath according to the W3C XPath 1.0 specification (https://www.w3.org/TR/xpath).

    <notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
      <eventTime>2020-06-10T10:05:00.00Z</eventTime>
      <push-change-update
        xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
        <id>2</id>
        <datastore-changes>
          <yang-patch>
            <patch-id>s2-p4</patch-id>
            <edit>
              <edit-id>edit1</edit-id>
              <operation>merge</operation>
              <target>/ietf-interfaces:interfaces</target>
              <value>
                <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
                  <interface>
                    <name>eth0</name>
                    <oper-status>down</oper-status>
                  </interface>
                </interfaces>
              </value>
            </edit>
          </yang-patch>
        </datastore-changes>
      </push-change-update>
    </notification>

    RESTCONF API

    Description of the RESTCONF API.

    RESTCONF is an HTTP-based protocol as defined in RFC 8040. RESTCONF standardizes a mechanism to allow Web applications to access the configuration data, state data, data-model-specific Remote Procedure Call (RPC) operations, and event notifications within a networking device.

    RESTCONF uses HTTP methods to provide Create, Read, Update, Delete (CRUD) operations on a conceptual datastore containing YANG-defined data, which is compatible with a server that implements NETCONF datastores as defined in RFC 6241.

    Configuration data and state data are exposed as resources that can be retrieved with the GET method. Resources representing configuration data can be modified with the DELETE, PATCH, POST, and PUT methods. Data is encoded in either XML or JSON (RFC 7951).

    This chapter describes the NSO RESTCONF implementation, including its extensions to, and deviations from, the RESTCONF specification.

    As of this writing, the server supports the following specifications:

  • RFC 6020 - YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)
  • RFC 6021 - Common YANG Data Types

  • RFC 6470 - NETCONF Base Notifications

  • RFC 6536 - NETCONF Access Control Model

  • RFC 6991 - Common YANG Data Types

  • RFC 7950 - The YANG 1.1 Data Modeling Language

  • RFC 7951 - JSON Encoding of Data Modeled with YANG

  • RFC 7952 - Defining and Using Metadata with YANG

  • RFC 8040 - RESTCONF Protocol

  • RFC 8072 - YANG Patch Media Type

  • RFC 8341 - Network Configuration Access Control Model

  • RFC 8525 - YANG Library

  • RFC 8528 - YANG Schema Mount

    Getting Started

    To enable RESTCONF in NSO, RESTCONF must be enabled in the ncs.conf configuration file. The web server configuration for RESTCONF is shared with the WebUI's config, but you may define a separate RESTCONF transport section. The WebUI does not have to be enabled for RESTCONF to work.

    Here is a minimal example of what is needed in the ncs.conf.
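A minimal sketch of the relevant ncs.conf fragment could look as follows (element names follow ncs.conf(5); the port and listening address are assumptions and should be verified against your NSO version):

```xml
<restconf>
  <enabled>true</enabled>
</restconf>
<webui>
  <transport>
    <tcp>
      <enabled>true</enabled>
      <ip>0.0.0.0</ip>
      <port>8080</port>
    </tcp>
  </transport>
</webui>
```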

    If you want to run RESTCONF with a different transport configuration than what the WebUI is using, you can specify a separate RESTCONF transport section.

    It is now possible to make RESTCONF requests towards NSO. Any HTTP client can be used; in the following examples, curl is used. The example below shows what a typical RESTCONF request could look like.
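As an illustration, such a request could look like this (the admin:admin credentials, localhost, and port 8080 are assumptions from a default local installation):

```
$ curl -is -u admin:admin -H "Accept: application/yang-data+xml" \
      http://localhost:8080/restconf
HTTP/1.1 200 OK
Content-Type: application/yang-data+xml
...
```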

    In the rest of the document, in order to simplify the presentation, the example above will be expressed as:
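Under the same assumptions, the shorthand keeps only the HTTP method, path, selected headers, and status line, for example:

```
GET /restconf
HTTP/1.1 200 OK
```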

    Note the HTTP return code (200 OK) in the example, which will be displayed together with any relevant HTTP headers returned and a possible body of content.

    Top-level GET request

    Send a RESTCONF query to get a representation of the top-level resource, which is accessible through the path: /restconf.
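Under the assumptions of the earlier curl example, the reply could look something like this (the yang-library-version date is illustrative and depends on the NSO version):

```
GET /restconf
HTTP/1.1 200 OK

<restconf xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
  <data/>
  <operations/>
  <yang-library-version>2019-01-04</yang-library-version>
</restconf>
```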

    As can be seen from the result, the server exposes three additional resources:

    • data: This mandatory resource represents the combined configuration and state data resources that can be accessed by a client.

    • operations: This optional resource is a container that provides access to the data-model-specific RPC operations supported by the server.

    • yang-library-version: This mandatory leaf identifies the revision date of the ietf-yang-library YANG module that is implemented by this server. This resource exposes which YANG modules are in use by the NSO system.

    Get Resources Under the data Resource

    To fetch configuration, operational data, or both, from the server, a request to the data resource is made. To restrict the amount of returned data, the following example prunes the output to the topmost nodes only. This is achieved by using the depth query argument, as shown in the example below:
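A sketch of such a request (the dhcp namespace is an assumption from the example module introduced below; other top-level nodes are elided):

```
GET /restconf/data?depth=1
HTTP/1.1 200 OK

<data xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
  <dhcp xmlns="http://yang-central.org/ns/example/dhcp"/>
  ...
</data>
```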

    Manipulating config data with RESTCONF

    Let's assume we are interested in the dhcp/subnet resource in our configuration. In the following examples, assume that it is defined by a corresponding YANG module that we have named dhcp.yang, looking like this:
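The module itself is not reproduced here; the following is a hand-written sketch of what such a module could look like. The namespace, the range container, the max-lease-time leaf, and the dhcp-options/router leaf-list are assumptions used by later examples:

```
module dhcp {
  namespace "http://yang-central.org/ns/example/dhcp";
  prefix dhcp;

  import ietf-inet-types { prefix inet; }

  container dhcp {
    list subnet {
      key net;
      leaf net { type inet:ip-prefix; }
      container range {
        presence "enables dynamic address allocation";
        leaf low  { type inet:ip-address; mandatory true; }
        leaf high { type inet:ip-address; mandatory true; }
      }
      leaf max-lease-time { type uint32; default 7200; }
    }
    container dhcp-options {
      leaf-list router { type string; ordered-by user; }
    }
  }
}
```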

    We can issue an HTTP GET request to retrieve the value content of the resource. In this case, we find that there is no such data, which is indicated by the HTTP return code 204 No Content.
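A sketch of this exchange, using the shorthand notation and the assumed dhcp module:

```
GET /restconf/data/dhcp:dhcp
HTTP/1.1 204 No Content
```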

    Note also how we have prefixed the dhcp:dhcp resource. This is how RESTCONF handles namespaces, where the prefix is the YANG module name and the namespace is as defined by the namespace statement in the YANG module.

    We can now create the dhcp/subnet resource by sending an HTTP POST request + the data that we want to store. Note the Content-Type HTTP header, which indicates the format of the provided body. Two formats are supported: XML or JSON. In this example, we are using XML, which is indicated by the Content-Type value: application/yang-data+xml.
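A sketch of such a POST, with paths and values carried over from the assumed dhcp example:

```
POST /restconf/data/dhcp:dhcp
Content-Type: application/yang-data+xml

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <net>10.254.239.0/27</net>
</subnet>

HTTP/1.1 201 Created
Location: http://localhost:8080/restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
```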

    Note the HTTP return code (201 Created) indicating that the resource was successfully created. We also got a Location header, which always is returned in a reply to a successful creation of a resource, stating the resulting URI leading to the created resource.

    If we now want to modify a part of our dhcp/subnet config, we can use the HTTP PATCH method, as shown below. Note that the URI used in the request needs to be URL-encoded, such that the key value: 10.254.239.0/27 is URL-encoded as: 10.254.239.0%2F27.
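A sketch of such a PATCH (the max-lease-time leaf is an assumed part of the example subnet model):

```
PATCH /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
Content-Type: application/yang-data+xml

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <max-lease-time>3600</max-lease-time>
</subnet>

HTTP/1.1 204 No Content
```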

    Also, note the difference of the PATCH URI compared to the earlier POST request. With the latter, since the resource does not yet exist, we POST to the parent resource (dhcp:dhcp), while with the PATCH request we address the (existing) resource (10.254.239.0%2F27).

    We can also replace the subnet with some new configuration. To do this, we make use of the PUT HTTP method as shown below. Since the operation was successful and no body was returned, we will get a 204 No Content return code.
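A sketch of such a PUT, replacing the whole subnet (the range values are illustrative):

```
PUT /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
Content-Type: application/yang-data+xml

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <net>10.254.239.0/27</net>
  <range>
    <low>10.254.239.10</low>
    <high>10.254.239.20</high>
  </range>
</subnet>

HTTP/1.1 204 No Content
```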

    To delete the subnet, we make use of the DELETE HTTP method as shown below. Since the operation was successful and no body was returned, we will get a 204 No Content return code.
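A sketch of the DELETE, under the same assumptions:

```
DELETE /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
HTTP/1.1 204 No Content
```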

    Root Resource Discovery

    RESTCONF makes it possible to specify where the RESTCONF API is located, as described in the RESTCONF RFC 8040.

    By default, the RESTCONF API root is /restconf. Typically there is no need to change this default, although it is possible to change it by configuring the RESTCONF API root in the ncs.conf file as:
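A sketch of such an ncs.conf fragment (the element names follow ncs.conf(5) and should be verified against your NSO version):

```xml
<restconf>
  <enabled>true</enabled>
  <root-resource>my_own_restconf_root</root-resource>
</restconf>
```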

    The RESTCONF API root will now be /my_own_restconf_root.

    A client may discover the root resource by getting the /.well-known/host-meta resource as shown in the example below:
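A sketch of the discovery exchange, as defined in RFC 8040, Section 3.1 (server details are the same assumptions as before):

```
GET /.well-known/host-meta
Accept: application/xrd+xml

HTTP/1.1 200 OK

<XRD xmlns="http://docs.oasis-open.org/ns/xri/xrd-1.0">
  <Link rel="restconf" href="/restconf"/>
</XRD>
```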

    In this guide, all examples will assume the RESTCONF API root to be /restconf.

    Capabilities

    A RESTCONF capability is a set of functionality that supplements the base RESTCONF specification. The capability is identified by a uniform resource identifier (URI). The RESTCONF server includes a capability URI leaf-list entry identifying each supported protocol feature. This includes the basic-mode default-handling mode, optional query parameters, and may also include other, NSO-specific, capability URIs.

    How to View the Capabilities of the RESTCONF Server

    To view currently enabled capabilities, use the ietf-restconf-monitoring YANG model, which is available as: /restconf/data/ietf-restconf-monitoring:restconf-state.
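A sketch of such a request; the exact set of capability URIs returned depends on the NSO version and configuration, so the list below is illustrative and abbreviated:

```
GET /restconf/data/ietf-restconf-monitoring:restconf-state/capabilities
HTTP/1.1 200 OK

<capabilities
  xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
  <capability>urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit</capability>
  <capability>urn:ietf:params:restconf:capability:depth:1.0</capability>
  ...
</capabilities>
```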

    The defaults Capability

    This Capability identifies the basic-mode default-handling mode that is used by the server for processing default leafs in requests for data resources.

    The capability URL will contain a query parameter named basic-mode whose value tells us what the default behavior of the RESTCONF server is when it returns a leaf. The possible values are shown in the table below (basic-mode values):

    Value
    Description

    report-all

    Values set to the YANG default value are reported.

    trim

    Values set to the YANG default value are not reported.

    explicit

    Values that have been set by a client to the YANG default value will be reported.

    The values presented in the table above can also be used by the client together with the with-defaults query parameter to override the default RESTCONF server behavior. In addition to these values, the client can also use the report-all-tagged value.

    The table below lists the additional with-defaults value.

    Value
    Description

    report-all-tagged

    Works like report-all, but a default value will include an XML/JSON attribute to indicate that the value is in fact a default value.

    Referring back to the example NSO RESTCONF Capabilities above, the RESTCONF server returned the defaults capability:
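Given the behavior described next, the returned capability would have been of this form:

```
urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit
```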

    It tells us that values that have been set by a client to the YANG default value will be reported, but default values that have not been set by the client will not be returned. Again, note that this is the default RESTCONF server behavior, which can be overridden by the client using the with-defaults query argument.

    Query Parameter Capabilities

    A set of optional RESTCONF Capability URIs are defined to identify the specific query parameters that are supported by the server. They are defined as:

    The table shows query parameter capabilities.

    Name
    URI

    depth

    urn:ietf:params:restconf:capability:depth:1.0

    fields

    urn:ietf:params:restconf:capability:fields:1.0

    filter

    urn:ietf:params:restconf:capability:filter:1.0

    replay

    urn:ietf:params:restconf:capability:replay:1.0

    with-defaults

    urn:ietf:params:restconf:capability:with-defaults:1.0

    For a description of the query parameter functionality, see Query Parameters.

    Query Parameters

    Each RESTCONF operation allows zero or more query parameters to be present in the request URI. Query parameters can be given in any order, but each can appear at most once. Supplying query parameters when invoking RPCs and actions is not supported; if supplied, the response will be 400 (Bad Request) and the error-app-tag will be set to invalid-value. However, the query parameters trace-id and unhide are exempted from this rule and supported for RPC and action invocation. The defined query parameters, and in what type of HTTP request they can be used, are shown in the table below (Query parameters).

    Name
    Method
    Description

    content

    GET,HEAD

    Select config and/or non-config data resources.

    depth

    GET,HEAD

    Request limited subtree depth in the reply content.

    fields

    GET,HEAD

    Request a subset of the target resource contents.

    exclude

    GET,HEAD

    Exclude a subset of the target resource contents.

    The content Query Parameter

    The content query parameter controls if configuration, non-configuration, or both types of data should be returned. The content query parameter values are listed below.

    The allowed values are:

    Value
    Description

    config

    Return only configuration descendant data nodes.

    nonconfig

    Return only non-configuration descendant data nodes.

    all

    Return all descendant data nodes.

    The depth Query Parameter

    The depth query parameter is used to limit the depth of subtrees returned by the server. Data nodes deeper than the depth parameter value are not returned in response to a GET request.

    The value of the depth parameter is either an integer between 1 and 65535 or the string unbounded. The default value is: unbounded.

    The fields Query Parameter

    The fields query parameter is used to optionally identify data nodes within the target resource to be retrieved in a GET method. The client can use this parameter to retrieve a subset of all nodes in a resource.

    For a full definition of how the fields value can be constructed, refer to RFC 8040, Section 4.8.3.

    Note that the fields query parameter cannot be used together with the exclude query parameter. This will result in an error.

    The exclude Query Parameter

    The exclude query parameter is used to optionally exclude data nodes within the target resource from being retrieved with a GET request. The client can use this parameter to exclude a subset of all nodes in a resource. Only nodes below the target resource can be excluded, not the target resource itself.

    Note that the exclude query parameter cannot be used together with the fields query parameter. This will result in an error.

    The exclude query parameter uses the same syntax and has the same restrictions as the fields query parameter, as defined in RFC 8040, Section 4.8.3.

    Selecting multiple nodes to exclude can be done the same way as for the fields query parameter, as described in RFC 8040, Section 4.8.3.

    exclude using wildcards (*) will exclude all child nodes of the node. For lists and presence containers, the parent node will be visible in the output but not its children, i.e. it will be displayed as an empty node. For non-presence containers, the parent node will be excluded from the output as well.

    exclude can be used together with the depth query parameter to limit the depth of the output. In contrast to fields, where depth is counted from the node selected by fields, for exclude the depth is counted from the target resource, and the nodes are excluded if depth is deep enough to encounter an excluded node.

    When exclude is not used:
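A sketch against the assumed dhcp example model, with the full subtree returned:

```
GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <net>10.254.239.0/27</net>
  <range>
    <low>10.254.239.10</low>
    <high>10.254.239.20</high>
  </range>
</subnet>
```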

    Using exclude to exclude low and high from range, note that these are absent in the output:
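The same request with exclude; because range is a presence container in the assumed model, it is shown as an empty node:

```
GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27?exclude=range/low;range/high

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <net>10.254.239.0/27</net>
  <range/>
</subnet>
```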

    The filter, start-time, and stop-time Query Parameters

    These query parameters are only allowed on an event stream resource and are further described in Streams.

    The insert Query Parameter

    The insert query parameter is used to specify how a resource should be inserted within an ordered-by user list. The allowed values are shown in the table below (insert query parameter values).

    Value
    Description

    first

    Insert the new data as the new first entry.

    last

    Insert the new data as the new last entry. This is the default value.

    before

    Insert the new data before the insertion point, as specified by the value of the point parameter.

    after

    Insert the new data after the insertion point, as specified by the value of the point parameter.

    This parameter is only valid if the target data represents a YANG list or leaf-list that is ordered-by user. In the example below, we will insert a new router value, first, in the ordered-by user leaf-list of dhcp-options/router values. Remember that the default behavior is for new entries to be inserted last in an ordered-by user leaf-list.
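A sketch of such a request against the assumed dhcp-options/router leaf-list (the value is illustrative):

```
POST /restconf/data/dhcp:dhcp/dhcp-options?insert=first
Content-Type: application/yang-data+xml

<router xmlns="http://yang-central.org/ns/example/dhcp">one.acme.org</router>

HTTP/1.1 201 Created
```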

    To verify that the router value really ended up first:
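A sketch of the verification, assuming gateway.acme.org was already present in the leaf-list:

```
GET /restconf/data/dhcp:dhcp/dhcp-options/router
HTTP/1.1 200 OK

<router xmlns="http://yang-central.org/ns/example/dhcp">one.acme.org</router>
<router xmlns="http://yang-central.org/ns/example/dhcp">gateway.acme.org</router>
```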

    The point Query Parameter

    The point query parameter is used to specify the insertion point for a data resource that is being created or moved within an ordered-by user list or leaf-list. In the example below, we will insert the new router value: two.acme.org, after the first value: one.acme.org in the ordered-by user leaf-list of dhcp-options/router values.
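A sketch of such a request; note that the point value is a full resource identifier for the existing entry:

```
POST /restconf/data/dhcp:dhcp/dhcp-options?insert=after&point=/dhcp:dhcp/dhcp-options/router=one.acme.org
Content-Type: application/yang-data+xml

<router xmlns="http://yang-central.org/ns/example/dhcp">two.acme.org</router>

HTTP/1.1 201 Created
```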

    To verify that the router value really ended up after our insertion point:
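A sketch of the verification, continuing the assumed leaf-list contents from the previous examples:

```
GET /restconf/data/dhcp:dhcp/dhcp-options/router
HTTP/1.1 200 OK

<router xmlns="http://yang-central.org/ns/example/dhcp">one.acme.org</router>
<router xmlns="http://yang-central.org/ns/example/dhcp">two.acme.org</router>
<router xmlns="http://yang-central.org/ns/example/dhcp">gateway.acme.org</router>
```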

    Additional Query Parameters

    There are additional NSO query parameters available for the RESTCONF API. These additional query parameters are described in the table below (Additional Query Parameters).

    Name
    Methods
    Description

    dry-run

    POST PUT PATCH DELETE

    Validate and display the configuration changes but do not perform the actual commit. Neither CDB nor the devices are affected. Instead, the effects that would have taken place are shown in the returned output. Possible values are: xml, cli, and native. The value used specifies in what format we want the returned diff to be.

    dry-run-reverse

    POST PUT PATCH DELETE

    Used together with the dry-run=native parameter to display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are later made to the same data, the reverse device commands returned become invalid.

    no-networking

    POST PUT PATCH DELETE

    Do not send any data to the devices. This is a way to manipulate CDB in NSO without generating any southbound traffic.

    no-out-of-sync-check

    POST PUT PATCH DELETE

    Continue with the transaction even if NSO detects that a device's configuration is out of sync. Can't be used together with no-overwrite.
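As an illustration of dry-run, reusing the assumed dhcp example: the change is validated and the would-be diff is returned in the requested format, but nothing is committed to CDB or the devices:

```
POST /restconf/data/dhcp:dhcp?dry-run=cli
Content-Type: application/yang-data+xml

<subnet xmlns="http://yang-central.org/ns/example/dhcp">
  <net>10.254.239.0/27</net>
</subnet>
```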

    Edit Collision Prevention

    Two edit collision detection and prevention mechanisms are provided in RESTCONF for the datastore resource: a timestamp and an entity tag. Any change to configuration data resources will update the timestamp and entity tag of the datastore resource. This makes it possible for a client to apply precondition HTTP headers to a request.

    The NSO RESTCONF API honors the following HTTP response headers: Etag and Last-Modified, and the following request headers: If-Match, If-None-Match, If-Modified-Since, and If-Unmodified-Since.

    Response Headers

    • Etag: This header will contain an entity tag which is an opaque string representing the latest transaction identifier in the NSO database. This header is only available for the running datastore and hence, only relates to configuration data (non-operational).

    • Last-Modified: This header contains the timestamp for the last modification made to the NSO database. This timestamp can be used by a RESTCONF client in subsequent requests, within the If-Modified-Since and If-Unmodified-Since header fields. This header is only available for the running datastore and hence, only relates to configuration data (non-operational).

    Request Headers

    • If-None-Match: This header evaluates to true if the supplied value does not match the latest Etag entity-tag value. If it evaluates to false, a 304 (Not Modified) response will be sent with no body. This header only carries meaning if the entity tag of an Etag response header has previously been acquired. A typical usage is a HEAD operation to find out whether the data has changed since the last retrieval.

    • If-Modified-Since: This request-header field is used with an HTTP method to make it conditional, i.e. if the requested resource has not been modified since the time specified in this field, the request will not be processed by the RESTCONF server; instead, a 304 (Not Modified) response will be returned without any message-body. A typical usage is a GET operation to retrieve the information if (and only if) the data has changed since the last retrieval. Thus, this header should use the value of a Last-Modified response header that has previously been acquired.

    • If-Match: This header evaluates to true if the supplied value matches the latest Etag value. If it evaluates to false, an error response with status 412 (Precondition Failed) is sent with no body. This header is only meaningful if the entity tag of an Etag response header has previously been acquired. A typical use is a PUT, where If-Match prevents the lost-update problem: it verifies that the modification a client wants to upload will not overwrite another change made since the original resource was fetched.

    • If-Unmodified-Since: This header evaluates to true if the resource has not been modified after the supplied date. If the resource has been modified after that date, the response is a 412 (Precondition Failed) error with no body. This header is only meaningful if a Last-Modified response header has previously been acquired. A typical use is a POST, where edits are rejected if the stored resource has been modified since the original value was retrieved.
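The precondition semantics above can be sketched in a few lines of Python. This is a hedged illustration of the decision logic only; the function names are hypothetical and are not part of any NSO API:

```python
from datetime import datetime, timezone

# Illustrative sketch of RESTCONF precondition evaluation.
def check_if_match(supplied_etag, current_etag):
    # If-Match: succeed (None) only if the entity tags match; otherwise 412.
    return None if supplied_etag == current_etag else 412

def check_if_unmodified_since(supplied_ts, last_modified):
    # If-Unmodified-Since: succeed only if the resource has not been
    # modified after the supplied timestamp; otherwise 412.
    return None if last_modified <= supplied_ts else 412

t1 = datetime(2024, 1, 1, tzinfo=timezone.utc)
t2 = datetime(2024, 6, 1, tzinfo=timezone.utc)

print(check_if_match("abc123", "abc123"))   # precondition holds -> None
print(check_if_match("abc123", "def456"))   # 412 Precondition Failed
print(check_if_unmodified_since(t2, t1))    # not modified since -> None
print(check_if_unmodified_since(t1, t2))    # modified after -> 412
```

In a real client, the values on the left would come from a previously acquired Etag or Last-Modified response header, as described above.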

    Using Rollbacks

    Rolling Back Configuration Changes

    If rollbacks have been enabled in the configuration, using the rollback-id query parameter makes the operation return the fixed ID of the rollback file created during that operation. The examples below show the creation of a new resource and the removal of that resource using the rollback created in the first step.

    Then using the fixed ID returned above as input to the apply-rollback-file action:

    Streams

    Introduction

    The RESTCONF protocol supports YANG-defined event notifications. The solution preserves aspects of NETCONF event notifications [RFC5277] while utilizing the Server-Sent Events, W3C.REC-eventsource-20150203, transport strategy.

    RESTCONF event notification streams are described in Sections 6 and 9.2 of RFC 8040, where also notification examples can be found.

    RESTCONF event notification is a way for RESTCONF clients to retrieve notifications for different event streams. Event streams configured in NSO can be subscribed to using different channels such as the RESTCONF or the NETCONF channel.

    More information on how to define a new notification event using YANG is described in RFC 6020.

    How to add and configure notifications support in NSO is described in the ncs.conf(5) man page.

    The design of RESTCONF event notification is inspired by how NETCONF event notification is designed. More information on NETCONF event notification can be found in RFC 5277.

    Configuration

    For this example, we will define a notification stream, named interface, in the ncs.conf configuration file as shown below.

    We also enable the built-in replay store which means that NSO automatically stores all notifications on disk, ready to be replayed should a RESTCONF event notification subscriber ask for logged notifications. The replay store uses a set of wrapping log files on a disk (of a certain number and size) to store the notifications.
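A stream definition along these lines could be added to ncs.conf. This is a hedged sketch: element names follow the ncs.conf(5) man page, and the description, directory, and size values are placeholders to adapt:

```xml
<notifications>
  <event-streams>
    <stream>
      <name>interface</name>
      <description>Example interface event stream</description>
      <replay-support>true</replay-support>
      <builtin-replay-store>
        <enabled>true</enabled>
        <dir>./state</dir>
        <max-size>S1M</max-size>
        <max-files>5</max-files>
      </builtin-replay-store>
    </stream>
  </event-streams>
</notifications>
```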

    To view the currently enabled event streams, use the ietf-restconf-monitoring YANG model. The streams are available under the /restconf/data/ietf-restconf-monitoring:restconf-state/streams container.

    Note the URL value we get in the location element in the example above. This URL should be used when subscribing to the notification events as is shown in the next example.

    Subscribe to Notification Events

    RESTCONF clients can determine the URL for the subscription resource (to receive notifications) by sending an HTTP GET request for the location leaf within the stream list entry. The value returned by the server can be used for the actual notification subscription.

    The client will send an HTTP GET request for the (location) URL returned by the server, with the Accept type text/event-stream, as shown in the example below. Note that this request works like a long-polling request, which means that the request will not return. Instead, server-side notifications are sent to the client, where each line of a notification is prepended with data:.

    Since we have enabled the replay store, we can ask the server to replay any notifications generated since the specific date we specify. After those notifications have been delivered, we will continue waiting for new notifications to be generated.
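Because each notification arrives as Server-Sent Events lines prefixed with data:, a client must strip that prefix and reassemble the payload. A minimal, hypothetical parser (the event payload shown is illustrative):

```python
def parse_sse_event(raw_lines):
    """Reassemble one SSE event: keep 'data:' lines, strip the prefix
    and any leading spaces, and join the pieces with newlines."""
    data = []
    for line in raw_lines:
        if line.startswith("data:"):
            data.append(line[len("data:"):].lstrip(" "))
    return "\n".join(data)

# Illustrative event, not actual NSO output.
event = [
    'data: {"ietf-restconf:notification": {',
    'data:   "eventTime": "2024-01-01T00:00:00+00:00",',
    'data:   "link-up": {"if-index": 2}}}',
]
print(parse_sse_event(event))
```

A real client would read the long-polling response stream incrementally and treat a blank line as the end of one event, per the SSE specification.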

    Errors

    Errors occurring during streaming of events will be reported as Server-Sent Events (SSE) comments as described in W3C.REC-eventsource-20150203 as shown in the example below.

    Schema Resource

    RFC 8040, Section 3.7 describes the retrieval of the YANG modules used by the server, via a URL obtained from the schema leaf in the YANG library. The YANG source is made available by NSO in two ways: compiled into the fxs file or put in the loadPath. See Monitoring of the NETCONF Server.

    The example below shows how to list the available YANG modules. Since we are interested in the dhcp module, we only show that part of the output:

    We can now retrieve the dhcp YANG module via the URL we got in the schema leaf of the reply. Note that the actual URL may point anywhere. The URL is configured by the schemaServerUrl setting in the ncs.conf file.

    YANG Patch Media Type

    The NSO RESTCONF API also supports the YANG Patch Media Type, as defined in RFC 8072.

    A YANG Patch is an ordered list of edits that are applied to the target datastore by the RESTCONF server. A YANG Patch request is sent as an HTTP PATCH request containing a body describing the edit operations to be performed. The format of the body is defined in the RFC 8072.

    Referring to the example above (the dhcp YANG model) in the Getting Started section, we will show how to use YANG Patch to achieve the same result with fewer requests.

    Create Two New Resources with the YANG Patch

    To create the resources, we send an HTTP PATCH request where the Content-Type indicates that the body consists of a YANG Patch message. Our YANG Patch request initiates two edit operations, each creating a new subnet. In contrast, plain RESTCONF would have needed two POST requests to achieve the same result.
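A YANG Patch body with two create edits, per the RFC 8072 structure, can be sketched as below. The target paths and subnet values are illustrative assumptions, not taken from the stripped example; such a body would be sent with Content-Type application/yang-patch+json:

```python
import json

# Sketch of an RFC 8072 YANG Patch body with two "create" edits.
patch = {
    "ietf-yang-patch:yang-patch": {
        "patch-id": "add-subnets",
        "edit": [
            {
                "edit-id": "edit1",
                "operation": "create",
                # %2F is the percent-encoded "/" inside the list key.
                "target": "/subnet=10.254.239.0%2F27",
                "value": {"subnet": [{"net": "10.254.239.0/27"}]},
            },
            {
                "edit-id": "edit2",
                "operation": "create",
                "target": "/subnet=10.254.244.0%2F27",
                "value": {"subnet": [{"net": "10.254.244.0/27"}]},
            },
        ],
    }
}
body = json.dumps(patch, indent=2)
print(body)
```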

    Modify and Delete in the Same YANG Patch Request

    Let us modify the max-lease-time of one subnet and delete the max-lease-time value of the second subnet. Note that the delete will cause the default value of max-lease-time to take effect, which we will verify using a RESTCONF GET request.

    To verify that our modify and delete operations took place we make use of two RESTCONF GET requests as shown below.

    Note how the last GET request uses the with-defaults query parameter to request that default values be returned and tagged as such.

    NMDA

    The RESTCONF extensions for the Network Management Datastore Architecture (NMDA), as defined in RFC 8527, extend the RESTCONF protocol. They enable RESTCONF clients to discover which datastores are supported by the RESTCONF server, determine which modules are supported in each datastore, and interact with all the datastores supported by the NMDA.

    A RESTCONF client can test if a server supports the NMDA by using either the HEAD or GET methods on /restconf/ds/ietf-datastores:operational, as shown below:

    A RESTCONF client can discover which datastores and YANG modules the server supports by reading the YANG library information from the operational state datastore. Note in the example below that, since the result consists of three top nodes, it can't be represented in XML; hence we request the returned content to be in JSON format. See also Collections.

    Extensions

    To avoid any potential future conflict with the RESTCONF standard, extensions made to the NSO implementation of RESTCONF are located under the URL path /restconf/tailf, or are controlled by means of a vendor-specific media type.

    There is no index of extensions under /restconf/tailf. To list extensions, access /restconf/data/ietf-yang-library:modules-state and follow published links for schemas.

    Collections

    The RESTCONF specification states that a result containing multiple instances (e.g. a number of list entries) is not allowed if XML encoding is used. The reason for this is that an XML document can only have one root node.

    This functionality is supported if the http://tail-f.com/ns/restconf/collection/1.0 capability is presented. See also How to View the Capabilities of the RESTCONF Server.

    To remedy this, an HTTP GET request can use the Accept header with media type application/vnd.yang.collection+xml, as shown in the following example. The result is then wrapped within a collection element.

    The RESTCONF Query API

    The NSO RESTCONF Query API consists of a number of operations for running a query that may live over several RESTCONF requests, where data can be fetched in suitable chunks. The data to be returned is produced by applying an XPath expression, and it may also be sorted.

    The RESTCONF client can check if the NSO RESTCONF server supports this functionality by looking for the http://tail-f.com/ns/restconf/query-api/1.0 capability. See also How to View the Capabilities of the RESTCONF Server.

    The tailf-rest-query.yang and tailf-common-query.yang YANG models describe the structure of the RESTCONF Query API messages. You can retrieve them using the Schema Resource functionality, as described in Schema Resource.

    Request and Replies

    The API consists of the following requests:

    • start-query: Start a query and return a query handle.

    • fetch-query-result: Use a query handle to repeatedly fetch chunks of the result.

    • immediate-query: Start a query and return the entire result immediately.

    • reset-query: (Re)set where the next fetched result will begin from.

    • stop-query: Stop (and close) the query.

    The API consists of the following replies:

    • start-query-result: Reply to the start-query request.

    • query-result: Reply to the fetch-query-result and immediate-query requests.

    In the following examples, we'll use this data model:

    The payload can be represented in either XML or JSON. Note how we indicate the type of content using the Content-Type HTTP header. For XML, it could look like this:

    The same request in JSON format would look like:

    An informal interpretation of this query is:

    For each /x/host where enabled is true, select its name, and address, and return the result sorted by name, in chunks of 100 result items at a time.
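This request can be rendered as a JSON payload along the following lines. The field names follow the query elements discussed in this section (foreach, select, sort-by, limit, offset); the exact JSON rendering is an assumption to verify against the tailf-rest-query YANG model:

```python
import json

# Sketch of a start-query payload for the data model above.
start_query = {
    "start-query": {
        # XPath node-set to iterate over.
        "foreach": "/x/host[enabled = 'true']",
        # What to return for each node in the node set.
        "select": [
            {"label": "Host name", "expression": "name",
             "result-type": ["string"]},
            {"expression": "address", "result-type": ["string"]},
        ],
        "sort-by": ["name"],
        "limit": 100,   # chunk size per fetch-query-result
        "offset": 1,    # start from the first node (the default)
    }
}
print(json.dumps(start_query, indent=2))
```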

    Let us discuss the various pieces of this request. To start with, when using XML, we need to specify the namespace as shown:

    The actual XPath query to run is specified by the foreach element. The example below will search for all /x/host nodes that have the enabled node set to true:

    Now we need to define what we want to have returned from the node set by using one or more select sections. What to actually return is defined by the XPath expression.

    Choose how the result should be represented. Basically, it can be the actual value or the path leading to the value. This is specified per select chunk. The possible result types are string, path, leaf-value, and inline.

    The difference between string and leaf-value is somewhat subtle. In the case of string, the result will be processed by the XPath function string(), which, if the result is a node-set, concatenates all the values. The leaf-value returns the value of the first node in the result. As long as the result is a leaf node, string and leaf-value return the same result. In the example above, string is used. Note that at least one result-type must be specified.

    The result-type inline makes it possible to return the full sub-tree of data, either in XML or in JSON format. The data will be enclosed with a tag: data.

    It is possible to specify an optional label for a convenient way of labeling the returned data:

    The returned result can be sorted. This is expressed as an XPath expression, which in most cases is very simple and refers to the found node-set. In this example, we sort the result by the content of the name node:

    With the offset element, we can specify at which node we should start to receive the result. The default is 1, i.e., the first node in the resulting node set.

    It is possible to set a custom timeout when starting or resetting a query. Each time a function is called, the timeout timer resets. The default is 600 seconds, i.e. 10 minutes.

    The reply to this request would look something like this:

    The query handle (in this example '12345') must be used in all subsequent calls. To retrieve the result, we can now send:

    Which will result in something like the following:

    If we try to get more data with the fetch-query-result, we might get more result entries in return until no more data exists and we get an empty query result back:

    Finally, when we are done we stop the query:

    Reset a Query

    If we want to go back into the stream of received data chunks and have them repeated, we can do that with the reset-query request. In the example below, we ask to get results from the 42nd result entry:

    Immediate Query

    If we want to get the entire result sent back to us, using only one request, we can do this by using the immediate-query. This function takes similar arguments as start-query and returns the entire result, analogous to the result of a fetch-query-result request. Note that it is not possible to paginate or set an offset start node for the result list; i.e., the limit and offset options are ignored.

    Partial Responses

    This functionality is supported if the http://tail-f.com/ns/restconf/partial-response/1.0 capability is presented. See also How to View the Capabilities of the RESTCONF Server.

    By default, the server sends back the full representation of a resource after processing a request. For better performance, the server can be instructed to send only the nodes the client really needs in a partial response.

    To request a partial response for a set of list entries, use the offset and limit query parameters to specify a limited set of entries to be returned.

    In the following example, we retrieve only two entries, skipping the first entry and then returning the next two entries:

    Hidden Nodes

    This functionality is supported if the http://tail-f.com/ns/restconf/unhide/1.0 capability is presented. See also How to View the Capabilities of the RESTCONF Server.

    By default, hidden nodes are not visible in the RESTCONF interface. To unhide hidden nodes for retrieval or editing, clients can use the query parameter unhide, or the server can be configured to always show hidden nodes by setting the showHidden parameter to true under the restconf configuration in the ncs.conf file. The unhide query parameter is also supported for RPC and action invocation.

    The format of the unhide parameter is a comma-separated list of hide group names, where a password-protected group is given as the group name followed by a semicolon and the password.

    As an example:

    This example unhides the unprotected group extra and the password-protected group debug, using the password secret.
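The example above would produce the query parameter value extra,debug;secret. A small hypothetical helper makes the format concrete:

```python
def build_unhide(groups):
    """Build the 'unhide' query parameter value from (group, password)
    pairs; use password None for unprotected hide groups.
    Hypothetical helper, not part of any NSO API."""
    parts = []
    for group, password in groups:
        parts.append(group if password is None else f"{group};{password}")
    return ",".join(parts)

print(build_unhide([("extra", None), ("debug", "secret")]))
# → extra,debug;secret
```

The resulting string would be sent as ?unhide=extra,debug;secret on the request URL.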

    Configuration Metadata

    It is possible to associate metadata with the configuration data. For RESTCONF, resources such as containers, lists, leafs, and leaf-lists can have such metadata. In XML, this metadata is represented as attributes attached to the XML element in question. In JSON, there is no natural way to represent this information, so a special notation has been introduced, based on RFC 7952; see the example below.

    For JSON, note how we represent the metadata for a certain object "x" by another object whose name is the object name prefixed with either one or two "@" signs. The metadata object "@x" refers to the sibling object "x", and the "@@x" object refers to the parent object.

    This notation differs from RFC 7952.
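The single-"@" sibling case can be sketched as follows. The leaf name and the metadata attribute are illustrative assumptions; they only show the shape of the notation:

```python
import json

# Sketch of the "@"-prefixed JSON metadata notation described above:
# "@default-lease-time" is a sibling object carrying metadata for the
# leaf "default-lease-time". The attribute name is illustrative.
doc = {
    "dhcp": {
        "default-lease-time": "PT600S",
        "@default-lease-time": {"annotation": "changed by admin"},
    }
}
print(json.dumps(doc, indent=2))
```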

    Authentication Cache

    The RESTCONF server maintains an authentication cache. When authenticating an incoming request for a particular User:Password, the server first checks whether the User exists in the cache; if so, the request is processed directly. This avoids the potentially time-consuming login procedure that takes place on a cache miss.

    Cache entries have a maximum Time-To-Live (TTL) and upon expiry, a cache entry is removed which will cause the next request for that User to perform the normal login procedure. The TTL value is configurable via the auth-cache-ttl parameter, as shown in the example. Note that, by setting the TTL value to PT0S (zero), the cache is effectively turned off.

    It is also possible to combine the Client's IP address with the User name as a key into the cache. This behavior is disabled by default. It can be enabled by setting the enable-auth-cache-client-ip parameter to true. With this enabled, only a Client coming from the same IP address may get a hit in the authentication cache.

    Client IP via Proxy

    It is possible to configure the NSO RESTCONF server to pick up the client IP address via an HTTP header in the request. A list of HTTP headers to look for is configurable via the proxy-headers parameter as shown in the example.

    To avoid misuse of this feature, only requests from trusted sources will be searched for such an HTTP header. The list of trusted sources is configured via the allowed-proxy-ip-prefix as shown in the example.

    External Token Authentication/Validation

    The NSO RESTCONF server can be set up to pass along a token used for authentication and/or validation of the client. Note that this requires external authentication/validation to be set up properly. See External Token Validation and External Authentication for details.

    With token authentication, we mean that the client sends a User:Password to the RESTCONF server, which will invoke an external executable that performs the authentication and upon success produces a token that the RESTCONF server will return in the X-Auth-Token HTTP header of the reply.

    With token validation, we mean that the RESTCONF server will pass along any token, provided in the X-Auth-Token HTTP header, to an external executable that performs the validation. This external program may produce a new token that the RESTCONF server will return in the X-Auth-Token HTTP header of the reply.

    To make this work, the following need to be configured in the ncs.conf file:

    It is also possible to have the RESTCONF server return an HTTP cookie containing the token.

    An HTTP cookie (web cookie, browser cookie) is a small piece of data that a server sends to the user's web browser. The browser may store it and send it back with the next request to the same server. This can be convenient in certain solutions, where typically, it is used to tell if two requests came from the same browser, keeping a user logged in, for example.

    To make this happen, the name of the cookie needs to be configured as well as a directives string which will be sent as part of the cookie.

    Custom Response HTTP Headers

    The RESTCONF server can be configured to reply with particular HTTP headers in the HTTP response. For example, to support Cross-Origin Resource Sharing (CORS, https://www.w3.org/TR/cors/) there is a need to add a couple of headers to the HTTP Response.

    We add the extra configuration parameter in ncs.conf.
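As a hedged sketch, a CORS-style header configuration could look like this in ncs.conf. The custom-headers element name and its placement under restconf are assumptions to verify against the ncs.conf(5) man page, and the header values are examples only:

```xml
<restconf>
  <enabled>true</enabled>
  <custom-headers>
    <header>
      <name>Access-Control-Allow-Origin</name>
      <value>*</value>
    </header>
    <header>
      <name>Access-Control-Allow-Methods</name>
      <value>GET, POST, PUT, PATCH, DELETE, OPTIONS</value>
    </header>
  </custom-headers>
</restconf>
```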

    For security reasons, a number of HTTP headers are deemed so important that they are included in the RESTCONF reply by default, with sensible values. These values can be changed by configuration in the ncs.conf file. Note that configuring an empty value effectively excludes that particular header from the RESTCONF reply. The headers and their default values are:

    • xFrameOptions: DENY

      The default value indicates that the page cannot be displayed in a frame/iframe/embed/object regardless of the site attempting to do so.

    • xContentTypeOptions: nosniff

      The default value indicates that the MIME types advertised in the Content-Type headers should not be changed and must be followed. In particular, requests for CSS or JavaScript are blocked if a proper MIME type is not used.

    • xXssProtection: 1; mode=block

      This header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. It enables XSS filtering and tells the browser to prevent rendering of the page if an attack is detected.

    • strictTransportSecurity: max-age=15552000; includeSubDomains

      The default value tells browsers that the RESTCONF server should only be accessed using HTTPS, instead of using HTTP. It sets the time that the browser should remember this and states that this rule applies to all of the server's subdomains as well.

    • contentSecurityPolicy: default-src 'self'; block-all-mixed-content; base-uri 'self'; frame-ancestors 'none';

      The default value means that: Resources like fonts, scripts, connections, images, and styles will all only load from the same origin as the protected resource. All mixed contents will be blocked and frame-ancestors like iframes and applets are prohibited.

    Generating Swagger for RESTCONF

    Swagger is a documentation language used to describe RESTful APIs. The resulting specifications are used both to document APIs and to generate clients in a variety of languages. For more information about the Swagger specification itself and the ecosystem of tools available for it, see swagger.io.

    The RESTCONF API in NSO provides an HTTP-based interface for accessing data. The YANG modules loaded into the system define the schema for the data structures that can be manipulated using the RESTCONF protocol. The yanger tool provides options to generate Swagger specifications from YANG files. The tool currently supports generating specifications according to OpenAPI/Swagger 2.0 using JSON encoding. The tool supports validation of JSON bodies in body parameters and response bodies; XML content validation is not supported.

    YANG and Swagger are two different languages serving slightly different purposes. YANG is a data modeling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols such as NETCONF and RESTCONF. Swagger is an API definition language that documents API resource structure as well as HTTP body content validation for applicable HTTP request methods. Translation from YANG to Swagger is not perfect, in the sense that certain constructs and features in YANG are not possible to capture completely in Swagger. The translation is designed such that the resulting Swagger definitions are more restrictive than what is expressed in the YANG definitions. This means that there are certain cases where a client can do more in the RESTCONF API than what the Swagger definition expresses. There is also a set of well-known resources defined in the RESTCONF RFC 8040 that are not part of the generated Swagger specification, notably resources related to event streams.

    Using Yanger to Generate Swagger

    The yanger tool is a YANG parser and validator that provides options to convert YANG modules to a multitude of formats including Swagger. You use the -f swagger option to generate a Swagger definition from one or more YANG files. The following command generates a Swagger file named example.json from the example.yang YANG file:

    Generating Swagger from more than one YANG module at a time is not supported. It is possible, however, to augment this module by supplying additional modules. The following command generates a Swagger document from base.yang, which is augmented by base-ext-1.yang and base-ext-2.yang:

    Supplying only augmenting modules is not supported.

    Use the --help option to the yanger command to see all available options:

    The complete list of options related to Swagger generation is:

    Using the example-jukebox.yang from the RESTCONF RFC 8040, the following example generates a comprehensive Swagger definition using a variety of Swagger-related options:

    RFC 8040
    RFC 6241
    W3C.REC-xml-20081126
    RFC 7159

    Java API Overview

    Learn about the NSO Java API and its usage.

    The NSO Java library contains a variety of APIs for different purposes. In this section, we introduce these and explain their usage. The Java library deliverables are found as two jar files (ncs.jar and conf-api.jar). The jar files and their dependencies can be found under $NCS_DIR/java/jar/.

    For convenience, the Java build tool Apache ant (https://ant.apache.org/) is used to run all of the examples. However, this tool is not a requirement for NSO.

    Common to all APIs is that they communicate with NSO using TCP sockets. This makes it possible to use all APIs from a remote location.

    The following APIs are included in the library:

    MAAPI (Management Agent API): Northbound interface that is transactional and user-session based. Using this interface, both configuration and operational data can be read. Configuration data can be written and committed as one transaction. The API is complete in the sense that it is possible to write a new northbound agent using only this interface. It is also possible to attach to ongoing transactions in order to read uncommitted changes and/or modify data in these transactions.

    In addition, the Conf API framework contains utility classes for data types, keypaths, etc.

    MAAPI

    The Management Agent API (MAAPI) provides an interface to the Transaction engine in NSO. As such it is very versatile. Here are some examples of how the MAAPI interface can be used.

    • Read and write configuration data stored by NSO or in an external database.

    • Write our own northbound interface.

    • Access data inside a not-yet-committed transaction, e.g., in validation logic, where our Java code can attach itself to a running transaction, read through the not-yet-committed changes, and validate the proposed configuration change.

    • During a database upgrade, access and write data to a special upgrade transaction.

    The first step of a typical sequence of MAAPI API calls when writing a management application is to create a user session. Creating a user session is the equivalent of establishing an SSH connection from a NETCONF manager. It is up to the MAAPI application to authenticate users. The TCP connection between MAAPI and NSO is neither encrypted nor authenticated. The Maapi Java package does, however, include an authenticate() method that the application can use to hook into the AAA framework of NSO and let NSO authenticate the user.

    When a Maapi socket has been created the next step is to create a user session and supply the relevant information about the user for authentication.

    When the user has been authenticated and a user session has been created the Maapi reference is now ready to establish a new transaction toward a data store. The following code snippet starts a read/write transaction towards the running data store.


    The startTrans(int db,int mode) method of the Maapi class returns an integer that represents a transaction handler. This transaction handler is used when invoking the various Maapi methods.

    An example of a typical transactional method is the getElem() method:

    The first parameter of getElem(int th, String fmt, Object ... arguments) is the transaction handle, i.e., the integer returned by the startTrans() method. The fmt is a path leading to a leaf in the data model. The path is expressed as a format string that contains fixed text with zero or more embedded format specifiers. For each specifier, one argument in the variable argument list is expected.

    The currently supported format specifiers in the Java API are:

    • %d - requiring an integer parameter (type int) to be substituted.

    • %s - requiring a java.lang.String parameter to be substituted.

    • %x - requiring subclasses of type com.tailf.conf.ConfValue to be substituted.

    The return value val contains a reference to a ConfValue, which is a superclass of all the ConfValue types that map to specific YANG data types. If the YANG data type of ip in the YANG model is ietf-inet-types:ipv4-address, we can narrow it to the corresponding subclass com.tailf.conf.ConfIPv4.

    The opposite of getElem() is the setElem() method, which sets a leaf to a specific value.

    We have not yet committed the transaction so no modification is permanent. The data is only visible inside the current transaction. To commit the transaction we call:

    The method applyTrans() commits the current transaction to the running datastore.

    It is also possible to run the code above without lock(Conf.DB_RUNNING).

    Calling the applyTrans() method also performs additional validation of the new data, as required by the data model, and fails if the data is invalid. You can perform the validation beforehand, using the validateTrans() method.

    Additionally, applying a transaction can fail in case of a conflict with another, concurrent transaction. The best course of action in this case is to retry the transaction.

    MAAPI can also attach to an already existing NSO transaction to inspect not-yet-committed data, for example, to implement validation logic in Java. See the example below (Attach Maapi to the Current Transaction).

    CDB API

    This API provides an interface to the CDB Configuration database which stores all configuration data. With this API the user can:

    • Start a CDB Session to read configuration data.

    • Subscribe to changes in CDB - The subscription functionality makes it possible to receive events/notifications when changes occur in CDB.

    CDB can also be used to store operational data, i.e., data designated with a "config false" statement in the YANG data model. Operational data is read/write through the CDB API. NETCONF and the other northbound agents can only read operational data.

    The Java CDB API is intended to be fast and lightweight, and CDB read sessions are expected to be short-lived and fast. The CDB API bypasses the NSO transaction manager, and therefore write operations on configuration data are prohibited. If operational data is stored in CDB, both read and write operations on this data are allowed.

    CDB is always locked for the duration of the session. It is therefore the responsibility of the programmer to keep CDB interactions brief and to ensure that all CDB sessions are closed when the interaction has finished.

    To initialize the CDB API a CDB socket has to be created and passed into the API base class com.tailf.cdb.Cdb:

    After the cdb socket has been established, a user could either start a CDB Session or start a subscription of changes in CDB:

    We can refer to an element in a model with an expression like /servers/server. This type of string reference to an element is called a keypath, or just path. To refer to an element underneath a list, we need to identify which instance of the list is of interest.

    One way is to pinpoint the sequence number in the ordered list, starting from 0. For instance, the path /servers/server[2]/port refers to the port leaf of the third server in the configuration. This numbering is only valid during the current CDB session. Note that the database is locked during this session.

    We can also refer to list instances using the key values for the list. Remember that we specify in the data model which leaf or leafs in the list constitute the key. In our case, a server has the name leaf as key. The syntax for keys is a space-separated list of key values enclosed within curly brackets: { Key1 Key2 ...}. So, /servers/server{www}/ip refers to the ip leaf of the server whose name is www.

A YANG list may have more than one key. For example, the keypath /dhcp/subNets/subNet{192.168.128.0 255.255.255.0}/routers refers to the routers list of the subnet whose keys are 192.168.128.0 and 255.255.255.0.

The keypath syntax allows for formatting characters and accompanying substitution arguments. For example, getElem("server[%d]/ifc{%s}/mtu", 2, "eth0") uses a keypath with a mix of position and key values, with formatting characters and arguments. Expressed in text, the path references the MTU of the third server instance's interface named eth0.

The CdbSession Java class has a number of methods to control the current position in the model.

    • CdbSession.cwd() to get current position.

    • CdbSession.cd() to change current position.

    • CdbSession.pushd() to change and push a new position to a stack.

Using relative paths and e.g. CdbSession.pushd(), it is possible to write code that can be reused for common subtrees.
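A sketch of such reuse, assuming the /servers/server model from the keypath examples above (CdbSession.popd() restores the position saved by pushd()):

```java
import com.tailf.cdb.CdbSession;
import com.tailf.conf.ConfValue;

public class RelativeRead {
    // Reads the port leaf relative to the current position, so the same
    // code works for any server instance we have positioned ourselves at.
    static ConfValue readPort(CdbSession session) throws Exception {
        return session.getElem("port");
    }

    static void example(CdbSession session) throws Exception {
        session.pushd("/servers/server{www}"); // save position and move
        ConfValue port = readPort(session);
        session.popd();                        // restore saved position
    }
}
```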

    The current position also includes the namespace. If an element of another namespace should be read, then the prefix of that namespace should be set in the first tag of the keypath, like: /smp:servers/server where smp is the prefix of the namespace. It is also possible to set the default namespace for the CDB session with the method CdbSession.setNamespace(ConfNamespace).

The CDB subscription mechanism allows an external Java program to be notified when different parts of the configuration change. For such a notification, it is also possible to iterate through the change set in CDB for that notification.

Subscriptions are primarily towards the running data store. Subscriptions towards the operational data store in CDB are possible, but the mechanism is slightly different; see below.

    The first thing to do is to register in CDB which paths should be subscribed to. This is accomplished with the CdbSubscription.subscribe(...) method. Each registered path returns a subscription point identifier. Each subscriber can have multiple subscription points, and there can be many different subscribers.

Every subscription point is defined through a path, similar to the paths used for read operations, with the difference that instead of fully instantiated paths to list instances we can use tag paths, i.e. leave out the key value parts, to subscribe to all instances. We can subscribe either to specific leaves or to entire subtrees. Assume the YANG data model with the /servers/server list used in the keypath examples above.

Explaining this by example:

• A subscription on a specific leaf: only changes to this leaf will generate a notification.

• A subscription to /servers: we subscribe to any changes in the subtree rooted at /servers. This includes additions and removals of server instances, as well as changes to already existing server instances.

• A subscription to /servers/server{www}/ip: we only want to be notified when the server www changes its ip address.

• A subscription to the tag path /servers/server/ip: we want to be notified when the leaf ip is changed in any server instance.
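Registering such points could be sketched as below, where the priority value (1) and the generated namespace class smp are assumptions borrowed from the earlier examples:

```java
import com.tailf.cdb.Cdb;
import com.tailf.cdb.CdbSubscription;

public class RegisterPoints {
    static void register(Cdb cdb) throws Exception {
        CdbSubscription sub = cdb.newSubscription();
        // the whole subtree rooted at /servers
        int subtree = sub.subscribe(1, new smp(), "/servers");
        // only the ip leaf of the server instance named "www"
        int wwwIp = sub.subscribe(1, new smp(), "/servers/server{www}/ip");
        // tag path: the ip leaf of any server instance
        int anyIp = sub.subscribe(1, new smp(), "/servers/server/ip");
    }
}
```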

When adding a subscription point, the client must also provide a priority, which is an integer. Changes to CDB happen as part of a transaction, initiated for example by a commit operation from the CLI or an edit-config operation in NETCONF that modifies the running database. As the last part of the transaction, CDB generates notifications in lock-step priority order. First, all subscribers at the lowest numbered priority are handled; once they all have replied and synchronized by calling sync(CdbSubscriptionSyncType synctype), the next set, at the next priority level, is handled by CDB. Not until all subscription points have been acknowledged is the transaction complete.

    This implies that if the initiator of the transaction was, for example, a commit command in the CLI, the command will hang until notifications have been acknowledged.

    Note that even though the notifications are delivered within the transaction, a subscriber can't reject the changes (since this would break the two-phase commit protocol used by the NSO backplane towards all data providers).

    When a client is done subscribing, it needs to inform NSO it is ready to receive notifications. This is done by first calling subscribeDone(), after which the subscription socket is ready to be polled.

Once a subscriber has read its subscription notifications using read(), it can iterate through the changes that caused the notification using the diffIterate() method.
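Put together, a notification loop could be sketched as follows (the diffIterate() callback signature is abbreviated here and should be checked against the Javadoc for the version in use):

```java
import com.tailf.cdb.CdbDiffIterate;
import com.tailf.cdb.CdbSubscription;
import com.tailf.cdb.CdbSubscriptionSyncType;
import com.tailf.conf.ConfObject;
import com.tailf.conf.ConfPath;
import com.tailf.conf.DiffIterateOperFlag;
import com.tailf.conf.DiffIterateResultFlag;

public class SubscriberLoop {
    static void run(CdbSubscription sub) throws Exception {
        while (true) {
            int[] points = sub.read(); // blocks until a notification arrives
            for (int point : points) {
                sub.diffIterate(point, new CdbDiffIterate() {
                    public DiffIterateResultFlag iterate(
                            ConfObject[] kp, DiffIterateOperFlag op,
                            ConfObject oldValue, ConfObject newValue,
                            Object state) {
                        System.out.println(op + " " + new ConfPath(kp));
                        return DiffIterateResultFlag.ITER_RECURSE;
                    }
                });
            }
            // acknowledge, so that the transaction can complete
            sub.sync(CdbSubscriptionSyncType.DONE_PRIORITY);
        }
    }
}
```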

    It is also possible to start a new read-session to the CDB_PRE_COMMIT_RUNNING database to read the running database as it was before the pending transaction.

    Subscriptions towards the operational data in CDB are similar to the above, but because the operational data store is designed for light-weight access (and thus, does not have transactions and normally avoids the use of any locks), there are several differences, in particular:

• Subscription notifications are only generated if the writer obtains the subscription lock, by using startSession() with CdbLockType.LOCKREQUEST. In addition, a session towards the operational data must be started with CdbDBType.CDB_OPERATIONAL.

    • No priorities are used.

    • Neither the writer that generated the subscription notifications nor other writers to the same data are blocked while notifications are being delivered. However, the subscription lock remains in effect until notification delivery is complete.
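A sketch of starting such a session (the enum constant names are from memory and should be verified against the Javadoc):

```java
import java.util.EnumSet;

import com.tailf.cdb.Cdb;
import com.tailf.cdb.CdbDBType;
import com.tailf.cdb.CdbLockType;
import com.tailf.cdb.CdbSession;

public class OperWriter {
    static void writeOperData(Cdb cdb) throws Exception {
        // Take the subscription lock so that subscribers are notified
        CdbSession session = cdb.startSession(
                CdbDBType.CDB_OPERATIONAL,
                EnumSet.of(CdbLockType.LOCK_REQUEST));
        try {
            // write operational data here, preferably with the
            // multi-element setObject()/setValues() methods
        } finally {
            session.endSession(); // releases the subscription lock
        }
    }
}
```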

Essentially, a write operation towards the operational data store, combined with the subscription lock, takes on the role of a transaction for configuration data as far as subscription notifications are concerned. This means that if operational data updates are done with many single-element write operations, this can potentially result in a lot of subscription notifications. Thus, it is a good idea to use the multi-element setObject(), which takes an array of ConfValue objects and sets a complete container, or setValues(), which takes an array of ConfXMLParam and can set an arbitrary part of the model. This keeps down the number of notifications to subscribers when updating operational data.

Write operations that do not attempt to obtain the subscription lock are allowed to proceed even during notification delivery. Therefore, it is the responsibility of the programmer to obtain the lock as needed when writing to the operational data store. For example, if subscribers should be able to reliably read the exact data that resulted from the write that triggered their subscription, the subscription lock must always be obtained when writing that particular set of data elements. One possibility is of course to obtain the lock for all writes to operational data, but this may have an unacceptable performance impact.

    To view registered subscribers, use the ncs --status command. For details on how to use the different subscription functions, see the Javadoc for NSO Java API.

The code in the example ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/1-cdb illustrates three different types of CDB subscribers.

    • A simple Cdb config subscriber that utilizes the low-level Cdb API directly to subscribe to changes in the subtree of the configuration.

    • Two Navu Cdb subscribers, one subscribing to configuration changes, and one subscribing to changes in operational data.

    DP API

    The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As the name of the API indicates, it is possible to write data provider callbacks that provide data to NSO that is stored externally. However, this is only one of several callback types provided by this API. There exist callback interfaces for the following types:

    • Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See for example ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/4-rfs-service

    • Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint directive.

    • Authentication Callbacks - invoked for external authentication functions.

The callbacks are methods in ordinary Java POJOs. These methods are adorned with a specific Java annotation for the callback type. The annotation makes it possible to add metadata about the supplied method to NSO, including which callType and, when necessary, which callpoint the method should be invoked for.

Only one Java object can be registered on a given callpoint. When a new Java object registers on a callpoint that has already been registered, the earlier registration (and Java object) is silently removed.

    Transaction and Data Callbacks

By default, NSO stores all configuration data in its CDB data store. We may wish to store and configure other data in NSO than what is defined by the NSO built-in YANG models, or we may wish to store parts of the NSO tree outside NSO (CDB), i.e. in an external database. Say, for example, that we have our customer database stored in a relational database separate from NSO. To implement this, we must do a number of things: we must define a callpoint somewhere in the configuration tree, and we must implement what is referred to as a data provider. Also, NSO executes all configuration changes inside transactions, and if we want NSO (CDB) and our external database to participate in the same two-phase commit transactions, we must also implement a transaction callback. Altogether, it will appear as if the external data is part of the overall NSO configuration, and thus the service model data can refer directly to this external data - typically to validate service instances.

    The basic idea for a data provider is that it participates entirely in each NSO transaction, and it is also responsible for reading and writing all data in the configuration tree below the callpoint. Before explaining how to write a data provider and what the responsibilities of a data provider are, we must explain how the NSO transaction manager drives all participants in a lock-step manner through the phases of a transaction.

A transaction has a number of phases, and the external data provider gets called in all of them. This is done by implementing a transaction callback class and then registering that class. We have the following distinct phases of an NSO transaction:

• init(): In this phase, the transaction callback class's init() method gets invoked. We use an annotation on the method to indicate that it is the init() method.

      Each different callback method we wish to register must be annotated with an annotation from TransCBType.

  The callback is invoked when a transaction starts, but NSO delays the actual invocation as an optimization. For a data provider providing configuration data, init() is invoked just before the first data-reading callback, or just before the transLock() callback (see below), whichever comes first. When a transaction has started, it is in a state we refer to as READ.
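A sketch of such an annotated init() method:

```java
import com.tailf.dp.DpCallbackException;
import com.tailf.dp.DpTrans;
import com.tailf.dp.annotations.TransCallback;
import com.tailf.dp.proto.TransCBType;

public class ExternalDbTrans {
    @TransCallback(callType = TransCBType.INIT)
    public void init(DpTrans trans) throws DpCallbackException {
        // e.g. open a connection or start a native transaction
        // towards the external database
    }
}
```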

    The following picture illustrates the conceptual state machine an NSO transaction goes through.

    All callback methods are optional. If a callback method is not implemented, it is the same as having an empty callback which simply returns.

    Similar to how we have to register transaction callbacks, we must also register data callbacks. The transaction callbacks cover the life span of the transaction, and the data callbacks are used to read and write data inside a transaction. The data callbacks have access to what is referred to as the transaction context in the form of a DpTrans object.

    We have the following data callbacks:

• getElem(): This callback is invoked when NSO needs to read the actual value of a leaf element. We must also implement the getElem() callback for the keys; NSO invokes getElem() on a key as an existence test. The callback is defined as an annotated method inside a class.

• existsOptional(): This callback is called for all typeless and optional elements, i.e. presence containers and leafs of type empty.
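A sketch of these read callbacks (the callpoint name workPoint is a hypothetical stand-in for the name defined in the YANG model):

```java
import com.tailf.conf.ConfObject;
import com.tailf.conf.ConfValue;
import com.tailf.dp.DpCallbackException;
import com.tailf.dp.DpTrans;
import com.tailf.dp.annotations.DataCallback;
import com.tailf.dp.proto.DataCBType;

public class ExternalDbData {
    @DataCallback(callPoint = "workPoint", callType = DataCBType.GET_ELEM)
    public ConfValue getElem(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
        // look the value up in the external database;
        // return null if the element does not exist
        return null;
    }

    @DataCallback(callPoint = "workPoint",
                  callType = DataCBType.EXISTS_OPTIONAL)
    public boolean existsOptional(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
        // existence test for presence containers and leafs of type empty
        return false;
    }
}
```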

    We also have two additional optional callbacks that may be implemented for efficiency reasons.

• getObject(): If this optional callback is implemented, its job is to return an entire object, i.e., a list instance. This is not the same getObject() as the one used in combination with iterator().

    • numInstances(): When NSO needs to figure out how many instances we have of a certain element, by default NSO will repeatedly invoke the iterator() callback. If this callback is installed, it will be called instead.

The following example illustrates an external data provider. The example can be run from the examples collection; it resides under ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/6-extern-db.

The example comes with a tailor-made database, MyDb. That source code is provided with the example but not shown here; the functionality should be obvious from method names like newItem(), lock(), save(), etc.

    Two classes are implemented, one for the transaction callbacks and another for the data callbacks.

The data model we wish to incorporate into NSO is a trivial list of work items.
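A sketch of what such a model could look like (hypothetical module and node names, not the exact work.yang shipped with the example):

```yang
module work {
  namespace "http://example.com/work";
  prefix w;

  import tailf-common {
    prefix tailf;
  }

  list work-item {
    key number;
    tailf:callpoint workPoint;
    leaf number { type uint32; }
    leaf title  { type string; }
  }
}
```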

Note the callpoint directive in the model; it indicates that an external Java callback must register itself using that name and will be responsible for all data below the callpoint.

To compile the work.yang data model and generate Java code for it, we invoke make all in the example package's src directory. The Makefile compiles the YANG files in the package, generates Java code for those data models, and then invokes ant in the Java src directory.

    The Data callback class looks as follows:

    First, we see how the Java annotations are used to declare the type of callback for each method. Secondly, we see how the getElem() callback inspects the keyPath parameter passed to it to figure out exactly which element NSO wants to read. The keyPath is an array of ConfObject values. Keypaths are central to the understanding of the NSO Java library since they are used to denote objects in the configuration. A keypath uniquely identifies an element in the instantiated configuration tree.

Furthermore, getElem() switches on the tag keyPath[0], which is a ConfTag, using symbolic constants from the class "work". The "work" class was generated through the call to ncsc --emit-java ...

The three write callbacks, setElem(), create(), and remove(), all return the value Conf.REPLY_ACCUMULATE. If our backend database has real support for aborting transactions, it is a good idea to initiate a new backend database transaction in the transaction callback init() (more on that later), whereas if our backend database doesn't support proper transactions, we can fake them by returning Conf.REPLY_ACCUMULATE instead of actually writing the data. Since the final verdict of the NSO transaction as a whole may very well be to abort it, we must be prepared to undo all write operations. The Conf.REPLY_ACCUMULATE return value means that we ask the library to cache the write for us.

    The transaction callback class looks like this:
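A sketch of such a prepare() callback, replaying the writes cached by Conf.REPLY_ACCUMULATE (the MyDb calls in the comments are placeholders):

```java
import java.util.Iterator;

import com.tailf.dp.DpAccumulate;
import com.tailf.dp.DpCallbackException;
import com.tailf.dp.DpTrans;
import com.tailf.dp.annotations.TransCallback;
import com.tailf.dp.proto.TransCBType;

public class ExternalDbTrans {
    @TransCallback(callType = TransCBType.PREPARE)
    public void prepare(DpTrans trans) throws DpCallbackException {
        // replay the cached write operations towards the database
        for (Iterator<DpAccumulate> it = trans.accumulated();
             it.hasNext(); ) {
            DpAccumulate ack = it.next();
            switch (ack.getOperation()) {
            case DpAccumulate.SET_ELEM:
                // MyDb.setElem(ack.getKP(), ack.getValue());
                break;
            case DpAccumulate.CREATE:
                // MyDb.newItem(ack.getKP());
                break;
            case DpAccumulate.REMOVE:
                // MyDb.remove(ack.getKP());
                break;
            }
        }
    }
}
```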

    We can see how the prepare() callback goes through all write operations and actually executes them towards our database MyDb.

    Service and Action Callbacks

    Both service and action callbacks are fundamental in NSO.

Implementing a service callback is one way of creating a service type. This and other ways of creating service types are described in depth in the Services section.

    Action callbacks are used to implement arbitrary operations in Java. These operations can be basically anything, e.g. downloading a file, performing some test, resetting alarms, etc, but they should not modify the modeled configuration.

    The actions are defined in the YANG model by means of rpc or tailf:action statements. Input and output parameters can optionally be defined via input and output statements in the YANG model. To specify that the rpc or action is implemented by a callback, the model uses a tailf:actionpoint statement.

    The action callbacks are:

• init(): Similar to the transaction init() callback. Note, however, that unlike the case with transaction and data callbacks, both init() and action() are registered for each actionpoint (i.e. different action points can have different init() callbacks), and there is no finish() callback; the action is completed when the action() callback returns.

In the examples.ncs/service-provider/mpls-vpn example, we can define a self-test action. In packages/l3vpn/src/yang/l3vpn.yang, we locate the service callback definition.

Beneath the service callback definition, we add an action callback definition, i.e. a tailf:action statement with a tailf:actionpoint substatement, so that the resulting YANG defines both the service and the self-test action.

The packages/l3vpn/src/java/src/com/example/l3vpnRFS.java file already contains an action implementation, but it has been suppressed since, until now, no actionpoint with the corresponding name was defined in the YANG model.

    Validation Callbacks

    In the VALIDATE state of a transaction, NSO will validate the new configuration. This consists of verification that specific YANG constraints such as min-elements, unique, etc, as well as arbitrary constraints specified by must expressions, are satisfied. The use of must expressions is the recommended way to specify constraints on relations between different parts of the configuration, both due to its declarative and concise form and due to performance considerations, since the expressions are evaluated internally by the NSO transaction engine.

    In some cases, it may still be motivated to implement validation logic via callbacks in code. The YANG model will then specify a validation point by means of a tailf:validate statement. By default, the callback registered for a validation point will be invoked whenever a configuration is validated, since the callback logic will typically be dependent on data in other parts of the configuration, and these dependencies are not known by NSO. Thus it is important from a performance point of view to specify the actual dependencies by means of tailf:dependency substatements to the validate statement.

    Validation callbacks use the MAAPI API to attach to the current transaction. This makes it possible to read the configuration data that is to be validated, even though the transaction is not committed yet. The view of the data is effectively the pre-existing configuration "shadowed" by the changes in the transaction, and thus exactly what the new configuration will look like if it is committed.

    Similar to the case of transaction and data callbacks, there are transaction validation callbacks that are invoked when the validation phase starts and stops, and validation callbacks that are invoked for the specific validation points in the YANG model.

    The transaction validation callbacks are:

• init(): This callback is invoked when the validation phase starts. It will typically attach to the current transaction.

    • stop(): This callback is invoked when the validation phase ends. If init() attached to the transaction, stop() should detach from it.
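A sketch of these two callbacks (the exact attach() arguments should be checked against the MAAPI Javadoc):

```java
import com.tailf.dp.DpTrans;
import com.tailf.dp.annotations.TransValidateCallback;
import com.tailf.dp.proto.TransValidateCBType;
import com.tailf.maapi.Maapi;

public class MyValidator {
    private Maapi maapi; // already connected to NSO

    @TransValidateCallback(callType = TransValidateCBType.INIT)
    public void init(DpTrans trans) throws Exception {
        // attach to the transaction under validation, so the
        // not-yet-committed data can be read through MAAPI
        maapi.attach(trans.getTransaction(), 0,
                     trans.getUserInfo().getUserId());
    }

    @TransValidateCallback(callType = TransValidateCBType.STOP)
    public void stop(DpTrans trans) throws Exception {
        maapi.detach(trans.getTransaction());
    }
}
```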

    The actual validation logic is implemented in a validation callback:

    • validate(): This callback is invoked for a specific validation point.

    Transforms

    Transforms implement a mapping between one part of the data model - the front-end of the transform - and another part - the back-end of the transform. Typically the front-end is visible to northbound interfaces, while the back-end is not, but for operational data (config false in the data model), a transform may implement a different view (e.g. aggregation) of data that is also visible without going through the transform.

    The implementation of a transform uses techniques already described in this section: Transaction and data callbacks are registered and invoked when the front-end data is accessed, and the transform uses the MAAPI API to attach to the current transaction and accesses the back-end data within the transaction.

    To specify that the front-end data is provided by a transform, the data model uses the tailf:callpoint statement with a tailf:transform true substatement. Since transforms do not participate in the two-phase commit protocol, they only need to register the init() and finish() transaction callbacks. The init() callback attaches to the transaction and finish() detaches from it. Also, a transform for operational data only needs to register the data callbacks that read data, i.e. getElem(), existsOptional(), etc.

    Hooks

Hooks make it possible to have changes to the configuration trigger additional changes. In general, this should only be done when the data written by the hook is not visible to northbound interfaces, since otherwise the additional changes will make it difficult for e.g. EMS or NMS systems to manage the configuration - the complete configuration resulting from a given change cannot be predicted. However, one use case in NSO for hooks that trigger visible changes is precisely to model managed devices that have this behavior: hooks in the device model can emulate what the device does on certain configuration changes, and thus the device configuration in NSO remains in sync with the actual device configuration.

The implementation technique for a hook is very similar to that for a transform. Transaction and data callbacks are registered, and the MAAPI API is used to attach to the current transaction and write the additional changes into the transaction. As for transforms, only the init() and finish() transaction callbacks need to be registered, to do the MAAPI attach and detach. However, only data callbacks that write data, i.e. setElem(), create(), etc., need to be registered, and depending on which changes should trigger the hook invocation, it is possible to register only a subset of those. For example, if the hook is registered for a leaf in the data model, and only changes to the value of that leaf should trigger invocation of the hook, it is sufficient to register setElem().

    To specify that changes to some part of the configuration should trigger a hook invocation, the data model uses the tailf:callpoint statement with a tailf:set-hook or tailf:transaction-hook substatement. A set-hook is invoked immediately when a northbound agent requests a write operation on the data, while a transaction-hook is invoked when the transaction is committed. For the NSO-specific use case mentioned above, a set-hook should be used. The tailf:set-hook and tailf:transaction-hook statements take an argument specifying the extent of the data model the hook applies to.

    NED API

NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic as with NETCONF or SNMP, and depending on the type of configuration interface the device has, this may involve some programming. Devices with a Cisco-style CLI can, however, be managed by writing YANG models describing the data in the CLI, plus a relatively thin layer of Java code to handle the communication with the devices. Refer to the NED development documentation for more information.

    NAVU API

    The NAVU API provides a DOM-driven approach to navigate the NSO service and device models. The main features of the NAVU API are dynamic schema loading at start-up and lazy loading of instance data. The navigation model is based on the YANG language structure. In addition to navigation and reading of values, NAVU also provides methods to modify the data model. Furthermore, it supports the execution of actions modeled in the service model.

    By using NAVU, it is easy to drill down through tree structures with minimal effort using the node-by-node navigation primitives. Alternatively, we can use the NAVU search feature. This feature is especially useful when we need to find information deep down in the model structures.

NAVU requires all models, i.e. the complete NSO service model with all its augmented submodels. These are loaded at runtime from NSO, which in turn has acquired them from loaded .fxs files. The .fxs files are produced by the ncsc tool, which compiles them from the .yang files.

    The ncsc tool can also generate Java classes from the .yang files. These files, extending the ConfNamespace base class, are the Java representation of the models and contain all defined nametags and their corresponding hash values. These Java classes can, optionally, be used as help classes in the service applications to make NAVU navigation type-safe, e.g. eliminating errors from misspelled model container names.

    The service models are loaded at start-up and are always the latest version. The models are always traversed in a lazy fashion i.e. data is only loaded when it is needed. This is to minimize the amount of data transferred between NSO and the service applications.

    The most important classes of NAVU are the classes implementing the YANG node types. These are used to navigate the DOM. These classes are as follows.

    • NavuContainer: the NavuContainer is a container representing either the root of the model, a YANG module root, or a YANG container.

    • NavuList: the NavuList represents a YANG list node.

    • NavuListEntry: list node entry.

    The remaining part of this section will guide us through the most useful features of the NAVU. Should further information be required, please refer to the corresponding Javadoc pages.

NAVU relies on MAAPI as the underlying interface to access NSO. The starting point when using NAVU is to create a NavuContext instance using the NavuContext(Maapi maapi) constructor. To read and/or write data, a transaction has to be started in MAAPI; there are methods in the NavuContext class to start and handle this transaction.

If data is to be written, the NAVU transaction has to be started differently depending on whether the data is configuration or operational data. Such a transaction is started by NavuContext.startRunningTrans() or NavuContext.startOperationalTrans() respectively. The Javadoc describes this in more detail.

When navigating using NAVU, we always start by creating a NavuContainer and passing in the NavuContext instance; this is a base container from which navigation can be started. Furthermore, we need to create a root NavuContainer, which is the top of the YANG module in which to navigate. This is done using the NavuContainer.container(int hash) method, where the argument is the hash value of the module namespace.

NAVU maps the YANG node types container, list, leaf, and leaf-list onto its own structure. As mentioned previously, NavuContainer is used to represent both the module and the container node type. NavuListEntry is used to represent a list node instance, i.e. an element of a list node (NavuListEntry extends NavuContainer).

Consider a YANG model such as the servers/server list used in the earlier examples.

    If the purpose is to directly access a list node, we would typically do a direct navigation to the list element using the NAVU primitives.
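A sketch of such direct navigation, reusing the servers/server model from the keypath examples (the module hash and node names are assumptions):

```java
import com.tailf.conf.Conf;
import com.tailf.maapi.Maapi;
import com.tailf.navu.NavuContainer;
import com.tailf.navu.NavuContext;
import com.tailf.navu.NavuNode;

public class NavuDirect {
    static NavuNode findWww(Maapi maapi, int moduleNsHash) throws Exception {
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);         // read transaction
        NavuContainer base = new NavuContainer(context);   // base container
        NavuContainer root = base.container(moduleNsHash); // module root
        // direct navigation to the list entry whose key is "www"
        return root.container("servers").list("server").elem("www");
    }
}
```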

    Or if we want to iterate over all elements of a list we could do as follows.

The above example uses select(), which performs a recursive regexp match against its children.

Alternatively, if the purpose is to drill down deep into a structure, we should use select(). select() offers a wildcard-based search; the search is relative and can be performed from any node in the structure.

All of the above are valid ways of traversing lists, depending on the purpose. If we know what we want, we use direct access; if we want to apply something to a large number of nodes, we use select().

    An alternative method is to use the xPathSelect() where an XPath query could be issued instead.

NavuContainer and NavuList are structural nodes within NAVU, i.e. they have no values. Values are always kept by NavuLeaf. A NavuLeaf represents the YANG leaf node type and can be both read and set. NavuLeafList represents the YANG leaf-list node type and has features in common with both NavuLeaf (from which it inherits) and NavuList.

    To read and update a leaf, we simply navigate to the leaf and request the value. And in the same manner, we can update the value.
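A sketch, assuming a NavuContainer positioned at a server list entry with an ip leaf:

```java
import com.tailf.conf.ConfValue;
import com.tailf.navu.NavuContainer;
import com.tailf.navu.NavuLeaf;

public class LeafAccess {
    static void readAndUpdate(NavuContainer serverEntry) throws Exception {
        NavuLeaf ip = serverEntry.leaf("ip");
        ConfValue current = ip.value(); // read the current value
        ip.set("10.0.0.1");             // update; written when committed
    }
}
```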

In addition to the standard YANG node types, NAVU also supports the Tail-f proprietary node type action. An action is represented as a NavuAction. It differs from an ordinary container in that it can be executed using the call() primitive. Input and output parameters are represented as ordinary nodes. The action extension of YANG allows an arbitrary structure to be defined for both input and output parameters.

    Consider the excerpt below. It represents a module on a managed device. When connected and synchronized to the NSO, the module will appear in the /devices/device/config container.

    To execute the action below we need to access a device with this module loaded. This is done in a similar way to non-action nodes.

    Or, we could do it with xPathSelect().

    The examples above have described how to attach to the NSO module and navigate through the data model using the NAVU primitives. When using NAVU in the scope of the NSO Service manager, we normally don't have to worry about attaching the NavuContainer to the NSO data model. NSO does this for us providing NavuContainer nodes pointing at the nodes of interest.

    ALARM API

Since this API can both produce and consume alarms, it can be used both northbound and eastbound. It adheres to the NSO alarm model.

For more information, see the documentation on the NSO alarm model.

    The com.tailf.ncs.alarmman.consumer.AlarmSource class is used to subscribe to alarms. This class establishes a listener towards an alarm subscription server called com.tailf.ncs.alarmman.consumer.AlarmSourceCentral. The AlarmSourceCentral needs to be instantiated and started prior to the instantiation of the AlarmSource listener. The NSO Java VM takes care of starting the AlarmSourceCentral so any use of the ALARM API inside the NSO Java VM can expect this server to be running.

    For situations where alarm subscription outside of the NSO Java VM is desired, starting the AlarmSourceCentral is performed by opening a Cdb socket, passing this Cdb to the AlarmSourceCentral class, and then calling the start() method.

To retrieve alarms from the AlarmSource listener, an initial startListening() call is required. Then either the blocking takeAlarm() or the timeout-based pollAlarm() can be used to retrieve alarms. The first method waits indefinitely for new alarms to arrive, while the second times out if an alarm has not arrived in the stipulated time. When a listener is no longer needed, a stopListening() call should be issued to deactivate it.
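Inside the NSO Java VM, where the AlarmSourceCentral is already running, a consumer loop could be sketched as:

```java
import com.tailf.ncs.alarmman.common.Alarm;
import com.tailf.ncs.alarmman.consumer.AlarmSource;

public class AlarmReader {
    public static void readAlarms() throws Exception {
        AlarmSource source = new AlarmSource();
        source.startListening();
        try {
            while (true) {
                Alarm alarm = source.takeAlarm(); // blocks for next alarm
                System.out.println(alarm);
            }
        } finally {
            source.stopListening();
        }
    }
}
```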

Both the takeAlarm() and pollAlarm() methods return an Alarm object from which all alarm information can be retrieved.

    The com.tailf.ncs.alarmman.producer.AlarmSink is used to persistently store alarms in NSO. This can be performed either directly or by the use of an alarm storage server called com.tailf.ncs.alarmman.producer.AlarmSinkCentral.

    To directly store alarms an AlarmSink instance is created using the AlarmSink(Maapi maapi) constructor.

On the other hand, if the alarms are to be stored using the AlarmSinkCentral, the AlarmSink() constructor without arguments is used.

However, this case requires that the AlarmSinkCentral is started prior to the instantiation of the AlarmSink. The NSO Java VM takes care of starting this server, so any use of the ALARM API inside the Java VM can expect it to be running. If alarms are to be stored by an application outside of the NSO Java VM, the AlarmSinkCentral needs to be started by that application.

To store an alarm using the AlarmSink, an Alarm instance must be created. This Alarm instance is then stored by a call to the submitAlarm() method.

    NOTIF API

Applications can subscribe to certain events generated by NSO. The event types are defined by the com.tailf.notif.NotificationType enumeration. Among others, the following notification types can be subscribed to:

    • NotificationType.NOTIF_AUDIT: all audit log events are sent from NSO on the event notification socket.

    • NotificationType.NOTIF_COMMIT_SIMPLE: an event indicating that a user has somehow modified the configuration.

• NotificationType.NOTIF_COMMIT_DIFF: an event indicating that a user has somehow modified the configuration. The main difference between this event and the above-mentioned NOTIF_COMMIT_SIMPLE is that this event is synchronous, i.e. the entire transaction hangs until we have explicitly called Notif.diffNotificationDone().

To receive events from NSO, the application opens a socket and passes it to the notification base class com.tailf.notif.Notif, together with an EnumSet of NotificationType for all types of notifications that should be received. Looping over the Notif.read() method reads and delivers notifications, which are all subclasses of the com.tailf.notif.Notification base class.
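The subscribe-and-read loop above can be sketched as follows. This is a minimal sketch; the Conf.NCS_PORT constant and loopback address are assumptions, and error handling is omitted.

```java
import java.net.Socket;
import java.util.EnumSet;
import com.tailf.conf.Conf;
import com.tailf.notif.Notif;
import com.tailf.notif.Notification;
import com.tailf.notif.NotificationType;

public class EventReader {
    public void readEvents() throws Exception {
        Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);

        // Subscribe to audit and (asynchronous) commit events.
        EnumSet<NotificationType> types =
            EnumSet.of(NotificationType.NOTIF_AUDIT,
                       NotificationType.NOTIF_COMMIT_SIMPLE);
        Notif notif = new Notif(socket, types);

        while (true) {
            // read() blocks until the next notification arrives;
            // the concrete class depends on the notification type.
            Notification n = notif.read();
            System.out.println("Got notification: " + n);
        }
    }
}
```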

    HA API

The HA API is used to set up and control High Availability (HA) cluster nodes. The package connects to the HA subsystem, after which configuration data can be replicated on several nodes in a cluster.

As an example, three nodes can be configured in an HA cluster: one set as primary and the other two as secondaries.
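A rough sketch of such a setup follows. This is heavily hedged: the class and method names (Ha, ConfHaNode, bePrimary/beSecondary — beMaster/beSlave in older releases), the constructor signatures, and the node names and addresses are all assumptions to be checked against the com.tailf.ha Javadoc for your NSO version.

```java
import java.net.InetAddress;
import java.net.Socket;
import com.tailf.conf.Conf;
import com.tailf.conf.ConfBuf;
import com.tailf.conf.ConfIPv4Address;
import com.tailf.ha.ConfHaNode;
import com.tailf.ha.Ha;

public class ClusterSetup {
    public void configure() throws Exception {
        // Connect to the HA subsystem on each node
        // (addresses and the shared cluster token are made up).
        Socket s0 = new Socket("10.0.0.1", Conf.NCS_PORT);
        Ha ha0 = new Ha(s0, "cluster-token");

        // node0 becomes the primary.
        ConfHaNode primary =
            new ConfHaNode(new ConfBuf("node0"),
                new ConfIPv4Address(InetAddress.getByName("10.0.0.1")));
        ha0.bePrimary(new ConfBuf("node0"));

        // node1 and node2 become secondaries, replicating
        // configuration data from the primary.
        Socket s1 = new Socket("10.0.0.2", Conf.NCS_PORT);
        Ha ha1 = new Ha(s1, "cluster-token");
        ha1.beSecondary(new ConfBuf("node1"), primary, true);

        Socket s2 = new Socket("10.0.0.3", Conf.NCS_PORT);
        Ha ha2 = new Ha(s2, "cluster-token");
        ha2.beSecondary(new ConfBuf("node2"), primary, true);
    }
}
```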

    Java API Conf Package

This section describes the types in the com.tailf.conf package and how they map to various YANG types and Java classes.

    All types inherit the base class com.tailf.conf.ConfObject.

Following the type hierarchy, subclasses of ConfObject are distinguished as:

• Value: Concrete value classes that inherit ConfValue, which in turn is a subclass of ConfObject.

• TypeDescriptor: A class representing the type of a ConfValue. A type descriptor is represented as an instance of ConfTypeDescriptor. It is primarily used to map a ConfValue to its internal integer value representation, or vice versa.

The class ConfObject defines public int constants for the different value types. Each value type is mapped to a specific YANG type and is also represented by a specific subtype of ConfValue. Given a ConfValue instance, it is possible to retrieve its integer representation using the static method getConfTypeDescriptor() in class ConfTypeDescriptor. This method returns a ConfTypeDescriptor instance representing the value, from which the integer representation can be retrieved.

The table of ConfValue types lists, for each constant, the corresponding YANG type, the ConfValue subclass, and a description.

An important class in the com.tailf.conf package that does not inherit ConfObject is ConfPath. ConfPath is used to represent a keypath that can point to any element in an instantiated model. As such, it is constructed from a ConfObject[] array where each element is expected to be either a ConfTag or a ConfKey.

    As an example take the keypath /ncs:devices/device{d1}/iosxr:interface/Loopback{lo0}. The following code snippets show the instantiating of a ConfPath object representing this keypath:
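A sketch of the explicit construction from ConfTag and ConfKey objects; the ConfPath(ConfObject[]) constructor and the ConfKey/ConfTag signatures are assumed per the com.tailf.conf Javadoc.

```java
import com.tailf.conf.ConfBuf;
import com.tailf.conf.ConfKey;
import com.tailf.conf.ConfObject;
import com.tailf.conf.ConfPath;
import com.tailf.conf.ConfTag;

public class PathExample {
    // /ncs:devices/device{d1}/iosxr:interface/Loopback{lo0}
    static ConfPath buildPath() throws Exception {
        return new ConfPath(new ConfObject[] {
            new ConfTag("ncs", "devices"),
            new ConfTag("ncs", "device"),
            new ConfKey(new ConfObject[] { new ConfBuf("d1") }),
            new ConfTag("iosxr", "interface"),
            new ConfTag("iosxr", "Loopback"),
            new ConfKey(new ConfObject[] { new ConfBuf("lo0") })
        });
    }
}
```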

Another, more commonly used, option is the format string + arguments constructor of ConfPath, where ConfPath parses and creates the ConfTag/ConfKey representation from the string representation instead.
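The same keypath built with the format string constructor might look like this (a sketch; the %s key substitution follows the ConfPath Javadoc):

```java
import com.tailf.conf.ConfPath;

public class PathFormatExample {
    // ConfPath parses the string and builds the same
    // ConfTag/ConfKey representation internally.
    static ConfPath buildPath() throws Exception {
        return new ConfPath(
            "/ncs:devices/device{%s}/iosxr:interface/Loopback{%s}",
            "d1", "lo0");
    }
}
```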

ConfXMLParam is used in tagged value arrays, ConfXMLParam[], of ConfXMLParam subtypes. Together, these can represent an arbitrary YANG model subtree. Rather than viewing a node as a path, it behaves as an XML instance document representation. There are four subtypes of ConfXMLParam:

    • ConfXMLParamStart: Represents an opening tag. Opening node of a container or list entry.

    • ConfXMLParamStop: Represents a closing tag. The closing tag of a container or a list entry.

    • ConfXMLParamValue: Represent a value and a tag. Leaf tag with the corresponding value.

• ConfXMLParamLeaf: Represents a leaf tag without a value.

Each element in the array is associated with a node in the data model.

The array corresponding to the list entry /servers/server{www} is a representation of the instance XML document.

    The list entry above could be populated as:
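Since the instance XML document is not shown above, the sketch below uses hypothetical leafs name and port for the server list entry; the constructors taking a prefix string and tag string are assumed per the com.tailf.conf Javadoc.

```java
import com.tailf.conf.ConfBuf;
import com.tailf.conf.ConfUInt16;
import com.tailf.conf.ConfXMLParam;
import com.tailf.conf.ConfXMLParamStart;
import com.tailf.conf.ConfXMLParamStop;
import com.tailf.conf.ConfXMLParamValue;

public class XmlParamExample {
    // /servers/server{www}, expressed as an XML instance
    // document: open tags, leaf values, then closing tags.
    static ConfXMLParam[] buildEntry() throws Exception {
        return new ConfXMLParam[] {
            new ConfXMLParamStart("servers", "servers"),
            new ConfXMLParamStart("servers", "server"),
            new ConfXMLParamValue("servers", "name", new ConfBuf("www")),
            new ConfXMLParamValue("servers", "port", new ConfUInt16(8080)),
            new ConfXMLParamStop("servers", "server"),
            new ConfXMLParamStop("servers", "servers")
        };
    }
}
```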

    Namespace Classes and the Loaded Schema

    A namespace class represents the namespace for a YANG module. As such it maps the symbol name of each element in the YANG module to its corresponding hash value.

A namespace class is a subclass of ConfNamespace and comes in one of two shapes: either created at compile time using the ncsc compiler, or created at runtime with the use of Maapi.loadSchemas. These two variants also indicate the two main usages of namespace classes. The first is in programming, where the symbol names are used, e.g. in NAVU navigation; this is where the compiled namespaces are used. The other is for internal mapping between symbol names and hash values; this is where the runtime variant is normally used, although compiled namespace classes can be used for these mappings too.

The compiled namespace classes are generated from compiled .fxs files by the ncsc compiler (ncsc --emit-java).

Runtime namespace classes are created by calling Maapi.loadSchemas(). The rest is dynamic: all namespaces known by NSO are downloaded, and runtime namespace classes are created. These can be retrieved by calling Maapi.getAutoNsList().

The schema information is loaded automatically on the first connect to the NSO server, so no manual call to Maapi.loadSchemas() is needed.

    With all schemas loaded, the Java engine can make mappings between hash codes and symbol names on the fly. Also, the ConfPath class can find and add namespace information when parsing keypaths provided that the namespace prefixes are added in the start element for each namespace.

As an option, several APIs, e.g. MAAPI, can set a default namespace, which will be the expected namespace for paths without prefixes. For example, if the namespace class smp is generated with the legal path /smp:servers/server, an option in Maapi could be the following:
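A sketch of setting the default namespace in Maapi; it assumes smp is a generated namespace class, that a user session has already been started on the Maapi socket, and that the setNamespace/startTrans signatures match the com.tailf.maapi Javadoc.

```java
import java.net.Socket;
import com.tailf.conf.Conf;
import com.tailf.conf.ConfValue;
import com.tailf.maapi.Maapi;

public class DefaultNsExample {
    public void readName() throws Exception {
        Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
        Maapi maapi = new Maapi(socket);
        // ...startUserSession() is assumed to have been called...
        int th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ);

        // Make "smp" the default namespace for prefix-less paths.
        maapi.setNamespace(th, new smp().uri());

        // The path can now be given without the smp: prefix.
        ConfValue name = maapi.getElem(th, "/servers/server{%s}/name", "www");
        System.out.println("name = " + name);
    }
}
```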

    Example: NSO Configuration for RESTCONF
    <restconf>
      <enabled>true</enabled>
    </restconf>
    
    <webui>
      <transport>
        <tcp>
          <enabled>true</enabled>
          <ip>0.0.0.0</ip>
          <port>8080</port>
        </tcp>
      </transport>
    </webui>
    Example: NSO Separate Transport Configuration for RESTCONF
    <restconf>
      <enabled>true</enabled>
      <transport>
        <tcp>
          <enabled>true</enabled>
          <ip>0.0.0.0</ip>
          <port>8090</port>
        </tcp>
      </transport>
    </restconf>
    
    <webui>
      <enabled>false</enabled>
      <transport>
        <tcp>
          <enabled>true</enabled>
          <ip>0.0.0.0</ip>
          <port>8080</port>
        </tcp>
      </transport>
    </webui>
Example: A RESTCONF Request Using curl
    # Note that the command is wrapped in several lines in order to fit.
    #
    # The switch '-i' will include any HTTP reply headers in the output
# and the '-s' will suppress some superfluous output.
    #
# The '-u' switch specifies the User:Password for login authentication.
    #
# The '-H' switch will add an HTTP header to the request; in this case
    # an 'Accept' header is added, requesting the preferred reply format.
    #
    # Finally, the complete URL to the wanted resource is specified,
    # in this case the top of the configuration tree.
    #
    curl -is -u admin:admin \
    -H "Accept: application/yang-data+xml" \
    http://localhost:8080/restconf/data
    Example: A RESTCONF Request, Simplified
    GET /restconf/data
    Accept: application/yang-data+xml
    
    # Any reply with relevant headers will be displayed here!
    HTTP/1.1 200 OK
    Example: A Top-level RESTCONF Request
    GET /restconf
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <restconf xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
      <data/>
      <operations/>
      <yang-library-version>2019-01-04</yang-library-version>
    </restconf>
Example: Get the Top-most Resources Under /restconf/data
    GET /restconf/data?depth=1
    Accept: application/yang-data+xml
    
    <data xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
      <yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"/>
      <modules-state xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"/>
      <dhcp xmlns="http://yang-central.org/ns/example/dhcp"/>
      <nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm"/>
      <netconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"/>
      <restconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"/>
      <aaa xmlns="http://tail-f.com/ns/aaa/1.1"/>
  <confd-state xmlns="http://tail-f.com/yang/confd-monitoring"/>
      <last-logins xmlns="http://tail-f.com/yang/last-login"/>
    </data>
Example: The dhcp YANG Model
    > yanger -f tree examples.confd/restconf/basic/dhcp.yang
    module: dhcp
      +--rw dhcp
      +--rw max-lease-time?       uint32
      +--rw default-lease-time?   uint32
      +--rw subnet* [net]
      |  +--rw net               inet:ip-prefix
      |  +--rw range!
      |  |  +--rw dynamic-bootp?   empty
      |  |  +--rw low              inet:ip-address
      |  |  +--rw high             inet:ip-address
      |  +--rw dhcp-options
      |  |  +--rw router*        inet:host
      |  |  +--rw domain-name?   inet:domain-name
      |  +--rw max-lease-time?   uint32
Example: Get the dhcp/subnet Resource
    GET /restconf/data/dhcp:dhcp/subnet
    
    HTTP/1.1 204 No Content
Example: Create a New dhcp/subnet Resource
    POST /restconf/data/dhcp:dhcp
    Content-Type: application/yang-data+xml
    
    <subnet xmlns="http://yang-central.org/ns/example/dhcp"
              xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <net>10.254.239.0/27</net>
      <range>
        <dynamic-bootp/>
        <low>10.254.239.10</low>
        <high>10.254.239.20</high>
      </range>
      <dhcp-options>
        <router>rtr-239-0-1.example.org</router>
        <router>rtr-239-0-2.example.org</router>
      </dhcp-options>
      <max-lease-time>1200</max-lease-time>
    </subnet>
    
    # If the resource is created, the server might respond as follows:
    
    HTTP/1.1 201 Created
    Location: http://localhost:8080/restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
Example: Modify a Part of the dhcp/subnet Resource
    PATCH /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
    
    <subnet>
      <max-lease-time>3333</max-lease-time>
    </subnet>
    
    # If our modification is successful, the server might respond as follows:
    
    HTTP/1.1 204 No Content
Example: Replace a dhcp/subnet Resource
    PUT /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
    Content-Type: application/yang-data+xml
    
    <subnet xmlns="http://yang-central.org/ns/example/dhcp"
              xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <net>10.254.239.0/27</net>
    
      <!-- ...config left out here... -->
    
    </subnet>
    
    # At success, the server will respond as follows:
    
    HTTP/1.1 204 No Content
Example: Delete a dhcp/subnet Resource
    DELETE /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
    
    HTTP/1.1 204 No Content
Example: NSO Configuration for RESTCONF with a Custom Root Resource
    <restconf>
      <enabled>true</enabled>
      <root-resource>my_own_restconf_root</root-resource>
    </restconf>
Example: Returning /restconf
       The client might send the following:
    
          GET /.well-known/host-meta
          Accept: application/xrd+xml
    
       The server might respond as follows:
    
          HTTP/1.1 200 OK
    
          <XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
              <Link rel='restconf' href='/restconf'/>
          </XRD>
    Example: NSO RESTCONF Capabilities
    GET /restconf/data/ietf-restconf-monitoring:restconf-state
    Host: example.com
    Accept: application/yang-data+xml
    
    <restconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"
      xmlns:rcmon="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
    <capabilities>
      <capability>
        urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit
      </capability>
      <capability>urn:ietf:params:restconf:capability:depth:1.0</capability>
      <capability>urn:ietf:params:restconf:capability:fields:1.0</capability>
      <capability>urn:ietf:params:restconf:capability:with-defaults:1.0</capability>
      <capability>urn:ietf:params:restconf:capability:filter:1.0</capability>
      <capability>urn:ietf:params:restconf:capability:replay:1.0</capability>
      <capability>http://tail-f.com/ns/restconf/collection/1.0</capability>
      <capability>http://tail-f.com/ns/restconf/query-api/1.0</capability>
      <capability>http://tail-f.com/ns/restconf/partial-response/1.0</capability>
      <capability>http://tail-f.com/ns/restconf/unhide/1.0</capability>
    </capabilities>
    </restconf-state>
Example: The defaults Capability
              urn:ietf:params:restconf:capability:defaults:1.0
    urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit
Example: How to Use the fields Query Parameter
    GET /restconf/data/dhcp:dhcp?fields=subnet/range(low;high)
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <dhcp xmlns="http://yang-central.org/ns/example/dhcp" \
          xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <subnet>
        <range>
          <low>10.254.239.10</low>
          <high>10.254.239.20</high>
        </range>
      </subnet>
      <subnet>
        <range>
          <low>10.254.244.10</low>
          <high>10.254.244.20</high>
        </range>
      </subnet>
    </dhcp>
Example: How to Use the exclude Query Parameter
    GET /restconf/data/dhcp:dhcp/subnet
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <subnet xmlns="http://yang-central.org/ns/example/dhcp"
              xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <net>10.254.239.0/27</net>
      <range>
        <dynamic-bootp/>
        <low>10.254.239.10</low>
        <high>10.254.239.20</high>
      </range>
      <dhcp-options>
        <router>rtr-239-0-1.example.org</router>
        <router>rtr-239-0-2.example.org</router>
      </dhcp-options>
      <max-lease-time>1200</max-lease-time>
    </subnet>
    GET /restconf/data/dhcp:dhcp/subnet?exclude=range(low;high)
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <subnet xmlns="http://yang-central.org/ns/example/dhcp"
              xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <net>10.254.239.0/27</net>
      <range>
        <dynamic-bootp/>
      </range>
      <dhcp-options>
        <router>rtr-239-0-1.example.org</router>
        <router>rtr-239-0-2.example.org</router>
      </dhcp-options>
      <max-lease-time>1200</max-lease-time>
    </subnet>
Example: Insert First in a Leaf-List
    # Note: we have to split the POST line in order to fit the page
    POST /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options?\
         insert=first
    Content-Type: application/yang-data+xml
    
    <router>one.acme.org</router>
    
    # If the resource is created, the server might respond as follows:
    
    HTTP/1.1 201 Created
    Location /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/\
             router=one.acme.org
    GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <dhcp-options xmlns="http://yang-central.org/ns/example/dhcp"
                  xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <router>one.acme.org</router>
      <router>rtr-239-0-1.example.org</router>
      <router>rtr-239-0-2.example.org</router>
    </dhcp-options>
Example: Insert After in a Leaf-List
    # Note: we have to split the POST line in order to fit the page
    POST /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options?\
         insert=after&\
         point=/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/router=one.acme.org
    Content-Type: application/yang-data+xml
    
    <router>two.acme.org</router>
    
    # If the resource is created, the server might respond as follows:
    
    HTTP/1.1 201 Created
    Location /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/\
             router=one.acme.org
    GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <dhcp-options xmlns="http://yang-central.org/ns/example/dhcp"
                  xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
      <router>one.acme.org</router>
      <router>two.acme.org</router>
      <router>rtr-239-0-1.example.org</router>
      <router>rtr-239-0-2.example.org</router>
    </dhcp-options>
    Example: Create a New dhcp/subnet Resource
    POST /restconf/data/dhcp:dhcp?rollback-id=true
    Content-Type: application/yang-data+xml
    
    <subnet xmlns="http://yang-central.org/ns/example/dhcp">
      <net>10.254.239.0/27</net>
    </subnet>
    
    HTTP/1.1 201 Created
    Location: http://localhost:8008/restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
    
    <result xmlns="http://tail-f.com/ns/tailf-restconf">
    <rollback>
      <id>10002</id>
    </rollback>
    </result>
    POST /restconf/data/tailf-rollback:rollback-files/apply-rollback-file
    Content-Type: application/yang-data+xml
    
    <input xmlns="http://tail-f.com/ns/rollback">
      <fixed-number>10002</fixed-number>
    </input>
    
    HTTP/1.1 204 No Content
Example: Configure an Example Notification Stream
    <notifications>
      <eventStreams>
        <stream>
          <name>interface</name>
          <description>Example notifications</description>
          <replaySupport>true</replaySupport>
          <builtinReplayStore>
            <dir>./</dir>
            <maxSize>S1M</maxSize>
            <maxFiles>5</maxFiles>
          </builtinReplayStore>
        </stream>
      </eventStreams>
    </notifications>
    Example: View the Example RESTCONF Stream
    GET /restconf/data/ietf-restconf-monitoring:restconf-state/streams
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    
    <streams xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"
             xmlns:rcmon="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
    
  ...other streams info removed here for brevity...
    
      <stream>
        <name>interface</name>
        <description>Example notifications</description>
        <replay-support>true</replay-support>
        <replay-log-creation-time>
          2020-05-04T13:45:31.033817+00:00
        </replay-log-creation-time>
        <access>
          <encoding>xml</encoding>
          <location>https://localhost:8888/restconf/streams/interface/xml</location>
        </access>
        <access>
          <encoding>json</encoding>
          <location>https://localhost:8888/restconf/streams/interface/json</location>
        </access>
      </stream>
    </streams>
Example: Subscribe to the Example RESTCONF Stream
    GET /restconf/streams/interface/xml
    Accept: text/event-stream
    
       ...NOTE: we will be waiting here until a notification is generated...
    
    HTTP/1.1 200 OK
    Content-Type: text/event-stream
    
    data: <notification xmlns='urn:ietf:params:xml:ns:netconf:notification:1.0'>
    data:     <eventTime>2020-05-04T13:48:02.291816+00:00</eventTime>
    data:     <link-up xmlns='http://tail-f.com/ns/test/notif'>
    data:       <if-index>2</if-index>
    data:       <link-property>
    data:         <newly-added/>
    data:         <flags>42</flags>
    data:         <extensions>
    data:           <name>1</name>
    data:           <value>3</value>
    data:         </extensions>
    data:         <extensions>
    data:           <name>2</name>
    data:           <value>4668</value>
    data:         </extensions>
    data:       </link-property>
    data:     </link-up>
    data: </notification>
    
       ...NOTE: we will still be waiting here for more notifications to come...
Example: Replay the Example RESTCONF Stream
    GET /restconf/streams/interface/xml?start-time=2007-07-28T15%3A23%3A36Z
    Accept: text/event-stream
    
    HTTP/1.1 200 OK
    Content-Type: text/event-stream
    
    data: ...any existing notification since given date will be delivered here...
    
       ...NOTE: when all notifications are delivered, we will be waiting here for more...
    Example: NSO RESTCONF Errors During Streaming
    : error: notification stream NETCONF temporarily unavailable
Example: List the Available YANG Modules
    GET /restconf/data/ietf-yang-library:modules-state
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <modules-state xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
                   xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
      <module-set-id>f4709e88d3250bd84f2378185c2833c2</module-set-id>
      <module>
        <name>dhcp</name>
        <revision>2019-02-14</revision>
        <schema>http://localhost:8080/restconf/tailf/modules/dhcp/2019-02-14</schema>
        <namespace>http://yang-central.org/ns/example/dhcp</namespace>
        <conformance-type>implement</conformance-type>
      </module>
    
      ...rest of the output removed here...
    
    </modules-state>
    GET /restconf/tailf/modules/dhcp/2019-02-14
    
    HTTP/1.1 200 OK
    module dhcp {
      namespace "http://yang-central.org/ns/example/dhcp";
      prefix dhcp;
    
      import ietf-yang-types {
    
      ...the rest of the Yang module removed here...
Example: Create Two New dhcp/subnet Resources
    PATCH /restconf/data/dhcp:dhcp
    Accept: application/yang-data+xml
    Content-Type: application/yang-patch+xml
    
    <yang-patch xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
      <patch-id>add-subnets</patch-id>
      <edit>
        <edit-id>add-subnet-239</edit-id>
        <operation>create</operation>
        <target>/subnet=10.254.239.0%2F27</target>
        <value>
          <subnet xmlns="http://yang-central.org/ns/example/dhcp" \
                  xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
            <net>10.254.239.0/27</net>
              ...content removed here for brevity...
            <max-lease-time>1200</max-lease-time>
          </subnet>
        </value>
      </edit>
      <edit>
        <edit-id>add-subnet-244</edit-id>
        <operation>create</operation>
        <target>/subnet=10.254.244.0%2F27</target>
        <value>
          <subnet xmlns="http://yang-central.org/ns/example/dhcp" \
                  xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
            <net>10.254.244.0/27</net>
              ...content removed here for brevity...
            <max-lease-time>1200</max-lease-time>
          </subnet>
        </value>
      </edit>
    </yang-patch>
    
    # If the YANG Patch request was successful,
    # the server might respond as follows:
    
    HTTP/1.1 200 OK
    <yang-patch-status xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
      <patch-id>add-subnets</patch-id>
      <ok/>
    </yang-patch-status>
Example: Modify and Delete in the Same YANG Patch Request
    PATCH /restconf/data/dhcp:dhcp
    Accept: application/yang-data+xml
    Content-Type: application/yang-patch+xml
    
    <yang-patch xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
      <patch-id>modify-and-delete</patch-id>
      <edit>
        <edit-id>modify-max-lease-time-239</edit-id>
        <operation>merge</operation>
        <target>/dhcp:subnet=10.254.239.0%2F27</target>
        <value>
          <subnet xmlns="http://yang-central.org/ns/example/dhcp" \
                  xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
            <net>10.254.239.0/27</net>
            <max-lease-time>1234</max-lease-time>
          </subnet>
        </value>
      </edit>
      <edit>
        <edit-id>delete-max-lease-time-244</edit-id>
        <operation>delete</operation>
        <target>/dhcp:subnet=10.254.244.0%2F27/max-lease-time</target>
      </edit>
    </yang-patch>
    
    # If the YANG Patch request was successful,
    # the server might respond as follows:
    
    HTTP/1.1 200 OK
    <yang-patch-status xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
      <patch-id>modify-and-delete</patch-id>
      <ok/>
    </yang-patch-status>
Example: Verify the Modified max-lease-time
    GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/max-lease-time
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <max-lease-time xmlns="http://yang-central.org/ns/example/dhcp"
                    xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
                    1234
    </max-lease-time>
Example: Verify the Default Values after Delete of the max-lease-time
    GET /restconf/data/dhcp:dhcp/subnet=10.254.244.0%2F27/max-lease-time?\
          with-defaults=report-all-tagged
    Accept: application/yang-data+xml
    
    HTTP/1.1 200 OK
    <max-lease-time wd:default="true"
                    xmlns:wd="urn:ietf:params:restconf:capability:defaults:1.0"
                    xmlns="http://yang-central.org/ns/example/dhcp"
                    xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
                    7200
    </max-lease-time>
Example: Check if the RESTCONF Server Supports NMDA
    HEAD /restconf/ds/ietf-datastores:operational
    
    HTTP/1.1 200 OK
    Example: Check Which Datastores the RESTCONF Server Supports
    GET /restconf/ds/ietf-datastores:operational/datastore
    Accept: application/yang-data+json
    
    HTTP/1.1 200 OK
    {
      "ietf-yang-library:datastore": [
        {
          "name": "ietf-datastores:running",
          "schema": "common"
        },
        {
          "name": "ietf-datastores:intended",
          "schema": "common"
        },
        {
          "name": "ietf-datastores:operational",
          "schema": "common"
        }
      ]
    }
    Example: Use of Collections
    GET /restconf/ds/ietf-datastores:operational/\
        ietf-yang-library:yang-library/datastore
    Accept: application/vnd.yang.collection+xml
    
    <collection xmlns="http://tail-f.com/ns/restconf/collection/1.0">
      <datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
                xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
        <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
           ds:running
        </name>
        <schema>common</schema>
      </datastore>
      <datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
                 xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
        <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
          ds:intended
        </name>
        <schema>common</schema>
      </datastore>
  <datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
                 xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
        <name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
          ds:operational
        </name>
        <schema>common</schema>
      </datastore>
    </collection>
Example: YANG Model for the Query API Examples
    container x {
      list host {
        key number;
        leaf number {
          type int32;
        }
        leaf enabled {
          type boolean;
        }
        leaf name {
          type string;
        }
        leaf address {
          type inet:ip-address;
        }
      }
}
Example: A start-query Request
    POST /restconf/tailf/query
    Content-Type: application/yang-data+xml
    
    <start-query xmlns="http://tail-f.com/ns/tailf-rest-query">
      <foreach>
        /x/host[enabled = 'true']
      </foreach>
      <select>
        <label>Host name</label>
        <expression>name</expression>
        <result-type>string</result-type>
      </select>
      <select>
        <expression>address</expression>
        <result-type>string</result-type>
      </select>
      <sort-by>name</sort-by>
      <limit>100</limit>
      <offset>1</offset>
      <timeout>600</timeout>
</start-query>
Example: A start-query Request in JSON
    POST /restconf/tailf/query
    Content-Type: application/yang-data+json
    
    {
     "start-query": {
       "foreach": "/x/host[enabled = 'true']",
       "select": [
         {
           "label": "Host name",
           "expression": "name",
           "result-type": ["string"]
         },
         {
           "expression": "address",
           "result-type": ["string"]
         }
       ],
       "sort-by": ["name"],
       "limit": 100,
       "offset": 1,
       "timeout": 600
     }
}
    <start-query xmlns="http://tail-f.com/ns/tailf-rest-query">
    <foreach>
      /x/host[enabled = 'true']
    </foreach>
    <select>
      <label>Host name</label>
      <expression>name</expression>
      <result-type>string</result-type>
    </select>
    <select>
      <expression>address</expression>
      <result-type>string</result-type>
    </select>
    <sort-by>name</sort-by>
    <offset>1</offset>
    <timeout>600</timeout>
    <start-query-result>
      <query-handle>12345</query-handle>
    </start-query-result>
    <fetch-query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
      <query-handle>12345</query-handle>
    </fetch-query-result>
    <query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
      <result>
        <select>
          <label>Host name</label>
          <value>One</value>
        </select>
        <select>
          <value>10.0.0.1</value>
        </select>
      </result>
      <result>
        <select>
          <label>Host name</label>
          <value>Three</value>
        </select>
        <select>
          <value>10.0.0.3</value>
        </select>
      </result>
    </query-result>
    <query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
    </query-result>
    <stop-query xmlns="http://tail-f.com/ns/tailf-rest-query">
      <query-handle>12345</query-handle>
    </stop-query>
    <reset-query xmlns="http://tail-f.com/ns/tailf-rest-query">
      <query-handle>12345</query-handle>
      <offset>42</offset>
    </reset-query>
    Example: Partial Response
    GET /restconf/data/example-jukebox:jukebox/library/artist?offset=1&limit=2
    Accept: application/yang-data+json
    
    ...in return we will get the second and third elements of the list...
    <groupname>[;<password>]
    unhide=extra,debug;secret
    Example: XML Representation of Metadata
    <x xmlns="urn:x" xmlns:x="urn:x">
      <id tags=" important ethernet " annotation="hello world">42</id>
      <person annotation="This is a person">
        <name>Bill</name>
        <person annotation="This is another person">grandma</person>
      </person>
    </x>
    Example: JSON Representation of Metadata
    {
      "x": {
        "id": 42,
        "@id": {"tags": ["important","ethernet"],"annotation": "hello world"},
        "person": {
          // NB: the below refers to the parent object
          "@@person": {"annotation": "This is a person"},
          "name": "Bill",
          "person": "grandma",
          // NB: the below refers to the sibling object
          "@person": {"annotation": "This is another person"}
        }
      }
    }
    Example: NSO Configuration of the Authentication Cache TTL
      ...
      <aaa>
         ...
         <restconf>
            <!-- Set the TTL to 10 seconds! -->
            <auth-cache-ttl>PT10S</auth-cache-ttl>
         <!-- Set to true to use both "User" and "ClientIP" as key into the AuthCache -->
            <enable-auth-cache-client-ip>false</enable-auth-cache-client-ip>
         </restconf>
         ...
      </aaa>
      ...
    Example: NSO Configuration of Client IP via Proxy
      ...
      <webui>
         ...
        <use-forwarded-client-ip>
          <proxy-headers>X-Forwarded-For</proxy-headers>
          <proxy-headers>X-REAL-IP</proxy-headers>
          <allowed-proxy-ip-prefix>10.12.34.0/24</allowed-proxy-ip-prefix>
          <allowed-proxy-ip-prefix>2001:db8:1234::/48</allowed-proxy-ip-prefix>
        </use-forwarded-client-ip>
         ...
      </webui>
      ...
    Example: Configure RESTCONF External Token Authentication/Validation
      ...
      <restconf>
         ...
        <token-response>
          <x-auth-token>true</x-auth-token>
        </token-response>
         ...
      </restconf>
      ...
    Example: Configure the RESTCONF Token Cookie
      ...
      <restconf>
         ...
         <token-cookie>
           <name>X-JWT-ACCESS-TOKEN</name>
           <directives>path=/; Expires=Tue, 19 Jan 2038 03:14:07 GMT;</directives>
         </token-cookie>
         ...
      </restconf>
      ...
    Example: NSO RESTCONF Custom Header Configuration
        <restconf>
          <enabled>true</enabled>
          <custom-headers>
            <header>
              <name>Access-Control-Allow-Origin</name>
              <value>*</value>
            </header>
          </custom-headers>
        </restconf>
    yanger -t expand -f swagger example.yang -o example.json      
    yanger -t expand -f swagger base.yang base-ext-1.yang base-ext-2.yang -o base.json      
    yanger --help       
    Swagger output specific options:
      --swagger-host                    Add host to the Swagger output
      --swagger-basepath                Add basePath to the Swagger output
      --swagger-version                 Add version url to the Swagger output.
                                        NOTE: this will override any revision
                                        in the yang file
      --swagger-tag-mode                Set tag mode to group resources. Valid
                                        values are: methods, resources, all
                                        [default: all]
      --swagger-terms                   Add termsOfService to the Swagger
                                        output
      --swagger-contact-name            Add contact name to the Swagger output
      --swagger-contact-url             Add contact url to the Swagger output
      --swagger-contact-email           Add contact email to the Swagger output
      --swagger-license-name            Add license name to the Swagger output
      --swagger-license-url             Add license url to the Swagger output
      --swagger-top-resource            Generate only swagger resources from
                                        this top resource. Valid values are:
                                        root, data, operations, all [default:
                                        all]
      --swagger-omit-query-params       Omit RESTCONF query parameters
                                        [default: false]
      --swagger-omit-body-params        Omit RESTCONF body parameters
                                        [default: false]
      --swagger-omit-form-params        Omit RESTCONF form parameters
                                        [default: false]
      --swagger-omit-header-params      Omit RESTCONF header parameters
                                        [default: false]
      --swagger-omit-path-params        Omit RESTCONF path parameters
                                        [default: false]
      --swagger-omit-standard-statuses  Omit standard HTTP response statuses.
                                        NOTE: at least one successful HTTP
                                        status will still be included
                                        [default: false]
      --swagger-methods                 HTTP methods to include. Example:
                                        --swagger-methods "get, post"
                                        [default: "get, post, put, patch,
                                        delete"]
      --swagger-path-filter             Filter out paths matching a path filter.
                                        Example: --swagger-path-filter
                                        "/data/example-jukebox/jukebox"
    Example: Comprehensive Swagger Generation Example
    
    yanger -p . -t expand -f swagger example-jukebox.yang \
           --swagger-host 127.0.0.1:8080 \
           --swagger-basepath /restconf \
           --swagger-version "My swagger version 1.0.0.1" \
           --swagger-tag-mode all \
           --swagger-terms "http://my-terms.example.com" \
           --swagger-contact-name "my contact name" \
           --swagger-contact-url "http://my-contact-url.example.com" \
           --swagger-contact-email "[email protected]" \
           --swagger-license-name "my license name" \
           --swagger-license-url "http://my-license-url.example.com" \
           --swagger-top-resource all \
           --swagger-omit-query-params false \
           --swagger-omit-body-params false \
           --swagger-omit-form-params false \
           --swagger-omit-header-params false \
           --swagger-omit-path-params false \
           --swagger-omit-standard-statuses false \
           --swagger-methods "post, get, patch, put, delete, head, options"
    CdbSession.popd() to change back to a previously stacked position.

    The previous value for a modified leaf is not available when using the diffIterate() method.

  • Authorization Callbacks - invoked for external authorization of operations and data. Note: avoid this callback if possible, since performance will otherwise be affected.
  • Data Callbacks - invoked for data provision and manipulation for certain data elements in the YANG model which is defined with a callpoint directive.

  • DB Callbacks - invoked for external database stores.

  • Range Action Callbacks - A variant of action callback where ranges are defined for the key values.

  • Range Data Callbacks - A variant of data callback where ranges are defined for the data values.

  • SNMP Inform Response Callbacks - invoked for responses to SNMP inform requests on a certain element in the YANG model which is defined by a callpoint directive.

  • Transaction Callbacks - invoked for external participants in the two-phase commit protocol.

  • Transaction Validation Callbacks - invoked for external transaction validation in the validation phase of a two-phase commit.

  • Validation Callbacks - invoked for validation of certain elements in the YANG model which is defined with a callpoint directive.

  • While the transaction is in the READ state, NSO will execute a series of read operations towards (possibly) different callpoints in the data provider.

    Any write operations performed by the management station are accumulated by NSO and the data provider doesn't see them while in the READ state.

  • transLock(): This callback gets invoked by NSO at the end of the transaction. NSO has accumulated a number of write operations and will now initiate the final write phases. Once the transLock() callback has returned, the transaction is in the VALIDATE state. In the VALIDATE state, NSO will (possibly) execute a number of read operations to validate the new configuration. Following the read operations for validation comes the invocation of either the writeStart() or the transUnlock() callback.

  • transUnlock(): This callback gets invoked by NSO if the validation fails or if the validation was done separately from the commit (e.g. by giving a validate command in the CLI). Depending on where the transaction originated, the behavior after a call to transUnlock() differs. If the transaction originated from the CLI, the CLI reports to the user that the configuration is invalid and the transaction remains in the READ state. If the transaction originated from a NETCONF client, the NETCONF operation fails and a NETCONF rpc error is reported to the NETCONF client/manager.

  • writeStart(): If the validation succeeded, the writeStart() callback will be called and the transaction will enter the WRITE state. While in WRITE state, a number of calls to the write data callbacks setElem(), create() and remove() will be performed.

    If the underlying database supports real atomic transactions, this is a good place to start such a transaction.

    The application should not modify the real running data here. If, later, the abort() callback is called, all write operations performed in this state must be undone.

  • prepare(): Once all write operations are executed, the prepare() callback is executed. This callback ensures that all participants have succeeded in writing all elements. The purpose of the callback is merely to indicate to NSO that the data provider is ok, and has not yet encountered any errors.

  • abort(): If any of the participants die or fail to reply in the prepare() callback, the remaining participants all get invoked in the abort() callback. All data written so far in this transaction should be disposed of.

  • commit(): If all participants successfully replied in their respective prepare() callbacks, all participants get invoked in their respective commit() callbacks. This is the place to make all data written by the write callbacks in WRITE state permanent.

  • finish(): And finally, the finish() callback gets invoked at the end. This is a good place to deallocate any local resources for the transaction. The finish() callback can be called from several different states.
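The write-phase sequence above (writeStart, prepare, commit or abort, finish) can be sketched as a toy participant in plain Java. This is an illustration of the protocol only, with made-up names; it is not the NSO DP API:

```java
import java.util.ArrayList;
import java.util.List;

// Toy external-database participant mirroring the callback order described
// above: writeStart -> (buffered writes) -> prepare -> commit or abort -> finish.
class ToyParticipant {
    final List<String> log = new ArrayList<>();
    final List<String> staged = new ArrayList<>();    // writes buffered in WRITE state
    final List<String> committed = new ArrayList<>(); // made permanent only in commit()

    void writeStart() { log.add("writeStart"); }
    void setElem(String kv) { staged.add(kv); }            // must stay undoable until commit()
    boolean prepare() { log.add("prepare"); return true; } // "no errors encountered so far"
    void abort() { log.add("abort"); staged.clear(); }     // dispose of all staged writes
    void commit() { log.add("commit"); committed.addAll(staged); staged.clear(); }
    void finish() { log.add("finish"); }                   // deallocate per-transaction state
}
```

If prepare() fails in any participant, abort() disposes of the staged writes and nothing becomes permanent, which is exactly why the real callbacks must not touch the running data before commit().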

  • existsOptional(): If we have presence containers or leafs of type empty, we cannot use the getElem() callback to read the value of such a node, since it does not have a type. An example of such a data model could be:
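The YANG fragment itself is missing from this extraction; judging from the paths discussed below (/bs, /bs/b/opt, /bs/b/foo, and /bs/b/opt/ii), a model of roughly this shape is assumed:

```yang
container bs {
  presence "";
  list b {
    key name;
    leaf name { type string; }
    container opt {
      presence "";
      leaf ii { type int32; }
    }
    leaf foo { type empty; }
  }
}
```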

    The above YANG fragment has three nodes that may or may not exist and that do not have a type. If we do not have any such elements, nor any operational data lists without keys (see below), we do not need to implement the existsOptional() callback.

    If we have the above data model, we must implement the existsOptional() callback, and our implementation must be prepared to reply to calls of the function for the paths /bs, /bs/b/opt, and /bs/b/foo. The leaf /bs/b/opt/ii is not mandatory, but it does have a type, namely int32, and thus the existence of that leaf will be determined through a call to the getElem() callback.

    The existsOptional() callback may also be invoked by NSO as an "existence test" for an entry in an operational data list without keys. Normally this existence test is done with a getElem() request for the first key, but since there are no keys, this callback is used instead. Thus, if we have such lists, we must also implement this callback, and handle a request where the keypath identifies a list entry.

  • iterator() and getKey(): This pair of callbacks is used when NSO wants to traverse a YANG list. The job of the iterator() callback is to return an Iterator object that is invoked by the library. For each object returned by the iterator, the NSO library will invoke the getKey() callback on the returned object. The getKey() callback shall return a ConfKey value.

    An alternative to the getKey() callback is to register the optional getObject() callback, whose job is to return not just the key, but the entire YANG list entry. It is possible to register both getKey() and getObject(), or either one. If getObject() is registered, NSO will attempt to use it only when bulk retrieval is executed.

  • action() This callback is invoked to actually execute the rpc or action. It receives the input parameters (if any) and returns the output parameters (if any).
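The iterator()/getKey()/getObject() traversal contract above can be pictured with a plain-Java toy (illustrative classes, not the NSO API): the provider returns an iterator over opaque entries, and the key (or optionally the whole entry) is extracted per element:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Toy version of the iterator()/getKey() contract: the data provider hands
// back an iterator over opaque entry objects, and the library asks each
// entry for its key separately (or for the whole entry, getObject-style).
class Entry {
    final String name; final String ip;
    Entry(String name, String ip) { this.name = name; this.ip = ip; }
    String getKey() { return name; }                        // cf. getKey() -> ConfKey
    String[] getObject() { return new String[]{name, ip}; } // cf. optional getObject()
}

class ToyProvider {
    final List<Entry> entries = new ArrayList<>();
    Iterator<Entry> iterator() { return entries.iterator(); } // cf. iterator()
}
```

Registering the getObject-style variant saves one round trip per list entry when the caller wants the full entry anyway, which is why bulk retrieval prefers it.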
    NavuLeaf: the NavuLeaf represents a YANG leaf node.
    The event must be acknowledged by calling Notif.diffNotificationDone(). The purpose of this event is to give the applications a chance to read the configuration diffs from the transaction before it commits. A user subscribing to this event can use the MAAPI API to attach (Maapi.attach()) to the running transaction and use Maapi.diffIterate() to iterate through the diff.
  • NotificationType.NOTIF_COMMIT_FAILED: This event is generated when a data provider fails in its commit callback. NSO executes a two-phase commit procedure towards all data providers when committing transactions. When a provider fails to commit, the system is in an unknown state. If the provider is "external", the name of the failing daemon is provided. If the provider is another NETCONF agent, the IP address and port of that agent are provided.

  • NotificationType.NOTIF_COMMIT_PROGRESS: This event provides progress information about the commit of a transaction.

  • NotificationType.NOTIF_PROGRESS: This event provides progress information about the commit of a transaction or an action being applied. Subscribing to this notification type means that all notifications of the type NotificationType.NOTIF_COMMIT_PROGRESS are subscribed to as well.

  • NotificationType.NOTIF_CONFIRMED_COMMIT: This event is generated when a user has started a confirmed commit, when a confirming commit is issued, or when a confirmed commit is aborted; represented by ConfirmNotification.confirm_type. For a confirmed commit, the timeout value is also present in the notification.

  • NotificationType.NOTIF_FORWARD_INFO: This event is generated whenever the server forwards (proxies) a northbound agent.

  • NotificationType.NOTIF_HA_INFO: An event related to NSO's perception of the current cluster configuration.

  • NotificationType.NOTIF_HEARTBEAT: This event can be used by applications that wish to monitor the health and liveness of the server itself. It needs to be requested through a Notif instance which has been constructed with a heartbeat_interval. The server will continuously generate heartbeat events on the notification socket. If the server fails to do so, the server is hung. The timeout interval is measured in milliseconds. The recommended value is 10000 milliseconds to cater for truly high load situations. Values less than 1000 are changed to 1000.

  • NotificationType.NOTIF_SNMPA: This event is generated whenever an SNMP PDU is processed by the server. The application receives an SnmpaNotification with a list of all varbinds in the PDU. Each varbind contains subclasses that are internal to the SnmpaNotification.

  • NotificationType.NOTIF_SUBAGENT_INFO: Only sent if NSO runs as a primary agent with subagents enabled. This event is sent when the subagent connection is lost or reestablished. There are two event types, defined in SubagentNotification.subagent_info_type: "subagent up" and "subagent down".

  • NotificationType.NOTIF_DAEMON: All log events that also go to the /NCSConf/logs/NSCLog log are sent from NSO on the event notification socket.

  • NotificationType.NOTIF_NETCONF: All log events that also go to the /NCSConf/logs/netconfLog log are sent from NSO on the event notification socket.

  • NotificationType.NOTIF_DEVEL: All log events that also go to the /NCSConf/logs/develLog log are sent from NSO on the event notification socket.

  • NotificationType.NOTIF_TAKEOVER_SYSLOG: If this flag is present, NSO will stop syslogging. The idea behind the flag is that we want to configure syslogging for NSO to let NSO log its startup sequence. Once NSO is started, we wish to subsume the syslogging done by NSO. Typical applications that use this flag want to pick up all log messages, reformat them, and use some local logging method. Once all subscriber sockets with this flag set are closed, NSO will resume syslogging.

  • NotificationType.NOTIF_UPGRADE_EVENT: This event is generated for the different phases of an in-service upgrade, i.e. when the data model is upgraded while the server is running. The application receives an UpgradeNotification where the UpgradeNotification.event_type gives the specific upgrade event. The events correspond to the invocation of the Maapi functions that drive the upgrade.

  • NotificationType.NOTIF_COMPACTION: This event is generated after each CDB compaction performed by NSO. The application receives a CompactionNotification where CompactionNotification.dbfile indicates which datastore was compacted, and CompactionNotification.compaction_type indicates whether the compaction was triggered manually or automatically by the system.

  • NotificationType.NOTIF_USER_SESSION: An event related to user sessions. There are 6 different user session-related event types, defined in UserSessNotification.user_sess_type: session starts/stops, session locks/unlocks a database, and session starts/stops a database transaction.

  • Tag: A tag is a representation of an element in the YANG model. A Tag is represented as an instance of com.tailf.conf.Tag. The primary usage of tags is in the representation of keypaths.
  • Key: a key is a representation of the instance key for an element instance. A key is represented as an instance of com.tailf.conf.ConfKey. A ConfKey is constructed from an array of values (ConfValue[]). The primary usage of keys is in the representation of keypaths.

  • XMLParam: subclasses of ConfXMLParam which are used to represent a, possibly instantiated, subtree of a YANG model. Useful in several APIs where multiple values can be set or retrieved in one function call.

| Java API type | YANG type | Java class | Description |
| --- | --- | --- | --- |
| J_INT16 | int16 | ConfInt16 | 16-bit signed integer |
| J_INT32 | int32 | ConfInt32 | 32-bit signed integer |
| J_INT64 | int64 | ConfInt64 | 64-bit signed integer |
| J_UINT8 | uint8 | ConfUInt8 | 8-bit unsigned integer |
| J_UINT16 | uint16 | ConfUInt16 | 16-bit unsigned integer |
| J_UINT32 | uint32 | ConfUInt32 | 32-bit unsigned integer |
| J_UINT64 | uint64 | ConfUInt64 | 64-bit unsigned integer |
| J_IPV4 | inet:ipv4-address | ConfIPv4 | IP v4 address |
| J_IPV6 | inet:ipv6-address | ConfIPv6 | IP v6 address |
| J_BOOL | boolean | ConfBoolean | Boolean value |
| J_QNAME | xs:QName | ConfQName | A namespace/tag pair |
| J_DATETIME | yang:date-and-time | ConfDateTime | Date and time value |
| J_DATE | xs:date | ConfDate | XML schema date |
| J_ENUMERATION | enum | ConfEnumeration | An enumeration value |
| J_BIT32 | bits | ConfBit32 | 32-bit value |
| J_BIT64 | bits | ConfBit64 | 64-bit value |
| J_LIST | leaf-list | - | - |
| J_INSTANCE_IDENTIFIER | instance-identifier | ConfObjectRef | YANG builtin |
| J_OID | tailf:snmp-oid | ConfOID | - |
| J_BINARY | tailf:hex-list, tailf:octet-list | ConfBinary, ConfHexList | - |
| J_IPV4PREFIX | inet:ipv4-prefix | ConfIPv4Prefix | - |
| J_IPV6PREFIX | inet:ipv6-prefix | ConfIPv6Prefix | - |
| J_DEFAULT | - | ConfDefault | Default value indicator |
| J_NOEXISTS | - | ConfNoExists | No value indicator |
| J_DECIMAL64 | decimal64 | ConfDecimal64 | YANG builtin |
| J_IDENTITYREF | identityref | ConfIdentityRef | YANG builtin |

    ConfXMLParamLeaf: Represents a leaf tag without the leaf's value.

    CDB API The southbound interface provides access to the CDB configuration database. Using this interface, configuration data can be read. In addition, operational data that is stored in CDB can be read and written. This interface has a subscription mechanism for subscribing to changes. A subscription is specified on a path that points to an element in a YANG model or an instance in the instance tree. Any change under this point will trigger the subscription. CDB also has functions to iterate through the configuration changes when a subscription has been triggered.

    DP API Southbound interface that enables callbacks, hooks, and transforms. This API makes it possible to provide the service callbacks that handle service-to-device mapping logic. Other common use cases are external data providers for operational data and action callback implementations. There are also transaction and validation callbacks, etc. Hooks are callbacks that are fired when certain data is written; the hook is expected to make additional modifications to the data. Transforms are callbacks that are used when complete mediation between two different models is necessary.

    NED API (Network Equipment Driver) Southbound interface that mediates communication for devices that do not speak either NETCONF or SNMP. All prepackaged NEDs for different devices are written using this interface. It is possible to use the same interface to write your own NED. There are two types of NEDs, CLI NEDs and Generic NEDs. CLI NEDs can be used for devices that can be controlled by a Cisco-style CLI syntax, in this case, the NED is developed primarily by building a YANG model and a relatively small part in Java. In other cases, the Generic NED can be used for any type of communication protocol.

    NAVU API (Navigation Utilities) API that resides on top of the Maapi and Cdb APIs. It provides schema model navigation and instance data handling (read/write). It uses either a Maapi or Cdb context for data access and incorporates a subset of functionality from these (navigational and data read/write calls). Its major use is in service implementations, which normally involve navigating device models and setting device data.

    ALARM API Eastbound API that is used both to consume and produce alarms in alignment with the NSO Alarm model. To consume alarms the AlarmSource interface is used. To produce a new alarm the AlarmSink interface is used. There is also a possibility to buffer produced alarms and make asynchronous writes to CDB to improve alarm performance.

    NOTIF API Northbound API that is used to subscribe to system events from NSO. These events are generated for audit log events, for different transaction states, for HA state changes, upgrade events, user sessions, etc.

    HA API (High Availability) Northbound API used to manage a High Availability cluster of NSO instances. An NSO instance can be in one of three states: NONE, PRIMARY, or SECONDARY. With the HA API, the state can be queried and changed for NSO instances in the cluster.

| Java API type | YANG type | Java class | Description |
| --- | --- | --- | --- |
| J_STR | string | ConfBuf | Human readable string |
| J_BUF | string | ConfBuf | Human readable string |
| J_INT8 | int8 | ConfInt8 | 8-bit signed integer |

  • filter (GET, HEAD): Boolean notification filter for event stream resources.

  • insert (POST, PUT): Insertion mode for ordered-by user data resources.

  • point (POST, PUT): Insertion point for ordered-by user data resources.

  • start-time (GET, HEAD): Replay buffer start time for event stream resources.

  • stop-time (GET, HEAD): Replay buffer stop time for event stream resources.

  • with-defaults (GET, HEAD): Control the retrieval of default values.

  • with-origin (GET): Include the "origin" metadata annotations, as detailed in the NMDA.

  • no-overwrite (POST, PUT, PATCH, DELETE): NSO will check that the data that should be modified has not changed on the device compared to NSO's view of the data. Cannot be used together with no-out-of-sync-check.

  • no-revision-drop (POST, PUT, PATCH, DELETE): NSO will not run its data model revision algorithm, which requires all participating managed devices to have all parts of the data models for all data contained in this transaction. Thus, this flag forces NSO to never silently drop any data set operations towards a device.

  • no-deploy (POST, PUT, PATCH, DELETE): Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be re-deployed to write the changes of the service(s) to the network.

  • reconcile (POST, PUT, PATCH, DELETE): Reconcile the service data. All data which existed before the service was created will now be owned by the service. When the service is removed, that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree, that data is kept unless the option discard-non-service-config is used.

  • use-lsa (POST, PUT, PATCH, DELETE): Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are dry-run, no-networking, no-out-of-sync-check, no-overwrite, and no-revision-drop.

  • no-lsa (POST, PUT, PATCH, DELETE): Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.

  • commit-queue (POST, PUT, PATCH, DELETE): Commit the transaction data to the commit queue. Possible values are: async, sync, and bypass. If the async value is set, the operation returns successfully if the transaction data has been successfully placed in the queue. The sync value will cause the operation to not return until the transaction data has been sent to all devices, or a timeout occurs. The bypass value means that if /devices/global-settings/commit-queue/enabled-by-default is true, the data in this transaction will bypass the commit queue and be written directly to the devices.

  • commit-queue-atomic (POST, PUT, PATCH, DELETE): Sets the atomic behavior of the resulting queue item. Possible values are: true and false. If this is set to false, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.

  • commit-queue-block-others (POST, PUT, PATCH, DELETE): The resulting queue item will block subsequent queue items, which use any of the devices in this queue item, from being queued.

  • commit-queue-lock (POST, PUT, PATCH, DELETE): Place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked; see the actions unlock and lock in /devices/commit-queue/queue-item. No following queue items, using the same devices, will be allowed to execute as long as the lock is in place.

  • commit-queue-tag (POST, PUT, PATCH, DELETE): The value is a user-defined opaque tag. The tag is present in all notifications and events sent referencing the specific queue item.

  • commit-queue-timeout (POST, PUT, PATCH, DELETE): Specifies a maximum number of seconds to wait for completion. Possible values are infinity or a positive integer. If the timer expires, the transaction is kept in the commit queue, and the operation returns successfully. If the timeout is not set, the operation waits until completion indefinitely.

  • commit-queue-error-option (POST, PUT, PATCH, DELETE): The error option to use. Depending on the selected error option, NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the /devices/commit-queue/completed tree, from where it can be viewed and invoked with the rollback action. When invoked, the data will be removed. Possible values are: continue-on-error, rollback-on-error, and stop-on-error. The continue-on-error value means that the commit queue will continue on errors; no rollback data will be created. The rollback-on-error value means that the commit queue item will roll back on errors. The commit queue will place a lock with block-others on the devices and services in the failed queue item. The rollback action will then automatically be invoked when the queue item has finished its execution, and the lock will be removed as part of the rollback. The stop-on-error value means that the commit queue will place a lock with block-others on the devices and services in the failed queue item. The lock must then either be manually released when the error is fixed, or the rollback action under /devices/commit-queue/completed be invoked. Read about error recovery for a more detailed explanation.

  • trace-id (POST, PUT, PATCH, DELETE): Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign a trace ID to the processing. The trace-id query parameter can also be used with RPCs and actions to relay a trace-id from northbound requests. The trace-id will be included in the X-Cisco-NSO-Trace-ID header in the response.

  • limit (GET): Used by the client to specify a limited set of list entries to retrieve. The value of the limit parameter is either an integer greater than or equal to 1, or the string unbounded. The string unbounded is the default value. See Partial Responses for an example.

  • offset (GET): Used by the client to specify the number of list elements to skip before returning the requested set of list entries. The value of the offset parameter is an integer greater than or equal to 0. The default value is 0. See Partial Responses for an example.

  • rollback-comment (POST, PUT, PATCH, DELETE): Used to specify a comment to be attached to the Rollback File that will be created as a result of the operation. This assumes that Rollback File handling is enabled.

  • rollback-label (POST, PUT, PATCH, DELETE): Used to specify a label to be attached to the Rollback File that will be created as a result of the operation. This assumes that Rollback File handling is enabled.

  • rollback-id (POST, PUT, PATCH, DELETE): Return the rollback ID in the response if a rollback file was created during this operation. This requires rollbacks to be enabled in NSO to take effect.

  • with-service-meta-data (GET): Include FASTMAP attributes such as backpointers and reference counters in the reply. These are typically internal to NSO and thus not shown by default.
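The limit/offset semantics can be sketched in plain Java (illustrative helper, not an NSO API; modeling unbounded as a negative limit is an assumption of this sketch):

```java
import java.util.List;

// Toy RESTCONF-style pagination: skip `offset` entries, then return at most
// `limit` entries. A negative limit stands in for "unbounded" here.
class Page {
    static <T> List<T> slice(List<T> all, int offset, int limit) {
        int from = Math.min(offset, all.size());
        int to = (limit < 0) ? all.size() : Math.min(from + limit, all.size());
        return all.subList(from, to);
    }
}
```

With offset=1 and limit=2, a four-entry artist list yields the second and third entries, matching the Partial Response example shown earlier.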

    Example: Establish a MAAPI Connection
        Socket socket = new Socket("localhost",Conf.NCS_PORT);
        Maapi maapi = new Maapi(socket);
    Example: Starting a User Session
        maapi.startUserSession("admin",
                               InetAddress.getByName("localhost"),
                               "maapi",
                               new String[] {"admin"},
                               MaapiUserSessionFlag.PROTO_TCP);
    Example: Start a Read/Write transaction Towards Running
        int th = maapi.startTrans(Conf.DB_RUNNING,
                                  Conf.MODE_READ_WRITE);
    Example: Maapi.getElem()
        public ConfValue getElem(int tid,
                                 String fmt,
                                 Object... arguments)
        ConfValue val = maapi.getElem(th,
                                      "/hosts/host{%x}/interfaces{%x}/ip",
                                      new ConfBuf("host1"),
                                      new ConfBuf("eth0"));
        ConfIPv4 ipv4addr = (ConfIPv4)val;
        maapi.setElem(th,
                      new ConfUInt16(1500),
                      "/hosts/host{%x}/interfaces{%x}/ip/mtu",
                      new ConfBuf("host1"),
                      new ConfBuf("eth0"));
        maapi.applyTrans(th, false);
    Example: Commit a Transaction
        int th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ_WRITE);
        try {
            maapi.lock(Conf.DB_RUNNING);
            // make modifications to th
            maapi.setElem(th, .....);
            maapi.applyTrans(th, false);
            maapi.finishTrans(th);
        } catch (Exception e) {
            maapi.finishTrans(th);
        } finally {
            maapi.unLock(Conf.DB_RUNNING);
        }
    Example: Establish a Connection to CDB
        Socket socket = new Socket("localhost", Conf.NCS_PORT);
        Cdb cdb = new Cdb("MyCdbSock",socket);
    Example: Establish a CDB Session
        CdbSession session = cdb.startSession(CdbDBType.CDB_RUNNING);
    
        /*
         * Retrieve the number of children in the list and
         * loop over these children
         */
        for(int i = 0; i < session.numInstances("/servers/server"); i++) {
            ConfBuf name =
               (ConfBuf) session.getElem("/servers/server[%d]/hostname", i);
            ConfIPv4 ip =
               (ConfIPv4) session.getElem("/servers/server[%d]/ip", i);
        }
    Example: Establish a CDB Subscription
        CdbSubscription sub = cdb.newSubscription();
        int subid = sub.subscribe(1, new servers(), "/servers/server/");
    
        // tell CDB we are ready for notifications
        sub.subscribeDone();
    
        // now do the blocking read
        while (true) {
            int[] points = sub.read();
            // now do something here like diffIterate
            .....
        }
       container servers {
         list server {
           key name;
           leaf name { type string;}
           leaf ip { type inet:ip-address; }
        leaf port { type inet:port-number; }
           .....
    /servers/server/port
        /servers
        /servers/server{www}/ip
        /servers/server/ip
        CdbSession sess =
             cdb.startSession(CdbDBType.CDB_OPERATIONAL,
                              EnumSet.of(CdbLockType.LOCK_REQUEST));
    public class MyTransCb {
    
            @TransCallback(callType=TransCBType.INIT)
            public void init(DpTrans trans) throws DpCallbackException {
                return;
            }
    public static class DataCb {
    
        @DataCallback(callPoint="foo", callType=DataCBType.GET_ELEM)
    public ConfValue getElem(DpTrans trans, ConfObject[] kp)
            throws DpCallbackException {
               .....
    Example: work.yang
module work {
      namespace "http://example.com/work";
      prefix w;
      import ietf-yang-types {
        prefix yang;
      }
      import tailf-common {
        prefix tailf;
      }
      description "This model is used as a simple example model
                   illustrating how to have NCS configuration data
                   that is stored outside of NCS - i.e not in CDB";
    
      revision 2010-04-26 {
        description "Initial revision.";
      }
    
      container work {
        tailf:callpoint workPoint;
        list item {
          key key;
          leaf key {
            type int32;
          }
          leaf title {
            type string;
          }
          leaf responsible {
            type string;
          }
          leaf comment {
            type string;
          }
        }
      }
    }
    Example: DataCb Class
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.ITERATOR)
        public Iterator<Object> iterator(DpTrans trans,
                                         ConfObject[] keyPath)
            throws DpCallbackException {
            return MyDb.iterator();
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.GET_NEXT)
        public ConfKey getKey(DpTrans trans, ConfObject[] keyPath,
                              Object obj)
            throws DpCallbackException {
            Item i = (Item) obj;
            return new ConfKey( new ConfObject[] { new ConfInt32(i.key) });
        }
    
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.GET_ELEM)
        public ConfValue getElem(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
    
            ConfInt32 kv = (ConfInt32) ((ConfKey) keyPath[1]).elementAt(0);
            Item i = MyDb.findItem( kv.intValue() );
            if (i == null) return null; // not found
    
            // switch on xml elem tag
            ConfTag leaf = (ConfTag) keyPath[0];
            switch (leaf.getTagHash()) {
            case work._key:
                return new ConfInt32(i.key);
            case work._title:
                return new ConfBuf(i.title);
            case work._responsible:
                return new ConfBuf(i.responsible);
            case work._comment:
                return new ConfBuf(i.comment);
            default:
                throw new DpCallbackException("xml tag not handled");
            }
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.SET_ELEM)
        public int setElem(DpTrans trans, ConfObject[] keyPath,
                           ConfValue newval)
            throws DpCallbackException {
            return Conf.REPLY_ACCUMULATE;
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.CREATE)
        public int create(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
            return Conf.REPLY_ACCUMULATE;
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.REMOVE)
        public int remove(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
            return Conf.REPLY_ACCUMULATE;
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.NUM_INSTANCES)
        public int numInstances(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
            return MyDb.numItems();
        }
    
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.GET_OBJECT)
        public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath)
            throws DpCallbackException {
            ConfInt32 kv = (ConfInt32) ((ConfKey) keyPath[0]).elementAt(0);
            Item i = MyDb.findItem( kv.intValue() );
            if (i == null) return null; // not found
            return getObject(trans, keyPath, i);
        }
    
        @DataCallback(callPoint=work.callpoint_workPoint,
                      callType=DataCBType.GET_NEXT_OBJECT)
        public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath,
                                     Object obj)
            throws DpCallbackException {
            Item i = (Item) obj;
            return new ConfValue[] {
                new ConfInt32(i.key),
                new ConfBuf(i.title),
                new ConfBuf(i.responsible),
                new ConfBuf(i.comment)
            };
        }
    Example: TransCb Class
        @TransCallback(callType=TransCBType.INIT)
        public void init(DpTrans trans) throws DpCallbackException {
            return;
        }
    
        @TransCallback(callType=TransCBType.TRANS_LOCK)
        public void transLock(DpTrans trans) throws DpCallbackException {
            MyDb.lock();
        }
    
        @TransCallback(callType=TransCBType.TRANS_UNLOCK)
        public void transUnlock(DpTrans trans) throws DpCallbackException {
            MyDb.unlock();
        }
    
        @TransCallback(callType=TransCBType.PREPARE)
        public void prepare(DpTrans trans) throws DpCallbackException {
            Item i;
            ConfInt32 kv;
            for (Iterator<DpAccumulate> it = trans.accumulated();
                 it.hasNext(); ) {
                DpAccumulate ack= it.next();
                // check op
                switch (ack.getOperation()) {
                case DpAccumulate.SET_ELEM:
                    kv = (ConfInt32)  ((ConfKey) ack.getKP()[1]).elementAt(0);
                    if ((i = MyDb.findItem( kv.intValue())) == null)
                        break;
                    // check leaf tag
                    ConfTag leaf = (ConfTag) ack.getKP()[0];
                    switch (leaf.getTagHash()) {
                    case work._title:
                        i.title = ack.getValue().toString();
                        break;
                    case work._responsible:
                        i.responsible = ack.getValue().toString();
                        break;
                    case work._comment:
                        i.comment = ack.getValue().toString();
                        break;
                    }
                    break;
                case DpAccumulate.CREATE:
                    kv = (ConfInt32)  ((ConfKey) ack.getKP()[0]).elementAt(0);
                    MyDb.newItem(new Item(kv.intValue()));
                    break;
                case DpAccumulate.REMOVE:
                    kv = (ConfInt32)  ((ConfKey) ack.getKP()[0]).elementAt(0);
                    MyDb.removeItem(kv.intValue());
                    break;
                }
            }
            try {
                MyDb.save("running.prep");
            } catch (Exception e) {
                throw
                  new DpCallbackException("failed to save file: running.prep",
                                          e);
            }
        }
    
        @TransCallback(callType=TransCBType.ABORT)
        public void abort(DpTrans trans) throws DpCallbackException {
            MyDb.restore("running.DB");
            MyDb.unlink("running.prep");
        }
    
        @TransCallback(callType=TransCBType.COMMIT)
        public void commit(DpTrans trans) throws DpCallbackException {
            try {
                MyDb.rename("running.prep","running.DB");
            } catch (DpCallbackException e) {
                throw new DpCallbackException("commit failed");
            }
        }
    
        @TransCallback(callType=TransCBType.FINISH)
        public void finish(DpTrans trans) throws DpCallbackException {
            ;
        }
    }
    uses ncs:service-data;
    ncs:servicepoint vlanspnt;
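These two statements are normally placed inside the service's list definition in the service YANG model. A minimal sketch of the surrounding model, assuming an illustrative vlan service list (the list and leaf names are made up for illustration; only the servicepoint name vlanspnt comes from the snippet above):

    list vlan {
      key name;

      leaf name {
        type string;
      }

      // Brings in the common service leaves and actions
      // (re-deploy, check-sync, and so on).
      uses ncs:service-data;
      // Binds instances of this list to the service code
      // registered on the "vlanspnt" servicepoint.
      ncs:servicepoint vlanspnt;
    }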
    
    tailf:action self-test {
      tailf:info "Perform self-test of the service";
      tailf:actionpoint vlanselftest;
      output {
        leaf success {
          type boolean;
        }
        leaf message {
          type string;
          description
            "Free format message.";
        }
      }
    }
    /**
     * Init method for selftest action
     */
    @ActionCallback(callPoint="l3vpn-self-test",
                    callType=ActionCBType.INIT)
    public void init(DpActionTrans trans) throws DpCallbackException {
    }
    
    /**
     * Selftest action implementation for service
     */
    @ActionCallback(callPoint="l3vpn-self-test", callType=ActionCBType.ACTION)
    public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
                                   ConfObject[] kp, ConfXMLParam[] params)
    throws DpCallbackException {
        try {
            // Refer to the service yang model prefix
            String nsPrefix = "l3vpn";
            // Get the service instance key
            String str = ((ConfKey)kp[0]).toString();
    
            return new ConfXMLParam[] {
                  new ConfXMLParamValue(nsPrefix, "success", new ConfBool(true)),
                  new ConfXMLParamValue(nsPrefix, "message", new ConfBuf(str))};
        } catch (Exception e) {
            throw new DpCallbackException("self-test failed", e);
        }
    }
}
    Example: Attach Maapi to the Current Transaction
    public class SimpleValidator implements DpTransValidateCallback {
      ...
      @TransValidateCallback(callType=TransValidateCBType.INIT)
      public void init(DpTrans trans) throws DpCallbackException {
        try {
          th = trans.thandle;
          maapi.attach(th, new MyNamespace().hash(), trans.uinfo.usid);
          ..
        } catch(Exception e) {
          throw new DpCallbackException("failed to attach via maapi: "+
                                        e.getMessage());
        }
      }
    }
    Example: NSO Module
    module tailf-ncs {
      namespace "http://tail-f.com/ns/ncs";
      ...
    }
    Example: NSO NavuContainer Instance
        .....
          NavuContext context = new NavuContext(maapi);
          context.startRunningTrans(Conf.MODE_READ);
          // This will be the base container "/"
          NavuContainer base = new NavuContainer(context);
    
          // This will be the ncs root container "/ncs"
          NavuContainer root = base.container(new Ncs().hash());
          .....
          // This method finishes the started read transaction and
          // clears the context from this transaction.
          context.finishClearTrans();
    Example: NSO List Element
    submodule tailf-ncs-devices {
      ...
      container devices {
        .....
    
          list device {
    
            key name;
    
            leaf name {
              type string;
            }
            ....
          }
        }
    
        .......
      }
    }
    Example: NAVU List Direct Element Access
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
        NavuContainer dev = ncs.container("devices").
                                 list("device").
                             elem(key);
    
        NavuListEntry devEntry = (NavuListEntry)dev;
        .....
        context.finishClearTrans();
    Example: NAVU List Element Iterating
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
        NavuList listOfDevs = ncs.container("devices").
                                 list("device");
    
        for (NavuContainer dev: listOfDevs.elements()) {
            .....
        }
        .....
        context.finishClearTrans();
    Example: NAVU Leaf Access
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
    
        for (NavuNode node: ncs.container("devices").select("dev.*/.*")) {
            NavuContainer dev = (NavuContainer)node;
            .....
        }
        .....
        context.finishClearTrans();
    Example: NAVU Leaf Access
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
    
        for (NavuNode node: ncs.container("devices").xPathSelect("device/*")) {
            NavuContainer devs = (NavuContainer)node;
            .....
        }
        .....
        context.finishClearTrans();
    Example: NSO Leaf
    module tailf-ncs {
      namespace "http://tail-f.com/ns/ncs";
      ...
      container ncs {
        .....
    
          list service {
    
            key object-id;
    
            leaf object-id {
              type string;
            }
            ....
    
            leaf reference {
              type string;
            }
            ....
    
          }
        }
    
        .......
      }
    }
    Example: NAVU List Element Iterating
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
    
        for (NavuNode node: ncs.select("sm/ser.*/.*")) {
            NavuContainer rfs = (NavuContainer)node;
            if (rfs.leaf(Ncs._description_).value()==null) {
                /*
                 * Setting dummy value.
                 */
                rfs.leaf(Ncs._description_).set(new ConfBuf("Dummy value"));
            }
        }
        .....
        context.finishClearTrans();
    Example: YANG Action
    module interfaces {
      namespace "http://router.com/interfaces";
      prefix i;
      .....
    
      list interface {
        key name;
        max-elements 64;
    
        tailf:action ping-test {
          description "ping a machine ";
          tailf:exec "/tmp/mpls-ping-test.sh" {
            tailf:args "-c $(context) -p $(path)";
          }
    
          input {
            leaf ttl {
                type int8;
            }
          }
    
          output {
            container rcon {
              leaf result {
                type string;
              }
              leaf ip {
                type inet:ipv4-address;
              }
              leaf ival {
                type int8;
              }
            }
          }
        }
    
       .....
    
      }
    
      .....
    }
    Example: NAVU Action Execution (1)
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
    
        /*
         * Execute ping on all devices with the interface module.
         */
        for (NavuNode node: ncs.container(Ncs._devices_).
                       select("device/.*/config/interface/.*")) {
        NavuContainer intf = (NavuContainer)node;

        NavuAction ping = intf.action(interfaces.i_ping_test_);

        /*
         * Execute action.
         */
        ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
                new ConfXMLParamValue(new interfaces().hash(),
                                      interfaces._ttl,
                                      new ConfInt64(64))});

        // or we could execute it with an XML string
        result = ping.call("<if:ttl>64</if:ttl>");
            /*
             * Output the result of the action.
             */
             System.out.println("result_ip: "+
             ((ConfXMLParamValue)result[1]).getValue().toString());
    
             System.out.println("result_ival:" +
             ((ConfXMLParamValue)result[2]).getValue().toString());
        }
        .....
        context.finishClearTrans();
    Example: NAVU Action Execution (2)
        .....
        NavuContext context = new NavuContext(maapi);
        context.startRunningTrans(Conf.MODE_READ);
    
        NavuContainer base = new NavuContainer(context);
        NavuContainer ncs = base.container(new Ncs().hash());
    
        /*
         * Execute ping on all devices with the interface module.
         */
        for (NavuNode node: ncs.container(Ncs._devices_).
                       xPathSelect("device/config/interface")) {
        NavuContainer intf = (NavuContainer)node;

        NavuAction ping = intf.action(interfaces.i_ping_test_);

        /*
         * Execute action.
         */
        ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
                new ConfXMLParamValue(new interfaces().hash(),
                                      interfaces._ttl,
                                      new ConfInt64(64))});

        // or we could execute it with an XML string
        result = ping.call("<if:ttl>64</if:ttl>");
            /*
             * Output the result of the action.
             */
             System.out.println("result_ip: "+
             ((ConfXMLParamValue)result[1]).getValue().toString());
    
             System.out.println("result_ival:" +
             ((ConfXMLParamValue)result[2]).getValue().toString());
        }
        .....
        context.finishClearTrans();
        // Set up a CDB socket
        Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
        Cdb cdb = new Cdb("my-alarm-source-socket", socket);
    
        // Get and start alarm source - this must only be done once per JVM
        AlarmSourceCentral source =
            AlarmSourceCentral.getAlarmSource(10000, cdb);
        source.start();
            AlarmSource mySource = new AlarmSource();
            try {
                mySource.startListening();
            // Get an alarm.
                Alarm alarm = mySource.takeAlarm();
    
                while (alarm != null){
                    System.out.println(alarm);
    
                    for (Attribute attr: alarm.getCustomAttributes()){
                        System.out.println(attr);
                    }
    
                    alarm = mySource.takeAlarm();
                }
    
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                mySource.stopListening();
            }
            //
            // Maapi socket used to write alarms directly.
            //
            Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
            Maapi maapi = new Maapi(socket);
            maapi.startUserSession("system", InetAddress.getByName(host),
                                   "system", new String[] {},
                                   MaapiUserSessionFlag.PROTO_TCP);
    
        // Writes alarms directly over the supplied Maapi socket:
        AlarmSink sink = new AlarmSink(maapi);

        // Alternatively, when an AlarmSinkCentral has been started in the
        // JVM, the no-argument constructor submits alarms via the central:
        AlarmSink sink = new AlarmSink();
           //
       // You will need a Maapi socket to write your alarms.
           //
           Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
           Maapi maapi = new Maapi(socket);
           maapi.startUserSession("system", InetAddress.getByName(host),
                                  "system", new String[] {},
                                  MaapiUserSessionFlag.PROTO_TCP);
    
           AlarmSinkCentral sinkCentral = AlarmSinkCentral.getAlarmSink(1000, maapi);
           sinkCentral.start();
        ArrayList<AlarmId> idList = new ArrayList<AlarmId>();
    
        ConfIdentityRef alarmType =
            new ConfIdentityRef(NcsAlarms.hash,
                                   NcsAlarms._ncs_dev_manager_alarm);
    
        ManagedObject managedObject1 =
            new ManagedObject("/ncs:devices/device{device0}/config/root1");
        ManagedObject managedObject2 =
            new ManagedObject("/ncs:devices/device{device0}/config/root2");
    
        idList.add(new AlarmId(new ManagedDevice("device0"),
                               alarmType,
                               managedObject1));
        idList.add(new AlarmId(new ManagedDevice("device0"),
                               alarmType,
                               managedObject2));
    
        ManagedObject managedObject3 =
            new ManagedObject("/ncs:devices/device{device0}/config/root3");
    
    // "myAlarm" below refers to the generated namespace class
    // for the alarm YANG module, not to the Alarm variable.
    Alarm alarm =
        new Alarm(new ManagedDevice("device0"),
                  managedObject3,
                  alarmType,
                  PerceivedSeverity.WARNING,
                  false,
                  "This is a warning",
                  null,
                  idList,
                  null,
                  ConfDatetime.getConfDatetime(),
                  new AlarmAttribute(myAlarm.hash,
                                     myAlarm._custom_alarm_attribute_,
                                     new ConfBuf("An alarm attribute")),
                  new AlarmAttribute(myAlarm.hash,
                                     myAlarm._custom_status_change_,
                                     new ConfBuf("A status change")));

     sink.submitAlarm(alarm);
        Socket sock = new Socket("localhost", Conf.NCS_PORT);
    EnumSet<NotificationType> notifSet =
        EnumSet.of(NotificationType.NOTIF_COMMIT_SIMPLE,
                   NotificationType.NOTIF_AUDIT);
        Notif notif = new Notif(sock, notifSet);
    
        while (true) {
            Notification n = notif.read();
    
            if (n instanceof CommitNotification) {
                // handle NOTIF_COMMIT_SIMPLE case
                .....
            } else if (n instanceof AuditNotification) {
                // handle NOTIF_AUDIT case
                .....
            }
        }
    Example: HA Cluster Setup
      ....
    
      Socket s0 = new Socket("host1", Conf.NCS_PORT);
      Socket s1 = new Socket("host2", Conf.NCS_PORT);
      Socket s2 = new Socket("host3", Conf.NCS_PORT);
    
      Ha ha0 = new Ha(s0, "clus0");
      Ha ha1 = new Ha(s1, "clus0");
      Ha ha2 = new Ha(s2, "clus0");
    
      ConfHaNode primary =
          new ConfHaNode(new ConfBuf("node0"),
                         new ConfIPv4(InetAddress.getByName("localhost")));
    
    
      ha0.bePrimary(primary.nodeid);
    
      ha1.beSecondary(new ConfBuf("node1"), primary, true);
    
      ha2.beSecondary(new ConfBuf("node2"), primary, true);
    
      HaStatus status0 = ha0.status();
      HaStatus status1 = ha1.status();
      HaStatus status2 = ha2.status();
    
      ....
        ConfPath keyPath = new ConfPath(new ConfObject[] {
                                        new ConfTag("ncs","devices"),
                                        new ConfTag("ncs","device"),
                                        new ConfKey(new ConfObject[] {
                                                    new ConfBuf("d1")}),
                                        new ConfTag("iosxr","interface"),
                                        new ConfTag("iosxr","Loopback"),
                                        new ConfKey(new ConfObject[] {
                                                    new ConfBuf("lo0")})
                                        });
        // either this way
    ConfPath key1 = new ConfPath("/ncs:devices/device{d1}"+
                                 "/iosxr:interface/Loopback{lo0}");
        // or this way
        ConfPath key2 = new ConfPath("/ncs:devices/device{%s}"+
                                     "/iosxr:interface/Loopback{%s}",
                                     new ConfBuf("d1"),
                                     new ConfBuf("lo0"));
        <servers>
          <server>
            <name>www</name>
          </server>
        </servers>
    ConfXMLParam[] tree = new ConfXMLParam[] {
        new ConfXMLParamStart(ns.hash(), ns._servers),
        new ConfXMLParamStart(ns.hash(), ns._server),
        new ConfXMLParamValue(ns.hash(), ns._name, new ConfBuf("www")),
        new ConfXMLParamStop(ns.hash(), ns._server),
        new ConfXMLParamStop(ns.hash(), ns._servers)};
    ncsc --java-disable-prefix --java-package \
           com.example.app.namespaces \
           --emit-java \
           java/src/com/example/app/namespaces/foo.java \
           foo.fxs
        Socket s = new Socket("localhost", Conf.NCS_PORT);
        Maapi maapi = new Maapi(s);
        maapi.loadSchemas();
    
        ArrayList<ConfNamespace> nsList = maapi.getAutoNsList();
        ConfPath key1 = new ConfPath("/ncs:devices/device{d1}/iosxr:interface");
        Socket s = new Socket("localhost", Conf.NCS_PORT);
        Maapi maapi = new Maapi(s);
        int th =  maapi.startTrans(Conf.DB_CANDIDATE,
                                   Conf.MODE_READ_WRITE);
    
        // Because we will use keypaths without prefixes
        maapi.setNamespace(th, new smp().uri());
    
    
        ConfValue val = maapi.getElem(th, "/devices/device{d1}/address");
      container bs {
        presence "";
        tailf:callpoint bcp;
        list b {
          key name;
          max-elements 64;
          leaf name {
            type string;
          }
          container opt {
            presence "";
            leaf ii {
              type int32;
            }
          }
          leaf foo {
            type empty;
          }
        }
      }
    Commit Queue