Learn about different ways to install and deploy NSO.
By installation
Choose this way if you want to install NSO on a system. Before proceeding with the installation, decide on the install type. The installation of NSO comes in two variants: Local Install and System Install.
By using Cisco-provided container images
Choose this way if you want to run NSO in a container, such as Docker. Visit the link below for more information.
Supporting Information
If you are evaluating NSO, you should have a designated support contact. If you have an NSO support agreement, please use the support channels specified in the agreement. In either case, do not hesitate to reach out to us if you have questions or feedback.
System Install
System Install is used when installing NSO for a centralized, always-on, production-grade, system-wide deployment. It is configured as a system daemon that starts and stops with the underlying operating system. The default admin and operator users are not included, and the file structure is more distributed.
Perform actions and activities possible after installing NSO.
The following actions are possible after installing NSO.
Enable your NSO instance for development purposes.
Applies to Local Install
If you intend to use your NSO instance for development purposes, enable the development mode using the command license smart development enable.
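From the NSO CLI:

```
admin@ncs# license smart development enable
```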
Remove Local Install.
Applies to Local Install.
To uninstall Local Install, simply delete the Install Directory.
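For example, if the Install Directory is ~/nso-6.1 (the path is illustrative; use your actual Install Directory):

```
$ rm -rf ~/nso-6.1
```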
Deep-dive into advanced NSO concepts.
Get started with NSO CLI.
Perform usage operations on NSO.
Get started with NSO automation by understanding fundamental concepts.
Key concepts in NSO development.
Perform system management tasks on your NSO deployment.
The web-based management interface has been improved to streamline user experience with a modernized look and feel. Also, usability improvements have been made in certain areas, such as device management.
Documentation Updates:
Expanded and improved the Web UI documentation to cover usage instructions.
Get started with the Cisco Crosswork NSO documentation guides.
Use this page to navigate your way through the NSO documentation and access the resources most relevant to your role.
An NSO deployment typically consists of the following roles:
For users new to NSO or wanting to explore it further.
For users working in a production-wide NSO deployment.
Remove System Install.
Applies to System Install.
NSO can be uninstalled with the ncs-uninstall option only if it was installed with the --system-install option. Either part of the static files or the full installation can be removed using ncs-uninstall. Ensure that NSO is stopped before uninstalling.
Executing the above command removes the Installation Directory /opt/ncs (including symbolic links), the Configuration Directory /etc/ncs, and the Run Directory /var/opt/ncs.
View currently loaded packages.
NSO Packages contain data models and code for a specific function. It might be a NED for a specific device, a service application like MPLS VPN, a WebUI customization package, etc. Packages can be added, removed, and upgraded in run-time.
The currently loaded packages can be viewed with the following command:
Thus the above command shows that NSO currently has only one package loaded, the NED package for Cisco IOS. The output includes the name and version of the package, the minimum required NSO version, the Java components included, package build details, and finally the operational status of the package. The operational status is of particular importance - if it is anything other than up, it indicates that there was a problem with the loading or the initialization of the package. In this case, an item error-info may also be present, giving additional information about the problem. To show only the operational status for all loaded packages, this command can be used:
Understand different types of northbound APIs and their working mechanism.
This section describes the various northbound programmatic APIs in NSO: NETCONF, REST, and SNMP. These APIs are used by external systems that need to communicate with NSO, such as portals, OSS, or BSS systems.
NSO has two northbound interfaces intended for human usage, the CLI and the WebUI. These interfaces are described in and respectively.
There are also programmatic Java, Python, and Erlang APIs intended to be used by applications integrated with NSO itself. See for more information about these APIs.
There are two APIs to choose from when an external system should communicate with NSO:
System Management
Configure & manage your NSO deployment.
Package Management
Learn about NSO packages and how to use them.
High Availability
Set up multiple nodes in a highly-available (HA) setup.
AAA Infrastructure
Set up user authentication and authorization.
NED Administration
Administer and manage Cisco-provided NEDs.
Locks
Understand how transaction locks work.
Compaction
Set up CDB compaction to reduce the size of logs.
IPC Ports
Learn how client libraries connect to NSO.
Service Manager Restart
Configure the timeout period of Service Manager.
Security Issues
Run NSO tasks that require root privileges.
Run NSO as Non-root User
Start NSO as non-root user & bind ports below 1024.
IPv6 on Northbound
Use IPv6 on Northbound NSO interfaces.
LSA
Learn about Layered Service Architecture.
NETCONF
REST
Which one to choose is mostly a subjective matter. REST may, at first sight, appear to be simpler to use, but is not as feature-rich as NETCONF. By using a NETCONF client library such as the open source Java library JNC or Python library ncclient, the integration task is significantly reduced.
Both NETCONF and REST provide functions for manipulating the configuration (including creating services) and reading the operational state from NSO. NETCONF provides more powerful filtering functions than REST.
NETCONF and SNMP can be used to receive alarms as notifications from NSO. NETCONF provides a reliable mechanism to receive notifications over SSH, whereas SNMP notifications are sent over UDP.
Regardless of the protocol you choose for integration, keep in mind all of them communicate with the NSO server over network sockets, which may be unreliable. Additionally, write transactions in NSO can fail if they conflict with another, concurrent transaction. As a best practice, the client implementation should be able to gracefully handle such errors and be prepared to retry requests. For details on the NSO concurrency, refer to the NSO Concurrency Model.
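As an illustration of that retry practice, here is a minimal, protocol-agnostic sketch. The function names are ours, not an NSO or client-library API; `request` stands in for any NETCONF or REST call that may fail transiently:

```python
import time

def with_retry(request, attempts=3, delay=0.5):
    """Call request(); retry transient failures such as transaction conflicts."""
    last_err = None
    for attempt in range(attempts):
        try:
            return request()
        except Exception as err:  # real code should catch only the client's conflict/transport errors
            last_err = err
            if attempt < attempts - 1:
                time.sleep(delay * (attempt + 1))  # simple linear backoff between retries
    raise last_err
```

A real client would narrow the except clause to the specific error classes its library raises for conflicts and socket failures, rather than all exceptions.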
Administrators
Personnel who deploy & manage an NSO deployment.
Operators
Personnel who use & operate an NSO deployment.
Developers
Personnel who develop NSO services, packages, & more.
Overview of NSO APIs.
Advanced-level NSO development.
Develop services and applications in NSO.
Extend product functionality to add custom service code or expose data through data provider mechanism.
admin@ncs# show packages
packages package cisco-ios
package-version 3.0
description "NED package for Cisco IOS"
ncs-min-version [ 3.0.2 ]
directory ./state/packages-in-use/1/cisco-ios
component upgrade-ned-id
upgrade java-class-name com.tailf.packages.ned.ios.UpgradeNedId
component cisco-ios
ned cli ned-id cisco-ios
ned cli java-class-name com.tailf.packages.ned.ios.IOSNedCli
ned device vendor Cisco
NAME VALUE
---------------------
show-tag interface
build-info date "2015-01-29 23:40:12"
build-info file ncs-3.4_HEAD-cisco-ios-3.0.tar.gz
build-info arch linux.x86_64
build-info java "compiled Java class data, version 50.0 (Java 1.6)"
build-info package name cisco-ios
build-info package version 3.0
build-info package ref 3.0
build-info package sha1 a8f1329
build-info ncs version 3.4_HEAD
build-info ncs sha1 81a1e4c
build-info dev-support version 0.99
build-info dev-support branch e4d3fa7
build-info dev-support sha1 e4d3fa7
oper-status up
admin@ncs# show packages package * oper-status
packages package cisco-ios
oper-status up
Besides the directories listed above, uninstalling also removes the Log Directory /var/log/ncs, the systemd service file /etc/systemd/system/ncs.service, the systemd environment file /etc/ncs/ncs.systemd.conf, and the files installed under /etc/profile.d. To make sure that no license entitlements are consumed after you have uninstalled NSO, be sure to perform the deregister command in the CLI:
# ncs-uninstall --all
admin@ncs# license smart deregister
Alter your examples to work with System Install.
Applies to System Install.
Since all the NSO examples and README steps that come with the installer are primarily aimed at Local Install, you need to modify them to run them on a System Install.
Depending on the example, this may require smaller or larger modifications to work with the System Install structure.
For example, to port the examples.ncs/development-guide/nano-services/basic-vrouter example to the System Install structure:
Make the following changes to the basic-vrouter/ncs.conf file:
Copy the Local Install $NCS_DIR/var/ncs/cdb/aaa_init.xml file to the basic-vrouter/ folder.
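For example:

```
$ cp $NCS_DIR/var/ncs/cdb/aaa_init.xml basic-vrouter/
```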
Other, more complex examples may require more ncs.conf file changes or require a copy of the Local Install default $NCS_DIR/etc/ncs/ncs.conf file together with the modification described above, or require the Local Install tool $NCS_DIR/bin/ncs-setup to be installed, as the ncs-setup command is usually not useful with a System Install. See for more information.
Connect client libraries to NSO with IPC Ports.
Client libraries connect to NSO using TCP. We tell NSO which address to use for these connections through the /ncs-config/ncs-ipc-address/ip (default value 127.0.0.1) and /ncs-config/ncs-ipc-address/port (default value 4569) elements in ncs.conf. It is possible to change these values, but it requires a number of steps to also configure the clients. Also, there are security implications, see Security Issues.
Some clients read the environment variables NCS_IPC_ADDR and NCS_IPC_PORT to determine if something other than the default is to be used, others might need to be recompiled. This is a list of clients that communicate with NSO, and what needs to be done when ncs-ipc-address is changed.
To run more than one instance of NSO on the same host (which can be useful in development scenarios), each instance needs its own IPC port. For each instance, set /ncs-config/ncs-ipc-address/port in ncs.conf to something different.
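For example, a second instance could use something like the following in its ncs.conf (the port value is arbitrary, as long as it is unique per instance on the host):

```
<ncs-ipc-address>
  <ip>127.0.0.1</ip>
  <port>4570</port>
</ncs-ipc-address>
```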
There are two more instances of ports that will have to be modified, NETCONF and CLI over SSH. The netconf (SSH and TCP) ports that NSO listens to by default are 2022 and 2023 respectively. Modify /ncs-config/netconf/transport/ssh and /ncs-config/netconf/transport/tcp, either by disabling them or changing the ports they listen to. The CLI over SSH by default listens to 2024; modify /ncs-config/cli/ssh either by disabling or changing the default port.
By default, the clients connecting to the IPC port are considered trusted, i.e. there is no authentication required, and we rely on the use of 127.0.0.1 for /ncs-config/ncs-ipc-address/ip to prevent remote access. In case this is not sufficient, it is possible to restrict access to the IPC port by configuring an access check.
The access check is enabled by setting the ncs.conf element /ncs-config/ncs-ipc-access-check/enabled to true, and specifying a filename for /ncs-config/ncs-ipc-access-check/filename. The file should contain a shared secret, i.e., a random character string. Clients connecting to the IPC port will then be required to prove that they have knowledge of the secret through a challenge handshake before they are allowed access to the NSO functions provided via the IPC port.
To provide the secret to the client libraries and inform them that they need to use the access check handshake, we have to set the environment variable NCS_IPC_ACCESS_FILE to the full pathname of the file containing the secret. This is sufficient for all the clients mentioned above, i.e., there is no need to change the application code to support or enable this check.
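A sketch of how the secret file can be created and handed to the clients (the file path and secret length are examples, not NSO requirements):

```shell
# Create a shared-secret file for the IPC access check and point clients at it.
head -c 32 /dev/urandom | base64 > ./ipc_secret
chmod 600 ./ipc_secret
export NCS_IPC_ACCESS_FILE=$PWD/ipc_secret
```

The same file must also be referenced by /ncs-config/ncs-ipc-access-check/filename in ncs.conf.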
Run NSO as non-root user.
A common misfeature found on UNIX operating systems is the restriction that only root can bind to ports below 1024. Many a dollar has been wasted on workarounds, and the results are often security holes.
Both FreeBSD and Solaris have elegant configuration options to turn this feature off. On FreeBSD:
# sysctl net.inet.ip.portrange.reservedhigh=0
The above is best added to your /etc/sysctl.conf.
Similarly, on Solaris, we can just configure this. Assuming we want to run NSO under a non-root user ncs, we can easily do that by granting the specific right to bind privileged ports below 1024 (and only that) to the ncs user using:
# /usr/sbin/usermod -K defaultpriv=basic,net_privaddr ncs
And check that we get what we want through:
# grep ncs /etc/user_attr
ncs::::type=normal;defaultpriv=basic,net_privaddr
Linux doesn't have anything like the above, but there are a couple of options. The best is to use an auxiliary program like authbind (http://packages.debian.org/stable/authbind) or privbind (http://sourceforge.net/projects/privbind/).
These programs are run by root. To start NSO under, e.g., privbind, we can do:
The above command starts NSO as the user ncs and binds to ports below 1024.
Operate NSO using the Web UI.
The NSO Web UI provides an intuitive northbound interface to your NSO deployment. The UI consists of individual views, each with a different purpose, such as device management, service management, commit handling, etc.
The main components of the Web UI are shown in the figure below.
The UI works by auto-rendering the underlying device and service models. This gives the benefit that the Web UI is immediately updated when new devices or services are added to the system. For example, say you have added support for a new device vendor. Then, without any programming requirements, the NSO Web UI provides the capability to configure those devices.
All modern web browsers are supported and no plug-ins are needed. The interface itself is a JavaScript Client.
By default, the Web UI is accessible on port 8080 of the NSO server for an NSO Local Install and port 8888 for a System Install. The port can be changed in the ncs.conf file. A user must authenticate before accessing the (web) UI.
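For example, the port is controlled by the webui transport settings in ncs.conf (a sketch; values shown are illustrative):

```
<webui>
  <transport>
    <tcp>
      <enabled>true</enabled>
      <ip>0.0.0.0</ip>
      <port>8080</port>
    </tcp>
  </transport>
</webui>
```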
Log in to the NSO Web UI by using the username and password provided by your administrator. SSO SAML login is available if set up by your administrator. If applicable, use the SSO option to log in.
Log out by clicking your username on the top-right corner and choosing Logout.
Access the help options by clicking the help options icon in the UI banner. The following options are available:
Online documentation: Access the Web UI's online help.
Config editor help: Access help on using the configuration editor.
Manage hidden groups: Administer hidden groups, e.g. for debugging. Read more about hide groups in .
NSO version: Information about the version of NSO you are running.
In the Web UI, supplementary help text, whenever applicable, is available on the configuration fields and can be accessed by clicking the info icons.
The Commit Manager is accessible at all times from the UI banner. Working with the Commit Manager is further described in .
Create a new NSO instance for Local Install.
Applies to Local Install.
One of the scripts included with an NSO installation is ncs-setup, which makes it very easy to create instances of NSO from a Local Install. You can look at the --help output or the Manual Pages for more details, but the two options we need to know are:
--dest defines the directory where the NSO instance files (logs, database files, and the NSO configuration file) are created.
Remote commands via the ncs command
Remote commands, such as ncs --reload, check the environment variables NCS_IPC_ADDR and NCS_IPC_PORT.
CDB and MAAPI clients
The address supplied to Cdb.connect() and Maapi.connect() must be changed.
Data provider API clients
The address supplied to Dp constructor socket must be changed.
ncs_cli
The Command Line Interface (CLI) client, ncs_cli, checks the environment variables NCS_IPC_ADDR and NCS_IPC_PORT. Alternatively the port can be provided on the command line (using the -P option).
Notification API clients
The new address must be supplied to the socket for the Notif constructor.
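For example, to point ncs_cli at a non-default IPC port (the port number is illustrative), either form works:

```
$ NCS_IPC_PORT=4570 ncs_cli -u admin
$ ncs_cli -P 4570 -u admin
```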

<enabled>false</enabled>
<ip>0.0.0.0</ip>
<port>8888</port>
-<key-file>${NCS_DIR}/etc/ncs/ssl/cert/host.key</key-file>
-<cert-file>${NCS_DIR}/etc/ncs/ssl/cert/host.cert</cert-file>
+<key-file>${NCS_CONFIG_DIR}/etc/ncs/ssl/cert/host.key</key-file>
+<cert-file>${NCS_CONFIG_DIR}/etc/ncs/ssl/cert/host.cert</cert-file>
</ssl>
</transport>
# privbind -u ncs /opt/ncs/current/bin/ncs -c /etc/ncs.conf
--package defines the NEDs that you want to have installed. You can specify this option multiple times.
To create an NSO instance:
Run the command to set up an NSO instance in the current directory with the IOS, NX-OS, IOS-XR and ASA NEDs. You only need one NED per platform that you want NSO to manage, even if you may have multiple versions in your installer neds directory.
Use the name of the NED folder in ${NCS_DIR}/packages/neds for the latest NED version that you have installed for the target platform. Use the tab key to complete the path, after you start typing (alternatively, copy and paste). Verify that the NED versions below match what is currently on the sandbox to avoid a syntax error. See the example below.
Check the nso-instance directory. Notice that several new files and folders are created.
Following is a description of the important files and folders:
ncs.conf is the NSO application configuration file, and is used to customize aspects of the NSO instance (for example, to change ports, enable/disable features, and so on.) See in Manual Pages for information.
packages/ is the directory that has symlinks to the NEDs that we referenced in the --package arguments at the time of setup. See in Development for more information.
logs/ is the directory that contains all the logs from NSO. This directory is useful for troubleshooting.
Start the NSO instance by navigating to the nso-instance directory and typing the ncs command. You must be situated in the nso-instance directory each time you want to start or stop NSO. If you have multiple instances, you need to navigate to each one and use the ncs command to start or stop each one.
Verify that NSO is running by using the ncs --status | grep status command.
Add Netsim or lab devices using the command ncs-netsim -h.
Traverse and edit NSO configuration using the YANG model.
The Configuration editor view is where you view and manage aspects of your NSO deployment using the underlying YANG model, for example, to configure devices, services, packages, etc.
The Configuration Editor's home page shows all the currently loaded YANG modules in NSO, i.e., the database schema. In this view, you can also browse and manage the configuration defined by the YANG modules.
All NSO configuration is performed in this view. You can edit the configuration data defined by the YANG model directly in this view, or be directed to this view from elsewhere in the Web UI.
An important component of Configuration Editor is the Configuration Navigator which you can use to traverse and edit the configuration defined by the YANG model in a hierarchical tree-like fashion. This provides an efficient way to browse and configure aspects of NSO. Let's say, for example, you want to access all the devices in your deployment and choose a specific one to view and configure. In the Configuration Editor, you can do this by typing in ncs:devices in the navigator, and then choosing further guided options (automatically suggested by the Web UI), e.g., ncs:devices/device/ce0/config/....
As you navigate through the Web UI, the Configuration Navigator automatically displays and updates the path you are located at.
To exit back to the home page from another path, click the home button.
Click the up arrow to go back one step to the parent node.
To fetch information about a property/component, click the info button.
Use the TAB key to complete the config path.
When accessing an item (e.g., a device, service, etc.) using the Configuration Editor, the following tabs are visible:
Edit Config tab, to configure the item's configuration.
Config tab, to view configured items.
Operdata tab, to view the operational data relevant to the item (e.g., last sync time, last modified time, etc).
Actions tab, to apply an action to the item with specified options/parameters.
Depending on the selection of the tabs mentioned above, you may see four additional tabs in the Configuration editor view:
Widgets tab, to view the data defined by YANG modules in different formats.
None tab.
Containers tab, to view container-specific information from the YANG model.
List tab, to view list-specific information from the YANG model.
Run and interact with practice examples provided with the NSO installer.
Applies to Local Install.
This section provides an overview of how to run the examples provided with the NSO installer. By working through the examples, the reader should get a good overview of the various aspects of NSO and hands-on experience from interacting with it.
Convert your current Local Install setup to a System Install.
Applies to Local Install.
If you already have a Local Install with existing data that you would like to convert into a System Install, the following procedure allows you to do so. However, a reverse migration from System to Local Install is not supported.
Manage purchase and licensing of Cisco software.
Cisco Smart Licensing is a cloud-based approach to licensing that simplifies the purchase, deployment, and management of Cisco software assets. Entitlements are purchased through a Cisco account via Cisco Commerce Workspace (CCW) and are immediately deposited into a Smart Account for usage. This eliminates the need to install license files on every device. Products that are smart-enabled communicate directly to Cisco to report consumption.
Cisco Smart Software Manager (CSSM) enables the management of software licenses and Smart Account from a single portal. The interface allows you to activate your product, manage entitlements, and renew and upgrade software.
A functioning Smart Account is required to complete the registration process. For detailed information about CSSM, see .
Create and manage service deployment.
The Service manager view is where you create, deploy, and manage services in your NSO deployment. Available services are displayed in this view by default.
Perform handling of ambiguous device models.
When new NED versions with diverging XML namespaces are introduced, adaptations might be needed in the services for these new NEDs. But not necessarily; it depends on where in the specific NED models the ambiguities reside. Existing services might not refer to these parts of the model and in that case, they do not need any adaptations.
Finding out if and where services need adaptations can be non-trivial. An important exception is template services which check and point out ambiguities at load time (NSO startup). In Java or Python code this is harder and essentially falls back to code reviews and testing.
The changes in service code to handle ambiguities are straightforward but different for templates and code.
In templates, there are new processing instructions, if-ned-id and elif-ned-id.
Start user-provided Erlang applications.
NSO is capable of starting user-provided Erlang applications embedded in the same Erlang VM as NSO.
The Erlang code is packaged into applications which are automatically started and stopped by NSO if they are located at the proper place. NSO will search all packages for top-level directories called erlang-lib. The structure of such a directory is the same as a standard lib directory in Erlang. The directory may contain multiple Erlang applications. Each one must have a valid .app file. See the Erlang documentation of application and app for more info.
An Erlang package skeleton can be created by making use of the ncs-make-package command:
Multiple applications can be generated by using the option --erlang-application-name NAME
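For example, a skeleton with a named Erlang application might be generated like this (the package and application names are hypothetical; check ncs-make-package --help for the exact skeleton option on your version):

```
$ ncs-make-package --erlang-skeleton --erlang-application-name myapp mypkg
```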
Start and stop the NSO daemon.
Applies to Local Install.
The command ncs -h shows various options when starting NSO. By default, NSO starts in the background without an associated terminal. It is recommended to add NSO to the /etc/init scripts of the deployment hosts. For more information, see the in Manual Pages.
Whenever you start (or reload) the NSO daemon, it reads its configuration from ./ncs.conf, or ${NCS_DIR}/etc/ncs/ncs.conf, or from the file specified with the -c option.
ncs-setup --package ~/nso-6.0/packages/neds/cisco-ios-cli-6.44 \
--package ~/nso-6.0/packages/neds/cisco-nx-cli-5.15 \
--package ~/nso-6.0/packages/neds/cisco-iosxr-cli-7.20 \
--package ~/nso-6.0/packages/neds/cisco-asa-cli-6.8 \
--dest nso-instance
$ ls nso-instance/
logs ncs-cdb ncs.conf packages README.ncs scripts state
$ ls -l nso-instance/packages/
total 0
lrwxrwxrwx 1 user docker 51 Mar 19 12:44 cisco-asa-cli-6.8 ->
/home/user/nso-6.0/packages/neds/cisco-asa-cli-6.8
lrwxrwxrwx 1 user docker 52 Mar 19 12:44 cisco-ios-cli-6.44 ->
/home/user/nso-6.0/packages/neds/cisco-ios-cli-6.44
lrwxrwxrwx 1 user docker 54 Mar 19 12:44 cisco-iosxr-cli-7.20 ->
/home/user/nso-6.0/packages/neds/cisco-iosxr-cli-7.20
lrwxrwxrwx 1 user docker 51 Mar 19 12:44 cisco-nx-cli-5.15 ->
/home/user/nso-6.0/packages/neds/cisco-nx-cli-5.15
$
Basic Operations
Learn NSO's basic command line operations.
NEDs and Adding Devices
Learn about NEDs and how to add devices in NSO.
Manage Network Services
Manage network services and configure life cycle ops.
NSO Device Manager
Explore device management, related ops.
SSH Key Management
Use NSO as an SSH server or a client.
Alarm Manager
Explore NSO alarm management & related ops.
Plug-and-Play Scripting
Use scripting to add new functionality to NSO.
Compliance Reporting
Implement network compliance in NSO.
Listing Packages
View and list NSO packages.
Lifecycle Operations
Manipulate existing services and devices.
Network Simulator
Simulate a network to be managed by NSO.



Restart strategy for the service manager.
The service manager executes in a Java VM outside of NSO. The NcsMux initializes a number of sockets to NSO at startup. These are Maapi sockets and data provider sockets. NSO can choose to close any of these sockets whenever NSO requests the service manager to perform a task, and that task is not finished within the stipulated timeout. If that happens, the service manager must be restarted. The timeout(s) are controlled by several ncs.conf parameters found under /ncs-config/japi.
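A sketch of what those parameters can look like in ncs.conf (the element names under /ncs-config/japi and the values are illustrative; verify the exact parameter names against the ncs.conf manual page for your NSO version):

```
<japi>
  <new-session-timeout>PT30S</new-session-timeout>
  <query-timeout>PT120S</query-timeout>
</japi>
```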
The processing instruction else can be used in conjunction with if-ned-id and elif-ned-id to capture all other NED IDs.
For the nodes in the XML namespace where no ambiguities occur, this process instruction is not necessary.
In Java, the service code must handle the ambiguities by code where the devices' ned-id is tested before setting the nodes and values for the diverging paths.
The ServiceContext class has a new convenience method, getNEDIdByDeviceName which helps retrieve the ned-id from the device name string.
In the Python API, there is also a need to handle ambiguities by checking the ned-id before setting the diverging paths. Use get_ned_id() from ncs.application to resolve NED IDs.
$ ncs
$ ncs --stop
$ ncs -h
...
$ ncs --status | grep status
status: started
db=running id=31 priority=1 path=/ncs:devices/device/live-status-protocol/device-type
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device foreach="{apache-device}">
<name>{current()}</name>
<config>
<?if-ned-id apache-nc-1.0:apache-nc-1.0?>
<vhosts xmlns="urn:apache">
<vhost>
<hostname>{/vhost}</hostname>
<doc-root>/srv/www/{/vhost}</doc-root>
</vhost>
</vhosts>
<?elif-ned-id apache-nc-1.1:apache-nc-1.1?>
<public xmlns="urn:apache">
<vhosts>
<vhost>
<hostname>{/vhost}</hostname>
<aliases>{/vhost}.public</aliases>
<doc-root>/srv/www/{/vhost}</doc-root>
</vhost>
</vhosts>
</public>
<?end?>
</config>
</device>
</devices>
</config-template>
@ServiceCallback(servicePoint="websiteservice",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode root,
Properties opaque)
throws DpCallbackException {
...
NavuLeaf elemName = elem.leaf(Ncs._name_);
NavuContainer md = root.container(Ncs._devices_).
list(Ncs._device_).elem(elemName.toKey());
String ipv4Str = baseIp + ((subnet<<3) + server);
String ipv6Str = "::ff:ff:" + ipv4Str;
String ipStr = ipv4Str;
String nedIdStr =
context.getNEDIdByDeviceName(elemName.valueAsString());
if ("webserver-nc-1.0:webserver-nc-1.0".equals(nedIdStr)) {
ipStr = ipv4Str;
} else if ("webserver2-nc-1.0:webserver2-nc-1.0"
.equals(nedIdStr)) {
ipStr = ipv6Str;
}
md.container(Ncs._config_).
container(webserver.prefix, webserver._wsConfig_).
list(webserver._listener_).
sharedCreate(new String[] {ipStr, ""+8008});
ms.list(lb._backend_).sharedCreate(
new String[]{baseIp + ((subnet<<3) + server++),
""+8008});
...
return opaque;
} catch (Exception e) {
throw new DpCallbackException("Service create failed", e);
}
}
import ncs
from ncs.application import Service
def _get_device(service, name):
dev_path = '/ncs:devices/ncs:device{%s}' % (name, )
return ncs.maagic.cd(service, dev_path)
class ServiceCallbacks(Service):
@Service.create
def cb_create(self, tctx, root, service, proplist):
self.log.info('Service create(service=', service._path, ')')
for name in service.apache_device:
self.create_apache_device(service, name)
template = ncs.template.Template(service)
self.log.info(
'applying web-server-template for device {}'.format(name))
template.apply('web-server-template')
self.log.info(
'applying load-balancer-template for device {}'.format(name))
template.apply('load-balancer-template')
def create_apache_device(self, service, name):
dev = _get_device(service, name)
if 'apache-nc-1.0:apache-nc-1.0' == ncs.application.get_ned_id(dev):
self.create_apache1_device(dev)
elif 'apache-nc-1.1:apache-nc-1.1' == ncs.application.get_ned_id(dev):
self.create_apache2_device(dev)
else:
raise Exception('unknown ned-id {}'.format(ncs.application.get_ned_id(dev)))
def create_apache1_device(self, dev):
self.log.info(
'creating config for apache1 device {}'.format(dev.name))
dev.config.ap__listen_ports.listen_port.create(("*", 8080))
dev.config.ap__clash = dev.name
def create_apache2_device(self, dev):
self.log.info(
'creating config for apache2 device {}'.format(dev.name))
dev.config.ap__system.listen_ports.listen_port.create(("*", 8080))
dev.config.ap__clash = dev.name
Make sure that NSO is installed with a Local Install according to the instructions in Local Install.
Source the ncsrc file in the NSO installation directory to set up a local environment. For example:
Proceed to the example directory:
Follow the instructions in the README files that are located in the example directories.
Every example directory is a complete NSO run-time directory. The README file and the detailed instructions later in this guide show how to generate a simulated network and NSO configuration for running the specific examples. Basically, the following steps are done:
Create a simulated network using the ncs-netsim --create-network command:
This creates 3 Cisco IOS devices called ios0, ios1, and ios2.
Create an NSO run-time environment using the ncs-setup command:
This command uses the --dest option to create local directories for logs, database files, and the NSO configuration file in the current directory (note that . refers to the current directory).
Start NCS netsim:
Start NSO:
It is important to make sure that you stop ncs and ncs-netsim when moving between examples, using the stop option of ncs-netsim and the --stop option of ncs.
Some of the most common mistakes are:
The following procedure assumes that NSO is installed as described in the NSO Local Install process, and will perform an initial System Install of the same NSO version. After following these steps, consult the NSO System Install guide for additional steps that are required for a fully functional System Install.
The procedure also assumes you are using the $HOME/ncs-run folder as the run directory. If this is not the case, modify the following path accordingly.
To migrate to System Install:
Stop the current (local) NSO instance, if it is running.
Take a complete backup of the Runtime Directory for potential disaster recovery.
Change to Super User privileges.
Start the NSO System Install.
If you have multiple versions of NSO installed, verify that the symbolic link in /opt/ncs points to the correct version.
Copy the CDB files containing data to the central location.
Ensure that the /var/opt/ncs/packages directory includes all the necessary packages, appropriate for the NSO version. However, copying the packages directly into this directory could later interfere with the operation of the nct command; it is better to keep only symbolic links in that folder. Instead, copy the existing packages to the /opt/ncs/packages directory, either as directories or as tarball files, and make sure that each package includes the NSO version in its name and is not just a symlink, for example:
Link to these packages in the /var/opt/ncs/packages directory.
The reason for prepending ncs-VERSION to the filename is to allow additional NSO commands, such as nct upgrade and software packages to work properly. These commands need to identify which NSO version a package was compiled for.
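The renaming rule can be sketched in a few lines of Python; versioned_package_name is a hypothetical helper used only to illustrate the convention, not an NSO tool:

```python
import os


def versioned_package_name(nso_version, package_path):
    """Return the package file name prefixed with the NSO version,
    e.g. 'router.tar.gz' -> 'ncs-6.0-router.tar.gz'.

    Hypothetical helper illustrating the ncs-VERSION naming
    convention described above.
    """
    return 'ncs-{}-{}'.format(nso_version, os.path.basename(package_path))


# Commands such as nct upgrade can then read the target NSO version
# straight out of the file name.
print(versioned_package_name('6.0', '/opt/ncs/packages/router.tar.gz'))
```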
Edit the /etc/ncs/ncs.conf configuration file and make the necessary changes. If you wish to use the configuration from Local Install, disable the local authentication, unless you fully understand its security implications.
When starting NSO at boot using systemd, make sure that you set the package reload option from the /etc/ncs/ncs.systemd.conf environment file to true. Or, for example, set NCS_RELOAD_PACKAGES=true before starting NSO if using the ncs command.
Review and complete the steps in NSO System Install, except running the installer, which you have done already. Once completed, you should have a running NSO instance with data from the Local Install.
Remove the package reload option if it was set.
Update log file paths for Java and Python VM through the NSO CLI.
Verify that everything is working correctly.
At this point, you should have a complete copy of the previous Local Install running as a System Install. Should the migration fail at some point and you want to back out of it, the Local Install was not changed and you can easily go back to using it as before.
In the unlikely event of Local Install becoming corrupted, you can restore it from the backup.
Visit Cisco Software Central to learn how to create and manage Smart Accounts.
The creation of a new Smart Account is a one-time event, and subsequent management of users is a capability provided through the tool. To request a Smart Account, visit Cisco Software Central and take the following steps:
After logging in, select Request a Smart Account in the Administration section.
Select the type of Smart Account to create. There are two options: (a) an Individual Smart Account, which requires you to agree to represent your company; by creating this Smart Account you authorize the creation and management of product and service entitlements, users, and roles on behalf of your organization; or (b) an account created on behalf of someone else.
Provide the required domain identifier and the preferred account name.
The account request will be pending approval of the Account Domain Identifier. A subsequent email will be sent to the requester to complete the setup process.
Smart Account user management is available in the Administration section of Cisco Software Central. Take the following steps to add a new user to a Smart Account:
After logging in, select Manage Smart Account in the Administration section.
Choose the Users tab.
Select New User and follow the instructions in the wizard to add a new user.
To create a new token, log into CSSM and select the appropriate Virtual Account.
Click on the Smart Licenses link to enter CSSM.
In CSSM click on New Token.
Follow the dialog to provide a description, expiration, and export compliance applicability before accepting the terms and responsibilities. Click on Create Token to continue.
Click on the new token.
Copy the token from the dialogue window into your clipboard.
Go to the NSO CLI and provide the token to the license smart register idtoken command:
If ncs.conf contains configuration for any of java-executable, java-options, override-url/url, or proxy/url under the path /ncs-config/smart-license/smart-agent/, any corresponding configuration done via the CLI is ignored.
The smart licensing component of NSO runs its own Java virtual machine. Usually, the default Java options are sufficient:
If you for some reason need to modify the Java options, remember to include the default values as found in the YANG model.
show license all: Displays all information.
show license status: Displays status information.
show license summary: Displays a summary.
show license tech: Displays license tech support information.
show license usage: Displays usage information.
debug smart_lic all: All available Smart Licensing debug flags.
plan: Shows the service plan.
devices: Shows the number of devices associated with the service. Use the refresh button to reload the devices list.
check-sync, re-deploy, and re-deploy dry-run denote the actions that you can perform on a service.
Hide and display the columns of your choice by using the column selection icon.
If you have several services configured, you can use the Search filter to narrow down the results to the service(s) of your choice. The search filter matches the entered characters to the service name and shows the results accordingly. Results are shown only for the service point that you have selected.
To filter the service list:
Select the desired service point from the list to populate all the services under it.
Enter a partial or full name of the service you are searching for.
Press Enter.
In the Select service point drop-down list, select a service point.
Click the add button.
In the Create service pop-up, enter the name of the service.
Confirm the intent.
Review and commit the service to NSO in the Commit manager.
You can apply actions on a service from the Service manager view or the Configuration editor.
Start by selecting a service point to populate all services under it, and then follow the instructions below.
You can apply an action on a single service or multiple services at once.
To apply an action on a service:
On the desired service in the list, click the action button (i.e., check-sync, re-deploy, or re-deploy dry-run).
To apply an action on multiple services:
Select the desired services from the list.
Using the run action button, select the desired service action.
Confirm the intent.
Actions Possible in the Service Manager View
Available actions include check-sync, re-deploy, and re-deploy dry-run.
See Lifecycle Operations for the details of these actions.
Additional actions are applied per individual service. Use this option if you want to run an action with additional parameters.
Click the service name in the list.
Access the Actions tab in the Configuration editor.
Click the desired action in the list.
Service configuration is viewed and carried out in the Configuration Editor.
To start configuring a service:
Click the service name in the list.
In the Configuration editor, access the Edit config tab to make changes to the service.
Commit the changes in the Commit manager.
In the services list, click the service that you want to delete. You can select multiple services.
Click the remove button.
Review and commit the change in the Commit manager.

All application code should use the prefix ec_ for module names, application names, registered processes (if any), and named ets tables (if any), to avoid conflict with existing or future names used by NSO itself.
The Erlang API to NSO is implemented as an Erlang/OTP application called econfd. This application comes in two flavors. One is built into NSO to support applications running in the same Erlang VM as NSO. The other is a separate library which is included in source form in the NSO release, in the $NCS_DIR/erlang directory. Building econfd as described in the $NCS_DIR/erlang/econfd/README file will compile the Erlang code and generate the documentation.
This API can be used by applications written in Erlang in much the same way as the C and Java APIs, i.e. code running in an Erlang VM can use the econfd API functions to make socket connections to NSO for access to the data provider, MAAPI, CDB, and so on. However, the API is also available internally in NSO, which makes it possible to run Erlang application code inside the NSO daemon, without the overhead imposed by the socket communication.
When the application is started, one of its processes should make initial connections to the NSO subsystems, register callbacks, etc. This is typically done in the init/1 function of a gen_server or similar. While the internal connections are made using the exact same API functions (e.g. econfd_maapi:connect/2) as for an application running in an external Erlang VM, any Address and Port arguments are ignored, and instead, standard Erlang inter-process communication is used.
There is little or no support for testing and debugging Erlang code executing internally in NSO since NSO provides a very limited runtime environment for Erlang to minimize disk and memory footprints. Thus the recommended method is to develop Erlang code targeted for this by using econfd in a separate Erlang VM, where an interactive Erlang shell and all the other development support included in the standard Erlang/OTP releases are available. When development and testing are completed, the code can be deployed to run internally in NSO without changes.
For information about the Erlang programming language and development tools, refer to www.erlang.org and the available books about Erlang (some are referenced on the website).
The --printlog option to ncs, which prints the contents of the NSO error log, is normally only useful for Cisco support and developers, but it may also be relevant for debugging problems with application code running inside NSO. The error log collects the events sent to the OTP error_logger, e.g. crash reports as well as info generated by calls to functions in the error_logger(3) module. Another possibility for primitive debugging is to run ncs with the --foreground option, where calls to io:format/2 etc will print to standard output. Printouts may also be directed to the developer log by using econfd:log/3.
While Erlang application code running in an external Erlang VM can use basically any version of Erlang/OTP, this is not the case for code running inside NSO, since the Erlang VM is evolving and provides limited backward/forward compatibility. To avoid incompatibility issues when loading the beam files, the Erlang compiler erlc should be of the same version as was used to build the NSO distribution.
NSO provides the VM, erlc and the kernel, stdlib, and crypto OTP applications.
Applications may have dependencies to other applications. These dependencies affect the start order. If the dependent application resides in another package, this should be expressed by using the required package in the package-meta-data.xml file. Application dependencies within the same package should be expressed in the .app file, as described below.
The following config settings in the .app file are explicitly treated by NSO:
applications
A list of applications that need to be started before this application can be started. This info is used to compute a valid start order.
included_applications
A list of applications that are started on behalf of this application. This info is used to compute a valid start order.
env
A property list containing [{Key,Val}] tuples. Besides other keys used by the application itself, a few predefined keys are used by NSO. The key ncs_start_phase is used by NSO to determine which start phase the application is to be started in. Valid values are early_phase0, phase0, phase1, phase1_delayed, and phase2. The default is phase1. If the application is not required in the early phases of startup, set ncs_start_phase to phase2 to avoid issues with NSO services being unavailable to the application. The key ncs_restart_type is used by NSO to determine what impact a restart of the application will have. This is the same as the restart_type() type in the OTP application module, i.e. permanent, transient, or temporary. The default is temporary.
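As an illustration, a hypothetical ec_myapp.app resource file that asks NSO to start the application in phase2 could look like the sketch below (the application name, module names, and version are made up; the ncs_start_phase and ncs_restart_type keys are the ones described above, and the ec_ prefix follows the naming recommendation given earlier):

```erlang
{application, ec_myapp,
 [{description, "Example NSO-internal application"},
  {vsn, "1.0"},
  {modules, [ec_myapp, ec_myapp_server]},
  {registered, [ec_myapp_server]},
  {applications, [kernel, stdlib]},
  {env, [{ncs_start_phase, phase2},
         {ncs_restart_type, temporary}]},
  {mod, {ec_myapp, []}}]}.
```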
The examples.ncs/getting-started/developing-with-ncs/18-simple-service-erlang example in the bundled collection shows how to create a service written in Erlang and execute it internally in NSO. This Erlang example is a subset of the Java example examples.ncs/getting-started/developing-with-ncs/4-rfs-service.
Use NSO's network simulator to simulate your network and test functionality.
The ncs-netsim program is a useful tool to simulate a network of devices to be managed by NSO. It makes it easy to test NSO packages towards simulated devices. All you need is the NSO NED packages for the devices that you need to simulate. The devices are simulated with the Tail-f ConfD product.
All the NSO examples use ncs-netsim to simulate the devices. A good way to learn how to use ncs-netsim is to study them.
The ncs-netsim tool takes any number of NED packages as input. The user can specify the number of device instances per package (device type) and a string that is used as a prefix for the name of the devices. The command takes the following parameters:
Assume that you have prepared an NSO package for a device called router. (See the examples.ncs/getting-started/developing-with-ncs/0-router-network example). Also, assume the package is in ./packages/router. At this point, you can create the simulated network by:
This creates three devices: device0, device1, and device2. The simulated network is stored in the ./netsim directory. The output structure is:
There is a separate directory for each ConfD instance simulating a device.
The network can be started with:
You can add more devices to the network in a similar way as it was created. For example, if you created a network with some Juniper devices and want to add some Cisco IOS devices, point to the NED you want to use (see ${NCS_DIR}/packages/neds/) and run the add-to-network command. Remember to start the new devices after they have been added to the network.
To extract the device data from the simulated network to a file in XML format:
This data is usually used to load the simulated network into NSO. Putting the XML file in the ./ncs-cdb folder will load it when NSO starts. If NSO is already started it can be reloaded while running.
The generated device data creates devices of the same type as the device being simulated. This is true for NETCONF, CLI, and SNMP devices. When simulating generic devices, the simulated device will run as a NETCONF device.
Under very special circumstances, one can choose to force running the simulation as a generic device with the option --force-generic.
The simulated network device info can be shown with:
Here you can see the device name, the working directory, and the port number for different services to be accessed on the simulated device (NETCONF SSH, SNMP, IPC, and direct access to the CLI).
You can reach the CLI of individual devices with:
The simulated devices actually provide three different styles of CLI:
cli: J-Style
cli-c: Cisco XR Style
cli-i: Cisco IOS Style
Individual devices can be started and stopped with:
You can check the status of the simulated network: either a short version, just to see if the devices are running, or a more verbose one with all the information.
View which packages are used in the simulated network:
It is also possible to reset the network back to the state of initialization:
When you are done, remove the network:
The netsim tool includes a standard ConfD distribution and the ConfD C API library (libconfd) that the ConfD tools use. The library is built with default settings where the values for MAXDEPTH and MAXKEYLEN are 20 and 9, respectively. These values define the size of confd_hkeypath_t struct and this size is related to the size of data models in terms of depth and key lengths. Default values should be big enough even for very large and complex data models. But in some rare cases, one or both of these values might not be large enough for a given data model.
A limitation may be observed when the data models used by the simulated devices exceed these limits; it is then not possible to use the ConfD tools provided with netsim. To overcome this limitation, use the corresponding NSO tools to perform the desired tasks on devices.
NSO and ConfD tools and Python APIs are basically the same except for naming, the default IPC port and the MAXDEPTH and MAXKEYLEN values, where for NSO tools, the values are set to 60 and 18, respectively. Thus, the advised solution is to use the NSO tools and NSO Python API with netsim.
For example, instead of using the command below:
One may use:
The README file in examples.ncs/getting-started/developing-with-ncs/0-router-network gives a good introduction on how to use ncs-netsim.
Learn about compaction mechanism in NSO.
CDB implements write-ahead logging to provide durability in the datastores, appending a new log for each CDB transaction to the target datastore (A.cdb for configuration, O.cdb for operational, and S.cdb for snapshot datastore). Depending on the size and number of transactions towards the system, these files will grow in size leading to increased disk utilization, longer boot times, and longer initial data synchronization time when setting up a high-availability cluster.
Compaction is a mechanism used to reduce the size of the write-ahead logs to a minimum. It works by replacing an existing write-ahead log, which is composed of a number of consecutive transaction logs created at run-time, with a single transaction log representing the full current state of the datastore. In this respect, a compaction acts much like a write transaction towards the datastore, and to ensure data integrity, write transactions towards the datastore are not permitted while compaction takes place.
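Conceptually, the write-ahead log and its compaction can be modeled as below. This is a plain-Python illustration of the idea only, not NSO's actual implementation; all names are made up:

```python
class Datastore:
    """Toy model of a CDB datastore file such as A.cdb: a list of
    consecutive transaction logs, each a dict of path -> value changes."""

    def __init__(self):
        self.log = []          # write-ahead log: one entry per transaction
        self.compacting = False

    def write(self, changes):
        # Write transactions are not permitted while compaction runs.
        if self.compacting:
            raise RuntimeError('datastore is locked for compaction')
        self.log.append(dict(changes))

    def state(self):
        # Replay the log to obtain the current state of the datastore.
        current = {}
        for txn in self.log:
            current.update(txn)
        return current

    def compact(self):
        # Replace all accumulated transaction logs with a single
        # transaction representing the full current state.
        self.compacting = True
        try:
            self.log = [self.state()]
        finally:
            self.compacting = False


ds = Datastore()
ds.write({'/a': 1})
ds.write({'/a': 2, '/b': 3})
assert len(ds.log) == 2            # two transactions accumulated
ds.compact()
assert len(ds.log) == 1            # single log, same effective state
assert ds.state() == {'/a': 2, '/b': 3}
```

The model also shows why a growing log increases boot time: the state must be replayed transaction by transaction, which compaction reduces to a single step.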
By default, compaction is handled automatically by the CDB. After each transaction, CDB evaluates whether compaction is required for the affected data store.
This is done by examining the number of added nodes as well as the file size changes since the last performed compaction. The thresholds used can be modified in the ncs.conf file by configuring the /ncs-config/compaction/file-size-relative, /ncs-config/compaction/file-size-absolute, and /ncs-config/compaction/num-node-relative settings.
It is also possible to automatically trigger compaction after a set number of transactions by setting the /ncs-config/compaction/num-transaction property.
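In ncs.conf, these settings live under /ncs-config/compaction. A sketch of the structure only; the element values are placeholders, not recommended settings, and the valid value formats are documented in the ncs.conf(5) man page:

```xml
<!-- Sketch: placeholders only, see ncs.conf(5) for value formats -->
<compaction>
  <file-size-absolute>...</file-size-absolute>
  <file-size-relative>...</file-size-relative>
  <num-node-relative>...</num-node-relative>
  <num-transaction>...</num-transaction>
</compaction>
```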
Compaction may require a significant amount of time, during which write transactions cannot be performed. In certain use cases, it may be preferable to disable automatic compaction by CDB and instead trigger compaction manually according to specific needs. If you do so, it is highly recommended to have another automated compaction mechanism in place.
CDB CAPI provides a set of functions that may be used to create an external mechanism for compaction. See cdb_initiate_journal_compaction(), cdb_initiate_journal_dbfile_compaction(), and cdb_get_compaction_info() in Manual Pages.
Automation of compaction can be done by using a scheduling mechanism such as CRON, or by using the NCS scheduler. See Scheduler for more information.
By default, CDB may perform compaction during its boot process. This may be disabled if required, by starting NSO with the flag --disable-compaction-on-start.
In the configuration datastore, compaction is by default delayed by 5 seconds when the threshold is reached to prevent any upcoming write transaction from being blocked. If the system is idle during these 5 seconds, meaning that there is no new transaction, the compaction will initiate. Otherwise, compaction is delayed by another 5 seconds. The delay time can be configured in ncs.conf by setting the /ncs-config/compaction/delayed-compaction-timeout property.
Useful information to help you get started with NSO development.
This section describes some recipes, tools, and other resources that you may find useful throughout development. The topics are tailored to novice users and focus on making development with NSO a more enjoyable experience.
Many developers prefer their own, dedicated NSO instance to avoid their work clashing with other team members. You can use either a local or remote Linux machine (such as a VM), or a macOS computer for this purpose.
The advantage of running local Linux with a GUI or macOS is that it is easier to set up the Integrated Development Environment (IDE) and other tools when they run on the same system as NSO. However, many IDEs today also allow working remotely, such as through the SSH protocol, making the choice of local versus remote less of a concern.
For development, using the so-called Local Install of NSO has some distinct advantages:
Home page of NSO Web UI.
The Home view is the default view after logging in. It provides shortcuts to Devices, Services, Config editor, and Tools.
Currently loaded Web UI extension packages are shown in this view under Packages. Web UI packages extend the functionality of the Web UI, for example, by adding a view to visualize your MPLS network.
Learn about using IPv6 on NSO's northbound interfaces.
NSO supports access to all northbound interfaces via IPv6, and in the most simple case, i.e. IPv6-only access, this is just a matter of configuring an IPv6 address (typically the wildcard address ::) instead of IPv4 for the respective agents and transports in ncs.conf, e.g. /ncs-config/cli/ssh/ip for SSH connections to the CLI, or /ncs-config/netconf-north-bound/transport/ssh/ip for SSH to the NETCONF agent. The SNMP agent is configured via one of the other northbound interfaces rather than via ncs.conf; see Northbound APIs. For example, via the CLI, we would set snmp agent ip to the desired address. All these addresses default to the IPv4 wildcard address 0.0.0.0.
In most IPv6 deployments, it will however be necessary to support IPv6 and IPv4 access simultaneously. This requires that both IPv4 and IPv6 addresses are configured, typically
$ source ~/nso-6.0/ncsrc
$ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios
$ ncs-netsim create-network cisco-ios-cli-3.8 3 ios
$ ncs-setup --dest .
$ cd $NCS_DIR/examples.ncs/getting-started/1-simulated-cisco-ios
$ ncs-netsim start
$ ncs
$ ncs-netsim stop
$ ncs --stop

$ ncs
-bash: ncs: command not found

$ ncs
Bad configuration: /etc/ncs/ncs.conf:0: "./state/packages-in-use: \
Failed to create symlink: no such file or directory"
Daemon died status=21

$ ncs
Cannot bind to internal socket 127.0.0.1:4569 : address already in use
Daemon died status=20
$ ncs-netsim start
DEVICE c0 Cannot bind to internal socket 127.0.0.1:5010 : \
address already in use
Daemon died status=20
FAIL

$ ncs --stop
$ cd ncs-cdb/
$ ls
A.cdb
C.cdb
O.cdb
S.cdb
netsim_devices_init.xml
$ rm *.cdb
$ ncs

$ ncs --stop

$ tar -czf $HOME/ncs-backup.tar.gz -C $HOME ncs-run

$ sudo -s

$ sh nso-VERSION.OS.ARCH.installer.bin --system-install

$ sudo systemctl stop ncs
$ source $HOME/ncs-VERSION/ncsrc
$ cd $HOME/ncs-run
$ ncs

$ rm -rf $HOME/ncs-run
$ tar -xzf $HOME/ncs-backup.tar.gz -C $HOME

leaf java-options {
tailf:info "Smart licensing Java VM start options";
type string;
default "-Xmx64M -Xms16M
-Djava.security.egd=file:/dev/./urandom";
description
"Options which NCS will use when starting
the Java VM.";}ncs-make-package --erlang-skeleton --erlang-application-name <appname> <package-name>restart_type()applicationpermanenttransienttemporarytemporaryServices
Learn the concepts of NSO services and automation.
Implementing Services
Learn NSO service development in detail.
Templates
Develop and deploy NSO templates.
Nano Services
Learn about nano services for staged provisioning.
Packages
Learn about NSO packages and how they work.
Using CDB
Concepts of importance in usage of the CDB.
YANG
Explore YANG data modeling and its use.
NSO Concurrency Model
Understand NSO's concurrency model.
Service Handling of ADMs
Handle ambiguous device models in services.
NSO Virtual Machines
Learn about Java and Python virtual machines.
API Overview
Learn concepts and usage of Java and Python APIs.
Northbound APIs
Learn the working mechanisms of northbound APIs.
Dev Env & Resources
Useful info to get started with NSO development.
Developing Services
Develop and deploy NSO services/nano services.
Developing Packages
Develop and deploy NSO packages.
Developing NEDs
Develop and deploy NSO NEDs.
Developing Alarm Apps
Develop and deploy NSO alarm applications.
Kicker
Trigger declarative notification actions in NSO.
Scaling and Performance
Optimize your NSO automation solution.
Progress Trace
Debug, diagnose, and profile events in NSO.
Web UI Development
Develop enhancements for NSO Web UI.
SNMP Notifications
Configure NSO as SNMP notification receiver.
Web Server
Use embedded server to deliver static/CGI content.
Scheduler
Schedule time-based jobs for background tasks.
External Logging
Send log data to external commands.
Encryption Keys
Store encrypted values in NSO.

0.0.0.0 plus ::. This is supported via the extra-listen list, which exists next to the ip and port leafs for all northbound agents and transports in ncs.conf. For example, to accept SSH connections to the CLI on any local IPv6 address in addition to IPv4, an <extra-listen> entry can be added under /ncs-config/cli/ssh:

To configure the SNMP agent to accept requests to port 161 on any local IPv6 address, we could similarly use the CLI and give the command:
The extra-listen list can take any number of address/port pairs, thus this method can also be used when we want to accept connections/requests on several specified (IPv4 and/or IPv6) addresses instead of the wildcard address, or we want to use multiple ports.
<cli>
<enabled>true</enabled>
<!-- Use the built-in SSH server -->
<ssh>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>2024</port>
<extra-listen>
<ip>::</ip>
<port>2024</port>
</extra-listen>
</ssh>
...
</cli>

admin@ncs(config)# snmp agent extra-listen :: 161

$ ncs-netsim start
$ ncs

# cp $HOME/ncs-run/ncs-cdb/*.cdb /var/opt/ncs/cdb

# cd $HOME/ncs-run/packages
# for pkg in *; do cp -RL $pkg /opt/ncs/packages/ncs-VERSION-$pkg; done

# cd /var/opt/ncs/packages/
# rm -f *
# for pkg in /opt/ncs/packages/ncs-VERSION-*; do ln -s $pkg; done

<local-authentication>
  <enabled>false</enabled>
</local-authentication>

# systemctl daemon-reload
# systemctl start ncs

# unset NCS_RELOAD_PACKAGES

$ ncs_cli -C -u admin
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# unhide debug
admin@ncs(config)# show full-configuration java-vm stdout-capture file
java-vm stdout-capture file ./logs/ncs-java-vm.log
admin@ncs(config)# java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# show full-configuration java-vm stdout-capture file
java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log
admin@ncs(config)# show full-configuration python-vm logging log-file-prefix
python-vm logging log-file-prefix ./logs/ncs-python-vm
admin@ncs(config)# python-vm logging log-file-prefix /var/log/ncs/ncs-python-vm
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# show full-configuration python-vm logging log-file-prefix
python-vm logging log-file-prefix /var/log/ncs/ncs-python-vm
admin@ncs(config)# exit
admin@ncs#
admin@ncs# exit

admin@ncs# license smart register idtoken YzY2YjFlOTYtOWYzZi00MDg1...
Registration process in progress.
Use the 'show license status' command to check the progress and result.

$ ncs-netsim --help
Usage ncs-netsim [--dir <NetsimDir>]
create-network <NcsPackage> <NumDevices> <Prefix> |
create-device <NcsPackage> <DeviceName> |
add-to-network <NcsPackage> <NumDevices> <Prefix> |
add-device <NcsPackage> <DeviceName> |
delete-network |
[-a | --async] start [devname] |
[-a | --async ] stop [devname] |
[-a | --async ] reset [devname] |
[-a | --async ] restart [devname] |
list |
is-alive [devname] |
status [devname] |
whichdir |
ncs-xml-init [devname] |
ncs-xml-init-remote <RemoteNodeName> [devname] |
[--force-generic] |
packages |
netconf-console devname [XpathFilter] |
[-w | --window] [cli | cli-c | cli-i] devname

$ ncs-netsim create-network ./packages/router 3 device --dir ./netsim

./netsim/device/
  device0/   <ConfD files>, <log files>
  device1/
  ...

$ ncs-netsim start

$ ncs-netsim add-to-network ${NCS_DIR}/packages/neds/cisco-ios 2 c-device --dir ./netsim

$ ncs-netsim ncs-xml-init > devices.xml
$ ncs_load -l -m devices.xml

$ ncs-netsim list
...
name=device0 netconf=12022 snmp=11022 ipc=5010 cli=10022 \
    dir=examples.ncs/getting-started/developing-with-ncs/0-router-network/netsim/device/device0
...

$ ncs-netsim cli-c device0

$ ncs-netsim start device0
$ ncs-netsim stop device0

$ ncs-netsim is-alive device0
$ ncs-netsim status device0

$ ncs-netsim packages

$ ncs-netsim reset

$ ncs-netsim delete-network

$ CONFD_IPC_PORT=5010 ${NCS_DIR}/netsim/confd/bin/confd_load -m -l *.xml

$ NCS_IPC_PORT=5010 ncs_load -m -l *.xml

At this point, you can configure different parameters.
(Use the Reset action parameters option to reset all parameters to default value).
Run the action.
Actions Possible in the Configuration Editor -> Actions Tab
Access the service in the Configuration editor to run the following actions: reactive-re-deploy, un-deploy, deep-check-sync, touch, set-rank, get-modifications, purge.
See Lifecycle Operations for the details of these actions.

You can search for a device by its name, IP address, or other parameters. Narrow down the results by using the Select device group filter.
To add a new device to NSO:
Click Add device.
In the Add device pop-up, specify the device name.
Click Add.
Configure the specifics of the device in the Configuration editor.
Review and commit the changes in the Commit manager when done.
Actions can be applied on a device from the Device management view or the Configuration editor -> Actions tab.
An action can be applied to a single or multiple devices at once.
Select the device(s) from the list.
Using the Choose actions button, select the desired action. The result of the action is returned momentarily.
Actions Possible in the Device Management View
Available actions include Connect, Ping, Sync from, Sync to, Check sync, Compare config, Fetch ssh host keys, Apply template, Modify in Config editor, and Delete.
See Lifecycle Operations for the details of these actions.
Additional actions are applied per individual device. Use this option if you want to run an action with additional parameters.
Click the device name in the list.
Access the Actions tab in the Configuration editor.
Click the desired action in the list.
To edit the device configuration of an existing device:
In the Devices view, click the desired device from the list.
In the Configuration editor, click the Edit config tab.
Make the desired changes.
(Press Enter to save the changes. An uncommitted change in a field's value is marked by a green color, and is referred to as a 'dirty state').
Review and commit the change in the Commit manager.
The Device groups view lists all the available groups and devices belonging to them. You can add new device groups in this view as well as carry out actions on devices belonging to a group.
Device groups allow for the grouping and collective management of devices.
Click Add device group.
In the Create device group pop-up, specify the group name.
If you want to place the new device group under a parent group, select the Place under parent device group option and specify the parent group.
Click Create. You will be redirected to the group's details page. Here, the following panes are available:
Details: Displays basic details of the group, i.e., its name and parent/subgroup information. To link a sub-group, use the Connect sub device group option.
Devices in this group: Displays currently added devices in the group and provides the option to remove them from the group.
Add devices: Displays all available NSO devices and provides the option to add them to the group.
In the Add devices pane, select the device(s) that you want to add to the new group and click Add devices to group. The added devices become visible under the Devices in this group pane.
Finally, click Save.
Click the desired device group to access the group's detail page.
In the Devices in this group pane, select the device(s) to be removed from the group.
Click Remove from device group. The devices are removed immediately (without a Commit Manager review).
Click Save.
Device group actions let you perform an action on all the devices belonging to a group.
Select the desired device group from the list. It is possible to select multiple groups at once.
Choose the desired action from the Choose actions button.
Actions Possible in the Device Groups View
The available group actions are the same as in the section called Apply an Action on a Device (e.g., Connect, Sync from, Sync to, etc.) and are described in Lifecycle Operations.

It does not require elevated privileges to install or run.
It keeps all NSO files in the same place (user-defined).
It allows you to quickly switch between projects and NSO versions.
If you work with multiple projects in parallel, local install also allows you to take advantage of Python virtual environments to separate Python packages per project; simply start the NSO instance in an environment you have activated.
The main downside of using a local install is that it differs slightly from a system (production) install, such as in the filesystem paths used and the out-of-the-box configuration.
See Local Install for installation instructions.
There are a number of examples and showcases in this guide. We encourage you to follow them through. They are also a great reference if you are experimenting with a new feature and have trouble getting it to work; you can inspect and compare with the implementation in the example.
To run the examples, you will need access to an NSO instance. A development instance described in this chapter is the perfect option for running locally. See Running NSO Examples.
Cisco also provides an online sandbox and containerized environments, such as a Learning Lab or NSO Sandbox, designed for this purpose. Refer to the NSO documentation for additional resources.
Modern IDEs offer many features on top of advanced file editing support, such as code highlighting, syntax checks, and integrated debugging. While the initial setup takes some effort, the benefits of using an IDE are immense.
Visual Studio Code (VS Code) is a freely available and extensible IDE. You can add support for Java, Python, and YANG languages, as well as remote access through SSH via VS Code extensions. Consider installing the following extensions:
Python by Microsoft: Adds Python support.
Language Support for Java(TM) by Red Hat: Adds Java support.
NSO Developer Studio by Cisco: Adds NSO-specific features as described in NSO Developer Studio.
Remote - SSH by Microsoft: Adds support for remote development.
The Remote - SSH extension is especially useful when you must work with a system through an SSH session. Once you connect to the remote host by clicking the >< button (typically found in the bottom-left corner of the VS Code window), you can open and edit remote files with ease. If you also want language support (syntax highlighting and alike), you may need to install VS Code extensions remotely. That is, install the extensions after you have connected to the remote host, otherwise the extension installation screen might not show the option for installation on the connected host.
You will also benefit greatly from setting up SSH certificate authentication if you are using an SSH session for your work.
Once you get familiar with NSO development and gain some experience, a single NSO instance is likely to be insufficient; either because you need instances for unit testing, because you need one-off (throwaway) instances for an experiment, or something else entirely.
NSO includes tooling to help you quickly set up new local instances when such a need arises.
The following recipe relies on the ncs-setup command, which is available in the local install variant and requires a correctly set up shell environment (e.g. running source ncsrc). See Local Install for details.
A new instance typically needs a few things to be useful:
Packages
Initial data
Devices to manage
In its simplest form, the ncs-setup invocation requires only a destination directory. However, you can specify additional packages to use with the --package option. Repeat the option to add as many packages as you need.
Running ncs-setup creates the required filesystem structure for an NSO instance. If you wish to include initial configuration data, put the XML-encoded data in the ncs-cdb subdirectory and NSO will load it at the first start, as described in Initialization Files.
NSO also needs to know about the managed devices. In case you are using ncs-netsim simulated devices (described in Network Simulator), you can use the --netsim-dir option with ncs-setup to add them directly. Otherwise, you may need to create some initial XML files with the relevant device configuration data — much like how you would add a device to NSO manually.
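For illustration, such an initial device entry can be generated with a short script. The following is a minimal sketch using Python's standard library; the namespaces follow the config wrapper and tailf-ncs model (verify against your NSO version), the device values are made up, and a real entry typically needs more data (device-type, admin state, and so on):

```python
import xml.etree.ElementTree as ET

# Namespaces for NSO XML initialization files and the tailf-ncs model;
# hedged -- verify against your NSO version.
CONFIG_NS = "http://tail-f.com/ns/config/1.0"
NCS_NS = "http://tail-f.com/ns/ncs"

def device_init_xml(name, address, port="10022", authgroup="default"):
    """Build a minimal XML initialization document for one device."""
    config = ET.Element(ET.QName(CONFIG_NS, "config"))
    devices = ET.SubElement(config, ET.QName(NCS_NS, "devices"))
    device = ET.SubElement(devices, ET.QName(NCS_NS, "device"))
    for tag, text in (("name", name), ("address", address),
                      ("port", port), ("authgroup", authgroup)):
        ET.SubElement(device, ET.QName(NCS_NS, tag)).text = text
    return ET.tostring(config, encoding="unicode")

print(device_init_xml("c0", "127.0.0.1"))
```

Writing the result to a file under ncs-cdb/ makes NSO load it at first start, as with any other initialization file.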
Most of the time, you must also invoke a sync with the device so that NSO and the device configuration are aligned. If you wish to push some initial configuration to the device, you may add it in the form of initial XML data and perform a sync-to. Alternatively, you can simply do a sync-from. You can use the ncs_cmd command for this purpose.
Combining all of this together, consider the following example:
Start by creating a new directory to hold the files:
Create and start a few simulated devices with ncs-netsim, using ./netsim as directory:
Next, create the running directory with the NED package for the simulated devices and one more package. Also, add configuration data to NSO on how to connect to these simulated devices.
Now you can add custom initial data as XML files to ncs-run/ncs-cdb/. Usually, you would use existing files but you can also create them on-the-fly.
At this point, you are ready to start NSO:
Finally, request an initial sync-from:
The instance is now ready for work. Once you are finished, you can stop it with ncs --stop. Remember to also stop the simulated devices with ncs-netsim stop if you no longer need them. Then, delete the containing folder (nso-throwaway) to remove all the leftover files and data.
Implement network automation in your NSO deployment using services.
Services are the cornerstone of network automation with NSO. A service is not just a reusable recipe for provisioning network configurations; it allows you to manage the full configuration life-cycle with minimal effort.
This section examines in greater detail how services work, how to design them, and the different ways to implement them.
For a quicker introduction and a simple showcase of services, see Develop a Simple Service.
In NSO, the term service has a special meaning and represents an automation construct that orchestrates create, modify, and delete of a service instance into the resulting native commands to devices in the network. In its simplest form, a service takes some input parameters and maps them to device-specific configurations. It is a recipe or a set of instructions.
Much like you can bake many cakes using a single cake recipe, you can create many service instances using the same service. But unlike cakes, having the recipe produce exactly the same output every time is not very useful. That is why service instances define a set of input parameters, which the service uses to customize the produced configuration.
A network engineer on the CLI, or an API call from a northbound system, provides the values for input parameters when requesting a new service instance, and NSO uses the service recipe, called a 'service mapping', to configure the network.
A similar process takes place when deleting the service instance or modifying the input parameters. The main task of a service is therefore to calculate, from a given set of input parameters, the minimal set of device operations that achieve the desired service change. Here, it is very important that the service supports any change: create, delete, and update of any service parameter.
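As a sketch of this idea (not the actual NSO service API; all parameter and device names here are invented), a single mapping function can produce the configuration for any number of service instances from their input parameters:

```python
# Conceptual sketch only -- not the NSO service API. It illustrates how
# one mapping ("recipe") plus per-instance input parameters yields
# per-instance device configuration. All names are made up.

def vpn_mapping(params):
    """Map service input parameters to abstract device configuration."""
    return {
        leg["device"]: {
            "interface": leg["interface"],
            "vpn-name": params["name"],
            "as-number": params["as-number"],
        }
        for leg in params["legs"]
    }

# Two service instances from the same mapping, differing only in inputs:
acme = vpn_mapping({"name": "acme", "as-number": 65001,
                    "legs": [{"device": "ce0", "interface": "Gi0/1"}]})
bigco = vpn_mapping({"name": "bigco", "as-number": 65002,
                     "legs": [{"device": "ce1", "interface": "Gi0/2"}]})
print(acme["ce0"]["vpn-name"])   # acme
```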
Device configuration is usually the primary goal of a service. However, there may be other supporting functions that are expected from the service, such as service-specific actions. The complete service application, implementing all the service functionality, is packaged in an NSO service package.
The following definitions are used throughout this section:
Service type: Often referred to simply as a service, denotes a specific type of service, such as "L2 VPN", "L3 VPN", "Firewall", or "DNS".
Service instance: A specific instance of a service type, such as "L3 VPN for ACME" or "Firewall for user X".
Service model: The schema definition for a service type, defined in YANG. It specifies the names and format of input parameters for the service.
Service mapping: The implementation that transforms a service instance's input parameters into the corresponding device configurations.
Developing a service that transforms a service instance request to the relevant device configurations is done differently in NSO than in most other tools on the market. As a service developer, you create a mapping from a YANG service model to the corresponding device YANG model.
This is a declarative, model-to-model mapping. Irrespective of the underlying device type and its native device interface, the mapping is towards a YANG device model and not the native CLI (or any other protocol/API). As you write the service mapping, you do not have to worry about the syntax of different CLI commands or in which order these commands are sent to the device. It is all taken care of by the NSO device manager and device NEDs. Implementing a service in NSO is reduced to transforming the input data structure, described in YANG, to device data structures, also described in YANG.
Who writes the models?
Developing the service model is part of developing the service application and is covered later in this section.
Every device NED comes with a corresponding device YANG model. This model has been designed by the NED developer to capture the configuration data that is supported by the device.
A service application then has two primary artifacts: a YANG service model and a mapping definition to the device YANG, as illustrated in the following figure.
To reiterate:
The mapping is not defined using workflows, or sequences of device commands.
The mapping is not defined in the native device interface language.
This approach may seem somewhat unorthodox at first, but allows NSO to streamline and greatly simplify how you implement services.
A common problem for traditional automation systems is that a set of instructions needs to be defined for every possible service instance change. Take for example a VPN service. During a service life cycle, you want to:
Create the initial VPN.
Add a new site or leg to the VPN.
Remove a site or leg from the VPN.
Modify the parameters of a VPN leg, such as the IP addresses used.
Delete the VPN.
The possible run-time changes for an existing service instance are numerous. If a developer must define instructions for every possible change, such as a script or a workflow, the task is daunting, error-prone, and never-ending.
NSO reduces this problem to a single data-mapping definition for the "create" scenario. At run-time, NSO renders the minimum resulting change for any possible change in the service instance. It achieves this with the FASTMAP algorithm.
Another challenge in traditional systems is that a lot of code goes into managing error scenarios. The NSO built-in transaction manager takes that burden away from the developer of the service application by providing automatic rollback of incomplete changes.
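The core of the FASTMAP idea can be sketched in a few lines of generic Python (a simplification for illustration, not the actual algorithm): the mapping only ever describes the desired end state, and the framework derives the minimal delta for any change:

```python
def minimal_change(old, new):
    """Return the smallest set of operations taking config 'old' to 'new'.

    Much simplified: configurations are flat path->value maps.
    """
    ops = []
    for path in sorted(old.keys() - new.keys()):
        ops.append(("delete", path))            # no longer desired
    for path, value in sorted(new.items()):
        if old.get(path) != value:
            ops.append(("set", path, value))    # new or changed
    return ops

before = {"/if/Gi0/0/vlan": 100, "/if/Gi0/1/vlan": 100}
after = {"/if/Gi0/0/vlan": 200}
print(minimal_change(before, after))
# -> [('delete', '/if/Gi0/1/vlan'), ('set', '/if/Gi0/0/vlan', 200)]
```

Whether the change is a create (old is empty), a delete (new is empty), or an arbitrary modify, the same diffing step yields only the operations actually required.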
Another benefit of this approach is that NSO can automatically generate the northbound APIs and database schema from the YANG models, enabling a true DevOps way of working with service models. A new service model can be defined as part of a package and loaded into NSO. An existing service model can be modified and the package upgraded, and all northbound APIs and User Interfaces are automatically regenerated to reflect the new or updated models.
$ mkdir nso-throwaway
$ cd nso-throwaway
$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios-cli-3.8 3 c
DEVICE c0 CREATED
DEVICE c1 CREATED
DEVICE c2 CREATED
$ ncs-netsim start
$ ncs-setup --dest ncs-run --netsim-dir ./netsim \
--package $NCS_DIR/packages/neds/cisco-ios-cli-3.8 \
--package $NCS_DIR/packages/neds/cisco-iosxr-cli-3.0
Device configuration: Network devices are configured to perform network functions. A service instance results in corresponding device configuration changes.
Service application: The code and models implementing the complete service functionality, including service mapping, actions, models for auxiliary data, and so on.
...



$ cat >ncs-run/ncs-cdb/my_init.xml <<'EOF'
<config xmlns="http://tail-f.com/ns/config/1.0">
<session xmlns="http://tail-f.com/ns/aaa/1.1">
<idle-timeout>0</idle-timeout>
</session>
</config>
EOF
$ cd ncs-run
$ ncs
$ ncs_cmd -u admin -c 'maction /devices/sync-from'
sync-result begin
device c0
result true
sync-result end
sync-result begin
device c1
result true
sync-result end
sync-result begin
device c2
result true
sync-result end
At this point, you can configure different parameters.
(To reset all the parameters to their default value, use the Reset action parameters option).
Run the action.
Actions Possible in the Configuration Editor -> Actions Tab
If you access the device in the Configuration editor, the following additional actions are available: migrate, copy-capabilities, find-capabilities, add-capability, instantiate-from-other-device, check-yang-modules, disconnect, delete-config, scp-to, scp-from, load-native-config.
See Lifecycle Operations for the details of these actions.





Create generic NEDs.
As described in previous sections, the CLI NEDs are almost programming-free. The NSO CLI engine takes care of parsing the stream of characters that come from "show running-config [toptag]" and also automatically produces the sequence of CLI commands required to take the system from one state to another.
A generic NED is required when we want to manage a device that speaks neither NETCONF nor SNMP and cannot be modeled so that ConfD - loaded with those models - provides a CLI that looks almost or exactly like the CLI of the managed device. Examples include devices with other proprietary CLIs, or devices that can only be configured over other protocols such as REST, CORBA, XML-RPC, SOAP, or other proprietary XML solutions.
In a manner similar to the CLI NED, the Generic NED needs to be able to connect to the device, return the capabilities, perform changes to the device, and finally, grab the entire configuration of the device.
The interface that a Generic NED has to implement is very similar to the interface of a CLI NED. The main differences are:
When NSO has calculated a diff for a specific managed device, for CLI NEDs it also calculates the exact set of CLI commands to send to the device, according to the YANG models loaded for the device. In the case of a generic NED, NSO instead sends an array of operations to perform towards the device in the form of DOM manipulations. The generic NED class receives an array of NedEditOp objects, where each NedEditOp object contains:
The operation to perform, i.e. CREATED, DELETED, VALUE_SET, etc.
The keypath to the object in question.
An optional value.
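Conceptually, the generic NED turns this array into device operations. The following Python sketch illustrates the translation (the real NED interface is Java, and the REST-style device API below is invented for illustration):

```python
# Hypothetical stand-in for NedEditOp handling in a generic NED; the
# real API is Java, and the REST-style device calls are invented.
CREATED, DELETED, VALUE_SET = "CREATED", "DELETED", "VALUE_SET"

class EditOp:
    """One edit operation: what to do, where, and (optionally) a value."""
    def __init__(self, op, keypath, value=None):
        self.op, self.keypath, self.value = op, keypath, value

def ops_to_device_calls(ops):
    """Translate edit operations into calls on a fictitious device API."""
    calls = []
    for e in ops:
        if e.op == CREATED:
            calls.append(("POST", e.keypath, None))
        elif e.op == DELETED:
            calls.append(("DELETE", e.keypath, None))
        elif e.op == VALUE_SET:
            calls.append(("PUT", e.keypath, e.value))
    return calls

ops = [EditOp(CREATED, "/if{Gi0}"), EditOp(VALUE_SET, "/if{Gi0}/vlan", 100)]
print(ops_to_device_calls(ops))
# -> [('POST', '/if{Gi0}', None), ('PUT', '/if{Gi0}/vlan', 100)]
```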
When NSO wants to sync the configuration from the device to NSO, the CLI NED only has to issue a series of show running-config [toptag] commands and reply with the output received from the device. A generic NED has to do more work. It is given a transaction handler, which it must attach to over the Maapi interface. Then the NED code must - by some means - retrieve the entire configuration and write into the supplied transaction, again using the Maapi interface.
Once the generic NED is implemented, all other functions in NSO work precisely in the same manner as with NETCONF and CLI NED devices. NSO still has the capability to run network-wide transactions. The caveat is that to abort a transaction towards a device that doesn't support transactions, we calculate the reverse diff and send it to the device, i.e. we automatically calculate the undo operations.
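The reverse-diff idea can be illustrated with a small sketch (hypothetical data structures, not NSO internals): given the applied operations and the values they replaced, the undo operations are generated in reverse order:

```python
# Sketch of the "reverse diff" used to abort a transaction on a device
# without native transaction support (illustrative data structures only).

def reverse_ops(applied, previous):
    """Return operations that undo 'applied', in reverse order.

    'previous' maps each path to its value before the change.
    """
    undo = []
    for op in reversed(applied):
        kind, path = op[0], op[1]
        if kind == "create":
            undo.append(("delete", path))
        elif kind == "delete":
            undo.append(("create", path, previous[path]))
        else:  # "set"
            undo.append(("set", path, previous[path]))
    return undo

applied = [("create", "/acl/10"), ("set", "/if/Gi0/vlan", 200)]
previous = {"/if/Gi0/vlan": 100}
print(reverse_ops(applied, previous))
# -> [('set', '/if/Gi0/vlan', 100), ('delete', '/acl/10')]
```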
Another complication with generic NEDs is how the NED class shall authenticate towards the managed device. This depends entirely on the protocol between the NED class and the managed device. If SSH is used to a proprietary CLI, the existing authgroup structure in NSO can be used as is. However, if some other authentication data is needed, it is up to the generic NED implementer to augment the authgroups in tailf-ncs.yang accordingly.
We must also configure a managed device, indicating that its configuration is handled by a specific generic NED. Below we see that the NED with identity xmlrpc is handling this device.
The example examples.ncs/generic-ned/xmlrpc-device in the NSO examples collection implements a generic NED that speaks XML-RPC to three HTTP servers. The HTTP servers run the Apache XML-RPC server code, and the NED code manipulates the three HTTP servers using a number of predefined XML-RPC calls.
A good starting point when we wish to implement a new generic NED is the ncs-make-package --generic-ned-skeleton ... command, which is used to generate a skeleton package for a generic NED.
A generic NED always requires more work than a CLI NED. The generic NED needs to know how to map arrays of NedEditOp objects into the equivalent reconfiguration operations on the device. Depending on the protocol and configuration capabilities of the device, this may be arbitrarily difficult.
Regardless of the device, we must always write a YANG model that describes the device. The array of NedEditOp objects that the generic NED code gets exposed to is relative to the YANG model that we have written for the device. Again, this model doesn't necessarily have to cover all aspects of the device.
Often a useful technique with generic NEDs can be to write a pyang plugin to generate code for the generic NED. Again, depending on the device it may be possible to generate Java code from a pyang plugin that covers most or all aspects of mapping an array of NedEditOp objects into the equivalent reconfiguration commands for the device.
Pyang is an extensible and open-source YANG parser (written by Tail-f) available at http://www.yang-central.org. pyang is also part of the NSO release. A number of plugins are shipped in the NSO release, for example $NCS_DIR/lib/pyang/pyang/plugins/tree.py is a good plugin to start with if we wish to write our own plugin.
$NCS_DIR/examples.ncs/generic-ned/xmlrpc-device is a good example to start with if we wish to write a generic NED. It manages a set of devices over the XML-RPC protocol. In this example, we have:
Defined a fictitious YANG model for the device.
Implemented an XML-RPC server exporting a set of RPCs to manipulate that fictitious data model. The XML-RPC server runs the Apache org.apache.xmlrpc.server.XmlRpcServer Java package.
Implemented a Generic NED which acts as an XML-RPC client speaking HTTP to the XML-RPC servers.
The example is self-contained, and we can, using the NED code, manipulate these XML-RPC servers in a manner similar to all other managed devices.
NedEditOp Objects
As mentioned earlier, the NedEditOp objects are relative to the YANG model of the device and must be translated into the equivalent reconfiguration operations on the device. Applying reconfiguration operations may only be valid in a certain order.
For Generic NEDs, NSO provides a feature to ensure dependency rules are being obeyed when generating a diff to commit. It controls the order of operations delivered in the NedEditOp array. The feature is activated by adding the following option to package-meta-data.xml:
When the ordered-diff flag is set, the NedEditOp objects follow YANG schema order and consider dependencies between leaf nodes. Dependencies can be defined using leafrefs and the tailf:cli-diff-after, tailf:cli-diff-create-after, tailf:cli-diff-modify-after, tailf:cli-diff-set-after, tailf:cli-diff-delete-after YANG extensions. Read more about the above YANG extensions in the Tail-f CLI YANG extensions man page.
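The effect of such dependency ordering can be sketched with a topological sort (illustrative only; the paths and dependencies below are invented):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Invented example: each path maps to the set of paths whose operations
# must be applied first (the kind of constraint that leafrefs or the
# tailf:cli-diff-*-after extensions express).
deps = {
    "/interfaces/Gi0": set(),
    "/routing/ospf": {"/interfaces/Gi0"},   # references the interface
    "/qos/policy": {"/interfaces/Gi0"},
}

# A valid emission order places every node after its dependencies,
# which is what the ordered NedEditOp array guarantees.
order = list(TopologicalSorter(deps).static_order())
print(order)
```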
A device we wish to manage using a NED usually has not only configuration data we want to manipulate from NSO; it typically also has a set of commands that do not relate to configuration.
The device commands we wish to invoke from NSO must be modeled as actions and compiled using a special ncsc command for NED data models that do not directly relate to configuration data on the device.
The NSO example $NCS_DIR/examples.ncs/generic-ned/xmlrpc-device contains an example where the managed device, a fictitious XML-RPC device, contains a YANG snippet:
When that action YANG is imported into NSO, it ends up under the managed device. We can invoke the action on the device as:
The NED code is obviously involved here. All NEDs must always implement:
The command() method gets invoked in the NED, the code must then execute the command. The input parameters in the params parameter correspond to the data provided in the action. The command() method must reply with another array of ConfXMLParam objects.
The above code is fake; on a real device, the job of the command() method is to establish a connection to the device, invoke the command, parse the output, and finally reply with a ConfXMLParam array.
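As an illustration of the same contract in Python (the real method is Java and uses ConfXMLParam arrays; the session API and ping-output parsing here are invented):

```python
# Hypothetical stand-in for command(): dispatch a modeled action, run it
# on the device, parse the output, and return structured results. The
# device I/O and the ping command details are invented for illustration.
def command(session, name, params):
    if name == "ping":
        host = dict(params).get("host", "")
        raw = session.send(f"ping {host}")      # device-specific I/O
        ok = "0% packet loss" in raw
        return [("result", "success" if ok else "failure")]
    raise ValueError(f"unsupported command: {name}")

class FakeSession:                              # test double for the device
    def send(self, line):
        return "5 packets transmitted, 5 received, 0% packet loss"

print(command(FakeSession(), "ping", [("host", "10.0.0.1")]))
# -> [('result', 'success')]
```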
The purpose of implementing NED commands is usually that we want to expose device commands to the programmatic APIs in the NSO DOM tree.
Tools to view NSO status and perform specialized tasks.
The Tools view offers individual utilities that you can use to run specific tasks on your deployment, such as running compliance reports, etc.
The following tools are available:
Insights: Gathers and displays useful statistics of your deployment.
Package upgrade: Used to perform upgrades to the packages running in NSO.
High Availability: Used to manage a High Availability (HA) setup in your deployment.
Alarm manager: Shows current alarms/events in your deployment and provides options to manage them.
Commit manager: Shortcut to the Commit Manager.
Compliance reporting: Used to run compliance checks on your NSO network.
The Insights view uses the /ncs:metrics data model to collect and display the following types of operational statistics:
Real-time data about transactions, commit queues, and northbound sessions.
Sessions created and closed towards northbound interfaces since the last restart (CLI, JSON-RPC, NETCONF, RESTCONF, SNMP).
Transactions since last restart (committed, aborted, and conflicting). You can select between the running and operational data stores.
Devices and their sync statuses.
In the Package upgrade view, you can load custom packages in NSO.
The Reload button on the Packages pane is the equivalent of the packages reload command in the CLI. Read more about the reload action in the NSO documentation.
In the Package upgrade view, click Browse files to select a new package (.tar or .tar.gz) from your local disk.
Click Upload. The package becomes visible under the Available pane. (The Progress Trace shows the real-time progress of the upload).
Click Install.
To uninstall a package, simply click Deinstall next to the package in the Loaded packages list.
The High Availability view is used to visualize your HA setup (rule-based or Raft).
Actions can be performed on the cluster using the Configuration editor -> Actions tab. Possible actions are further described in the High Availability documentation.
The Alarm manager view displays current alarms in the system. The alarms are categorized as critical, major, and minor, and can be filtered by device.
You can run actions on an alarm by selecting it and using the Run action button.
The Commit manager displays notifications about commits pending to be approved. Any time a change (a transaction) is made in NSO, the Commit Manager displays a notification to review the change. You can then choose to confirm or revert the commit.
Transactions and Commits
Take special note of the Commit Manager. Whenever a transaction has started, the active configuration data changes can be inspected and evaluated before they are committed and pushed to the network. The data is saved to the NSO datastore and pushed to the network when a user presses Commit.
Any network-wide configuration change can be picked up as a rollback file. The rollback can then be applied to undo whatever happened to the network.
Access the Commit Manager by clicking its icon in the banner.
Review the available changes appearing as Current transaction. If there are errors in the change, the Commit Manager alerts you and suggests possible corrections. You can then fix them and press Re-validate to clear the errors.
Click Revert to undo or Commit to confirm the changes in the transaction.
Start a transaction to load or save configuration data using the Load/Save option which you can then review for commit. The following tabs are available:
Rollback, to load data that reverts an earlier change.
Files, to load data from a local file on your disk.
Paste, to load data by pasting it in.
Save, to save loaded data to a file on your local disk.
In the Commit manager view, the following tabs are shown.
changes tab, to list the changes and actions done in the system, e.g., deleting a device or changing its properties.
errors tab, to list the errors encountered while doing changes. You can review the errors, make changes, and revalidate the error using the Re-validate option.
warnings tab, to list the warnings encountered while doing changes.
The Compliance reporting view is used to create and run compliance reports to check the current situation, check historical events, or both.
The recommended and preferred way of running the compliance reports is through the Web UI.
The following main tabs are available in this view:
Compliance reports, to create, run, manage, and view existing compliance reports.
Report results, to view report results and compliance report status.
In the Compliance reporting view, click Add list item.
In the New report pop-up, enter the report name and confirm.
Next, set up the compliance report using the following tabs. For a more detailed description of Compliance Reporting concepts and related configuration options, see the Compliance Reporting documentation.
In the Compliance reports tab, click the Run button on the desired report.
Specify the following:
Report title
Historical time interval. The report runs with the maximum possible interval if you do not specify an interval.
The report's results, available from the Report results tab, show if the report was compliant or has violations. Click Show details to fetch additional details.
Explore NSO contents after finishing the installation.
Applies to Local Install.
Before starting NSO, it is recommended to explore the installation contents.
Navigate to the newly created Installation Directory, for example:
The installation directory includes the following contents:
Along with the binaries, NSO installs a full set of documentation available in the doc/ folder in the Installation Directory. An online version of the documentation is also available.
Open index.html in your browser to explore further.
Local Install comes with a rich set of examples to start using NSO.
In order to communicate with the network, NSO uses NEDs as device drivers for different device types. Cisco has NEDs for hundreds of different devices available for customers, and several are included in the installer in the /packages/neds directory.
In the example below, NEDs for Cisco ASA, IOS, IOS XR, and NX-OS are shown. Also included are NEDs for other vendors including Juniper JunOS, A10, ALU, and Dell.
A large number of pre-built, supported NEDs are available, which customers can acquire and download from Cisco. Note that the specific file names and versions that you download may differ from the ones in this guide; remember to update the paths accordingly.
Like the NSO installer, the NEDs are signed.bin files that need to be run to validate the download and extract the new code.
To install new NEDs:
Change to the working directory where your downloads are. The filenames indicate which version of NSO the NEDs are pre-compiled for (in this case NSO 6.0), and the version of the NED. An example output is shown below.
Use the sh command to run signed.bin to verify the certificate and extract the NED tar.gz and other files. Repeat for all files. An example output is shown below.
You now have three tar (.tar.gz) files. These are compressed versions of the NEDs. List the files to verify as shown in the example below.
The last thing to note is the files ncsrc and ncsrc.tcsh. These are shell scripts for bash and tcsh that set up your PATH and other environment variables for NSO. Depending on your shell, you need to source one of these files before starting NSO.
For more information on sourcing shell scripts, see the documentation for your shell.
Description of SNMP agent.
The SNMP agent in NSO is used mainly for monitoring and notifications. It supports SNMPv1, SNMPv2c, and SNMPv3.
The following standard MIBs are supported by the SNMP agent:
SNMPv2-MIB
SNMP-FRAMEWORK-MIB
SNMP-USER-BASED-SM-MIB
Learn about different transaction locks in NSO and their interactions.
This section explains the different locks that exist in NSO and how they interact. To understand how the different locks fit into the picture, it is important to understand the architecture of NSO with its management backplane, and the transaction state machine.
The NSO management backplane keeps a lock on the datastore running. This lock is usually referred to as the global lock and it provides a mechanism to grant exclusive access to the datastore.
The global lock is the only lock that can be explicitly taken through a northbound agent, for example by the NETCONF <lock> operation, or by calling Maapi.lock().
$ cd ~/nso-6.0
Navigate to the packages/neds directory for your Local Install, for example:
In the /packages/neds directory, extract the .tar files into this directory using the tar command with the path to where the compressed NED is located. An example is shown below.
Here is a sample list of the newer NEDs extracted along with the ones bundled with the installation:
A global lock can be taken for the whole datastore, or it can be a partial lock (for a subset of the data model). Partial locks are exposed through NETCONF and MAAPI and are only supported for operations toward the running datastore.
An agent can request a global lock to ensure that it has exclusive write access. When a global lock is held by an agent, it is not possible for anyone else to write to the datastore that the lock guards - this is enforced by the transaction engine. A global lock on running is granted to an agent if there are no other holders of it (including partial locks) and if all data providers approve the lock request. Each data provider (CDB and/or external data providers) will have its lock() callback invoked to get a chance to refuse or accept the lock. The output of ncs --status includes locking status: for each user session, any locks held per datastore are listed.
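The grant logic can be sketched as follows (a simplified model with an invented approve_lock() hook; the real mechanism is the data provider lock() callback):

```python
# Simplified model of granting the global lock on the running datastore:
# refused if any global or partial lock is held, and every data provider
# may veto it. The approve_lock() API is invented for illustration.
class Datastore:
    def __init__(self, providers):
        self.global_holder = None
        self.partial_holders = set()
        self.providers = providers

    def request_global_lock(self, session):
        if self.global_holder is not None or self.partial_holders:
            return False                    # some lock is already held
        if not all(p.approve_lock(session) for p in self.providers):
            return False                    # a data provider refused
        self.global_holder = session
        return True

class Cdb:
    def approve_lock(self, session):        # CDB normally accepts
        return True

ds = Datastore([Cdb()])
assert ds.request_global_lock("session-1") is True
assert ds.request_global_lock("session-2") is False   # already locked
```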
A northbound agent starts a user session towards NSO's management backplane. Each user session can then start multiple transactions. A transaction is either read/write or read-only.
The transaction engine has its internal locks towards the running datastore. These transaction locks exist to serialize configuration updates towards the datastore and are separate from the global locks.
When a northbound agent wants to update the running datastore with a new configuration, it implicitly grabs and releases the transactional lock. The transaction engine takes care of managing the locks as it moves through the transaction state machine, and there is no API that exposes the transactional locks to the northbound agents.
When the transaction engine wants to take a lock for a transaction (for example when entering the validate state), it first checks that no other transaction has the lock. Then it checks that no user session has a global lock on that datastore. Finally, each data provider is invoked by its transLock() callback.
In contrast to the implicit transactional locks, some northbound agents expose explicit access to the global locks. This is done a bit differently by each agent.
The management API exposes the global locks through the Maapi.lock() and Maapi.unlock() methods (and the corresponding Maapi.lockPartial() and Maapi.unlockPartial() methods for partial locking). Once a user session is established (or attached to), these functions can be called.
In the CLI, the global locks are taken when entering different configure modes as follows:
config exclusive: The running datastore global lock will be taken.
config terminal: Does not grab any locks.
The global lock is then kept by the CLI until the configure mode is exited.
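A hypothetical CLI session illustrating this (the exact banner text is an assumption and may differ between NSO versions):

```
admin@ncs# config exclusive
Entering configuration mode exclusive
admin@ncs(config)# exit
admin@ncs#
```

While the first session is in exclusive mode, write attempts from other sessions towards running are rejected; exiting configure mode releases the global lock.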
The Web UI behaves in the same way as the CLI (it presents three edit tabs called Edit private, Edit exclusive, and which correspond to the CLI modes described above).
The NETCONF agent translates the <lock> operation into a request for the global lock for the requested datastore. Partial locks are also exposed through the partial-lock RPC.
An external data provider is not required to implement the lock() and unlock() callbacks. NSO will never try to initiate the transLock() state transition (see the transaction state diagram in Package Development) towards a data provider while a global lock is taken - so the reason for a data provider to implement the locking callbacks is when some other party can write to (or lock, for example to take a backup of) the data provider's database.
CDB ignores the lock() and unlock() callbacks (since the data-provider interface is the only write interface towards it).
CDB has its own internal locks on the database. The running datastore has a single write and multiple read locks. It is not possible to grab the write-lock on a datastore while there are active read-locks on it. The locks in CDB exist to make sure that a reader always gets a consistent view of the data (in particular it becomes very confusing if another user is able to delete configuration nodes in between calls to getNext() on YANG list entries).
During a transaction, transLock() takes a CDB read-lock on the transaction's datastore, and writeStart() tries to release the read-lock and grab the write-lock instead.
A CDB external reader client implicitly takes a CDB read-lock between Cdb.startSession() and Cdb.endSession(). This means that while a CDB client is reading, a transaction cannot pass through writeStart() (and conversely, a CDB reader cannot start while a transaction is in between writeStart() and commit() or abort()).
The Operational store in CDB does not have any locks. NSO's transaction engine can only read from it, and the CDB client writes are atomic per write operation.
When a session tries to modify a data store that is locked in some way, it will fail. For example, the CLI might print:
Since some of the locks are short-lived (such as a CDB read-lock), NSO is by default configured to retry the failing operation for a short period of time. If the datastore is still locked after this time, the operation fails.
To configure this, set /ncs-config/commit-retry-timeout in ncs.conf.
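A sketch of the corresponding ncs.conf fragment; the 30s value is an assumed example, and the exact value syntax follows the ncs.conf(5) man page:

```xml
<ncs-config xmlns="http://tail-f.com/yang/tailf-ncs-config">
  <commit-retry-timeout>30s</commit-retry-timeout>
</ncs-config>
```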
admin@ncs# show running-config devices device x1
address 127.0.0.1
port 12023
authgroup default
device-type generic ned-id xmlrpc
state admin-state unlocked
...
$ ncs-make-package --generic-ned-skeleton abc --build
$ ncs-setup --ned-package abc --dest ncs
$ cd ncs
$ ncs -c ncs.conf
$ ncs_cli -C -u admin
admin@ncs# show packages package abc
packages package abc
package-version 1.0
description "Skeleton for a generic NED"
ncs-min-version [ 3.3 ]
component MyDevice
callback java-class-name [ com.example.abc.abcNed ]
ned generic ned-id abc
ned device vendor "Acme abc"
...
oper-status up
$ cd $NCS_DIR/generic-ned/xmlrpc-device
$ make all start
$ ncs_cli -C -u admin
admin@ncs# devices sync-from
sync-result {
device r1
result true
}
sync-result {
device r2
result true
}
sync-result {
device r3
result true
}
admin@ncs# show running-config devices r1 config
ios:interface eth0
macaddr 84:2b:2b:9e:af:0a
ipv4-address 192.168.1.129
ipv4-mask 255.255.255.0
status Up
mtu 1500
alias 0
ipv4-address 192.168.1.130
ipv4-mask 255.255.255.0
!
alias 1
ipv4-address 192.168.1.131
ipv4-mask 255.255.255.0
!
speed 100
txqueuelen 1000
!
<option>
<name>ordered-diff</name>
</option>
container commands {
tailf:action idle-timeout {
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
leaf time {
type int32;
}
}
output {
leaf result {
type string;
}
}
}
}
admin@ncs# devices device r1 config ios:commands idle-timeout time 55
result OK
void command(NedWorker w, String cmdName, ConfXMLParam[] params)
    throws NedException, IOException;
public void command(NedWorker worker, String cmdname, ConfXMLParam[] p)
throws NedException, IOException {
session.setTracer(worker);
if (cmdname.compareTo("idle-timeout") == 0) {
worker.commandResponse(new ConfXMLParam[]{
new ConfXMLParamValue(new interfaces(),
"result",
new ConfBuf("OK"))
});
}
cd ~/nso-6.0/packages/neds
tar -zxvf ~/Downloads/ncs-6.0-cisco-nx-5.13.1.1.tar.gz
tar -zxvf ~/Downloads/ncs-6.0-cisco-ios-6.42.1.tar.gz
tar -zxvf ~/Downloads/ncs-6.0-cisco-asa-6.7.7.tar.gz
drwxr-xr-x 13 user staff 416 Nov 29 05:17 a10-acos-cli-3.0
drwxr-xr-x 12 user staff 384 Nov 29 05:17 alu-sr-cli-3.4
drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-asa-cli-6.6
drwxr-xr-x 13 user staff 416 Dec 12 21:11 cisco-asa-cli-6.7
drwxr-xr-x 12 user staff 384 Nov 29 05:17 cisco-ios-cli-3.0
drwxr-xr-x 12 user staff 384 Nov 29 05:17 cisco-ios-cli-3.8
drwxr-xr-x 13 user staff 416 Dec 13 22:58 cisco-ios-cli-6.42
drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-iosxr-cli-3.0
drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-iosxr-cli-3.5
drwxr-xr-x 13 user staff 416 Nov 29 05:17 cisco-nx-cli-3.0
drwxr-xr-x 14 user staff 448 Dec 18 09:09 cisco-nx-cli-5.13
drwxr-xr-x 13 user staff 416 Nov 29 05:17 dell-ftos-cli-3.0
drwxr-xr-x 10 user staff 320 Nov 29 05:17 juniper-junos-nc-3.0ls -l doc/
drwxr-xr-x 5 user staff 160B Nov 29 05:19 api/
drwxr-xr-x 14 user staff 448B Nov 29 05:19 html/
-rw-r--r-- 1 user staff 202B Nov 29 05:19 index.html
drwxr-xr-x 17 user staff 544B Nov 29 05:19 pdf/
$ ls -1 examples.ncs/
README
crypto
datacenter
development-guide
generic-ned
getting-started
misc
service-provider
snmp-ned
snmp-notification-receiver
web-server-farm
web-ui
$ ls -1 packages/neds
a10-acos-cli-3.0
alu-sr-cli-3.4
cisco-asa-cli-6.6
cisco-ios-cli-3.0
cisco-ios-cli-3.8
cisco-iosxr-cli-3.0
cisco-iosxr-cli-3.5
cisco-nx-cli-3.0
dell-ftos-cli-3.0
juniper-junos-nc-3.0
cd ~/Downloads/
ls -l ncs*.bin
# Output
-rw-r--r--@ 1 user staff 9708091 Dec 18 12:05 ncs-6.0-cisco-asa-6.7.7.signed.bin
-rw-r--r--@ 1 user staff 51233042 Dec 18 12:06 ncs-6.0-cisco-ios-6.42.1.signed.bin
-rw-r--r--@ 1 user staff 8400190 Dec 18 12:05 ncs-6.0-cisco-nx-5.13.1.1.signed.bin
sh ncs-6.0-cisco-nx-5.13.1.1.signed.bin
Unpacking...
Verifying signature...
Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
Successfully downloaded and verified crcam2.cer.
Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
Successfully downloaded and verified innerspace.cer.
Successfully verified root, subca and end-entity certificate chain.
Successfully fetched a public key from tailf.cer.
Successfully verified the signature of ncs-6.0-cisco-nx-5.13.1.1.tar.gz using tailf.cer
ls -l ncs*.tar.gz
-rw-r--r-- 1 user staff 9704896 Dec 12 21:11 ncs-6.0-cisco-asa-6.7.7.tar.gz
-rw-r--r-- 1 user staff 51260488 Dec 13 22:58 ncs-6.0-cisco-ios-6.42.1.tar.gz
-rw-r--r-- 1 user staff 8409288 Dec 18 09:09 ncs-6.0-cisco-nx-5.13.1.1.tar.gz
admin@ncs(config)# commit
Aborted: the configuration database is locked
CDB info about its size, compaction, etc.
On the Loaded pane, click Reload and confirm the intent.
Commit Options: When committing a transaction, you have the possibility to choose Commit options and perform a commit with the specified commit option(s). Examples of commit options are: No revision drop, No deploy, No networking, etc. Commit options are described in detail in the JSON-RPC API documentation under Methods - transaction - commit changes.
native config tab, to list the device configuration data in the native config.
commit queue tab, to manage commit queues. See Commit Queue for more information.
Report name: Displays the report name and allows editing of the report name.
Devices tab: to configure device compliance checks. Configuration options include:
Current out of sync: Check the device's current status and report if the device is in sync or out of sync. Possible values are true (yes, request a check-sync) and false (no, do not request a check-sync).
Historic changes: Include or exclude previous changes to devices using the commit log. Possible values are true (yes, include), and false (no, exclude).
Device choice: Include All devices or only Some devices. If Some devices is selected, specify the devices using an XPath expression, device groups, or devices.
Compliance templates: If a compliance template should be used to check for compliance. See the section called .
Services tab: to configure service compliance checks. Configuration options include:
Current out of sync: Check the service's current status and report if the service is in sync or out of sync. Possible values are true (yes, request a check-sync) and false (no, do not request a check-sync).
Historic changes: Include or exclude previous changes to services using the commit log. Possible values are true (yes, include), and false (no, exclude).
Service choice: Include All services or only Some services. If Some services is selected, specify the services using an XPath expression or service instances.
Click Save when the report setup is complete.
Click Run report.



SNMP-VIEW-BASED-ACM-MIB RFC 3415
SNMP-COMMUNITY-MIB RFC 3584
SNMP-TARGET-MIB and SNMP-NOTIFICATION-MIB RFC 3413
SNMP-MPD-MIB RFC 3412
TRANSPORT-ADDRESS-MIB RFC 3419
SNMP-USM-AES-MIB RFC 3826
IPV6-TC RFC 2465
The SNMP agent is configured through any of the normal NSO northbound interfaces. It is possible to control most aspects of the agent through, for example, the CLI.
The YANG models describing all configuration capabilities of the SNMP agent reside under $NCS_DIR/src/ncs/snmp/snmp-agent-config/*.yang in the NSO distribution.
An example session configuring the SNMP agent through the CLI may look like:
The SNMP agent configuration data is stored in CDB as any other configuration data, but is handled as a transformation between the data shown above and the data stored in the standard MIBs.
If you want a default configuration of the SNMP agent, you must provide it in an XML file. The initialization data of the SNMP agent is stored in an XML file that has precisely the same format as CDB initialization XML files, but it is not loaded by CDB; instead, it is loaded at first startup by the SNMP agent. The XML file must be called snmp_init.xml and it must reside in the load path of NSO. The NSO distribution ships such an initialization file in $NCS_DIR/etc/ncs/snmp/snmp_init.xml. It is strongly recommended to customize this file with another engine ID, other community strings, and other v3 users.
If no snmp_init.xml file is found in the load path, a default configuration with the agent disabled is loaded. Thus, the easiest way to start NSO without the SNMP agent is to ensure that the directory $NCS_DIR/etc/ncs/snmp/ is not part of the NSO load path.
Note that this only applies to initialization the first time NSO is started. On subsequent starts, all the SNMP agent configuration data is stored in CDB, and snmp_init.xml is never used again.
The NSO SNMP alarm MIB is designed for ease of use in alarm systems. It defines a table of alarms and SNMP alarm notifications corresponding to alarm state changes. Based on the alarm model in NSO (see NSO Alarms), the notifications as well as the alarm table contain the parameters that are required for alarm standards compliance (X.733 and 3GPP). The MIB files are located in $NCS_DIR/src/ncs/snmp/mibs.
TAILF-TOP-MIB.mib The tail-f enterprise OID.
TAILF-TC-MIB.mib Textual conventions for the alarm MIB.
TAILF-ALARM-MIB.mib The actual alarm MIB.
IANA-ITU-ALARM-TC-MIB.mib Import of IETF mapping of X.733 parameters.
ITU-ALARM-TC-MIB.mib Import of IETF mapping of X.733 parameters.
The alarm table has the following columns:
tfAlarmIndex An imaginary index for the alarm row that is persistent between restarts.
tfAlarmType This provides an identification of the alarm type and together with tfAlarmSpecificProblem forms a unique identification of the alarm.
tfAlarmDevice The alarming network device - can be NSO itself.
tfAlarmObject The alarming object within the device.
tfAlarmObjectOID In case the original alarm notification was an SNMP notification this column identifies the alarming SNMP object.
tfAlarmObjectStr Name of alarm object based on any other naming.
tfAlarmSpecificProblem This object is used when the 'tfAlarmType' object cannot uniquely identify the alarm type.
tfAlarmEventType The event type according to X.733 and based on the mapping of the alarm type in the NSO alarm model.
tfAlarmProbableCause The probable cause according to X.733, based on the mapping of the alarm type in the NSO alarm model. Note that you can configure this to match the probable cause values in the receiving alarm system.
tfAlarmOrigTime The time for the first occurrence of this alarm.
tfAlarmTime The time for the last state change of this alarm.
tfAlarmSeverity The latest severity (non-clear) reported for this alarm.
tfAlarmCleared Boolean indicating whether the latest state change reports a clear.
tfAlarmText The latest alarm text.
tfAlarmOperatorState The latest operator alarm state such as ack.
tfAlarmOperatorNote The latest operator note.
The MIB defines separate notifications for every severity level to support SNMP managers that only can map severity levels to individual notifications. Every notification contains the parameters of the alarm table.
Alarm managers should subscribe to the notifications and read the alarm table to synchronize the alarm list. To do this, you need an access view that matches the alarm MIB and an SNMP target. Default SNMP settings in NSO let you read the alarm MIB with v2c and community public. A target is set up in the following way (assuming the SNMP alarm manager has IP address 192.168.1.1 and wants community string public in the v2c notifications):
Perform package management tasks.
All user code that needs to run in NSO must be part of a package. A package is basically a directory of files with a fixed file structure or a tar archive with the same directory layout. A package consists of code, YANG modules, etc., that are needed to add an application or function to NSO. Packages are a controlled way to manage loading and versions of custom applications.
Network Element Drivers (NEDs) are also packages. Each NED allows NSO to manage a network device of a specific type. Except for third-party YANG NED packages which do not contain a YANG device model by default (and must be downloaded and fixed before adding to the package), a NED typically contains a device YANG model and the code, specifying how NSO should connect to the device. For NETCONF devices, NSO includes built-in tools to help you build a NED, as described in NED Administration, that you can use if needed. Otherwise, a third-party YANG NED, if available, should be used instead. Vendors, in some cases, provide the required YANG device models but not the entire NED. In practice, all NSO instances use at least one NED. The set of used NED packages depends on the number of different device types the NSO manages.
When NSO starts, it searches for packages to load. The ncs.conf parameter /ncs-config/load-path defines a list of directories. At initial startup, NSO searches these directories for packages and copies the packages to a private directory tree in the directory defined by the /ncs-config/state-dir parameter in ncs.conf, and loads and starts all the packages found. On subsequent startups, NSO will by default only load and start the copied packages. The purpose of this procedure is to make it possible to reliably load new or updated packages while NSO is running, with a fallback to the previously existing version of the packages if the reload should fail.
In a System Install of NSO, packages are always installed (normally through symbolic links) in the packages subdirectory of the run directory, i.e. by default /var/opt/ncs/packages, and the private directory tree is created in the state subdirectory, i.e. by default /var/opt/ncs/state.
Loading of new or updated packages (as well as removal of packages that should no longer be used) can be requested via the reload action - from the NSO CLI:
This request makes NSO copy all packages found in the load path to a temporary version of its private directory, and load the packages from this directory. If the loading is successful, this temporary directory will be made permanent, otherwise, the temporary directory is removed and NSO continues to use the previous version of the packages. Thus when updating packages, always update the version in the load path, and request that NSO does the reload via this action.
If the package changes include modified, added, or deleted .fxs files or .ccl files, NSO needs to run a data model upgrade procedure, also called a CDB upgrade. NSO provides a dry-run option to packages reload action to test the upgrade without committing the changes. Using a reload dry-run, you can tell if a CDB upgrade is needed or not.
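A sketch of such a test run from the CLI:

```
admin@ncs# packages reload dry-run
```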
The report all-schema-changes option of the reload action instructs NSO to produce a report of how the current data model schema is being changed. Combined with a dry run, the report allows you to verify the modifications introduced with the new versions of the packages before actually performing the upgrade.
For a data model upgrade, including a dry run, all transactions must be closed. In particular, users having CLI sessions in configure mode must exit to operational mode. If there are ongoing commit queue items, and the wait-commit-queue-empty parameter is supplied, it will wait for the items to finish before proceeding with the reload. During this time, it will not allow the creation of any new transactions. Hence, if one of the queue items fails with rollback-on-error option set, the commit queue's rollback will also fail, and the queue item will be locked. In this case, the reload will be canceled. A manual investigation of the failure is needed in order to proceed with the reload.
While the data model upgrade is in progress, all transactions are closed and new transactions are not allowed. This means that starting a new management session, such as a CLI or SSH connection to the NSO, will also fail, producing an error that the node is in upgrade mode.
By default, the reload action will (when needed) wait up to 10 seconds for open transactions to close, and for the commit queue to empty if the wait-commit-queue-empty parameter is entered, before the reload starts.
If there are still open transactions at the end of this period, the upgrade will be canceled and the reload operation will fail. The max-wait-time and timeout-action parameters to the action can modify this behavior. For example, to wait for up to 30 seconds, and forcibly terminate any transactions that still remain open after this period, we can invoke the action as:
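A sketch of such an invocation (assuming the parameter names given above; timeout-action kill is assumed to be the value that forcibly terminates the remaining transactions):

```
admin@ncs# packages reload max-wait-time 30 timeout-action kill
```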
Thus the default values for these parameters are 10 and fail, respectively. In case there are no changes to .fxs or .ccl files, the reload can be carried out without the data model upgrade procedure, and these parameters are ignored since there is no need to close open transactions.
When reloading packages, NSO will give a warning when the upgrade looks suspicious, i.e., may break some functionality. Note that this is not a strict upgrade validation, but only intended as a hint to the NSO administrator early in the upgrade process that something might be wrong. Currently, the following scenarios will trigger the warnings:
One or more namespaces are removed by the upgrade. The consequence of this is all data belonging to this namespace is permanently deleted from CDB upon upgrade. This may be intended in some scenarios, in which case it is advised to proceed with overriding warnings as described below.
There are source .java files found in the package, but no matching .class files in the jars loaded by NSO. This likely means that the package has not been compiled.
There are matching .class files with modification time older than the source files, which hints that the source has been modified since the last time the package was compiled. This likely means that the package was not re-compiled the last time the source code was changed.
If a warning has been triggered it is a strong recommendation to fix the root cause. If all of the warnings are intended, it is possible to proceed with packages reload force command.
In some specific situations, upgrading a package with newly added custom validation points in the data model may produce an error similar to no registration found for callpoint NEW-VALIDATION/validate or simply application communication failure, resulting in an aborted upgrade. See on how to proceed.
In some cases, we may want NSO to do the same operation as the reload action at NSO startup, i.e. copy all packages from the load path before loading, even though the private directory copy already exists. This can be achieved in the following ways:
Setting the shell environment variable $NCS_RELOAD_PACKAGES to true. This will make NSO do the copy from the load path on every startup, as long as the environment variable is set. In a System Install, NSO is typically started as a systemd system service, and NCS_RELOAD_PACKAGES=true can be set in /etc/ncs/ncs.systemd.conf temporarily to reload the packages.
Giving the option --with-package-reload to the ncs command when starting NSO.
Always use one of these methods when upgrading to a new version of NSO in an existing directory structure, to make sure that new packages are loaded together with the other parts of the new system.
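The two methods above can be sketched as follows, for a System Install managed by systemd (paths and the service name are assumptions; adjust them to your deployment):

```bash
# Method 1: set the environment variable for the service, then restart.
# In /etc/ncs/ncs.systemd.conf, add:
#   NCS_RELOAD_PACKAGES=true
systemctl restart ncs
# Remember to remove the variable again afterwards, or packages are
# re-copied from the load path on every subsequent start.

# Method 2: start the daemon manually with the one-off option.
ncs --with-package-reload
```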
If it is known in advance that there were no data model changes, i.e. none of the .fxs or .ccl files changed, and none of the shared JARs changed in a Java package, and the declaration of the components in the package-meta-data.xml is unchanged, then it is possible to do a lightweight package upgrade, called package redeploy. Package redeploy only loads the specified package, unlike packages reload which loads all of the packages found in the load-path.
Redeploying a package allows you to load new or reload updated templates, reload private JARs for a Java package, or reload the Python code that is part of the package. Only the changed parts of the package are reloaded, e.g. if there were no changes to Python code but only to templates, then the Python VM is not restarted and only the templates are reloaded. The upgrade is not seamless, however, as the old templates are unloaded for a short while before the new ones are loaded, so any user of the template during this period will fail; the same applies to changed Java or Python code. It is hence the responsibility of the user to make sure that the services or other code provided by the package is unused while it is being redeployed.
The package redeploy will return true if the package's resulting status after the redeploy is up. Consequently, if the result of the action is false, then it is advised to check the operational status of the package in the package list.
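A sketch of the invocation and a successful result (<PKG> stands for the package name, as elsewhere in this section):

```
admin@ncs# packages package <PKG> redeploy
result true
```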
Unlike a full packages reload operation, new NED packages can be loaded into the system without disrupting existing transactions. This is only possible for new packages, since these packages don't yet have any instance data.
The operation is performed through the /packages/add action. No additional input is necessary. The operation scans all the load paths for any new NED packages and also verifies that the existing packages are still present. If packages are modified or deleted, the operation will fail.
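A sketch of the CLI invocation:

```
admin@ncs# packages add
```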
Each NED package defines ned-id, an identifier that is used in selecting the NED for each managed device. A new NED package is therefore a package with a ned-id value that is not already in use.
In addition, the system imposes some additional constraints, so it is not always possible to add just any arbitrary NED. In particular, NED packages can also contain one or more shared data models, such as NED settings or operational data for private use by the NED, that are not specific to each version of NED package but rather shared between all versions. These are typically placed outside any mount point (device-specific data model), extending the NSO schema directly. So, if a NED defines schema nodes outside any mount point, there must be no changes to these nodes if they already exist.
Adding a NED package with a modified shared data model is therefore not allowed and all shared data models are verified to be identical before a NED package can be added. If they are not, the /packages/add action will fail and you will have to use the /packages/reload command.
The command returns true if the package's resulting status after deployment is up. Likewise, if the result for a package is false, then the package was added but its code has not started successfully and you should check the operational status of the package with the show packages package <PKG> oper-status command for additional information. You may then use the /packages/package/redeploy action to retry deploying the package's code, once you have corrected the error.
In a System Install of NSO, management of pre-built packages is supported through a number of actions. This support is not available in a Local Install, since it is dependent on the directory structure created by the System Install. Please refer to the YANG submodule $NCS_DIR/src/ncs/yang/tailf-ncs-software.yang for the full details of the functionality described in this section.
Actions are provided to list local packages, to fetch packages from the file system, and to install or deinstall packages:
software packages list [...]: List local packages, categorized into loaded, installed, and installable. The listing can be restricted to only one of the categories - otherwise, each package listed will include the category for the package.
software packages fetch package-from-file <file>: Fetch a package by copying it from the file system, making it installable.
software packages install package <package-name> [...]: Install a package, making it available for loading via the packages reload
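A hypothetical sequence using these actions (the file path /tmp/mypkg-1.0.tar.gz and package name mypkg-1.0 are assumed examples):

```
admin@ncs# software packages fetch package-from-file /tmp/mypkg-1.0.tar.gz
admin@ncs# software packages list
admin@ncs# software packages install package mypkg-1.0
admin@ncs# packages reload
```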
There is also an upload action that can be used via NETCONF or REST to upload a package from the local host to the NSO host, making it installable there. It is not feasible to use in the CLI or Web UI, since the actual package file contents is a parameter for the action. It is also not suitable for very large (more than a few megabytes) packages, since the processing of action parameters is not designed to deal with very large values, and there is a significant memory overhead in the processing of such values.
NSO Packages contain data models and code for a specific function. It might be NED for a specific device, a service application like MPLS VPN, a WebUI customization package, etc. Packages can be added, removed, and upgraded in run-time. A common task is to add a package to NSO to support a new device type or upgrade an existing package when the device is upgraded.
(We assume you have the example up and running from the previous section). Currently installed packages can be viewed with the following command:
So the above command shows that NSO currently has one package, the NED for Cisco IOS.
NSO reads global configuration parameters from ncs.conf. More on NSO configuration later in this guide. By default, it tells NSO to look for packages in a packages directory where NSO was started. So in this specific example:
As seen above a package is a defined file structure with data models, code, and documentation. NSO comes with a couple of ready-made packages: $NCS_DIR/packages/. Also, there is a library of packages available from Tail-f, especially for supporting specific devices.
Assume you would like to add support for Nexus devices to the example. Nexus devices have different data models and another CLI flavor. There is a package for that in $NCS_DIR/packages/neds/nexus.
We can keep NSO running all the time, but we will stop the network simulator to add the Nexus devices to the simulator.
Add the nexus package to the NSO runtime directory by creating a symbolic link:
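A sketch of the link creation, assuming a packages/ directory under the runtime directory where NSO was started:

```bash
cd packages
ln -s $NCS_DIR/packages/neds/nexus
```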
The package is now in place, but until we tell NSO to look for package changes, nothing happens:
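Requesting the reload from the CLI (a sketch; the action lists each package with its load result, and the package names shown here are assumed):

```
admin@ncs# packages reload
reload-result {
    package cisco-ios
    result true
}
reload-result {
    package nexus
    result true
}
```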
So after the packages reload operation NSO also knows about Nexus devices. The reload operation also takes any changes to existing packages into account. The data store is automatically upgraded to cater to any changes like added attributes to existing configuration data.
We can now add these Nexus devices to NSO according to the below sequence:
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# snmp agent udp-port 3457
admin@ncs(config)# snmp community public name foobaz
admin@ncs(config-community-public)# commit
Commit complete.
admin@ncs(config-community-public)# top
admin@ncs(config)# show full-configuration snmp
snmp agent enabled
snmp agent ip 0.0.0.0
snmp agent udp-port 3457
snmp agent version v1
snmp agent version v2c
snmp agent version v3
snmp agent engine-id enterprise-number 32473
snmp agent engine-id from-text testing
snmp agent max-message-size 50000
snmp system contact ""
snmp system name ""
snmp system location ""
snmp usm local user initial
auth sha password GoTellMom
priv aes password GoTellMom
!
snmp target monitor
ip 127.0.0.1
udp-port 162
tag [ monitor ]
timeout 1500
retries 3
v2c sec-name public
!
snmp community public
name foobaz
sec-name public
!
snmp notify foo
tag monitor
type trap
!
snmp vacm group initial
member initial
sec-model [ usm ]
!
access usm no-auth-no-priv
read-view internet
notify-view internet
!
access usm auth-no-priv
read-view internet
notify-view internet
!
access usm auth-priv
read-view internet
notify-view internet
!
!
snmp vacm group public
member public
sec-model [ v1 v2c ]
!
access any no-auth-no-priv
read-view internet
notify-view internet
!
!
snmp vacm view internet
subtree 1.3.6.1
included
!
!
snmp vacm view restricted
subtree 1.3.6.1.6.3.11.2.1
included
!
subtree 1.3.6.1.6.3.15.1.1
included
!
!
tfAlarmMIB node 1.3.6.1.4.1.24961.2.103
tfAlarmObjects node 1.3.6.1.4.1.24961.2.103.1
tfAlarms node 1.3.6.1.4.1.24961.2.103.1.1
tfAlarmNumber scalar 1.3.6.1.4.1.24961.2.103.1.1.1
tfAlarmLastChanged scalar 1.3.6.1.4.1.24961.2.103.1.1.2
tfAlarmTable table 1.3.6.1.4.1.24961.2.103.1.1.5
tfAlarmEntry row 1.3.6.1.4.1.24961.2.103.1.1.5.1
tfAlarmIndex column 1.3.6.1.4.1.24961.2.103.1.1.5.1.1
tfAlarmType column 1.3.6.1.4.1.24961.2.103.1.1.5.1.2
tfAlarmDevice column 1.3.6.1.4.1.24961.2.103.1.1.5.1.3
tfAlarmObject column 1.3.6.1.4.1.24961.2.103.1.1.5.1.4
tfAlarmObjectOID column 1.3.6.1.4.1.24961.2.103.1.1.5.1.5
tfAlarmObjectStr column 1.3.6.1.4.1.24961.2.103.1.1.5.1.6
tfAlarmSpecificProblem column 1.3.6.1.4.1.24961.2.103.1.1.5.1.7
tfAlarmEventType column 1.3.6.1.4.1.24961.2.103.1.1.5.1.8
tfAlarmProbableCause column 1.3.6.1.4.1.24961.2.103.1.1.5.1.9
tfAlarmOrigTime column 1.3.6.1.4.1.24961.2.103.1.1.5.1.10
tfAlarmTime column 1.3.6.1.4.1.24961.2.103.1.1.5.1.11
tfAlarmSeverity column 1.3.6.1.4.1.24961.2.103.1.1.5.1.12
tfAlarmCleared column 1.3.6.1.4.1.24961.2.103.1.1.5.1.13
tfAlarmText column 1.3.6.1.4.1.24961.2.103.1.1.5.1.14
tfAlarmOperatorState column 1.3.6.1.4.1.24961.2.103.1.1.5.1.15
tfAlarmOperatorNote column 1.3.6.1.4.1.24961.2.103.1.1.5.1.16
tfAlarmNotifications node 1.3.6.1.4.1.24961.2.103.2
tfAlarmNotifsPrefix node 1.3.6.1.4.1.24961.2.103.2.0
tfAlarmNotifsObjects node 1.3.6.1.4.1.24961.2.103.2.1
tfAlarmStateChangeText scalar 1.3.6.1.4.1.24961.2.103.2.1.1
tfAlarmIndeterminate notification 1.3.6.1.4.1.24961.2.103.2.0.1
tfAlarmWarning notification 1.3.6.1.4.1.24961.2.103.2.0.2
tfAlarmMinor notification 1.3.6.1.4.1.24961.2.103.2.0.3
tfAlarmMajor notification 1.3.6.1.4.1.24961.2.103.2.0.4
tfAlarmCritical notification 1.3.6.1.4.1.24961.2.103.2.0.5
tfAlarmClear notification 1.3.6.1.4.1.24961.2.103.2.0.6
tfAlarmConformance node 1.3.6.1.4.1.24961.2.103.10
tfAlarmCompliances node 1.3.6.1.4.1.24961.2.103.10.1
tfAlarmCompliance compliance 1.3.6.1.4.1.24961.2.103.10.1.1
tfAlarmGroups node 1.3.6.1.4.1.24961.2.103.10.2
tfAlarmNotifs group 1.3.6.1.4.1.24961.2.103.10.2.1
tfAlarmObjs group 1.3.6.1.4.1.24961.2.103.10.2.2
$ ncs_cli -u admin -C
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# snmp notify monitor type trap tag monitor
admin@ncs(config-notify-monitor)# snmp target alarm-system ip 192.168.1.1 udp-port 162 \
tag monitor v2c sec-name public
admin@ncs(config-target-alarm-system)# commit
Commit complete.
admin@ncs(config-target-alarm-system)# show full-configuration snmp target
snmp target alarm-system
ip 192.168.1.1
udp-port 162
tag [ monitor ]
timeout 1500
retries 3
v2c sec-name public
!
snmp target monitor
ip 127.0.0.1
udp-port 162
tag [ monitor ]
timeout 1500
retries 3
v2c sec-name public
!
admin@ncs(config-target-alarm-system)#
If warnings are encountered when reloading packages at startup using one of the options above, the recommended way forward is to fix the root cause indicated by the warnings, as mentioned before. If the intention is to proceed with the upgrade without fixing the underlying cause of the warnings, it is possible to force the upgrade using the NCS_RELOAD_PACKAGES=force environment variable or the --with-package-reload-force option.
software packages deinstall package <package-name>: Deinstall a package, i.e., remove it from the set of packages available for loading.
admin@ncs# packages reload
reload-result {
package cisco-ios
result true
}
admin@ncs# packages reload max-wait-time 30 timeout-action kill
admin@ncs# packages package mserv redeploy
result true
admin@ncs# show packages package mserv oper-status
oper-status file-load-error
oper-status error-info "template3.xml:2 Unknown servicepoint: templ42-servicepoint"
admin@ncs# packages add
add-result {
package router-nc-1.1
result true
}
admin@ncs# show packages
packages package cisco-ios
package-version 3.0
description "NED package for Cisco IOS"
ncs-min-version [ 3.0.2 ]
directory ./state/packages-in-use/1/cisco-ios
component upgrade-ned-id
upgrade java-class-name com.tailf.packages.ned.ios.UpgradeNedId
component cisco-ios
ned cli ned-id cisco-ios
ned cli java-class-name com.tailf.packages.ned.ios.IOSNedCli
ned device vendor Cisco
NAME VALUE
---------------------
show-tag interface
oper-status up
$ pwd
.../examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios
$ ls packages/
cisco-ios
$ ls packages/cisco-ios
doc
load-dir
netsim
package-meta-data.xml
private-jar
shared-jar
src
$ ncs-netsim stop
$ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/packages
$ ln -s $NCS_DIR/packages/neds/cisco-nx
$ ls -l
... cisco-nx -> .../packages/neds/cisco-nx
admin@ncs# show packages package cisco-ios
...
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has
completed.
>>> System upgrade has completed successfully.
reload-result {
package cisco-ios
result true
}
reload-result {
package cisco-nx
result true
}
$ ncs-netsim add-to-network cisco-nx 2 n
$ ncs-netsim list
ncs-netsim list for /Users/stefan/work/ncs-3.2.1/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/netsim
name=c0 ...
name=c1 ...
name=c2 ...
name=n0 ...
name=n1 ...
$ ncs-netsim start
DEVICE c0 OK STARTED
DEVICE c1 OK STARTED
DEVICE c2 OK STARTED
DEVICE n0 OK STARTED
DEVICE n1 OK STARTED
$ ncs-netsim cli-c n0
n0#show running-config
no feature ssh
no feature telnet
fex 101
pinning max-links 1
!
fex 102
pinning max-links 1
!
nexus:vlan 1
!
...
admin@ncs(config)# devices device n0 device-type cli ned-id cisco-nx
admin@ncs(config-device-n0)# port 10025
admin@ncs(config-device-n0)# address 127.0.0.1
admin@ncs(config-device-n0)# authgroup default
admin@ncs(config-device-n0)# state admin-state unlocked
admin@ncs(config-device-n0)# commit
admin@ncs(config-device-n0)# top
admin@ncs(config)# devices device n0 sync-from
result true
Learn about NEDs, their types, and how to work with them.
Network Element Drivers (NEDs) provide the connectivity between NSO and the devices. NEDs are installed as NSO packages. For information on how to add a package for a new device type, see NSO Package Management.
To see the list of installed packages (you will not see the F5 BigIP):
The core parts of a NED are:
A Driver Element: Running in a Java VM.
Data Model: Independent of the underlying device interface technology, NEDs come with a data model in YANG that specifies configuration data and operational data that is supported for the device.
For native NETCONF devices, the YANG comes from the device.
For JunOS, NSO generates the model from the JunOS XML schema.
For SNMP devices, NSO generates the model from the MIBs.
For CLI devices, the NED designer writes the YANG to map the CLI.
NSO only cares about the data that is in the model for the NED. The rest is ignored. See the NED documentation to learn more about what is covered by the NED.
Code: For NETCONF and SNMP devices, there is no code. For CLI devices, there is a minimum of code that manages connecting over SSH/Telnet and looking for version strings. The rest is auto-rendered from the data model.
There are four categories of NEDs depending on the device interface:
NETCONF NED: The device supports NETCONF, for example, Juniper.
CLI NED: Any device with a CLI that resembles a Cisco CLI.
Generic NED: Proprietary protocols like REST, and non-Cisco CLIs.
SNMP NED: An SNMP device.
Every device needs an auth group that tells NSO how to authenticate to the device:
The CLI snippet above shows that there is a mapping from the NSO users admin and oper to the remote user and password to be used on the devices. There are two options: either a mapping from the local user to a remote user, or passing on the local user's credentials. Below is a CLI example that creates a new authgroup foobar and maps the NSO user jim:
This auth group will pass on joe's credentials to the device.
There is a similar structure for SNMP devices, authgroups snmp-group, that supports SNMPv1/v2c and SNMPv3 authentication.
The SNMP auth group above has a default auth group for non-mapped users.
Make sure you know the authentication information and have created authgroups as above. Also, verify all the details such as port numbers and authentication information, and check that you can read and set the configuration over, for example, the CLI if it is a CLI NED. So if it is a CLI device, first of all try to ssh (or telnet) to the device and show and set configuration manually.
All devices have an admin-state with the default value southbound-locked. This means that if you do not set this value to unlocked, no commands will be sent to the device.
(See also examples.ncs/getting-started/using-ncs/2-real-device-cisco-ios.) Adding a new device on a specific address with the standard SSH port is straightforward:
See also /examples.ncs/getting-started/using-ncs/3-real-device-juniper. Make sure that NETCONF over SSH is enabled on the JunOS device:
Then you can create an NSO NETCONF device as:
(See also examples.ncs/snmp-ned/basic/README.) First of all, let's explain SNMP NEDs a bit. By default, all read-only objects are mapped to operational data in NSO, and read-write objects are mapped to configuration data. This means that a sync-from operation will load read-write objects into NSO. How can you reach the read-only objects? Note that the following is true for all NED types that have modeled operational data. The device configuration exists at devices device config and has a copy in CDB. NSO can speak live to the device to fetch, for example, counters, by using the path devices device live-status:
In many cases, SNMP NEDs are used for reading operational data in parallel with a CLI NED for writing and reading configuration data. More on that later.
Before trying NSO, use the net-snmp command-line tools or your favorite SNMP browser to verify that all settings are OK.
Adding an SNMP device assuming that NED is in place:
MIB groups are important. A MIB group is just a named collection of SNMP MIB modules. If you do not specify any MIB group for a device, NSO will try with all known MIBs. It is possible to create MIB groups with wildcards, such as CISCO*.
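Wildcard patterns of this kind behave like glob-style matching. The following Python snippet is an illustration only (the module names are made up, and NSO's actual matching logic may differ): it shows how a CISCO* pattern would select MIB modules by name.

```python
from fnmatch import fnmatch

# Hypothetical set of loaded MIB module names.
mibs = ["CISCO-MEMORY-POOL-MIB", "CISCO-PROCESS-MIB", "IF-MIB", "SNMPv2-MIB"]

# A MIB group defined with the wildcard "CISCO*" selects every
# module whose name starts with "CISCO".
cisco_group = [m for m in mibs if fnmatch(m, "CISCO*")]
print(cisco_group)
```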
Generic devices are typically configured like a CLI device. Make sure you set the right address, port, protocol, and authentication information.
Below is an example of setting up NSO with F5 BigIP:
Assume that you have a Cisco device that you would like NSO to configure over CLI but read statistics over SNMP. This can be achieved by adding settings for live-device-protocol:
Device c0 has a config tree from the CLI NED and a live-status tree (read-only) from the SNMP NED using all MIBs in the group snmp.
Sometimes we wish to use a different protocol to collect statistics from the live tree than the protocol that is used to configure a managed device. There are many interesting use cases where this pattern applies. For example, if we wish to access SNMP data as statistics in the live tree on a Juniper router, or alternatively, if we have a CLI NED to a Cisco-type device, and wish to access statistics in the live tree over SNMP.
The solution is to configure additional protocols for the live tree. We can have an arbitrary number of NEDs associated with statistics data for an individual managed device.
The additional NEDs are configured under /devices/device/live-status-protocol.
In the configuration snippet below, we have configured two additional NEDs for statistics data.
Devices have an admin-state with the following values:
unlocked: the device can be modified and changes will be propagated to the real device.
southbound-locked: the device can be modified but changes will not be propagated to the real device. Can be used to prepare configurations before the device is available in the network.
locked: the device can only be read.
The admin-state value southbound-locked is the default. This means that if you create a new device without explicitly setting this value, configuration changes will not propagate to the network. To see default values, use the pipe target details.
To analyze NED problems, turn on the tracing for a device and look at the trace file contents.
NSO pools SSH connections, and trace settings only affect new connections. Therefore, any open connection must be closed before the trace setting takes effect. Now you can inspect the raw communication between NSO and the device:
If NSO fails to talk to the device, the typical root causes are:
Learn how NSO keeps a record of its managed devices using CDB.
Cisco NSO is a network automation platform that supports a variety of uses. This can be as simple as a configuration of a standard-format hostname, which can be implemented in minutes. Or it could be an advanced MPLS VPN with custom traffic-engineered paths in a Service Provider network, which might take weeks to design and code.
Regardless of complexity, any network automation solution must keep track of two things: intent and network state.
The Configuration Database (CDB) built into NSO was designed for this exact purpose:
Firstly, the CDB will store the intent, which describes what you want from the network. Traditionally we call this intent a network service since this is what the network ultimately provides to its users.
Secondly, the CDB also stores a copy of the configuration of the managed devices, that is, the network state. Knowledge of the network state is essential to correctly provision new services. It also enables faster diagnosis of problems and is required for advanced functionality, such as self-healing.
Run your Python code using Python Virtual Machine (VM).
NSO is capable of starting one or several Python VMs where Python code in user-provided packages can run.
An NSO package containing a python directory will be considered to be a Python Package. By default, a Python VM will be started for each Python package that has a python-class-name defined in its package-meta-data.xml file. In this Python VM, the PYTHONPATH environment variable will be pointing to the python directory in the package.
If any required package that is listed in the package-meta-data.xml contains a python directory, the path to that directory will be added to the PYTHONPATH as well.
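The resulting search path can be sketched by joining the python directories of the package and its required packages, roughly as below. This is only an illustration of the described behavior (the package paths are made up), not NSO's actual implementation:

```python
import os

# Hypothetical package directories: the package itself plus a
# required package from its package-meta-data.xml.
package_dir = "/var/opt/ncs/state/packages-in-use/1/l3vpn"
required_dirs = ["/var/opt/ncs/state/packages-in-use/1/common-utils"]

# Each package that has a python directory contributes it to PYTHONPATH.
entries = [os.path.join(d, "python") for d in [package_dir] + required_dirs]
pythonpath = os.pathsep.join(entries)
print(pythonpath)
```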
admin@ncs# show packages
packages package cisco-ios
package-version 3.0
description "NED package for Cisco IOS"
ncs-min-version [ 3.0.2 ]
directory ./state/packages-in-use/1/cisco-ios
component upgrade-ned-id
upgrade java-class-name com.tailf.packages.ned.ios.UpgradeNedId
component cisco-ios
ned cli ned-id cisco-ios
ned cli java-class-name com.tailf.packages.ned.ios.IOSNedCli
ned device vendor Cisco
NAME VALUE
---------------------
show-tag interface
oper-status up
packages package f5-bigip
package-version 1.3
description "NED package for the F5 BigIp FW/LB"
ncs-min-version [ 3.0.1 ]
directory ./state/packages-in-use/1/bigip
component f5-bigip
ned generic java-class-name com.tailf.packages.ned.bigip.BigIpNedGeneric
ned device vendor F5
oper-status up
!
This section describes the main features of the CDB and explains how NSO stores data there. To help you better understand the structure of the CDB, you will also learn how to add your data to it.
The CDB is a dedicated built-in storage for data in NSO. It was built from the ground up to efficiently store and access network configuration data, such as device configurations, service parameters, and even configuration for NSO itself. Unlike traditional SQL databases that store data as rows in a table, the CDB is a hierarchical database, with a structure resembling a tree. You could think of it as somewhat like a big XML document that can store all kinds of data.
There are a number of other features that make the CDB an excellent choice for a configuration store:
Fast lightweight database access through a well-defined API.
Subscription (“push”) mechanism for change notification.
Transaction support for ensuring data consistency.
Rich and extensible schema based on YANG.
Built-in support for schema and associated data upgrade.
Close integration with NSO for low-maintenance operation.
To speed up operations, CDB keeps a copy of configuration data in RAM, in addition to persisting it to disk using journal files. However, this means the amount of RAM needed is proportional to the number of managed devices and services. When NSO is used to manage a large network the amount of needed RAM can be quite large. The CDB also stores transient operational data, such as alarms and traffic statistics. By default, this operational data is only kept in RAM and is reset during restarts, however, the CDB can be instructed to persist it if required.
For reliable storage of the configuration on disk, the CDB requires that the file system correctly implements the standard primitives for file synchronization and truncation. For this reason (as well as for performance), NFS or other network file systems are unsuitable for use with the CDB - they may be acceptable for development, but using them in production is unsupported and strongly discouraged.
The automatic schema update feature is useful not only when performing an actual upgrade of NSO itself; it also simplifies the development process, as it allows individual developers to add and delete items in the configuration independently.
Additionally, the schema for data in the CDB is defined with a standard modeling language called YANG. YANG (RFC 7950, https://tools.ietf.org/html/rfc7950) describes constraints on the data and allows the CDB to store values more efficiently.
All of the data stored in the CDB follows the data model provided by various YANG modules. Each module usually comes as one or more files with a .yang extension and declares a part of the overall model.
NSO provides a base set of YANG modules out of the box. They are located in $NCS_DIR/src/ncs/yang if you wish to inspect them. These modules are required for proper system operation.
All other YANG modules are provided by packages and extend the base NSO data model. For example, each Network Element Driver (NED) package adds the required nodes to store the configuration for that particular type of device. In the same way, you can store your custom data in the CDB by providing a package with your own YANG module.
However, the CDB can't use the YANG files directly. The bundled compiler, ncsc, must first transform a YANG module into a final schema (.fxs) file. The reason is that internally and in the programming APIs NSO refers to YANG nodes with integer values instead of names. This conserves space and allows for more efficient operations, such as switch statements in the application code. The .fxs file contains this mapping and needs to be recreated if any part of the YANG model changes. The compilation process is usually started from the package Makefile by the make command.
Ensure that:
No previous NSO or netsim processes are running. Use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
NSO Local Install with a fresh runtime directory has been created by the ncs-setup --dest ~/nso-lab-rundir or similar command.
The environment variable NSO_RUNDIR points to this runtime directory, for example, set by the export NSO_RUNDIR=~/nso-lab-rundir command. This enables the commands below to work as-is, without additional substitution.
The easiest way to add your data fields to the CDB is by creating a service package. The package includes a YANG file for the service-specific data, which you can customize. You can create the initial package by simply invoking the ncs-make-package command. This command also sets up a Makefile with the code for compiling the YANG model.
Use the following command to create a new package:
The command line switches instruct the command to compile the YANG file and place the package in the right location.
Now start the NSO process if it is not running already and connect to the CLI:
Next, instruct NSO to load the newly created package:
Once the package loading process is completed, you can verify the data model from your package was incorporated into NSO. Use the show command, which now supports an additional parameter:
This command tells you that NSO knows about the extended data model but there is no actual data configured for it yet.
More interestingly, you are now able to add custom entries to the configuration. First, enter the CLI configuration mode:
Then add an arbitrary entry under my-data-entries:
What is more, you can also set a dummy IP address:
However, if you try to use a name other than dummy, you will get an error. Likewise, if you try to assign dummy a value that is not an IP address. How did NSO learn about this dummy value?
If you assumed it was from the YANG file, you are correct. YANG files provide the schema for the CDB, and that dummy value comes from the YANG model in your package. Let's take a closer look.
Exit the configuration mode and discard the changes by typing abort:
Open the YANG file in an editor or list its contents from the CLI with the following command:
At the start of the output, you can see the module my-data-entries, which contains your data model. By default, the ncs-make-package gives it the same name as the package. You can check that this module is indeed loaded:
The list my-data-entries statement, located a bit further down in the YANG file, is what allowed you to add custom entries before. And near the end of the output, you can find the leaf dummy definition, with an IPv4 address as its type. This is the source of information that enables NSO to enforce a valid IP address as the value.
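The kind of check the YANG type gives you can be mimicked in plain Python with the standard ipaddress module. This only illustrates the validation idea; NSO performs the real check from the compiled schema:

```python
import ipaddress

def is_valid_ipv4(value: str) -> bool:
    """Return True if value parses as an IPv4 address."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except ValueError:
        return False

print(is_valid_ipv4("10.0.0.1"))    # a value the dummy leaf would accept
print(is_valid_ipv4("not-an-ip"))   # a value NSO would reject
```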
NSO uses YANG to structure and enforce constraints on data that it stores in the CDB. YANG was designed to be extensible and handle all kinds of data modeling, which resulted in a number of language features that helped achieve this goal. However, there are only four fundamental elements (node types) for describing data:
leaf nodes
leaf-list nodes
container nodes
list nodes
You can then combine these elements into a complex, tree-like structure, which is why we refer to individual elements as nodes (of the data tree). In general, YANG separates nodes into those that hold data (leaf, leaf-list) and those that hold other nodes (container, list).
A leaf contains simple data such as an integer or a string. It has one value of a particular type and no child nodes. For example:
This code describes the structure that can hold a value of a hostname (of some device). A leaf node is used because the hostname only has a single value, that is, the device has one (canonical) hostname. In the NSO CLI, you set a value of a leaf simply as:
A leaf-list is a sequence of leaf nodes of the same type. It can hold multiple values, very much like an array. For example:
This code describes a data structure that can hold many values, such as a number of domain names. In the CLI, you can assign multiple values to a leaf-list with the help of square bracket syntax:
leaf and leaf-list describe nodes that hold simple values. As a model keeps expanding, having all data nodes on the same (top) level can quickly become unwieldy. A container node is used to group related nodes into a subtree. It has only child nodes and no value. A container may contain any number of child nodes of any type (including leafs, lists, containers, and leaf-lists). For example:
This code defines the concept of a server administrator. In the CLI, you first select the container before you access the child nodes:
Similarly, a list defines a collection of container-like list entries that share the same structure. Each entry is like a record or a row in a table. It is uniquely identified by the value of its key leaf (or leaves). A list definition may contain any number of child nodes of any type (leafs, containers, other lists, and so on). For example:
This code defines a list of users (of which there can be many), where each user is uniquely identified by their name. In the CLI, lists take an additional parameter, the key value, to select a single entry:
To set a value of a particular list entry, first specify the entry, then the child node, like so:
Combining just these four fundamental YANG node types, you can build a very complex model that describes your data. As an example, the model for the configuration of a Cisco IOS-based network device, with its myriad features, is created with YANG. However, it makes sense to start with some simple models, to learn what kind of data they can represent and how to alter that data with the CLI.
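As a mental model, such a data tree can be sketched with nested Python structures: leafs as scalar values, leaf-lists as lists of scalars, containers as dicts, and lists as dicts keyed by the key leaf. The node names below are illustrative only:

```python
# An illustrative configuration tree built from the four node types.
config = {
    "host-name": "server1",                     # leaf: a single value
    "domains": ["example.com", "example.net"],  # leaf-list: multiple values
    "server-admin": {                           # container: groups child nodes
        "name": "Ana",
        "email": "ana@example.com",
    },
    "user-info": {                              # list: entries keyed by "name"
        "bob": {"name": "bob", "uid": 1001},
        "eve": {"name": "eve", "uid": 1002},
    },
}

# Selecting a list entry by its key, then a child leaf:
print(config["user-info"]["bob"]["uid"])
```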
Ensure that:
No previous NSO or netsim processes are running. Use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
NSO Local Install with a fresh runtime directory has been created by the ncs-setup --dest ~/nso-lab-rundir or similar command.
The environment variable NSO_RUNDIR points to this runtime directory, for example, set by the export NSO_RUNDIR=~/nso-lab-rundir command. This enables the commands below to work as-is, without additional substitution.
You can add custom data models to NSO by using packages. So, you will build a package to hold the YANG module that represents your model. Use the following command to create a package (if you are building on top of the previous showcase, the package may already exist and will be updated):
Change the working directory to the directory of your package:
You will place the YANG model into the src/yang/my-test-model.yang file. In a text editor, create a new file and add the following text at the start:
The first line defines a new module and gives it a name. In addition, there are two more statements required: the namespace and prefix. Their purpose is to help avoid name collisions.
Add a statement for each of the four fundamental YANG node types (leaf, leaf-list, container, list) to the my-test-model.yang model.
Also, add the closing bracket for the module at the end:
Remember to finally save the file as my-test-model.yang in the src/yang/ directory of your package. It is a best practice for the name of the file to match the name of the module.
Having completed the model, you must compile it into an appropriate (.fxs) format. From the text editor first, return to the shell and then run the make command in the src/ subdirectory of your package:
The compiler will report if there are errors in your YANG file, and you must fix them before continuing.
Next, start the NSO process and connect to the CLI:
Finally, instruct NSO to reload the packages:
Enter the configuration mode by using the config command and test out how to set values for the data nodes you have defined in the YANG model:
host-name leaf
domains leaf-list
server-admin container
user-info list
Use the ? and TAB keys to see the possible completions.
Now feel free to go back and experiment with the YANG file to see how your changes affect the data model. Just remember to rebuild and reload the package after you make any changes.
Adding a new YANG module to the CDB enables it to store additional data, however, there is nothing in the CDB for this module yet. While you can add configuration with the CLI, for example, there are situations where it makes sense to start with some initial data in the CDB already. This is especially true when a new instance starts for the first time and the CDB is empty.
In such cases, you can bootstrap the CDB data with XML files. There are various uses for this feature. For example, you can implement some default “factory settings” for your module or you might want to pre-load data when creating a new instance for testing.
In particular, some of the provided examples use the CDB init files mechanism to save you from typing out all of the initial configuration commands by hand. They do so by creating a file with the configuration encoded in the XML format.
When starting empty, the CDB will try to initialize the database from all XML files found in the directories specified by the init-path and db-dir settings in ncs.conf (please see ncs.conf(5) in Manual Pages for exact details). The loading process scans the files with the .xml suffix and adds all the data in a single transaction. In other words, there is no specified order in which the files are processed. This happens early during start-up, during the so-called start phase 1, described in Starting NSO.
The content of the init file does not need to be a complete instance document but can specify just a part of the overall data, very much like the contents of the NETCONF edit-config operation. However, the end result of applying all the files must still be valid according to the model.
It is a good practice to wrap the data inside a config element, as it gives you the option to have multiple top-level data elements in a single file while it remains a valid XML document. Otherwise, you would have to use separate files for each of them. The following example uses the config element to fit all the elements into a single file.
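For illustration, the following Python snippet builds such an init document with the standard xml.etree library, wrapping two top-level elements inside a single config element. The my-data-entries element and its namespace are made up for the example:

```python
import xml.etree.ElementTree as ET

# Wrap the data in one <config> root so a single init file can hold
# multiple top-level elements and remain a well-formed XML document.
root = ET.Element("config", xmlns="http://tail-f.com/ns/config/1.0")

entries = ET.SubElement(root, "my-data-entries",
                        xmlns="http://example.com/my-data-entries")
entry = ET.SubElement(entries, "entry")
ET.SubElement(entry, "name").text = "test-entry"
ET.SubElement(entry, "dummy").text = "10.0.0.1"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)
```

Saving this output as an .xml file in the CDB init directory would load the data on first start-up.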
There are many ways to generate the XML data. A common approach is to dump existing data with the ncs_load utility or the display xml filter in the CLI. All of the data in the CDB can be represented (or exported, if you will) in XML. This is no coincidence. XML was the main format for encoding data with NETCONF when YANG was created and you can trace the origin of some YANG features back to XML.
Several Python packages can be started in the same Python VM if their corresponding package-meta-data.xml files contain the same python-package/vm-name.
A Python package skeleton can be created by making use of the ncs-make-package command:
The tailf-ncs-python-vm.yang defines the python-vm container, which, along with ncs.conf, is the entry point for controlling the NSO Python VM functionality. Study the content of the YANG model in the example below (The Python VM YANG Model). For a full explanation of all the configuration data, look at the YANG file and man ncs.conf. A description of the most important configuration parameters follows.
Note that some of the nodes beneath python-vm are by default invisible due to a hidden attribute. To make everything under python-vm visible in the CLI, two steps are required:
First, the following XML snippet must be added to ncs.conf:
Next, the unhide command may be used in the CLI session:
The sanity-checks/self-assign-warning controls the self-assignment warnings for Python services with off, log, and alarm (default) modes. An example of a self-assignment:
As several service invocations may run in parallel, self-assignment will likely cause difficult-to-debug issues. An alarm or a log entry will contain a warning and a keypath to the service instance that caused the warning. Example log entry:
With the logging/level, the amount of logged information can be controlled. This is a global setting applied to all started Python VMs unless explicitly set for a particular VM, see Debugging of Python packages. The levels correspond to the pre-defined Python levels in the Python logging module, ranging from level-critical to level-debug.
The logging/log-file-prefix defines the prefix part of the log file path used for the Python VMs. This prefix will be appended with a Python VM-specific suffix, which is based on the Python package name or the python-package/vm-name from the package-meta-data.xml file. The default prefix is logs/ncs-python-vm, so, for example, if a Python package named l3vpn is started, a log file with the name logs/ncs-python-vm-l3vpn.log will be created.
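The naming rule described above can be sketched as a one-line computation. This is an illustration of the documented behavior, not NSO code:

```python
def python_vm_log_file(prefix: str, vm_name: str) -> str:
    """Append the VM-specific suffix to the configured log-file prefix."""
    return f"{prefix}-{vm_name}.log"

# With the default prefix and a package named l3vpn:
print(python_vm_log_file("logs/ncs-python-vm", "l3vpn"))
```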
The status/start and status/current nodes contain operational data. The status/start command will show information about which Python classes, as declared in the package-meta-data.xml file, were started and whether the outcome was successful. The status/current command will show which Python classes are currently running in a separate thread. The latter assumes that the user-provided code cooperates by informing NSO about any threads started by the user code; see Structure of the User-provided Code.
The start and stop actions make it possible to start and stop a particular Python VM.
The package-meta-data.xml file must contain a component of type application with a python-class-name specified as shown in the example below.
The component name (L3VPN Service in the example) is a human-readable name of this application component. It will be shown when doing show python-vm in the CLI. The python-class-name should specify the Python class that implements the application entry point. Note that it needs to be specified using Python's dot notation and should be fully qualified (given that PYTHONPATH points to the package python directory).
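Resolving such a fully qualified dotted name is a standard Python pattern, sketched below with a stdlib class standing in for a real component class such as a hypothetical "l3vpn.service.Service":

```python
import importlib

def resolve_class(dotted_name: str):
    """Resolve 'package.module.Class' to the class object."""
    module_name, class_name = dotted_name.rsplit(".", 1)
    module = importlib.import_module(module_name)
    return getattr(module, class_name)

# Using a stdlib class as a stand-in for an NSO component class:
cls = resolve_class("collections.OrderedDict")
print(cls.__name__)
```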
Study the excerpt of the directory listing from a package named l3vpn below.
Look closely at the python directory above. Note that directly under this directory is another directory named after the package (l3vpn) that contains the user code. This is an important structural choice that eliminates the chance of code clashes between dependent packages (provided, of course, that all dependent packages use this pattern).
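A small, self-contained illustration (with hypothetical package names) of why the extra package-named directory avoids clashes: two packages can both ship a service module as long as each is namespaced under its own package directory:

```python
import os
import sys
import tempfile

# Build two fake packages, 'l3vpn' and 'mpls', that BOTH contain a
# service.py. Because each lives under its own package directory, the
# qualified names l3vpn.service and mpls.service never collide.
root = tempfile.mkdtemp()
for pkg in ('l3vpn', 'mpls'):
    pkg_dir = os.path.join(root, pkg)
    os.makedirs(pkg_dir)
    open(os.path.join(pkg_dir, '__init__.py'), 'w').close()
    with open(os.path.join(pkg_dir, 'service.py'), 'w') as f:
        f.write('PACKAGE = %r\n' % pkg)

sys.path.insert(0, root)  # plays the role of PYTHONPATH -> python/
import l3vpn.service
import mpls.service
print(l3vpn.service.PACKAGE, mpls.service.PACKAGE)  # → l3vpn mpls
```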
As you can see, service.py is located according to the description above. There is also an empty __init__.py to make the l3vpn directory an importable Python package.
Note the _namespaces/l3vpn_ns.py file. It is generated from the l3vpn.yang model using the ncsc --emit-python command and contains constants representing the namespace and the various components of the YANG model, which the user code can import and make use of.
The service.py file should include a class definition named Service which acts as the component's entry point. See The Application Component for details.
Notice that there is also a file named upgrade.py present which holds the implementation of the upgrade component specified in the package-meta-data.xml excerpt above. See The Upgrade Component for details regarding upgrade components.
The Python class specified in the package-meta-data.xml file will be started in a Python thread which we call a component thread. This Python class should inherit ncs.application.Application and should implement the methods setup() and teardown().
NSO supports two different modes for executing the implementations of the registered callpoints: threading and multiprocessing.
The default threading mode will use a single thread pool for executing the callbacks for all callpoints.
The multiprocessing mode will start a subprocess for each callpoint. Depending on the user code, this can greatly improve the performance on systems with a lot of parallel requests, as a separate worker process will be created for each Service, Nano Service, and Action.
The behavior is controlled by three factors:
callpoint-model setting in the package-meta-data.xml file.
Number of registered callpoints in the Application.
Operating System support for killing child processes when the parent exits.
If the callpoint-model is set to multiprocessing, more than one callpoint is registered in the Application and the Operating System supports killing child processes when the parent exits, NSO will enable multiprocessing mode.
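The three factors combine as a simple conjunction; a sketch of the decision (function and argument names are illustrative, not NSO internals):

```python
def use_multiprocessing(callpoint_model, num_callpoints, os_kills_children):
    # Multiprocessing mode is enabled only when all three conditions from
    # the text hold: the package asks for it, more than one callpoint is
    # registered, and the OS can kill child processes when the parent exits.
    return (callpoint_model == 'multiprocessing'
            and num_callpoints > 1
            and os_kills_children)

print(use_multiprocessing('multiprocessing', 3, True))   # → True
print(use_multiprocessing('threading', 3, True))         # → False
print(use_multiprocessing('multiprocessing', 1, True))   # → False
```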
The Service class will be instantiated by NSO when started or whenever packages are reloaded. Custom initialization, such as registering service and action callbacks, should be done in the setup() method. If any cleanup is needed when NSO finishes or when packages are reloaded, it should be placed in the teardown() method.
The existing log functions are named after the standard Python log levels; thus, in the example above, the self.log object contains the functions debug, info, warning, error, and critical. Where to log and with what level can be controlled from NSO.
The Python class specified in the upgrade section of package-meta-data.xml will be run by NSO in a separately started Python VM. The class must be instantiable using the empty constructor and it must have a method called upgrade as in the example below. It should inherit ncs.upgrade.Upgrade.
Python packages do not run with an attached console; the standard output from the Python VMs is collected and put into the common log file ncs-python-vm.log. Any Python compilation errors will also end up in this file.
Normally, the logging objects provided by the Python APIs are used. They are based on the standard Python logging module. This makes it possible to fine-tune the logging if needed, e.g., obtaining a module-local logger to increase logging granularity.
The default logging level is set to info. For debugging purposes, it is very useful to increase the logging level:
This sets the global logging level and will affect all started Python VMs. It is also possible to set the logging level for a single package (or multiple packages running in the same VM), which will take precedence over the global setting:
The debugging output is printed to separate files for each package, and the log files are named ncs-python-vm-pkg_name.log.
Log file output example for package l3vpn:
There are occasions when the standard Python installation is incompatible with, or not the preferred choice for, NSO. In such cases, there are several ways to tell NSO to use another Python installation when starting a Python VM.
By default NSO will use the file $NCS_DIR/bin/ncs-start-python-vm when starting a new Python VM. The last few lines in that file read:
As seen above, NSO first looks for python3; if found, it is used to start the VM. If python3 is not found, NSO tries the command python instead. Below are a couple of options for deciding which Python NSO should start.
NSO can be configured to use a custom start command for starting a Python VM. This is done by first copying the file $NCS_DIR/bin/ncs-start-python-vm to a new file and then changing the last lines of that file to start the desired version of Python. After that, edit ncs.conf and configure the new file as the start command for a new Python VM. When ncs.conf has been changed, reload its contents by executing the command ncs --reload.
Example:
Add the following snippet to ncs.conf:
The new start-command will take effect upon the next restart or configuration reload.
Another way of telling NSO to start a specific Python executable is to configure the environment so that executing python3 or python starts the desired Python. This may be done system-wide or be made specific to the user running NSO.
Changing the last line of $NCS_DIR/bin/ncs-start-python-vm is of course also an option, but altering any of the NSO installation files is discouraged.
admin@ncs(config)# show full-configuration devices authgroups
devices authgroups group default
umap admin
remote-name admin
remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
umap oper
remote-name oper
remote-password $4$zp4zerM68FRwhYYI0d4IDw==
!
!
devices authgroups snmp-group default
default-map community-name public
umap admin
usm remote-name admin
usm security-level auth-priv
usm auth md5 remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
usm priv des remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
!
admin@ncs(config)# devices authgroups group foobar umap joe same-pass same-user
admin@ncs(config-umap-joe)# commit
admin@ncs(config)# devices device c7 address 1.2.3.4 port 22 \
device-type cli ned-id cisco-ios-cli-3.0
admin@ncs(config-device-c7)# authgroup
Possible completions:
default foobar
admin@ncs(config-device-c7)# authgroup default
admin@ncs(config-device-c7)# state admin-state unlocked
admin@ncs(config-device-c7)# commit
junos1% show system services
ftp;
ssh;
telnet;
netconf {
ssh {
port 22;
}
}
admin@ncs(config)# devices device junos1 address junos1.lab port 22 \
authgroup foobar device-type netconf
admin@ncs(config-device-junos1)# state admin-state unlocked
admin@ncs(config-device-junos1)# commit
admin@ncs# show devices device r1 live-status SNMPv2-MIB
live-status SNMPv2-MIB system sysDescr "Tail-f ConfD agent - r1"
live-status SNMPv2-MIB system sysObjectID 1.3.6.1.4.1.24961
live-status SNMPv2-MIB system sysUpTime 4253
live-status SNMPv2-MIB system sysContact ""
live-status SNMPv2-MIB system sysName ""
live-status SNMPv2-MIB system sysLocation ""
live-status SNMPv2-MIB system sysServices 72
live-status SNMPv2-MIB system sysORLastChange 0
live-status SNMPv2-MIB snmp snmpInPkts 3
live-status SNMPv2-MIB snmp snmpInBadVersions 0
live-status SNMPv2-MIB snmp snmpInBadCommunityNames 0
live-status SNMPv2-MIB snmp snmpInBadCommunityUses 0
live-status SNMPv2-MIB snmp snmpInASNParseErrs 0
live-status SNMPv2-MIB snmp snmpEnableAuthenTraps disabled
live-status SNMPv2-MIB snmp snmpSilentDrops 0
live-status SNMPv2-MIB snmp snmpProxyDrops 0
live-status SNMPv2-MIB snmpSet snmpSetSerialNo 2161860
admin@ncs(config)# show full-configuration devices device r1
devices device r1
address 127.0.0.1
port 11023
device-type snmp version v2c
device-type snmp snmp-authgroup default
state admin-state unlocked
!
admin@ncs(config)# show full-configuration devices device r2
devices device r2
address 127.0.0.1
port 11024
device-type snmp version v3
device-type snmp snmp-authgroup default
device-type snmp mib-group [ basic snmp ]
state admin-state unlocked
!
admin@ncs(config)# show full-configuration devices mib-group
devices mib-group basic
mib-module [ BASIC-CONFIG-MIB ]
!
devices mib-group snmp
mib-module [ SNMP* ]
!
admin@ncs(config)# devices device bigip01 address 192.168.1.162 \
port 22 device-type generic ned-id f5-bigip
admin@ncs(config-device-bigip01)# state admin-state southbound-locked
admin@ncs(config-device-bigip01)# authgroup
Possible completions:
default foobar
admin@ncs(config-device-bigip01)# authgroup default
admin@ncs(config-device-bigip01)# commit
admin@ncs(config)# devices device c0 live-status-protocol snmp \
device-type snmp version v1 \
snmp-authgroup default mib-group [ snmp ]
admin@ncs(config-live-status-protocol-snmp)# commit
admin@ncs(config)# show full-configuration devices device c0
devices device c0
address 127.0.0.1
port 10022
!
authgroup default
device-type cli ned-id cisco-ios
live-status-protocol snmp
device-type snmp version v1
device-type snmp snmp-authgroup default
device-type snmp mib-group [ snmp ]
!
devices {
authgroups {
snmp-group g1 {
umap admin {
community-name public;
}
}
}
mib-group m1 {
mib-module [ SIMPLE-MIB ];
}
device device0 {
live-status-protocol x1 {
port 4001;
device-type {
snmp {
version v2c;
snmp-authgroup g1;
mib-group [ m1 ];
}
}
}
live-status-protocol x2 {
authgroup default;
device-type {
cli {
ned-id xstats;
}
}
}
}
admin@ncs(config)# show full-configuration devices device c0 | details
admin@ncs(config)# show full-configuration devices global-settings
devices global-settings trace-dir ./logs
admin@ncs(config)# devices device c0 trace raw
admin@ncs(config-device-c0)# commit
admin@ncs(config)# devices device c0 disconnect
admin@ncs(config)# devices device c0 connect
$ less logs/ned-c0.trace
admin connected from 127.0.0.1 using ssh on HOST-17
c0>
*** output 8-Sep-2014::10:05:39.673 ***
enable
*** input 8-Sep-2014::10:05:39.674 ***
enable
c0#
*** output 8-Sep-2014::10:05:39.713 ***
terminal length 0
*** input 8-Sep-2014::10:05:39.714 ***
terminal length 0
c0#
*** output 8-Sep-2014::10:05:39.782 ***
terminal width 0
*** input 8-Sep-2014::10:05:39.783 ***
terminal width 0
0^M
c0#
*** output 8-Sep-2014::10:05:39.839 ***
-- Requesting version string --
show version
*** input 8-Sep-2014::10:05:39.839 ***
show version
Cisco IOS Software, 7200 Software (C7200-JK9O3S-M), Version 12.4(7h), RELEASE SOFTWARE (fc1)^M
Technical Support: http://www.cisco.com/techsupport^M
Copyright (c) 1986-2007 by Cisco Systems, Inc.^M
...
admin@ncs(config)# devices device c0
Possible completions:
...
connect-timeout - Timeout in seconds for new connections
...
read-timeout - Timeout in seconds used when reading data
...
write-timeout - Timeout in seconds used when writing data
admin@ncs(config)# devices profiles profile good-profile
Possible completions:
connect-timeout Timeout in seconds for new connections
ned-settings Control which device capabilities NCS uses
read-timeout Timeout in seconds used when reading data
trace Trace the southbound communication to devices
write-timeout Timeout in seconds used when writing data
$ ncs-make-package --service-skeleton python --build \
--dest $NSO_RUNDIR/packages/my-data-entries my-data-entries
mkdir -p ../load-dir
mkdir -p java/src//
/nso/bin/ncsc `ls my-data-entries-ann.yang > /dev/null 2>&1 && echo "-a my-data-entries-ann.yang"` \
-c -o ../load-dir/my-data-entries.fxs yang/my-data-entries.yang
$
$ cd $NSO_RUNDIR ; ncs ; ncs_cli -Cu admin
admin connected from 127.0.0.1 using console on nso
admin@ncs#
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package my-data-entries
result true
}
admin@ncs# show my-data-entries
% No entries found.
admin@ncs#
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)#
admin@ncs(config)# my-data-entries "entry number 1"
admin@ncs(config-my-data-entries-entry number 1)#
admin@ncs(config-my-data-entries-entry number 1)# dummy 0.0.0.0
admin@ncs(config-my-data-entries-entry number 1)#
admin@ncs(config-my-data-entries-entry number 1)# abort
admin@ncs#
admin@ncs# file show packages/my-data-entries/src/yang/my-data-entries.yang
module my-data-entries {
< ... output omitted ... >
list my-data-entries {
< ... output omitted ... >
leaf dummy {
type inet:ipv4-address;
}
}
}
admin@ncs# show ncs-state loaded-data-models data-model my-data-entries
EXPORTED EXPORTED
NAME REVISION NAMESPACE PREFIX TO ALL TO
--------------------------------------------------------------------------------------------------
my-data-entries - http://com/example/mydataentries my-data-entries X -
admin@ncs#
leaf host-name {
type string;
description "Hostname for this system";
}
admin@ncs(config)# host-name "server-NY-01"
leaf-list domains {
type string;
description "My favourite internet domains";
}
admin@ncs(config)# domains [ cisco.com tail-f.com ]
container server-admin {
description "Administrator contact for this system";
leaf name {
type string;
}
}
admin@ncs(config)# server-admin name "Ingrid"
list user-info {
description "Information about team members";
key "name";
leaf name {
type string;
}
leaf expertise {
type string;
}
}
admin@ncs(config)# user-info "Ingrid"
admin@ncs(config)# user-info "Ingrid" expertise "Linux"
$ ncs-make-package --service-skeleton python \
--dest $NSO_RUNDIR/packages/my-data-entries my-data-entries
$
$ cd $NSO_RUNDIR/packages/my-data-entries
module my-test-model {
namespace "http://example.tail-f.com/my-test-model";
prefix "t";
leaf host-name {
type string;
description "Hostname for this system";
}
leaf-list domains {
type string;
description "My favourite internet domains";
}
container server-admin {
description "Administrator contact for this system";
leaf name {
type string;
}
}
list user-info {
description "Information about team members";
key "name";
leaf name {
type string;
}
leaf expertise {
type string;
}
}
}
$ make -C src/
make: Entering directory 'nso-run/packages/my-data-entries/src'
/nso/bin/ncsc `ls my-test-model-ann.yang > /dev/null 2>&1 && echo "-a my-test-model-ann.yang"` \
-c -o ../load-dir/my-test-model.fxs yang/my-test-model.yang
make: Leaving directory 'nso-run/packages/my-data-entries/src'
$
$ cd $NSO_RUNDIR && ncs && ncs_cli -C -u admin
admin connected from 127.0.0.1 using console on nso
admin@ncs#
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package my-data-entries
result true
}
admin@ncs#
<config xmlns="http://tail-f.com/ns/config/1.0">
<host-name xmlns="http://example.tail-f.com/my-test-model">server-NY-01</host-name>
<server-admin xmlns="http://example.tail-f.com/my-test-model">
<name>Ingrid</name>
</server-admin>
</config>
$ ncs_load -F p -p /domains > cdb-init.xml
$ cat cdb-init.xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<domains xmlns="http://example.tail-f.com/my-test-model">cisco.com</domains>
<domains xmlns="http://example.tail-f.com/my-test-model">tail-f.com</domains>
</config>
$
<hide-group>
<name>debug</name>
</hide-group>
admin@ncs(config)# unhide debug
admin@ncs(config)#
ncs-make-package --service-skeleton python <package-name>
class ServiceCallbacks(Service):
@Service.create
def cb_create(self, tctx, root, service, proplist):
self.counter = 42
<WARNING> ... Assigning to self is not thread safe: /mysrvc:mysrvc{2}
> yanger -f tree tailf-ncs-python-vm.yang
submodule: tailf-ncs-python-vm (belongs-to tailf-ncs)
+--rw python-vm
+--rw sanity-checks
| +--rw self-assign-warning? enumeration
+--rw logging
| +--rw log-file-prefix? string
| +--rw level? py-log-level-type
| +--rw vm-levels* [node-id]
| +--rw node-id string
| +--rw level py-log-level-type
+--rw status
| +--ro start* [node-id]
| | +--ro node-id string
| | +--ro packages* [package-name]
| | +--ro package-name string
| | +--ro components* [component-name]
| | +--ro component-name string
| | +--ro class-name? string
| | +--ro status? enumeration
| | +--ro error-info? string
| +--ro current* [node-id]
| +--ro node-id string
| +--ro packages* [package-name]
| +--ro package-name string
| +--ro components* [component-name]
| +--ro component-name string
| +--ro class-names* [class-name]
| +--ro class-name string
| +--ro status? enumeration
+---x stop
| +---w input
| | +---w name string
| +--ro output
| +--ro result? string
+---x start
+---w input
| +---w name string
+--ro output
+--ro result? string
<component>
<name>L3VPN Service</name>
<application>
<python-class-name>l3vpn.service.Service</python-class-name>
</application>
</component>
<component>
<name>L3VPN Service model upgrade</name>
<upgrade>
<python-class-name>l3vpn.upgrade.Upgrade</python-class-name>
</upgrade>
</component>
packages/
+-- l3vpn/
+-- package-meta-data.xml
+-- python/
| +-- l3vpn/
| +-- __init__.py
| +-- service.py
| +-- upgrade.py
| +-- _namespaces/
| +-- __init__.py
| +-- l3vpn_ns.py
+-- src
+-- Makefile
+-- yang/
+-- l3vpn.yang
import ncs
class Service(ncs.application.Application):
def setup(self):
# The application class sets up logging for us. It is accessible
# through 'self.log' and is a ncs.log.Log instance.
self.log.info('Service RUNNING')
# Service callbacks require a registration for a 'service point',
# as specified in the corresponding data model.
#
self.register_service('l3vpn-servicepoint', ServiceCallbacks)
# If we registered any callback(s) above, the Application class
# took care of creating a daemon (related to the service/action point).
# When this setup method is finished, all registrations are
# considered done and the application is 'started'.
def teardown(self):
# When the application is finished (which would happen if NCS went
# down, packages were reloaded or some error occurred) this teardown
# method will be called.
self.log.info('Service FINISHED')
import ncs
import _ncs
class Upgrade(ncs.upgrade.Upgrade):
"""An upgrade 'class' that will be instantiated by NSO.
This class can be named anything as long as NSO can find it using the
information specified in <python-class-name> for the <upgrade>
component in package-meta-data.xml.
It should inherit ncs.upgrade.Upgrade.
NSO will instantiate this class using the empty constructor.
The class MUST have a method named 'upgrade' (as in the example below)
which will be called by NSO.
"""
def upgrade(self, cdbsock, trans):
"""The upgrade 'method' that will be called by NSO.
Arguments:
cdbsock -- a connected CDB data socket for reading current (old) data.
trans -- a ncs.maapi.Transaction instance connected to the init
transaction for writing (new) data.
There is no need to connect a CDB data socket to NSO - that part is
already taken care of and the socket is passed in the first argument
'cdbsock'. A session against the DB needs to be started though. The
session doesn't need to be ended and the socket doesn't need to be
closed - NSO will do that automatically.
The second argument 'trans' is already attached to the init transaction
and ready to be used for writing the changes. It can be used to create a
maagic object if that is preferred. There's no need to detach or finish
the transaction, and, remember to NOT apply() the transaction when work
is finished.
The method should return True (or None, which means that a return
statement is not needed) if everything was OK.
If something went wrong the method should return False or throw an
error. The northbound client initiating the upgrade will be alerted
with an error message.
Anything written to stdout/stderr will end up in the general log file
for various output from Python VMs. If not configured the file will
be named ncs-python-vm.log.
"""
# start a session against running
_ncs.cdb.start_session2(cdbsock, ncs.cdb.RUNNING,
ncs.cdb.LOCK_SESSION | ncs.cdb.LOCK_WAIT)
# loop over a list and do some work
num = _ncs.cdb.num_instances(cdbsock, '/path/to/list')
for i in range(0, num):
# read the key (which in this example is 'name') as a ncs.Value
value = _ncs.cdb.get(cdbsock, '/path/to/list[{0}]/name'.format(i))
# create a mandatory leaf 'level' (enum - low, normal, high)
key = str(value)
trans.set_elem('normal', '/path/to/list{{{0}}}/level'.format(key))
# not really needed
return True
# Error return example:
#
# This indicates a failure and the string written to stdout below will
# be written to the general log file for various output from Python VMs.
#
# print('Error: not implemented yet')
# return False
$ ncs_cli -u admin
admin@ncs> config
admin@ncs% set python-vm logging level level-debug
admin@ncs% commit
$ ncs_cli -u admin
admin@ncs> config
admin@ncs% set python-vm logging vm-levels pkg_name level level-debug
admin@ncs% commit
$ tail -f logs/ncs-python-vm-l3vpn.log
2016-04-13 11:24:07 - l3vpn - DEBUG - Waiting for Json msgs
2016-04-13 11:26:09 - l3vpn - INFO - action name: double
2016-04-13 11:26:09 - l3vpn - INFO - action input.number: 21
if [ -x "$(which python3)" ]; then
echo "Starting python3 -u $main $*"
exec python3 -u "$main" "$@"
fi
echo "Starting python -u $main $*"
exec python -u "$main" "$@"
$ cd $NCS_DIR/bin
$ pwd
/usr/local/nso/bin
$ cp ncs-start-python-vm my-start-python-vm
$ # Use your favourite editor to update the last lines of the new
$ # file to start the desired Python executable.
<python-vm>
<start-command>/usr/local/nso/bin/my-start-python-vm</start-command>
</python-vm>
Manipulate and manage existing services and devices.
Devices and services are the most important entities in NSO. Once created, they may be manipulated in several different ways. The three main categories of operations that affect the state of services and devices are:
Commit Flags: Commit flags modify the transaction semantics.
Device Actions: Explicit actions that modify the devices.
Service Actions: Explicit actions that modify the services.
This section serves as a quick reference: an enumeration of commonly used commands. The context in which these commands should be used is found in other parts of the documentation.
Commit flags may be present when issuing a commit command:
Some of these flags may be configured to apply globally for all commits, under /devices/global-settings, or per device profile, under /devices/profiles.
Some of the more important flags are:
and-quit: Exit to CLI operational mode after commit.
check: Validate the pending configuration changes. Equivalent to the validate command.
comment | label: Add a commit comment/label visible in compliance reports, rollback files, etc.
All commands in NSO can also have pipe commands. A useful pipe command for commit is details:
This will give feedback on the steps performed in the commit.
When working with templates, there is a pipe command debug which can be used to troubleshoot templates. To enable debugging on all templates use:
When configuring using many templates the debug output can be overwhelming. For this reason, there is an option to only get debug information for one template, in this example, a template named l3vpn:
Actions for devices can be performed globally on the /devices path and for individual devices on /devices/device/name. Many actions are also available on device groups as well as device ranges.
Service actions are performed on the service instance.
Manage NSO alarms with native alarm manager.
NSO embeds a generic alarm manager. It manages NSO native alarms and can easily be extended with application-specific alarms. Alarm sources can be notifications from devices, undesired states detected on services, or anything provided via the Java API.
The Alarm Manager has three main components:
Alarm List: A list of alarms in NSO. Each list entry represents an alarm state for a specific device, an object within the device, and an alarm type.
Alarm Model: For each alarm type, you can configure the mapping to, for example, X.733 alarm standard parameters that are sent as notifications northbound.
dry-run: Validate and display the configuration changes but do not perform the actual commit. Neither CDB nor the devices are affected. Instead, the effects that would have taken place are shown in the returned output. The output format can be set with the outformat option. Possible output formats are: xml, cli, and native.
The xml format displays all changes in the whole data model. The changes will be displayed in NETCONF XML edit-config format, i.e., the edit-config that would be applied locally (at NCS) to get a config that is equal to that of the managed device.
The cli format displays all changes in the whole data model. The changes will be displayed in CLI curly bracket format.
The native format displays only changes under /devices/device/config. The changes will be displayed in native device format. The native format can be used with the reverse option to display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are done later on the same data, the reverse device commands returned are invalid.
no-networking: Validate the configuration changes, and update the CDB but do not update the actual devices. This is equivalent to first setting the admin state to southbound locked, then issuing a standard commit. In both cases, the configuration changes are prevented from being sent to the actual devices.
{% hint style="danger" %} If the commit implies changes, it will make the device out-of-sync.
The sync-to command can then be used to push the change to the network. {% endhint %}
no-out-of-sync-check: Commit even if the device is out of sync. This can be used in scenarios where you know that the change you are doing is not in conflict with what is on the device and do not want to perform the action sync-from first. Verify the result by using the action compare-config.
{% hint style="danger" %} The device's sync state is assumed to be unknown after such a commit and the stored last-transaction-id value is cleared. {% endhint %}
no-overwrite: NSO will check that the data that should be modified has not changed on the device compared to NSO's view of the data. This is a fine-granular sync check; NSO verifies that NSO and the device are in sync regarding the data that will be modified. If they are not in sync, the transaction is aborted. This parameter is particularly useful in brownfield scenarios where the device is always out of sync due to being directly modified by operators or other management systems.
{% hint style="danger" %} The device's sync state is assumed to be unknown after such a commit and the stored last-transaction-id value is cleared. {% endhint %}
no-revision-drop: Fail if one or more devices have obsolete device models.
When NSO connects to a managed device, the version of the device data model is discovered. Different devices in the network might have different versions. When NSO is requested to send configuration to devices, it defaults to dropping any configuration that only exists in models later than the one the device supports. This flag forces NSO to never silently drop any data set operations towards a device.
no-deploy: Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be redeployed to write the changes of the service(s) to the network.
reconcile: Reconcile the service data. All data which existed before the service was created will now be owned by the service. When the service is removed, that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed before the service. If manually configured data exists below in the configuration tree that data is kept unless the option discard-non-service-config is used.
use-lsa: Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are: dry-run, no-networking, no-out-of-sync-check, no-overwrite and no-revision-drop.
no-lsa: Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.
commit-queue: Commit through the commit queue (see Commit Queue). While the configuration change is committed to CDB immediately it is not committed to the actual device but rather queued for eventual commit to increase transaction throughput. This enables the use of the commit queue feature for individual commit commands without enabling it by default.
Possible operation modes are: async, sync and bypass.
If the async mode is set, the operation returns successfully if the transaction data has been successfully placed in the queue.
The sync mode will cause the operation to not return until the transaction data has been sent to all devices, or a timeout occurs. If the timeout occurs the transaction data stays in the queue and the operation returns successfully. The timeout value can be specified with the timeout or infinity option. By default, the timeout value is determined by what is configured in /devices/global-settings/commit-queue/sync.
The bypass mode means that if /devices/global-settings/commit-queue/enabled-by-default is true, the data in this transaction will bypass the commit queue. The data will be written directly to the devices. The operation will still fail if the commit queue contains one or more entries affecting the same device(s) as the transaction to be committed.
In addition, the commit-queue flag has a number of other useful options that affect the resulting queue item:
The tag option sets a user-defined opaque tag that is present in all notifications and events sent referencing the queue item.
The block-others option will cause the resulting queue item to block subsequent queue items which use any of the devices in this queue item, from being queued.
The lock option will place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked, see the actions unlock and lock in /devices/commit-queue/queue-item. No following queue items, using the same devices, will be allowed to execute as long as the lock is in place.
The atomic option sets the atomic behavior of the resulting queue item. If this is set to false, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.
Depending on the selected error-option, NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the /devices/commit-queue/completed tree from where it can be viewed and invoked with the rollback action. When invoked, the data will be removed. Possible values are: continue-on-error, rollback-on-error, and stop-on-error.
The continue-on-error value means that the commit queue will continue on errors. No rollback data will be created.
trace-id: Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign one.
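The reference counting behind the reconcile flag described earlier can be illustrated with a toy model (not NSO code; the path string is purely illustrative):

```python
# Toy model of reconcile reference counting: device data created outside
# any service starts with one reference (the manual/original one); a
# service touching it adds one. Reconcile drops the original reference,
# so removing the service later brings the count to zero and the data is
# removed along with it.
refcount = {'interface GigabitEthernet0/1': 1}   # pre-existing, manual data
refcount['interface GigabitEthernet0/1'] += 1    # a service now refers to it
refcount['interface GigabitEthernet0/1'] -= 1    # reconcile: service takes ownership
refcount['interface GigabitEthernet0/1'] -= 1    # service removal
print(refcount)  # count reaches 0: the data is deleted with the service
```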
If configuration changes have been made out-of-band, then deep-check-sync is needed to detect an out-of-sync condition.
The deep option is used to recursively check-sync stacked services. The shallow option only check-syncs the topmost service.
shallowget-modificationsThe deep option is used to recursively re-deploy stacked services. The shallow option only re-deploy the topmost service.
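The difference between deep and shallow on stacked services can be sketched as a recursion choice (the dict structure and function name here are hypothetical, not the NSO data model):

```python
def redeploy(service, deep=False):
    """Return the names of services that would be re-deployed."""
    touched = [service["name"]]
    if deep:
        # deep recurses into the services stacked below this one
        for lower in service.get("stacked", []):
            touched += redeploy(lower, deep=True)
    return touched
```

With a topmost service stacked on a lower one, shallow touches only the topmost name while deep walks the whole stack.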
If the dry-run option is used, the action simply reports (in different formats) what it would do.
Use the option reconcile if the service should reconcile original data, i.e., take control of that data. This option acknowledges other services controlling the same data. All data which existed before the service was created will now be owned by the service. When the service is removed that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree that data is kept unless the option discard-non-service-config is used.
Note: The action is idempotent. If no configuration diff exists then nothing needs to be done.
Note: The NSO general principle of minimum change applies.
Initial Perceived Severity: major
Description An error happened while aborting or reverting a transaction. The device's configuration is likely to be inconsistent with the NCS CDB.
Recommended Action Inspect the configuration difference with compare-config and resolve any conflicts with sync-from or sync-to.
Clear Condition(s) If NCS achieves sync with the device, or receives a transaction id for a netconf session towards the device, the alarm is cleared.
Alarm Message(s)
Device {dev} is locked
Device {dev} is southbound locked
abort error
Operator Actions: Actions to set operator states on alarms such as acknowledgement, and also actions to administratively manage the alarm list such as deleting alarms.
The alarm manager is accessible over all northbound interfaces. A read-only view including an SNMP alarm table and alarm notifications is available in an SNMP Alarm MIB. This MIB is suitable for integration with SNMP-based alarm systems.
To populate the alarm list there is a dedicated Java API. This API lets a developer add alarms, change states on alarms, etc. A common usage pattern is to use the SNMP notification receiver to map a subset of the device traps into alarms.
First of all, it is important to clearly define what an alarm means: "An alarm denotes an undesirable state in a resource for which an operator action is required". Alarms are often confused with general logging and event mechanisms, thereby flooding the operator with alarms. In NSO, the alarm manager shows undesired resource states that an operator should investigate. NSO contains other mechanisms for logging in general. Therefore, NSO does not naively populate the alarm list with traps received in the SNMP notification receiver.
Before looking into how NSO handles alarms, it is important to define the fundamental concepts. We make a clear distinction between alarms and events in general. Alarms should be taken seriously and be investigated. Alarms have states; they go active with a specific severity, they change severity, and they are cleared by the resource. The same alarm may become active again. A common mistake is to confuse the operator view with the resource view. The model described so far is the resource view. The resource itself may consider the alarm cleared. The alarm manager does not automatically delete cleared alarms. An alarm that has existed in the network may still need investigation. There are dedicated actions an operator can use to manage the alarm list, for example, delete the alarms based on criteria such as cleared and date. These actions can be performed over all northbound interfaces.
Rather than viewing alarms as a list of alarm notifications, NSO defines alarms as states on objects. The NSO alarm list uses four keys for alarms: the device, the alarming object within the device, the alarm type, and an optional specific problem.
Alarm types are normally unique identifiers for a specific alarm state and are defined statically. An alarm type corresponds to the well-known X.733 alarm standard tuple event type and probable cause. A specific problem is an optional key that is string-based and can further refine an alarm type at run-time. This is needed for alarms that are not known before a system is deployed.
Imagine a system with general digital inputs. A MIB might specify traps called input-high, or input-low. When defining the SNMP notification reception, an integrator might define an alarm type called "External-Alarm". input-high might imply a major alarm and input-low might imply clear.
At installation, some detectors report "fire-alarm" and some "door-open" alarms. This is configured at the device and sent as free text in the SNMP var-binds. This is then managed by using the specific problem field of the NSO alarm manager to separate these different alarm types.
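The External-Alarm scenario above can be sketched as a small mapping function in a notification receiver. Everything here (trap names, severities, the dict shape) is illustrative, not an NSO API:

```python
# Assumed mapping from trap name to perceived severity for this example.
TRAP_SEVERITY = {"input-high": "major", "input-low": "cleared"}

def trap_to_alarm(device, trap, varbind_text):
    """Map an incoming SNMP trap to an alarm-state update."""
    return {
        "device": device,
        "managed-object": f"/devices/device[name='{device}']",
        "alarm-type": "external-alarm",
        "specific-problem": varbind_text,   # e.g. "fire-alarm" or "door-open"
        "severity": TRAP_SEVERITY[trap],
    }
```

The free-text var-bind becomes the specific problem, so fire alarms and door-open alarms from the same trap type end up as distinct alarm entries.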
The data model for the alarm manager is outlined below.
This means that we have a list with key: (managed device, managed object, alarm type, specific problem). In the example above, we might have the following different alarms:
Device : House1; Managed Object : Detector1; Alarm-Type : External Alarm; Specific Problem = Smoke;
Device : House1; Managed Object : Detector2; Alarm-Type : External Alarm; Specific Problem = Door Open;
Each alarm entry shows the last status change for the alarm and also a child list with all status changes sorted in chronological order.
is-cleared: was the last state change clear?
last-status-change: timestamp for the last status change.
last-perceived-severity: last severity (not equal to clear).
last-alarm-text: the last alarm text (not equal to clear).
status-change, event-time: the time reported by the device.
status-change, received-time: the time the state change was received by NSO.
status-change, perceived-severity: the new perceived severity.
status-change, alarm-text: descriptive text associated with the new alarm status.
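The alarm-entry structure described by the fields above can be sketched as a dictionary keyed by the four-part alarm key, with a chronological status-change list per entry (a simplified model, not NSO code):

```python
alarms = {}

def report_status(device, obj, alarm_type, problem, severity, text, time):
    """Record a status change on the alarm identified by the 4-part key."""
    key = (device, obj, alarm_type, problem)
    entry = alarms.setdefault(key, {"status-change": []})
    entry["status-change"].append(
        {"event-time": time, "perceived-severity": severity, "alarm-text": text})
    entry["is-cleared"] = severity == "cleared"
    entry["last-status-change"] = time
    if severity != "cleared":
        # last-perceived-severity/-alarm-text track the last non-clear state
        entry["last-perceived-severity"] = severity
        entry["last-alarm-text"] = text
```

A raise followed by a clear on the same key yields one alarm entry with two status changes, mirroring how the same alarm may later become active again.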
It is fundamental to define alarm types (specific problem) and the managed objects with a fine-grained mechanism that is still extensible. For objects, the managed-object type is a union that can refer to a YANG instance-identifier, an SNMP OID, or a string. Strings can be used when the underlying object is not modeled. We use YANG identities to define alarm types. This has the benefit that alarm types can be defined in a named hierarchy and thereby provide an extensible mechanism. To support "dynamic alarm types", so that alarms can be separated by information only available at run-time, the string-based field specific-problem can also be used.
So far we have described the model based on the resource view. It is common practice to let operators manipulate the alarms corresponding to the operator's investigation. We clearly separate the resource and the operator view, for example, there is no such thing as an operator "clearing an alarm". Rather the alarm entries can have a corresponding alarm handling state. Operators may want to acknowledge an alarm and set the alarm state to closed or similar.
We also support some alarm list administrative actions:
Synchronize alarms: try to read the alarm states in the underlying resources and update the alarm list accordingly (this action needs to be implemented by user code for specific applications).
Purge alarms: delete entries in the alarm list based on several different filter criteria.
Filter alarms: with an XPATH as filter input, this action returns all alarms fulfilling the filter.
Compress alarms: since every entry may contain a large amount of state change entries this action compresses the history to the latest state change.
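Two of these administrative actions can be illustrated over the simplified alarm-entry shape used above (function names invented; purge in NSO supports richer filter criteria than shown here):

```python
def compress_alarm(entry):
    """Compress the history: keep only the most recent status change."""
    entry["status-change"] = entry["status-change"][-1:]
    return entry

def purge_alarms(alarm_list, cleared_only=True):
    """Delete matching alarms from the list; return how many were purged."""
    victims = [key for key, e in alarm_list.items()
               if not cleared_only or e.get("is-cleared")]
    for key in victims:
        del alarm_list[key]
    return len(victims)
```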
Alarms can be forwarded over NSO northbound interfaces. In many telecom environments, alarms need to be mapped to X.733 parameters. We provide an alarm model where every alarm type is mapped to the corresponding X.733 parameters such as event type and probable cause. In this way, it is easy to integrate NSO alarms into whatever X.733 enumerated values the upper fault management system requires.
The central part of the YANG Alarm model tailf-ncs-alarms.yang has the following structure.
The first part of the YANG listing above shows the definition for managed-object type in order for alarms to refer to YANG, SNMP, and other resources. We also see basic definitions from the X.733 standard for severity levels.
Note well the definition of alarm type using YANG identities. In this way, we can create a structured alarm-type hierarchy all rooted at alarm-type. For you to add your specific alarm types, define your own alarm types YANG file and add identities using alarm-type as a base.
The alarm-model container contains the mapping from alarm types to X.733 parameters used for north-bound interfaces.
The alarm-list container is the actual alarm list where we maintain a list mapping (device, managed-object, alarm-type, specific-problem) to the corresponding alarm state changes [(time, severity, text)].
Finally, we see the northbound alarm notification and alarm administrative actions.
The NSO alarm manager has support for the operator to acknowledge alarms. We call this alarm handling. Each alarm has an associated list of alarm handling entries as:
The following typedef defines the different states an alarm can be set into.
It is of course also possible to manipulate the alarm handling list from either Java code or JavaScript code running in the web browser using the js_maapi library.
Below is a simple scenario to illustrate the alarm concepts. The example can be found in examples.ncs/service-provider/simple-mpls-vpn.
In the above scenario, we stop two of the devices and then ask NSO to connect to all devices. This results in two alarms for pe0 and pe1. Note that the key for the alarm is the device name, the alarm type, the full path to the object (in this case, the device and not an object within the device), and finally an empty string for the specific problem.
In the next command sequence, we start the device and request NSO to connect. This will clear the alarms.
Note that there are two status-change entries for the alarm and that the alarm is cleared. In the following scenario, we will state that the alarm is closed and finally purge (delete) all alarms that are cleared and closed (Again, note the distinction between operator states and the states from the underlying resources).
Assume that you need to configure the northbound parameters. This is done using the alarm model. A logical mapping of the connection problem above is to map it to X.733 probable cause connectionEstablishmentError (22) . This is done in the NSO CLI in the following way:
Audit and verify your network for configuration compliance.
When the network configuration is broken, there is a need to gather information and verify the network. NSO has numerous functions to show different aspects of such a network configuration verification. However, to simplify this task, compliance reporting can assemble information using a selection of these NSO functions and present the resulting information in one report. This report aims to answer two fundamental questions:
Who has done what?
Is the network correctly configured?
What defines a correctly configured network? Where is the authoritative configuration kept? Naturally, NSO, with the configurations stored in CDB, is the authority. Checking the live devices against the NSO-stored device configuration is a fundamental part of compliance reporting. Compliance reporting can also be based on one or a number of stored templates which the live devices are compared against. The compliance reports can also be a combination of both approaches.
Compliance reporting can be configured to check the current situation, check historical events, or both. To assemble historical events, rollback files are used. Therefore this functionality must be enabled in NSO before report execution, otherwise, the history view cannot be presented.
The reports can be created in either plain text, HTML, or DocBook XML format. In addition, the data can also be exported to a SQLite database file. The DocBook XML format allows you to use the report in further post-processing, such as creating a PDF using Apache FOP and your own custom styling.
It is possible to create several named compliance report definitions. Each named report defines the devices, services, and/or templates that should be part of the network configuration verification.
Let us walk through a simple compliance report definition. This example is based on the examples.ncs/service-provider/mpls-vpn example. For the details of the included services and devices in this example, see the README file.
Each report definition has a name and can specify device and service checks. Device checks are further classified into sync and configuration checks. Device sync checks verify the in-sync status of the devices included in the report, while device configuration checks verify individual device configuration against a compliance template (see ).
For device checks, you can select the devices to be checked in four different ways:
all-devices - Check all defined devices.
device-group - Specified list of device groups.
device - Specified list of devices.
Consider the following example report definition named gold-check:
This report definition, when executed, checks whether all devices known to NSO are in sync.
For such a check, the behavior of the verification can be specified:
To request a check-sync action to verify that the device is currently in sync. This behavior is controlled by the leaf current-out-of-sync (default true).
To scan the commit log (i.e., rollback files) for changes on the devices and report these. This behavior is controlled by the leaf historic-changes (default true).
For the example gold-check, you can also use service checks. This type of check verifies if the specified service instances are in sync, that is if the network devices contain configuration as defined by these services. You can select the services to be checked in four different ways:
all-services - Check all known service instances.
service - Specified list of service instances.
select-services - Specified list of service instances through an XPath expression.
For service checks, the verification behavior can be specified as well:
To request a check-sync action to verify that the service is currently in sync. This behavior is controlled by the leaf current-out-of-sync (default true).
To scan the commit log (i.e., rollback files) for changes on the services and report these. This behavior is controlled by the leaf historic-changes (default true).
In the example report, you might choose the default behavior and check all instances of the l3vpn service:
You can also use the web UI to define compliance reports. See the section for details.
Compliance reporting is a read-only operation. When running a compliance report, the result is stored in a file located in a sub-directory compliance-reports under the NSO state directory. NSO has operational data for managing this report storage which makes it possible to list existing reports.
Here is an example of such a report listing:
There is also a remove action to remove report results (and the corresponding file):
When running the report, there are a number of parameters that can be specified with the specific run action.
The parameters that are possible to specify for a report run action are:
title: The title in the resulting report.
from: The date and time from which the report should start the information gathering. If not set, the oldest available information is implied.
to: The date and time when the information gathering should stop. If not set, the current date and time are implied. If set, no new check-syncs of devices and/or services will be attempted.
We will request a report run with a title and formatted as text.
In the above command, the report was run without a from or a to argument. This implies that historical information gathering will be based on all available information. This includes information gathered from rollback files.
When a from argument is supplied to a compliance report run action, this implies that only historical information younger than the from date and time is checked.
When a to argument is supplied, this implies that historical information will be gathered for all logged information up to the date and time of the to argument.
The from and a to arguments can be combined to specify a fixed historic time interval.
When a compliance report is run, the action will respond with a flag indicating if any discrepancies were found. Also, it reports how many devices and services have been verified in total by the report.
Below is an example of a compliance report result (in text format):
After running a compliance report with the command compliance reports <report-name> run outformat <format>, NSO generates a report file and returns a location URL pointing to it. This URL is a direct HTTP(S) link to the report, which can be downloaded, for example, using a standard tool like curl or using Python requests. With basic authentication, the tools authenticate with NSO using a username and password and can retrieve and save the report file locally for further processing, automation, or archiving. You must first establish a JSON-RPC session before downloading the report. If the connection is closed before requesting the file, as is typically the case with curl, use the returned session cookie to download the report.
The examples below clarify how to make requests.
Session-based authentication using the provided cookie to identify the session
Session-based authentication
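A minimal Python sketch of the login step is shown below. The endpoint path, port, and credentials are placeholders for illustration; only the payload builder is exercised here, while the actual HTTP calls (for example with the requests library) are indicated in comments:

```python
def login_payload(user, passwd):
    """Build a JSON-RPC login request body for establishing a session."""
    return {"jsonrpc": "2.0", "id": 1, "method": "login",
            "params": {"user": user, "passwd": passwd}}

# Typical use (not run here), assuming NSO's web server on localhost:8080:
#   s = requests.Session()
#   s.post("http://localhost:8080/jsonrpc", json=login_payload("admin", "admin"))
#   open("report.txt", "wb").write(s.get(report_url).content)
```

The session object keeps the returned cookie, so the subsequent GET of the report URL is authenticated.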
Services are the preferred way to manage device configuration in NSO as they provide numerous benefits (see in Development). However, on your journey to full automation, perhaps you only use NSO to configure a subset of all the services (configuration) on the devices. In this case, you can still perform generic configuration validation on other parts with the help of device configuration checks.
Often, each device will have a somewhat different configuration, such as its own set of IP addresses, which makes checking against a static template impossible. For this reason, NSO supports compliance templates.
These templates are similar to, but separate from, device templates. With compliance templates, you use regular expressions to check compliance, instead of simple fixed values. You can also define and reference variables that get their values when a report is run. All selected devices are then checked against the compliance template and the differences (if any) are reported as a compliance violation.
You can create a compliance template from scratch. For example, to check that the router uses only internal DNS servers from the 10.0.0.0/8 range, you might create a compliance template such as:
Here, the value of the /sys/dns/server must start with 10., followed by any string (the regular expression .+). Since a dot has a special meaning with regular expressions (any character), it must be escaped with a backslash to match only the actual dot character. But note the required multiple escaping (\\\\) in this case.
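The escaping rule can be checked with an ordinary regular-expression engine. Here is the same pattern exercised with Python's re module (sample addresses invented):

```python
import re

# The template pattern: a literal "10." followed by at least one character.
pattern = re.compile(r"10\..+")

assert pattern.fullmatch("10.1.2.3")        # in 10.0.0.0/8: compliant
assert not pattern.fullmatch("100.1.2.3")   # the \. must match a real dot
assert not pattern.fullmatch("192.0.2.1")   # outside the allowed range
```

Without the backslash, the dot would match any character and "100.1.2.3" would incorrectly pass.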
As these expressions can be non-trivial to construct, the templates have a check command that allows you to quickly check compliance for a set of devices, which is a great development aid.
Alternatively, you can use the /compliance/create-template action when you already have existing device templates that you would like to use as a starting point for a compliance template. For example:
Finally, to use compliance templates in a report, reference them from device-check/template:
In some cases, it is insufficient to only check that the required configuration is present, as other configurations on the device can interfere with the desired functionality. For example, a service may configure a routing table entry for the 198.51.100.0/24 network. If someone also configures a more specific entry, say 198.51.100.0/28, that entry will take precedence and may interfere with the way the service requires the traffic to be routed. In effect, this additional configuration can render the service inoperable.
To help operators ensure there is no such extraneous configuration on the managed devices, the compliance reporting feature supports the so-called strict mode. This mode not only checks whether the required configuration is present but also reports any configuration present on the device that is not part of the template.
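Conceptually, strict mode adds a second direction to the comparison. A simplified sketch (flat dicts standing in for device configuration and template; names invented):

```python
def strict_check(device_config, template):
    """Return (missing, extraneous): template entries the device lacks or
    differs on, and device entries not covered by the template."""
    missing = {k: v for k, v in template.items()
               if device_config.get(k) != v}
    extraneous = {k: v for k, v in device_config.items()
                  if k not in template}
    return missing, extraneous
```

In the routing example above, the /24 entry satisfies the template, but strict mode would also flag the interfering /28 entry as extraneous.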
You can configure this mode in the report definition, when specifying the device template to check against, for example:
Develop and deploy a nano service using a guided example.
This section shows how to develop and deploy a simple NSO nano service for managing the provisioning of SSH public keys for authentication. For more details on nano services, see in Development. The example showcasing development is available under $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-sshkey. In addition, there is a reference from the README in the example's directory to the deployment version of the example.
After installing NSO with the option, development often begins with either retrieving an existing YANG model representing what the managed network element (a virtual or physical device, such as a router) can do or constructing a new YANG model that at least covers the configuration of interest to an NSO service. To enable NSO service development, the network element's YANG model can be used with NSO's netsim tool that uses ConfD (Configuration Daemon) to simulate the network elements and their management interfaces like NETCONF. Read more about netsim in
Learn how NSO enhances transactional efficiency with parallel transactions.
From version 6.0, NSO uses the so-called 'optimistic concurrency', which greatly improves parallelism. With this approach, NSO avoids the need for serialization and a global lock to run user code which would otherwise limit the number of requests the system can process in a given time unit.
Using this concurrency model, your code, such as a service mapping or custom validation code, can run in parallel, either with another instance of the same service or an entirely different service (or any other provisioning code, for that matter). As a result, the system can take better advantage of available resources, especially the additional CPU cores, making it a lot more performant.
Transactional systems, such as NSO, must process each request in a way that preserves what are known as the ACID properties, such as atomicity and isolation of requests. A traditional approach to ensure this behavior is by using locking to apply requests or transactions one by one. The main downside is that requests are processed sequentially and may not be able to fully utilize the available resources.
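The contrast can be shown with a toy optimistic store: each transaction records the version of what it read, and at commit time it aborts if any of those values changed since, instead of holding a global lock while user code runs. This is purely illustrative of the technique, not NSO's implementation:

```python
class Store:
    def __init__(self):
        self.data, self.version = {}, {}

    def read(self, txn, key):
        """Read a value, recording its version in the transaction's read set."""
        txn.setdefault(key, self.version.get(key, 0))
        return self.data.get(key)

    def commit(self, txn, writes):
        """Validate the read set; apply writes only if nothing changed."""
        if any(self.version.get(k, 0) != v for k, v in txn.items()):
            return False                      # conflict: caller must retry
        for k, v in writes.items():
            self.data[k] = v
            self.version[k] = self.version.get(k, 0) + 1
        return True
```

Two transactions reading the same key can run their logic in parallel; the first to commit wins, and the second detects the conflict and retries, rather than both being serialized up front.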
commit <flag>
ncs% commit | details
ncs% commit | debug template
ncs% commit | debug template l3vpn

alarm-type
certificate-expiration
ha-alarm
ha-node-down-alarm
ha-primary-down
ha-secondary-down
ncs-cluster-alarm
cluster-subscriber-failure
ncs-dev-manager-alarm
abort-error
bad-user-input
commit-through-queue-blocked
commit-through-queue-failed
commit-through-queue-failed-transiently
commit-through-queue-rollback-failed
configuration-error
connection-failure
final-commit-error
missing-transaction-id
ned-live-tree-connection-failure
out-of-sync
revision-error
ncs-package-alarm
package-load-failure
package-operation-failure
ncs-service-manager-alarm
service-activation-failure
ncs-snmp-notification-receiver-alarm
receiver-configuration-error
time-violation-alarm
transaction-lock-time-violation

module tailf-ncs-alarms {
namespace "http://tail-f.com/ns/ncs-alarms";
prefix "al";
...
typedef managed-object-t {
type union {
type instance-identifier {
require-instance false;
}
type yang:object-identifier;
type string;
}
...
typedef event-type {
type enumeration {
enum other {value 1;}
enum communicationsAlarm {value 2;}
enum qualityOfServiceAlarm {value 3;}
enum processingErrorAlarm {value 4;}
enum equipmentAlarm {value 5;}
...
}
description
"...";
reference
"ITU Recommendation X.736, 'Information Technology - Open
Systems Interconnection - System Management: Security
Alarm Reporting Function', 1992";
}
typedef severity-t {
type enumeration {
enum cleared {value 1;}
enum indeterminate {value 2;}
enum critical {value 3;}
enum major {value 4;}
enum minor {value 5;}
enum warning {value 6;}
}
description
"...";
}
...
identity alarm-type {
description
"Base identity for alarm types.";
...
}
identity ncs-dev-manager-alarm {
base alarm-type;
}
identity ncs-service-manager-alarm {
base alarm-type;
}
identity connection-failure {
base ncs-dev-manager-alarm;
description
"NCS failed to connect to a device";
}
....
container alarm-model {
list alarm-type {
key "type";
leaf type {
type alarm-type-t;
}
uses alarm-model-parameters;
}
}
...
container alarm-list {
config false;
leaf number-of-alarms {
type yang:gauge32;
}
leaf last-changed {
type yang:date-and-time;
}
list alarm {
key "device type managed-object specific-problem";
uses common-alarm-parameters;
leaf is-cleared {
type boolean;
mandatory true;
}
leaf last-status-change {
type yang:date-and-time;
mandatory true;
}
leaf last-perceived-severity {
type severity-t;
}
leaf last-alarm-text {
type alarm-text-t;
}
list status-change {
key event-time;
min-elements 1;
uses alarm-state-change-parameters;
}
leaf last-alarm-handling-change {
type yang:date-and-time;
}
list alarm-handling {
key time;
leaf time {
tailf:info "Time stamp for operator action";
type yang:date-and-time;
}
leaf state {
tailf:info "The operators view of the alarm state";
type alarm-handling-state-t;
mandatory true;
description
"The operators view of the alarm state.";
}
...
}
...
notification alarm-notification {
...
rpc synchronize-alarms {
...
rpc compress-alarms {
...
rpc purge-alarms {

container alarms {
....
container alarm-list {
config false;
....
list alarm {
key "device type managed-object specific-problem";
.....
list alarm-handling {
key time;
leaf time {
type yang:date-and-time;
description
"Time-stamp for operator action on alarm.";
}
leaf state {
mandatory true;
type alarm-handling-state-t;
description
"The operators view of the alarm state";
}
leaf user {
description "Which user has acknowledged this alarm";
mandatory true;
type string;
}
leaf description {
description "Additional optional textual information regarding
this new alarm-handling entry";
type string;
}
}
tailf:action handle-alarm {
tailf:info "Set the operator state of this alarm";
description
"An action to allow the operator to add an entry to the
alarm-handling list. This is a means for the operator to indicate
the level of human intervention on an alarm.";
input {
leaf state {
type alarm-handling-state-t;
mandatory true;
}
}
}
}

typedef alarm-handling-state-t {
type enumeration {
enum none {
value 1;
}
enum ack {
value 2;
}
enum investigation {
value 3;
}
enum observation {
value 4;
}
enum closed {
value 5;
}
}
description
"Operator actions on alarms";
}

$ make stop clean all start
$ ncs-netsim stop pe0
$ ncs-netsim stop pe1
$ ncs_cli -u admin -C
admin connected from 127.0.0.1 using console on host
admin@ncs# devices connect
...
connect-result {
device pe0
result false
info Failed to connect to device pe0: connection refused
}
connect-result {
device pe1
result false
info Failed to connect to device pe1: connection refused
}
...
admin@ncs# show alarms alarm-list
alarms alarm-list number-of-alarms 2
alarms alarm-list last-changed 2015-02-18T08:02:49.162436+00:00
alarms alarm-list alarm pe0 connection-failure /devices/device[name='pe0'] ""
is-cleared false
last-status-change 2015-02-18T08:02:49.162734+00:00
last-perceived-severity major
last-alarm-text "Failed to connect to device pe0: connection refused"
status-change 2015-02-18T08:02:49.162734+00:00
received-time 2015-02-18T08:02:49.162734+00:00
perceived-severity major
alarm-text "Failed to connect to device pe0: connection refused"
alarms alarm-list alarm pe1 connection-failure /devices/device[name='pe1'] ""
is-cleared false
last-status-change 2015-02-18T08:02:49.162436+00:00
last-perceived-severity major
last-alarm-text "Failed to connect to device pe1: connection refused"
status-change 2015-02-18T08:02:49.162436+00:00
received-time 2015-02-18T08:02:49.162436+00:00
perceived-severity major
alarm-text "Failed to connect to device pe1: connection refused"

admin@ncs# exit
$ ncs-netsim start pe0
DEVICE pe0 OK STARTED
$ ncs-netsim start pe1
DEVICE pe1 OK STARTED
$ ncs_cli -u admin -C
admin@ncs# devices connect
...
connect-result {
device pe0
result true
info (admin) Connected to pe0 - 127.0.0.1:10028
}
connect-result {
device pe1
result true
info (admin) Connected to pe1 - 127.0.0.1:10029
}
...
admin@ncs# show alarms alarm-list
alarms alarm-list number-of-alarms 2
alarms alarm-list last-changed 2015-02-18T08:05:04.942637+00:00
alarms alarm-list alarm pe0 connection-failure /devices/device[name='pe0'] ""
is-cleared true
last-status-change 2015-02-18T08:05:04.942637+00:00
last-perceived-severity major
last-alarm-text "Failed to connect to device pe0: connection refused"
status-change 2015-02-18T08:02:49.162734+00:00
received-time 2015-02-18T08:02:49.162734+00:00
perceived-severity major
alarm-text "Failed to connect to device pe0: connection refused"
status-change 2015-02-18T08:05:04.942637+00:00
received-time 2015-02-18T08:05:04.942637+00:00
perceived-severity cleared
alarm-text "Connected as admin"
alarms alarm-list alarm pe1 connection-failure /devices/device[name='pe1'] ""
is-cleared true
last-status-change 2015-02-18T08:05:04.84115+00:00
last-perceived-severity major
last-alarm-text "Failed to connect to device pe1: connection refused"
status-change 2015-02-18T08:02:49.162436+00:00
received-time 2015-02-18T08:02:49.162436+00:00
perceived-severity major
alarm-text "Failed to connect to device pe1: connection refused"
status-change 2015-02-18T08:05:04.84115+00:00
received-time 2015-02-18T08:05:04.84115+00:00
perceived-severity cleared
alarm-text "Connected as admin"

admin@ncs# alarms alarm-list alarm pe0 connection-failure /devices/device[name='pe0']
"" handle-alarm state closed description Fixed
admin@ncs# show alarms alarm-list alarm alarm-handling
DEVICE TYPE STATE USER DESCRIPTION
---------------------------------------------------------
pe0 connection-failure closed admin Fixed
admin@ncs# alarms purge-alarms alarm-handling-state-filter { state closed }
Value for 'alarm-status' [any,cleared,not-cleared]: cleared
purged-alarms 1

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# alarms alarm-model alarm-type connection-failure probable-cause 22
admin@ncs(config-alarm-type-connection-failure/*)# commit
Commit complete.
admin@ncs(config-alarm-type-connection-failure/*)# show full-configuration
alarms alarm-model alarm-type connection-failure *
event-type communicationsAlarm
has-clear true
kind-of-alarm root-cause
probable-cause 22

The rollback-on-error value means that the commit queue item will roll back on errors. The commit queue will place a lock on the failed queue item, thus blocking other queue items with overlapping devices from being executed. The rollback action will then automatically be invoked when the queue item has finished its execution. The lock will be removed as part of the rollback.
The stop-on-error means that the commit queue will place a lock on the failed queue item, thus blocking other queue items with overlapping devices from being executed. The lock must then either manually be released when the error is fixed, or the rollback action under /devices/commit-queue/completed be invoked.
Read about error recovery in Commit Queue for a more detailed explanation.
Device {dev} is southbound locked
Commit queue item {CqId} rollback invoked
Commit queue item {CqId} has failed: Operation failed because: inconsistent database
Remote commit queue item ~p cannot be unlocked: cluster node not configured correctly
The configuration database is locked for device {dev}: {reason}
the configuration database is locked by session {id} {identification}
the configuration database is locked by session {id} {identification}
{Dev}: Device is locked in a {Op} operation by session {session-id}
resource denied
Commit queue item {CqId} rollback invoked
Commit queue item {CqId} has failed: Operation failed because: inconsistent database
Remote commit queue item ~p cannot be unlocked: cluster node not configured correctly
Resource {resource} doesn't exist


select-devices: Specified by an XPath expression.
service-type: Specified by a list of service types.
outformat: One of xml, html, text, or sqlite. If xml is specified, the report is formatted using the DocBook schema. The generated file can be downloaded, for example, using standard CLI tools like curl or using Python requests via the URL returned by NSO.
ncs(config)# compliance reports report gold-check
ncs(config-report-gold-check)# device-check all-devices

ncs(config-report-gold-check)# device-check ?
Possible completions:
all-devices Report on all devices
current-out-of-sync Should current check-sync action be performed?
device Report on specific devices
device-group Report on specific device groups
historic-changes Include commit log events from within the report
interval
select-devices Report on devices selected by an XPath expression
<cr>

ncs(config-report-gold-check)# service-check ?
Possible completions:
all-services Report on all services
current-out-of-sync Should current check-sync action be performed?
historic-changes Include commit log events from within the report
interval
select-services Report on services selected by an XPath expression
service Report on specific services
service-type The type of service.
<cr>

ncs(config-report-gold-check)# service-check service-type /l3vpn:vpn/l3vpn:l3vpn
ncs(config-report-gold-check)# commit
Commit complete.
ncs(config-report-gold-check)# show full-configuration
compliance reports report gold-check
device-check all-devices
service-check service-type /l3vpn:vpn/l3vpn:l3vpn
!

ncs# show compliance report-results
compliance report-results report 1
name gold-check
title "GOLD NW 1"
time 2015-02-04T18:48:57+00:00
who admin
compliance-status violations
location http://.../report_1_admin_1_2015-2-4T18:48:57:0.xml
compliance report-results report 2
name gold-check
title "GOLD NW 2"
time 2015-02-04T18:51:48+00:00
who admin
compliance-status violations
location http://.../report_2_admin_1_2015-2-4T18:51:48:0.text
compliance report-results report 3
name gold-check
title "GOLD NW 3"
time 2015-02-04T19:11:43+00:00
who admin
compliance-status violations
location http://.../report_3_admin_1_2015-2-4T19:11:43:0.text

ncs# compliance report-results report 2..3 remove
ncs# show compliance report-results
compliance report-results report 1
name gold-check
title "GOLD NW 1"
time 2015-02-04T18:48:57+00:00
who admin
compliance-status violations
location http://.../report_1_admin_1_2015-2-4T18:48:57:0.xml

ncs# compliance reports report gold-check run \
> title "My First Report" outformat text

ncs# compliance reports report gold-check run \
> title "First check" from 2015-02-04T00:00:00

ncs# compliance reports report gold-check run \
> title "Second check" to 2015-02-05T00:00:00

ncs# compliance reports report gold-check run \
> title "Third check" from 2015-02-04T00:00:00 to 2015-02-05T00:00:00

ncs# compliance reports report gold-check run \
> title "Fourth check" outformat text
time 2015-2-4T20:42:45.019012+00:00
compliance-status violations
info Checking 17 devices and 2 services
location http://.../report_7_admin_1_2015-2-4T20:42:45.019012+00:00.text

$ cat ./state/compliance-reports/report_7_admin_1_2015-2-4T20\:42\:45.019012+00\:00.text
reportcookie : g2gCbQAAAAtGaWZ0aCBjaGVja20AAAAKZ29sZC1jaGVjaw==
Compliance report : Fourth check
Publication date : 2015-2-4 20:42:45
Produced by user : admin
Chapter : Summary
Compliance result titled "Fourth check" defined by report "gold-check"
Resulting in violations
Checking 17 devices and 2 services
Produced 2015-2-4 20:42:45
From : Oldest available information
To : 2015-2-4 20:42:45
Devices out of sync
p0
check-sync unsupported for device
p1
check-sync unsupported for device
p2
check-sync unsupported for device
p3
check-sync unsupported for device
pe0
check-sync unsupported for device
pe1
check-sync unsupported for device
pe3
check-sync unsupported for device
Template discrepancies
gold-conf
Discrepancies in device
ce0
ce1
ce2
ce3
Chapter : Details
Commit list
SeqNo ID User Client Timestamp Label Comment
0 10031 admin cli 2015-02-04 20:31:42
1 10030 admin cli 2015-02-04 20:03:41
2 10029 admin cli 2015-02-04 19:54:40
3 10028 admin cli 2015-02-04 19:45:20
4 10027 admin cli 2015-02-04 18:38:05
Service commit changes
No service data commits saved for the time interval
Device commit changes
No device data commits saved for the time interval
Service differences
No service data diffs found
Template discrepancies details
gold-conf
Device ce0
config {
ios:snmp-server {
+ community public {
+ }
}
}
Device ce1
config {
ios:snmp-server {
+ community public {
+ }
}
}
Device ce2
config {
ios:snmp-server {
+ community public {
+ }
}
}
Device ce3
config {
ios:snmp-server {
+ community public {
+ }
}
}

# 1. Start a session and save the cookie
$ curl -X POST -H 'Content-Type: application/json' --cookie-jar cookie.txt -d '{"jsonrpc": "2.0", "id": 1, "method": "login", "params": {"user": "admin", "passwd": "admin"}}' http://localhost:8080/jsonrpc
# 2. Use the cookie to identify the session and download the report
$ curl --cookie cookie.txt --output report.txt "http://localhost:8080/compliance-reports/report_2025-10-09T13:48:32.663282+00:00.txt"

import requests
url = "http://localhost:8080/jsonrpc"

# 1. Start a session
session = requests.Session()
headers = {
    "Content-Type": "application/json"
}
data = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "login",
    "params": {
        "user": "admin",
        "passwd": "admin"
    }
}
response = session.post(url, json=data, headers=headers, verify=False)
print("Status code:", response.status_code)
print("Response:", response.text)

file_url = "http://localhost:8080/compliance-reports/report_2025-10-09T13:48:32.663282+00:00.txt"
filename = file_url.split("/")[-1]

# 2. Use the session to download the report
file_response = session.get(file_url, stream=True)
if file_response.status_code == 200:
    with open("report.txt", "wb") as f:
        for chunk in file_response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
else:
    print(file_response.text)

admin@ncs(config)# compliance template internal-dns
admin@ncs(config-template-internal-dns)# ned-id router-nc-1.0 config sys dns server 10\\\\..+

admin@ncs(config)# show full-configuration devices device ex0 config sys dns server
devices device ex0
config
sys dns server 10.2.3.4
!
sys dns server 192.168.100.10
!
!
!
admin@ncs(config)# compliance template internal-dns
admin@ncs(config-template-internal-dns)# check device ex0
check-result {
device ex0
result violations
diff config {
sys {
dns {
+ # after server 10.2.3.4
+ /* No match of 10\\..+ */
+ server 192.168.100.10;
}
}
}
}

admin@ncs(config)# show full-configuration devices template use-internal-dns
devices template use-internal-dns
ned-id router-nc-1.0
config
! Tags: replace (/devices/template{use-internal-dns}/ned-id{router-nc-1.0:router-nc-1.0}/config/r:sys/dns)
sys dns server 10.8.8.8
!
!
!
!
admin@ncs(config)# compliance create-template name internal-dns device-template use-internal-dns
admin@ncs(config)# show configuration
compliance template internal-dns
ned-id router-nc-1.0
config
! Tags: replace (/compliance/template{internal-dns}/ned-id{router-nc-1.0:router-nc-1.0}/config/r:sys/dns)
sys dns server 10.8.8.8
!
!
!
!
admin@ncs(config)# compliance template internal-dns
admin@ncs(config-template-internal-dns)# ned-id router-nc-1.0 config sys dns server 10\\\\..+

admin@ncs(config-report-gold-check)# device-check template internal-dns

ncs(config)# compliance reports report gold-check
ncs(config-report-gold-check)# device-check template internal-dns strict

Start by setting up your system to install and run NSO.
To install NSO:
Fulfill at least the primary requirements.
If you intend to build and run NSO examples, you also need to install additional applications listed under Additional Requirements.
Where requirements list a specific or higher version, there always exists a (small) possibility that a higher version introduces breaking changes. If in doubt whether the higher version is fully backwards compatible, always use the specific version.
To download the Cisco NSO installer and example NEDs:
Go to Cisco's official Software Download site.
Search for the product "Network Services Orchestrator" and select the desired version.
There are two versions of the NSO installer: one for macOS and one for Linux systems. Download the desired installer.
If your downloaded file is a signed.bin file, it has been digitally signed by Cisco; when you run it, the signature is verified and the installer.bin is unpacked.
If you only have installer.bin, skip to the next step.
To unpack the installer:
In the terminal, list the binaries in the directory where you downloaded the installer, for example:
Use the sh command to run the signed.bin to verify the certificate and extract the installer binary and other files. An example output is shown below.
List the files to check if extraction was successful.
Local Install of NSO Software is performed in a single user-specified directory, for example, your $HOME directory. It is always recommended to install NSO in a directory named after the release version; for example, if the version being installed is 6.1, the directory should be ~/nso-6.1.
To run the installer:
Navigate to your Install Directory.
Run the following command to install NSO in your Install Directory. The --local-install parameter is optional.
An example output is shown below.
The installation program creates a shell script file named ncsrc in each NSO installation, which sets the environment variables.
To set the environment variables:
Source the ncsrc file to get the environment variables settings in your shell. You may want to add this sourcing command to your login sequence, such as .bashrc.
For csh/tcsh users, there is an ncsrc.tcsh file with csh/tcsh syntax. The example below assumes that you are using bash; other versions of /bin/sh may require that you use . instead of source.
Most users add source ~/nso-x.x/ncsrc (where x.x is the NSO version) to their ~/.bash_profile, but you can simply do it manually when you want it. Once it has been sourced, you have access to all the NSO executable commands, which start with ncs.
NSO needs a deployment/runtime directory where the database files, logs, etc. are stored. An empty default directory can be created using the ncs-setup command.
To create a Runtime Directory:
Create a Runtime Directory for NSO by running the following command. In this case, we assume that the directory is $HOME/ncs-run.
Start the NSO daemon ncs.
The ncs-setup command creates an ncs.conf file that uses predefined encryption keys for easier migration of data across installations. It is not suitable for cases where data confidentiality is required, such as a production deployment.
To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses Cisco Smart Licensing, as described in Cisco Smart Licensing, to make it easy to deploy and manage NSO license entitlements. Login credentials for the CSSM account are provided by your Cisco contact, and detailed instructions on how to create a registration token can be found in Cisco Smart Licensing. General licensing information covering licensing models, how licensing works, usage compliance, etc. is covered in the Cisco Software Licensing Guide.
To generate a license registration token:
When you have a token, start a Cisco CLI towards NSO and enter the token, for example:
Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only development entitlement for the NSO instance itself is requested.
Inspect the requested entitlements using the command show license all (or by inspecting the NSO daemon log). An example output is shown below.
Frequently Asked Questions (FAQs) about Local Install.
Next Steps
Prepare
Install
Finalize
The simple network element YANG model used for this example is available under packages/ne/src/yang/ssh-authkey.yang. The ssh-authkey.yang model implements a list of SSH public keys for identifying a user. The list of keys augments a list of users in the ConfD built-in tailf-aaa.yang module that ConfD uses to authenticate users.
On the network element, a Python application subscribes to ConfD to be notified of configuration changes to the user's public keys and updates the user's authorized_keys file accordingly. See packages/ne/netsim/ssh-authkey.py for details.
The first step is to create an NSO package from the network element YANG model. Since NSO will use NETCONF over SSH to communicate with the device, the package will be a NETCONF NED. The package can be created using the ncs-make-package command or the NETCONF NED builder tool. The ncs-make-package command is typically used when the YANG models used by the network element are available. Hence, the packages/ne package for this example was generated using the ncs-make-package command.
As the ssh-authkey.yang model augments the users list in the ConfD built-in tailf-aaa.yang model, NSO needs a representation of that YANG model too to build the NED. However, the service will only configure the user's public keys, so only a subset of the tailf-aaa.yang model that only includes the user list is sufficient. To compare, see the packages/ne/src/yang/tailf-aaa.yang in the example vs. the network element's version under $NCS_DIR/netsim/confd/src/confd/aaa/tailf-aaa.yang.
Now that the network element package is defined, next up is the service package, beginning with finding out what steps are required for NSO to authenticate with the network element using SSH public key authentication:
First, generate private and public keys using, for example, the ssh-keygen OpenSSH authentication key utility.
Distribute the public keys to the ConfD-enabled network element's list of authorized keys.
Configure NSO to use public key authentication with the network element.
Finally, test the public key authentication by connecting NSO with the network element.
The outline above indicates that the service will benefit from implementing several smaller (nano) steps:
The first step only generates private and public key files with no configuration. Thus, the first step should be implemented by an action before the second step runs, not as part of the second step transaction create() callback code configuring the network elements. The create() callback runs multiple times, for example, for service configuration changes, re-deploy, or commit dry-run. Therefore, generating keys should only happen when creating the service instance.
The third step cannot be executed before the second step is complete, as NSO cannot use the public key for authenticating with the network element before the network element has it in its list of authorized keys.
The fourth step uses the NSO built-in connect() action and should run after the third step finishes.
What configuration input do the above steps need?
The name of the network element that will authenticate a user with an SSH public key.
The name of the local NSO user that maps to the remote network element user the public key authenticates.
The name of the remote network element user.
A passphrase is used for encrypting the private key, guarding its privacy. The passphrase should be encrypted when storing it in the CDB, just like any other password.
The name of the NSO authentication group to configure for public-key authentication with the NSO-managed network element.
A service YANG model that implements the above configuration:
For details on the YANG statements used by the YANG model, such as leaf, container, list, leafref, mandatory, length, pattern, etc., see the IETF RFC 7950 that documents the YANG 1.1 Data Modeling Language. The tailf:xyz are YANG extension statements documented by tailf_yang_extensions(5) in Manual Pages.
The service configuration is implemented in YANG by a key-auth list where the network element and local user names are the list keys. In addition, the list has a distkey-servicepoint service point YANG extension statement to enable the list parameters used by the Python service callbacks that this example implements. Finally, the used service-data and nano-plan-data groupings add the common definitions for a service and the plan data needed when the service is a nano service.
For the nano service YANG part, an NSO YANG nano service behavior tree extension that references a plan outline extension implements the above steps for setting up SSH public key authentication with a network element:
The nano service-behavior-tree for the service point creates a nano service component for each list entry in the key-auth list. The last connection verification step of the nano service, the connected state, uses the NE-NAME variable. The NAME variable concatenates the ne-name and local-user keys from the key-auth list to create a unique nano service component name.
The only step that requires both a create and a delete part is the generated state action that generates the SSH keys. If a user deletes a service instance and no other network element currently uses the generated keys, the keys are deleted too. NSO will revert the configuration automatically as part of the FASTMAP algorithm. Hence, the service list instances also need actions for generating and deleting keys.
The actions have no input statements, as the input is the configuration in the service instance list entry.
The generated state uses the ncs:sync statement to ensure that the keys exist before the distributed state runs. Similarly, the distributed state uses the force-commit statement to commit the configuration to the NSO CDB and the network elements before the configured state runs.
See the packages/distkey/src/yang/distkey.yang YANG model for the nano service behavior tree, plan outline, and service configuration implementation.
Next, handling the key generation, distributing keys to the network element, and configuring NSO to authenticate using the keys with the network element requires some code, here written in Python, implemented by the packages/distkey/python/distkey/distkey-app.py script application.
The Python script application defines a Python DistKeyApp class specified in the packages/distkey/package-meta-data.xml file that NSO starts in a Python thread. This Python class inherits ncs.application.Application and implements the setup() and teardown() methods. The setup() method registers the nano service create() callbacks and the action handlers for generating and deleting the key files. Using the nano service state to separate the two nano service create() callbacks for the distribution and NSO configuration of keys, only one Python class, the DistKeyServiceCallbacks class, is needed to implement them.
The action for generating keys calls the OpenSSH ssh-keygen command to generate the private and public key files. Calling ssh-keygen is kept out of the service create() callback to avoid the key generation running multiple times, for example, for service changes, re-deploy, or dry-run commits. Also, NSO encrypts the passphrase used when generating the keys for added security, see the YANG model, so the Python code decrypts it before using it with the ssh-keygen command.
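The ssh-keygen call inside the action can be sketched as follows. This is a simplified stand-in for the example's actual handler, not its real code: the helper names, key type, and file paths are assumptions, and the passphrase decryption step is only noted in a comment.

```python
import subprocess

def build_keygen_cmd(key_path, passphrase):
    """Build an OpenSSH ssh-keygen command line (hypothetical helper).

    -t ed25519 selects the key type, -f the output file, -N the
    passphrase protecting the private key, -q suppresses output.
    The real action first decrypts the passphrase stored encrypted
    in the CDB before passing it here.
    """
    return ["ssh-keygen", "-t", "ed25519", "-q",
            "-f", key_path, "-N", passphrase]

def generate_keys(key_path, passphrase):
    # Produces key_path (private key) and key_path + ".pub" (public key)
    subprocess.run(build_keygen_cmd(key_path, passphrase), check=True)
```

Keeping the command construction in a separate function makes the invocation easy to inspect and test without actually generating keys.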
The DeleteActionHandler action deletes the key files if no more network elements use the user's keys:
The Python class for the nano service create() callbacks handles both the distribution and NSO configuration of the keys. The dk:distributed state create() callback code adds the public key data to the network element's list of authorized keys. For the create() call for the dk:configured state, a template is used to configure NSO to use public key authentication with the network element. The template can be called directly from the nano service, but in this case, it needs to be called from the Python code to input the current working directory to the template:
The template to configure NSO to use public key authentication with the network element is available under packages/distkey/templates/distkey-configured.xml:
The example uses three scripts to showcase the nano service:
A shell script, showcase.sh, which uses the ncs_cli program to run CLI commands via the NSO IPC port.
A Python script, showcase-rc.sh, which uses the requests package for RESTCONF edit operations and receiving event notifications.
A Python script, showcase-maapi.sh, which uses NSO MAAPI via the NSO IPC port.
The ncs_cli program identifies itself with NSO as the admin user without authentication, and the RESTCONF client uses plain HTTP and basic user password authentication. All three scripts demonstrate the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements. To run the example, see the instructions in the README file of the example.
See the README in the netsim-sshkey example's directory for a reference to an NSO system installation in a container deployment variant.
The deployment variant differs from the development example by:
Installing NSO with a system installation for deployment instead of a local installation suitable for development
Addressing NSO security by running NSO as the admin user and authenticating using a public key and token.
Rotating NSO logs to avoid running out of disk space
Installing the distkey service package and ne NED package at startup
The NSO CLI showcase script uses SSH with public key authentication instead of the ncs_cli program over unsecured IPC
There is no Python MAAPI showcase script. Use RESTCONF over HTTPS with Python instead of Python MAAPI over unsecured IPC.
Having NSO and the network elements (simulated by the ConfD subscriber application) run in separate containers
NSO is either pre-installed in the NSO production container image or installed in a generic Linux container.
The deployment example sets up a minimal production installation where the NSO process runs as the admin OS user, relying on PAM authentication for the admin and oper NSO users. The admin user is authenticated over SSH using a public key for CLI and NETCONF access and over RESTCONF HTTPS using a token. The read-only oper user uses password authentication. The oper user can access the NSO WebUI over HTTPS port 443 from the container host.
A modified version of the NSO configuration file ncs.conf from the example running with a local install NSO is located in the $NCS_CONFIG_DIR (/etc/ncs) directory. The packages, ncs-cdb, state, and scripts directories are now under the $NCS_RUN_DIR (/var/opt/ncs) directory. The log directory is now the $NCS_LOG_DIR (/var/log/ncs) directory. Finally, the $NCS_DIR variable points to /opt/ncs/current.
Two scripts showcase the nano service:
A shell script that runs NSO CLI commands over SSH.
A Python script that uses the requests package to perform edit operations and receive event notifications.
As with the development version, both scripts will demo the service by generating keys, distributing the public key, and configuring NSO for public key authentication with the network elements.
To run the example and for more details, see the instructions in the README file of the deployment example.

Optimistic concurrency, on the other hand, allows transactions to run in parallel. It works on the premise that data conflicts are rare, so most of the time the transactions can be applied concurrently and will retain the required properties. NSO ensures this by checking that there are no conflicts with other transactions just before each transaction is committed. In particular, NSO will verify that all the data accessed as part of the transaction is still valid when applying changes. Otherwise, the system will reject the transaction.
Such a model makes sense because a lot of the time concurrent transactions deal with separate sets of data. Even if multiple transactions share some data in a read-only fashion, it is fine as they still produce the same result.
In the figure, svc1 in the T1 transaction and svc2 in the T2 transaction both read (but do not change) the same, shared piece of data and can proceed as usual, unperturbed.
On the other hand, a conflict is when a piece of data, that has been read by one transaction, is changed by another transaction before the first transaction is committed. In this case, at the moment the first transaction completes, it is already working with stale data and must be rejected, as the following figure shows.
In the figure, the transaction T1 reads dns-server to use in the provisioning of svc1 but transaction T2 changes dns-server value in the meantime. The two transactions conflict and T1 is rejected because T2 completed first.
To be precise, for a transaction to experience a conflict, both of the following have to be true:
It reads some data that is changed after being read and before the transaction is completed.
It commits a set of changes in NSO.
This means that a set of read-only transactions, or transactions where nothing is changed, will never conflict. It is also possible for multiple write-only transactions to avoid conflict even when they update the same data nodes.
Allowing multiple concurrent transactions to write (and only write, not read) to the same data without conflict may seem odd at first. But from a transaction's standpoint, it does not depend on the current value because that value was never read. Had the value changed the previous day instead, the transaction would have done the exact same thing, and you wouldn't consider that a conflict. So, the last write wins, regardless of the time elapsed between the two transactions.
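The rule above can be modeled with read- and write-sets: a committing transaction is rejected only if a path it read was written by another transaction in the meantime. This is a toy model for illustration, not NSO's implementation:

```python
def conflicts(read_set, other_write_set):
    """A committing transaction conflicts only if something it has
    read was changed by another transaction in the meantime."""
    return bool(read_set & other_write_set)

# T1 reads the shared dns-server while provisioning svc1;
# T2 changes dns-server and commits first: T1 must be rejected.
t1_reads = {"/dns-server", "/svc1"}
t2_writes = {"/dns-server"}
print(conflicts(t1_reads, t2_writes))  # -> True

# Two write-only transactions updating the same node never read it,
# so the read-set is empty and there is no conflict: last write wins.
t3_reads = set()
print(conflicts(t3_reads, t2_writes))  # -> False
```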
It is extremely important that you do not mix multiple transactions, because it will prevent NSO from detecting conflicts properly. For example, starting multiple separate transactions and using one to write data, based on what was read from a different one, can result in subtle bugs that are hard to troubleshoot.
While the optimistic concurrency model allows transactions to run concurrently most of the time, ultimately some synchronization (a global lock) is still required to perform the conflict checks and serialize data writes to the CDB and devices. The following figure shows everything that happens after a client tries to apply a configuration change, including acquiring and releasing the lock. This process takes place, for example, when you enter the commit command on the NSO CLI or when a PUT request of the RESTCONF API is processed.
As the figure shows (and you can also observe it in the progress trace output), service mapping, validation, and transforms all happen in the transaction before taking a (global) transaction lock.
At the same time, NSO tracks all of the data reads and writes from the start of the transaction, right until the lock and conflict check. This includes service mapping callbacks and XML templates, as well as transform and custom validation hooks if you are using any. It even includes reads done as part of the YANG validation and rollback creation that NSO performs automatically.
If reads do not overlap with writes from other transactions, the conflict check passes. The change is written to the CDB and disseminated to the affected network devices, through the prepare and commit phases. Kickers and subscribers are called and, finally, the global lock can be released.
On the other hand, if there is overlap and the system detects a conflict, the transaction obviously cannot proceed. To recover if this happens, the transaction should be retried. Sometimes the system can do it automatically and sometimes the client itself must be prepared to retry it.
In general, what affects the chance of conflict is the actual data that is read and written by each transaction. So, if there is more data, the surface for potential conflict is bigger. But you can minimize this chance by accounting for it in the application design.
When a transaction conflict occurs, NSO logs an entry in the developer log, often found at logs/devel.log or a similar path. Suppose you have the following code in Python:
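The snippet referred to is not included in this extract; a minimal reconstruction, consistent with the description that follows (the /mysvc-dns leaf, the dns_server variable, and t.apply() come from the text, while the user and context arguments are assumptions), might look like:

```python
import ncs

with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    dns_server = root.mysvc_dns   # read: /mysvc-dns joins the read-set
    # ... make configuration changes based on dns_server ...
    t.apply()                     # the conflict check happens here
```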
If the /mysvc-dns leaf changes while the code is executing, the t.apply() line fails and the developer log contains an entry similar to the following example:
Here, the transaction with id 3347 reads a value of /mysvc-dns as “10.1.2.2” but that value was changed by the transaction with id 3346 to “10.1.1.138” by the time the first transaction called t.apply(). The entry also contains some additional data, such as the user that initiated the other transaction and the low-level operations that resulted in the conflict.
At the same time, the Python code raises an ncs.error.Error exception, with confd_errno set to the value of ncs.ERR_TRANSACTION_CONFLICT and error text, such as the following:
In Java code, a matching com.tailf.conf.ConfException is thrown, with errorCode set to the com.tailf.conf.ErrorCode.ERR_TRANSACTION_CONFLICT value.
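Since work-phase conflicts cannot be retried automatically (see below), a client applying changes directly should be prepared to retry. The retry policy itself is plain Python and can be sketched generically; in a real NSO client the conflict would surface as the ncs.error.Error described above, modeled here with a placeholder exception:

```python
import random
import time

class TransactionConflict(Exception):
    """Stand-in for the conflict error a real NSO client receives
    (ncs.error.Error with confd_errno == ncs.ERR_TRANSACTION_CONFLICT)."""

def apply_with_retry(txn_fn, max_attempts=3, base_delay=0.0):
    """Re-run the whole transaction body on a conflict.

    txn_fn must open a fresh transaction, redo its reads, and apply;
    retrying with the stale reads would only conflict again.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return txn_fn()
        except TransactionConflict:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * random.random())  # randomized backoff

# A transaction body that conflicts twice, then succeeds:
attempts = []
def txn():
    attempts.append(1)
    if len(attempts) < 3:
        raise TransactionConflict("conflict on /mysvc-dns")
    return "applied"

print(apply_with_retry(txn))  # -> applied
```

The key design point is that the whole body, including the reads, is re-executed on each attempt, so the retried transaction works with fresh data.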
A thing to keep in mind when examining conflicts is that the transaction that performed the read operations is the one that gets the error and causes the log entry, while the other transaction, performing the write operations to the same path, is already completed successfully.
The error includes a reference to the work phase. The phase tells which part of the transaction encountered a conflict. The work phase signifies changes in an open transaction before it is applied. In practice, this is a direct read in the code that started the transaction before calling the apply() or applyTrans() function: the example reads the value of the leaf into dns_server.
On the other hand, if two transactions configure two service instances and the conflict arises in the mapping code, then the phase shows transform instead. It is also possible for a conflict to occur in more than one place, such as the phase transform,work, denoting a conflict both in the service mapping code and in the initial transaction.
The complete list of conflict sources, that is, the possible values for the phase, is as follows:
work: read in an open transaction before it is applied
rollback: read during rollback file creation
pre-transform: read while validating service input parameters according to the service YANG model
transform: read during service (FASTMAP) or another transform invocation
validation: read while validating the final configuration (YANG validation)
For example, pre-transform indicates that the service YANG model validation is the source of the conflict. This can help tremendously when you try to narrow down the conflicting code in complex scenarios. In addition, the phase information is useful when you troubleshoot automatic transaction retries in case of conflict: when the phase includes work, automatic retry is not possible.
In some situations, NSO can retry a transaction that first failed to apply due to a conflict. A prerequisite is that NSO knows which code caused the conflict and that it can run that code again.
Changes done in the work phase are changes made directly by an external agent, such as a Python script connecting to the NSO or a remote NETCONF client. Since NSO is not in control of and is not aware of the logic in the external agent, it can only reject the conflicting transaction.
However, for the phases that follow the work phase, all the logic is implemented in NSO and NSO can run it on demand. For example, NSO is in charge of calling the service mapping code and the code can be run as many times as needed (a requirement for service re-deploy and similar). So, in case of a conflict, NSO can rerun all of the necessary logic to provision or de-provision a service.
NSO keeps checkpoints for each transaction so that, when possible, it can restart the transaction from the conflicting phase instead of redoing the work of the preceding phases. NSO automatically skips a checkpoint if the transaction's read or write set grows too large, which allows larger transactions to go through without exhausting memory. When all checkpoints are skipped, no transaction retries are possible and the transaction fails; when later-stage checkpoints are skipped, a retry takes more time.
Moreover, in case of conflicts during service mapping, NSO optimizes the process even further. It tracks the conflicting services to not schedule them concurrently in the future. This automatic retry behavior is enabled by default.
For services, retries can be configured further or even disabled under /services/global-settings. You can also find the service conflicts NSO knows about by running the show services scheduling conflict command. For example:
Since a given service may not always conflict and can evolve over time, NSO reverts to default scheduling after expiry time, unless new conflicts occur.
Sometimes, you know in advance that a service will conflict, either with itself or another service. You can encode this information in the service YANG model using the conflicts-with parameter under the servicepoint definition:
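A sketch of such a declaration (module context omitted; the service and servicepoint names are illustrative):

```yang
list mysvc {
  key name;
  leaf name { type string; }

  uses ncs:service-data;
  ncs:servicepoint "mysvc-servicepoint" {
    // Never schedule this service concurrently with itself:
    ncs:conflicts-with "mysvc-servicepoint";
  }
}
```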
The parameter ensures that NSO will never schedule and execute this service concurrently with another service using the specified servicepoint. It adds a non-expiring static scheduling conflict entry. This way, you can avoid the unnecessary occasional retry when the dynamic scheduling conflict entry expires.
Declaring a conflict with itself is especially useful when you have older, non-thread-safe service code that cannot be easily updated to avoid threading issues.
For the NSO CLI and JSON-RPC (WebUI) interfaces, a commit of a transaction that results in a conflict will trigger an automatic rebase and retry when the resulting configuration is the same despite the conflict. If the rebase does not resolve the conflict, the transaction will fail. The conflict can, in some CLI cases, be resolved manually. A successful automatic rebase and a retry will generate something like the following pseudo-log entries in the developer log (trace log level):
When a transaction fails to apply due to a read-write conflict in the work phase, NSO rejects the transaction and returns a corresponding error. In such a case, you must start a new transaction and redo all the changes.
Why is this necessary? Suppose you have code, let's say as part of a CDB subscriber or a standalone program, similar to the following Python snippet:
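For instance, a sketch along these lines (the mysvc-use-dhcp leaf comes from the scenario described here, while the other paths, values, and the transaction object t are illustrative):

```python
def provision_dns(t):
    """Provision DNS settings; t is a read-write MAAPI transaction."""
    # Read a value that another transaction might change at the same time:
    use_dhcp = bool(t.get_elem('/mysvc-settings/mysvc-use-dhcp'))
    if use_dhcp:
        # DHCP hands out the DNS servers; remove any static configuration.
        t.delete('/mysvc-settings/static-dns-server')
    else:
        # Otherwise fall back to a statically configured server.
        t.set_elem('198.51.100.53', '/mysvc-settings/static-dns-server')
```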
If mysvc-use-dhcp has one value when your code starts provisioning but is changed mid-process, your code needs to restart from the beginning or you can end up with a broken system. To guard against such a scenario, NSO needs to be conservative and return an error.
Since there is a chance of a transaction failing to apply due to a conflict, robust code should implement a retry scheme. You can implement the retry algorithm yourself, or you can use one of the provided helpers.
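A hand-rolled retry loop might look like the following sketch. ConflictError stands in for the conflict error raised by the NSO API; in real code, you would catch that specific error instead:

```python
import time

class ConflictError(Exception):
    """Stand-in for the transaction-conflict error raised by the NSO API."""

def apply_with_retry(do_work, max_retries=3, backoff=0.1):
    """Run do_work() and retry it when a transaction conflict occurs.

    do_work must start a fresh transaction on every call, perform the
    reads and writes, and apply the transaction (where a conflict
    surfaces as an error).
    """
    for attempt in range(max_retries + 1):
        try:
            return do_work()
        except ConflictError:
            if attempt == max_retries:
                raise
            # A short, growing delay reduces the chance of clashing again.
            time.sleep(backoff * (2 ** attempt))
```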
In Python, Maapi class has a run_with_retry() method, which creates a new transaction and calls a user-supplied function to perform the work. On conflict, run_with_retry() will recreate the transaction and call the user function again. For details, please see the relevant API documentation.
The same functionality is available in Java as well, as the Maapi.ncsRunWithRetry() method. Where it differs from the Python implementation is that it expects the function to be implemented inside a MaapiRetryableOp object.
As an alternative option, available only in Python, you can use the retry_on_conflict() function decorator.
Example code for each of these approaches is shown next. In addition, the examples.ncs/development-guide/concurrency-model/retry example showcases this functionality as part of a concrete service.
Suppose you have some code in Python, such as the following:
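A minimal sketch (the paths, user name, and the set_dns helper are illustrative; the NSO-specific calls are wrapped in main() so the data logic stays separate):

```python
def set_dns(t):
    """Read the desired DNS server and write it into device config.

    t is an ncs.maapi.Transaction; any object with get_elem and
    set_elem methods works here.
    """
    dns = t.get_elem('/mysvc-settings/dns-server')                 # read
    t.set_elem(dns, '/devices/device{ex0}/config/sys/dns/server')  # write

def main():
    import ncs  # NSO's Python API, available once ncsrc is sourced
    with ncs.maapi.single_write_trans('admin', 'python') as t:
        set_dns(t)
        t.apply()  # a read-write conflict surfaces here as an error
```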
Since the code performs reads and writes of data in NSO through a newly established transaction, there is a chance of encountering a conflict with another, concurrent transaction.
On the other hand, if this was a service mapping code, you wouldn't be creating a new transaction yourself because the system would already provide one for you. You wouldn't have to worry about the retry because, again, the system would handle it for you through the automatic mechanism described earlier.
Yet, you may find such code in CDB subscribers, standalone scripts, or action implementations. As a best practice, the code should handle conflicts.
If you have an existing ncs.maapi.Maapi object already available, the simplest option might be to refactor the actual logic into a separate function and call it through run_with_retry(). For the current example, this might look like the following:
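A sketch of the refactored version (names and paths are illustrative; run_with_retry calls the work function with a fresh transaction and, per the API documentation, applies it when the function returns True):

```python
def do_set_dns(t):
    """Work function for run_with_retry; t is the supplied transaction."""
    dns = t.get_elem('/mysvc-settings/dns-server')
    t.set_elem(dns, '/devices/device{ex0}/config/sys/dns/server')
    return True  # ask run_with_retry to apply the transaction

def main():
    import ncs  # NSO's Python API
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            # Creates the transaction, calls do_set_dns, and recreates
            # and retries it if applying the changes hits a conflict.
            m.run_with_retry(do_set_dns)
```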
If the new function is not entirely independent and needs additional values passed as parameters, you can wrap it inside an anonymous (lambda) function:
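For instance (a sketch; the do_set_dns work function and its paths are illustrative):

```python
def do_set_dns(t, device):
    """Work function that needs the device name as an extra parameter."""
    dns = t.get_elem('/mysvc-settings/dns-server')
    t.set_elem(dns, f'/devices/device{{{device}}}/config/sys/dns/server')
    return True

# Bind the extra argument with a lambda at the call site:
#   m.run_with_retry(lambda t: do_set_dns(t, 'ex0'))
```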
An alternative implementation with a decorator is also possible and might be easier to implement if the code relies on the single_write_trans() or similar function. Here, the code does not change unless it has to be refactored into a separate function. The function is then adorned with the @ncs.maapi.retry_on_conflict() decorator. For example:
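A sketch of this approach (names and paths are illustrative; the decorated function must open a new write transaction each time it runs, which single_write_trans provides):

```python
def set_dns(t):
    dns = t.get_elem('/mysvc-settings/dns-server')
    t.set_elem(dns, '/devices/device{ex0}/config/sys/dns/server')

def main():
    import ncs  # NSO's Python API

    @ncs.maapi.retry_on_conflict()
    def provision():
        # On a conflict, the decorator simply calls provision() again;
        # a brand-new transaction is started on every invocation.
        with ncs.maapi.single_write_trans('admin', 'python') as t:
            set_dns(t)
            t.apply()

    provision()
```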
The major benefit of this approach is when the code is already in a function and only a decorator needs to be added. It can also be used with methods of the Action class and the like.
For actions in particular, please note that the order of decorators is important and the decorator is only useful when you start your own write transaction in the wrapped function. This is what single_write_trans() does in the preceding example because the old transaction cannot be used any longer in case of conflict.
Suppose you have some code in Java, such as the following:
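For illustration, something along these lines (a sketch only; the paths, names, and exact NAVU calls are illustrative, so consult the NAVU javadoc for precise signatures):

```java
import com.tailf.conf.Conf;
import com.tailf.maapi.Maapi;
import com.tailf.navu.NavuContainer;
import com.tailf.navu.NavuContext;

public class MyProvisioning {
    public void provision(Maapi maapi) throws Exception {
        // Start a read-write transaction towards the running datastore.
        NavuContext ctx = new NavuContext(maapi);
        ctx.startRunningTrans(Conf.MODE_READ_WRITE);
        NavuContainer root = new NavuContainer(ctx);

        // Read a value that a concurrent transaction might change...
        String dns = root.container("mysvc-settings")
                         .leaf("dns-server").valueAsString();
        // ...and write it into the device configuration.
        root.container("devices").list("device").elem("ex0")
            .container("config").leaf("dns-server").set(dns);

        // Apply and finish the transaction; a read-write conflict with
        // another transaction surfaces here as an error.
        ctx.applyClearTrans();
    }
}
```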
To read and write some data in NSO, the code starts a new transaction with the help of NavuContext.startRunningTrans() but could have called Maapi.startTrans() directly as well. Regardless of the way such a transaction is started, there is a chance of encountering a read-write conflict. To handle those cases, the code can be rewritten to use Maapi.ncsRunWithRetry().
The ncsRunWithRetry() call creates and manages a new transaction, then delegates work to an object implementing the com.tailf.maapi.MaapiRetryableOp interface. So, you need to move the code that does the work into a new class, let's say MyProvisioningOp:
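For example (a sketch; the paths are illustrative, and the exact MaapiRetryableOp method signature should be checked against the Maapi javadoc):

```java
import java.io.IOException;
import com.tailf.conf.ConfException;
import com.tailf.maapi.Maapi;
import com.tailf.maapi.MaapiRetryableOp;

public class MyProvisioningOp implements MaapiRetryableOp {
    public boolean execute(Maapi maapi, int tid)
            throws IOException, ConfException {
        // Use the transaction handle tid supplied by ncsRunWithRetry
        // instead of starting a transaction of our own.
        String dns = maapi.getElem(tid, "/mysvc-settings/dns-server")
                          .toString();
        maapi.setElem(tid, dns,
                      "/devices/device{ex0}/config/dns-server");
        // Returning true tells the wrapper to apply the transaction.
        return true;
    }
}
```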
This class does not start its own transaction any more but uses the transaction handle tid, provided by the ncsRunWithRetry() wrapper.
You can create MyProvisioningOp as an inner or nested class if you wish, but note that, depending on your code, you may need to make it a static nested class to use it directly as shown here.
If the code requires some extra parameters when called, you can also define additional properties on the new class and use them for this purpose. With the new class ready, you instantiate and call into it with the ncsRunWithRetry() function. For example:
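For example (a sketch; the connection and user session parameters are illustrative):

```java
import java.net.InetAddress;
import java.net.Socket;
import com.tailf.conf.Conf;
import com.tailf.maapi.Maapi;
import com.tailf.maapi.MaapiUserSessionFlag;

public class Provision {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
        Maapi maapi = new Maapi(socket);
        maapi.startUserSession("admin",
                               InetAddress.getByName("127.0.0.1"),
                               "system", new String[] { "admin" },
                               MaapiUserSessionFlag.PROTO_TCP);
        // Creates and manages the transaction, invokes MyProvisioningOp,
        // and retries automatically on a read-write conflict.
        maapi.ncsRunWithRetry(new MyProvisioningOp());
        socket.close();
    }
}
```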
And what if your use case requires you to customize how the transaction is started or applied? ncsRunWithRetry() can take additional parameters that allow you to control those aspects. Please see the relevant API documentation for the full reference.
In general, transaction conflicts in NSO cannot be avoided altogether, so your code should handle them gracefully with retries. Retries are required to ensure correctness but do take up additional time and resources. Since a high percentage of retries will notably decrease the throughput of the system, you should endeavor to construct your data models and logic in a way that minimizes the chance of conflicts.
A conflict arises when one transaction changes a value that one or more other ongoing transactions rely on. From this, you can make a couple of observations that should help guide your implementation.
First, if the shared data changes infrequently, it will rarely cause a conflict (regardless of the number of reads) because it only affects the transactions happening at the time it is changed. Conversely, a frequent change can clash with other transactions much more often and warrants spending some effort to analyze and possibly make conflict-free.
Next, if a transaction runs a long time, a greater number of other write transactions can potentially run in the meantime, increasing the chances of a conflict. For this reason, you should avoid long-running read-write transactions.
Likewise, the more data nodes and the different parts of the data tree the transaction touches, the more likely it is to run into a conflict. Limiting the scope and the amount of the changes to shared data is an important design aspect.
Also, when considering possible conflicts, you must account for all the changes in the transaction. This includes changes propagated to other parts of the data model through dependencies. For example, consider the following YANG snippet. Changing a single provision-dns leaf also changes every mysvc list item because of the when statement.
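A sketch of such a model (names are illustrative):

```yang
leaf provision-dns {
  type boolean;
  default false;
}
list mysvc {
  key name;
  leaf name { type string; }
  leaf dns-server {
    // Every mysvc entry depends on provision-dns through this when
    // statement, so changing provision-dns touches all entries.
    when "../../provision-dns = 'true'";
    type string;
  }
}
```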
Ultimately, what matters is the read-write overlap with other transactions. Thus, you should avoid needless reads in your code: if there are no reads of the changed values, there can't be any conflicts.
A technique used in some existing projects, in service mapping code and elsewhere, is to first prepare all the provisioning parameters by reading a number of values from the CDB up front. However, some, or even most, of these parameters may not be needed for that particular invocation.
Consider the following service mapping code:
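A sketch in plain Python, so the shape of the logic stands on its own (the do_ntp switch and the ntp-servers list come from the scenario described below; the rest of the names are illustrative):

```python
def map_service(service, root, device_config):
    """Mapping sketch that prepares every parameter up front."""
    # All of these reads happen on every invocation, needed or not.
    # If, say, the ntp-servers list changes mid-transaction, the read
    # below causes a conflict even when do_ntp is off.
    ntp_servers = [str(s) for s in root.ntp_servers]
    dns_server = str(root.dns_server)
    domain = str(root.domain)

    if service.do_ntp:
        device_config['ntp'] = ntp_servers
    device_config['dns'] = dns_server
    device_config['domain'] = domain
```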
Here, a service performs NTP configuration when enabled through the do_ntp switch. But even if the switch is off, there are still a lot of reads performed. If one of the values changes during provisioning, such as the list of the available NTP servers in ntp_servers, it will cause a conflict and a retry.
An improved version of the code only calculates the NTP server value if it is actually needed:
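Continuing the same sketch, the reads are moved inside the branch that uses them (names remain illustrative):

```python
def map_service(service, root, device_config):
    """Improved mapping: the ntp-servers list is read only when used."""
    device_config['dns'] = str(root.dns_server)
    device_config['domain'] = str(root.domain)
    if service.do_ntp:
        # The list read, and any potential conflict on it, now occurs
        # only when the do_ntp switch is actually on.
        device_config['ntp'] = [str(s) for s in root.ntp_servers]
```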
Another thing to consider, in addition to the individual service implementation, is the placement and interaction of the service within the system. What happens if one service is used to generate input for another service? If the two services run concurrently, writes of the first service invalidate reads of the second, all but guaranteeing a conflict. Running both services concurrently is then wasteful; they should run serially instead.
A way to achieve this is through a design pattern called stacked services. You create a third service that instantiates the first service (generating the input data) before the second one (dependent on the generated data).
When there is a need to search or filter a list for specific items, you will often find for-loops or similar constructs in the code. For example, to configure NTP, you might have the following:
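For instance, a sketch along these lines (the list and leaf names are illustrative):

```python
def first_enabled_ntp_server(root):
    """Return the address of the first enabled NTP server, if any."""
    # Iterating the list reads every item (get_next under the hood), so
    # adding or removing any item concurrently conflicts with this read.
    for server in root.ntp_server:
        if server.enabled:
            return str(server.address)
    return None
```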
This approach is especially prevalent in ordered-by-user lists since the order of the items and their processing is important.
The interesting bit is that such code reads every item in the list. If the list is changed while the transaction is ongoing, you get a conflict with the message identifying the get_next operation (which is used for list traversal). This is not very surprising: if another active item is added or removed, it changes the result of your algorithm. So, this behavior is expected and desirable to ensure correctness.
However, you can observe the same conflict behavior in less obvious scenarios. If the list model contains a unique YANG statement, NSO performs the same kind of enumeration of list items for you to verify the unique constraint. Likewise, a must or when statement can also trigger the evaluation of every item during validation, depending on the XPath expression.
NSO distinguishes between access to specific list items by key value, where it tracks reads only of those particular items, and enumeration of the list, where no key value is supplied and the list with all its elements is treated as a single item. This works for your code as well as for XPath expressions (in YANG and elsewhere). As you might expect, adding or removing items in the first case does not cause conflicts, while in the second case, it does.
In the end, it depends on the situation whether list enumeration can affect throughput or not. In the example, the NTP servers could be configured manually, by the operator, so they would rarely change, making it a non-issue. But your use case might differ.
As several service invocations may run in parallel, Python self-assignment in service handling code can cause difficult-to-debug issues. Therefore, NSO checks for such patterns and issues an alarm (default) or a log entry containing a warning and a keypath to the service instance that caused the warning. See NSO Python VM for details.
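The problematic pattern, reduced to a plain-Python sketch (names are illustrative):

```python
class ServiceCallbacks:
    """Sketch of the self-assignment pitfall in service mapping code."""

    def cb_create_bad(self, tctx, root, service, proplist):
        # BAD: cb_create may run in parallel for different service
        # instances, so assigning to self leaks state between them.
        self.device = service['device']
        return self.device

    def cb_create_good(self, tctx, root, service, proplist):
        # Good: a local variable is private to this invocation.
        device = service['device']
        return device
```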
Upgrade NSO to a higher version.
Upgrading the NSO software gives you access to new features and product improvements. Every change carries a risk, and upgrades are no exception. To minimize the risk and make the upgrade process as painless as possible, this section describes the recommended procedures and practices to follow during an upgrade.
As usual, sufficient preparation avoids many pitfalls and makes the process more straightforward and less stressful.
There are multiple aspects that you should consider before starting with the actual upgrade procedure. While the development team tries to provide as much compatibility between software releases as possible, they cannot always avoid all incompatible changes. For example, when a deviation from an RFC standard is found and resolved, it may break clients that depend on the non-standard behavior. For this reason, a distinction is made between maintenance and a major NSO upgrade.
A maintenance NSO upgrade is within the same branch, i.e., when the first two version numbers stay the same (x.y in the x.y.z NSO version). An example is upgrading from version 6.2.1 to 6.2.2. In the case of a maintenance upgrade, the NSO release contains only corrections and minor enhancements, minimizing the changes. It includes binary compatibility for packages, so there is no need to recompile the .fxs files for a maintenance upgrade.
Correspondingly, when the first or second number in the version changes, that is called a full or major upgrade. For example, upgrading version 6.2.1 to 6.3 is a major, non-maintenance upgrade. Due to new features, packages must be recompiled, and some incompatibilities could manifest.
In addition to the above, a package upgrade is when you replace a package with a newer version, such as a NED or a service package. Sometimes, when package changes are not too big, it is possible to supply the new packages as part of the NSO upgrade, but this approach brings additional complexity. Instead, a package upgrade and an NSO upgrade should, in general, be performed as separate actions, and they are covered as such.
To avoid surprises during any upgrade, first ensure the following:
Hosts have sufficient disk space, as some additional space is required for an upgrade.
The software is compatible with the target OS, keeping in mind that a newer version of Java or of system libraries, such as glibc, may sometimes be required.
All the required NEDs and custom packages are compatible with the target NSO version.
Existing packages have been compiled for the new version and are available to you during the upgrade.
In case it turns out any of the packages are incompatible or cannot be recompiled, you will need to contact the package developers for an updated or recompiled version. For an official Cisco-supplied package, it is recommended that you always obtain a pre-compiled version if it is available for the target NSO release, instead of compiling the package yourself.
Additional preparation steps may be required based on the upgrade and the actual setup, such as when using the Layered Service Architecture (LSA) feature. In particular, for a major NSO upgrade in a multi-version LSA cluster, ensure that the new version supports the other cluster members and follow the additional steps outlined in Layered Service Architecture.
If you use the High Availability (HA) feature, the upgrade consists of multiple steps on different nodes. To avoid mistakes, you are encouraged to script the process, for which you will need to set up and verify access to all NSO instances with ssh, nct, or some other remote management command. For the reference example used in this chapter, see examples.ncs/development-guide/high-availability/hcc. The management station uses shell and Python scripts that rely on ssh for Linux shell and NSO CLI access, and on Python Requests for NSO RESTCONF interface access.
Likewise, NSO 5.3 added support for 256-bit AES encrypted strings, requiring the AES256CFB128 key in the ncs.conf configuration. You can generate one with the openssl rand -hex 32 or a similar command. Alternatively, if you use an external command to provide keys, ensure that it includes a value for an AES256CFB128_KEY in the output.
Finally, regardless of the upgrade type, ensure that you have a working backup and can easily restore the previous configuration if needed, as described in .
Caution
The ncs-backup (and consequently the nct backup) command does not back up the /opt/ncs/packages folder. If you make any file changes, back them up separately.
However, the best practice is not to modify packages in the /opt/ncs/packages folder. Instead, if an upgrade requires package recompilation, separate package folders (or files) should be used, one for each NSO version.
The upgrade of a single NSO instance requires the following steps:
Create a backup.
Perform a System Install of the new version.
Stop the old NSO server process.
Compact the CDB files' write log.
The following steps assume that you are upgrading to the 6.3 release. They pertain to a System Install of NSO, and you must perform them with Super User privileges.
As a best practice, always create a backup before trying to upgrade.
For the upgrade itself, you must first download to the host and install the new NSO release.
Then, stop the currently running server with the help of systemd or an equivalent command relevant to your system.
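For example, on a system using systemd:

```bash
systemctl stop ncs
```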
Compact the CDB files' write log using, for example, the ncs --cdb-compact $NCS_RUN_DIR/cdb command.
Next, you update the symbolic link for the currently selected version to point to the newly installed one, 6.3 in this case.
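With the default System Install layout, this amounts to re-pointing the /opt/ncs/current symbolic link (version and path assumed):

```bash
cd /opt/ncs
ln -sfn ncs-6.3 current
```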
While seldom necessary, at this point, you would also update the /etc/ncs/ncs.conf file.
Now, ensure that the /var/opt/ncs/packages/ directory has appropriate packages for the new version. It should be possible to continue using the same packages for a maintenance upgrade. But for a major upgrade, you must normally rebuild the packages or use pre-built ones for the new version. You must ensure this directory contains the exact same version of each existing package, compiled for the new release, and nothing else.
As a best practice, the available packages are kept in /opt/ncs/packages/ and /var/opt/ncs/packages/ only contains symbolic links. In this case, to identify the release for which they were compiled, the package file names all start with the corresponding NSO version. Then, you only need to rearrange the symbolic links in the /var/opt/ncs/packages/ directory.
Please note that the above package naming scheme is neither required nor enforced. If your package filesystem names differ from it, you will need to adjust the preceding command accordingly.
Finally, you start the new version of the NSO server with the package reload flag set. Set NCS_RELOAD_PACKAGES=true in /etc/ncs/ncs.systemd.conf and start NSO:
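For example, on a systemd-based system, after setting NCS_RELOAD_PACKAGES=true in /etc/ncs/ncs.systemd.conf:

```bash
systemctl start ncs
```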
Set the NCS_RELOAD_PACKAGES variable in /etc/ncs/ncs.systemd.conf back to its previous value or the system would keep performing a packages reload at subsequent starts.
NSO will perform the necessary data upgrade automatically. However, this process may fail if you have changed or removed any packages. In that case, ensure that the correct versions of all packages are present in /var/opt/ncs/packages/ and retry the preceding command.
Also, note that with many packages or data entries in the CDB, this process could take more than 90 seconds and result in the following error message:
The above error does not imply that NSO failed to start, just that it took longer than 90 seconds. Therefore, it is recommended that you wait some additional time before verifying.
It is imperative that you have a working copy of data available from which you can restore. That is why you must always create a backup before starting an upgrade. Only a backup guarantees that you can rerun the upgrade or back out of it, should it be necessary.
The same steps can also be used to restore data on a new, similar host if the OS of the initial host becomes corrupted beyond repair.
First, stop the NSO process if it is running.
Verify and, if necessary, revert the symbolic link in /opt/ncs/ to point to the initial NSO release.
In the exceptional case where the initial version installation was removed or damaged, you will need to re-install it first and redo the step above.
Verify if the correct (initial) version of NSO is being used.
Upgrading NSO in a highly available (HA) setup is a staged process. It entails running various commands across multiple NSO instances at different times.
The procedure described in this section applies to rule-based built-in HA clusters. For HA Raft cluster instructions, refer to the HA documentation.
The procedure is almost the same for a maintenance and major NSO upgrade. The difference is that a major upgrade requires the replacement of packages with recompiled ones. Still, a maintenance upgrade is often perceived as easier because there are fewer changes in the product.
The stages of the upgrade are:
First, enable read-only mode on the designated primary, and then on the secondary that is enabled for fail-over.
Take a full backup on all nodes.
If using a 3-node setup, disconnect the 3rd, non-fail-over secondary by disabling HA on this node.
Enabling the read-only mode on both nodes is required to ensure the subsequent backup captures the full system state, as well as making sure the failover-primary does not start taking writes when it is promoted later on.
Disabling the non-fail-over secondary in a 3-node setup right after taking a backup is necessary when using the built-in HA rule-based algorithm (enabled by default in NSO 5.8 and later). Without it, the node might connect to the failover-primary when the failover happens, which disables read-only mode.
While not strictly necessary, explicitly promoting the designated secondary after disabling HA on the primary ensures a fast failover, avoiding the automatic reconnection attempts. If using a shared IP solution, such as the Tail-f HCC, this makes sure the shared VIP comes back up on the designated secondary as soon as possible. In addition, some older NSO versions do not reset the read-only mode upon disabling HA if they are not acting primary.
Another important thing to note is that all packages used in the upgrade must match the NSO release. If they do not, the upgrade will fail.
In the case of a major upgrade, you must recompile the packages for the new version. It is highly recommended that you use pre-compiled packages and do not compile them during this upgrade procedure since the compilation can prove nontrivial, and the production hosts may lack all the required (development) tooling. You should use a naming scheme to distinguish between packages compiled for different NSO versions. A good option is for package file names to start with the ncs-MAJORVERSION- prefix for a given major NSO version. This ensures multiple packages can co-exist in the /opt/ncs/packages folder, and the NSO version they can be used with becomes obvious.
The following is a transcript of a sample upgrade procedure, showing the commands for each step described above, in a 2-node HA setup, with nodes in their initial designated state. The procedure ensures that this is also the case in the end.
Scripting is the recommended way to upgrade the NSO version of an HA cluster. The following example script shows the required commands and can serve as a basis for your own customized upgrade script. In particular, the script relies on the specific package naming convention described above, and you may need to tailor it to your environment. In addition, it expects the new release version and the designated primary and secondary node addresses as arguments. The recompiled packages are read from the packages-MAJORVERSION/ directory.
For the example script below, the primary and secondary nodes are configured with the nominal roles they assume at startup and when HA is enabled. Automatic failover is also enabled, so that the secondary assumes the primary role if the primary node goes down.
Once the script completes, it is paramount that you manually verify the outcome. First, check that HA is enabled by using the show high-availability command on the CLI of each node. Then connect to the designated secondaries and ensure they have the complete, latest copy of the data, synchronized from the primaries.
After the primary node is upgraded and restarted, the read-only mode is automatically disabled. This allows the primary node to start processing writes, minimizing downtime. However, there is no HA. Should the primary fail at this point or you need to revert to a pre-upgrade backup, the new writes would be lost. To avoid this scenario, again enable read-only mode on the primary after re-enabling HA. Then disable read-only mode only after successfully upgrading and reconnecting the secondary.
To further reduce time spent upgrading, you can customize the script to install the new NSO release and copy packages beforehand. Then, you only need to switch the symbolic links and restart the NSO process to use the new version.
You can use the same script for a maintenance upgrade as-is, with an empty packages-MAJORVERSION directory, or remove the upgrade_packages calls from the script.
Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under examples.ncs/development-guide/high-availability.
We have been using a two-node HCC layer-2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in examples.ncs/development-guide/high-availability/hcc implements shell and Python scripted steps to upgrade the NSO version using ssh to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the paris and london nodes. See the example for details.
If you do not wish to automate the upgrade process, you will need to follow the instructions from and transfer the required files to each host manually. Additional information on HA is available in . However, you can run the high-availability actions from the preceding script on the NSO CLI as-is. In this case, take special care about which host you run each command on, as it is easy to mix them up.
Package upgrades are frequent and routine in development but require the same care as NSO upgrades in the production environment. The reason is that the new packages may contain an updated YANG model, resulting in a data upgrade process similar to a version upgrade. So, if a package is removed or uninstalled and a replacement is not provided, package-specific data, such as service instance data, will also be removed.
In a single-node environment, the procedure is straightforward. Create a backup with the ncs-backup command and ensure the new package is compiled for the current NSO version and available under the /opt/ncs/packages directory. Then either manually rearrange the symbolic links in the /var/opt/ncs/packages directory or use the software packages install command in the NSO CLI. Finally, invoke the packages reload command. For example:
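An illustrative CLI session (the package file name is hypothetical, and command options may vary between NSO versions):

```
$ ncs-backup
$ ncs_cli -C -u admin
admin@ncs# software packages install package ncs-6.3-mysvc-2.0.tar.gz replace-existing
admin@ncs# packages reload
```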
On the other hand, upgrading packages in an HA setup is an error-prone process. Thus, NSO provides an action, packages ha sync and-reload, to minimize such complexity. This action loads the new data models into NSO instead of restarting the server process. As a result, it is considerably more efficient, and the difference in upgrade time can be considerable when the amount of data in the CDB is large.
The action executes on the primary node. First, it syncs the physical packages from the primary node to the secondary nodes as tar archive files, regardless of whether the packages were initially added as directories or tar archives. Then, it performs the upgrade on all nodes in one go. The action does not perform the sync and the upgrade on nodes with the none role.
The packages ha sync action distributes new packages to the secondary nodes. If a package already exists on the secondary node, it will replace it with the one on the primary node. Deleting a package on the primary node will also delete it on the secondary node. Packages found in load paths under the installation destination (by default /opt/ncs/current) are not distributed as they belong to the system and should not differ between the primary and the secondary nodes.
It is crucial to ensure that the load path configuration is identical on both primary and secondary nodes. Otherwise, the distribution will not start, and the action output will contain detailed error information.
Using the and-reload parameter with the action starts the upgrade once packages are copied over. The action sets the primary node to read-only mode. After the upgrade is successfully completed, the node is set back to its previous mode.
If and-reload is supplied together with the wait-commit-queue-empty parameter, the action waits for the commit queue to become empty on the primary node and prevents other queue items from being added while the queue is being drained.
Using the wait-commit-queue-empty parameter is the recommended approach, as it minimizes the risk of the upgrade failing due to commit queue items still relying on the old schema.
The packages ha sync and-reload command has the following known limitations and side effects:
The primary node is set to read-only mode before the upgrade starts and is set back to its previous mode if the upgrade completes successfully. However, the node is left in read-write mode if an error occurs during the upgrade. It is up to the user to set the node back to the desired mode by using the high-availability read-only mode command.
As a best practice, you should create a backup of all nodes before upgrading. This action creates no backups; you must do that explicitly.
Example implementations that use scripts to upgrade a 2- and 3-node setup using CLI/MAAPI or RESTCONF are available in the NSO example set under examples.ncs/development-guide/high-availability.
We have been using a two-node HCC layer 2 upgrade reference example elsewhere in the documentation to demonstrate installing NSO and adding the initial configuration. The upgrade-l2 example referenced in examples.ncs/development-guide/high-availability/hcc implements shell and Python scripted steps to upgrade the primary paris package versions and sync the packages to the secondary london using ssh to the Linux shell and the NSO CLI or Python Requests RESTCONF for accessing the paris and london nodes. See the example for details.
In some cases, NSO may warn when the upgrade looks suspicious. For more information on this, see . If you understand the implications and are willing to risk losing data, use the force option with packages reload or set the NCS_RELOAD_PACKAGES environment variable to force when restarting NSO. It will force NSO to ignore warnings and proceed with the upgrade. In general, this is not recommended.
In addition, NED upgrades require special care because services depend on them. For example, NSO 5 introduced the CDM feature, which allows loading multiple versions of a NED; as a result, a major NED upgrade requires a procedure involving the migrate action.
A major NED upgrade is one where the NED contains nontrivial YANG model changes. The NED ID changes, and the first or second number in the NED version changes, since NEDs follow the same versioning scheme as NSO. In this case, you cannot simply replace the package, as you would for a maintenance or patch NED release. Instead, you must load (add) the new NED package alongside the old one and perform the migration.
Migration uses the /ncs:devices/device/migrate action to change the ned-id of a single device or a group of devices. It does not affect the actual network device, except possibly reading from it. So, the migration does not have to be performed as part of the package upgrade procedure described above but can be done later, during normal operations. The details are described in . Once the migration is complete, you can remove the old NED by performing another package upgrade, where you deinstall the old NED package. It can be done straight after the migration or as part of the next upgrade cycle.
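As a sketch of how this can look from the Python API (the device name ex1 and the ned-id value are purely illustrative, and details may differ between NSO versions), the migrate action can be requested on a device node:

```python
import ncs

# Hypothetical sketch: migrate one device to a newly loaded NED.
# 'ex1' and 'router-nc-1.2' are placeholders, not real identifiers.
with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    device = root.devices.device['ex1']
    migrate_input = device.migrate.get_input()
    migrate_input.new_ned_id = 'router-nc-1.2'  # ned-id of the new NED package
    result = device.migrate.request(migrate_input)
```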
Implement basic automation with Python.
You can manipulate data in the CDB with the help of XML files or the UI; however, these approaches are not well suited for programmatic access. NSO includes libraries for multiple programming languages, providing a simpler way for scripts and programs to interact with it. The Python Application Programming Interface (API) is likely the easiest to use.
This section will show you how to read and write data using the Python programming language. With this approach, you will learn how to do basic network automation in just a few lines of code.
The environment setup that happens during the sourcing of the ncsrc file also configures the PYTHONPATH environment variable. It allows the Python interpreter to find the NSO modules, which are packaged with the product. This approach also works with Python virtual environments and does not require installing any packages.
Since the ncsrc file takes care of setting everything up, you can directly start the Python interactive shell and import the main ncs module. This module is a wrapper around a low-level C _ncs module that you may also need to reference occasionally. Documentation for both of the modules is available through the built-in help() function or separately in the HTML format.
If the import ncs statement fails, please verify that you are using a supported Python version and that you have sourced the ncsrc beforehand.
Generally, you can run the code from the Python interactive shell, but we recommend against it. The code uses nested blocks, which are hard to edit and input interactively. Instead, save the code to a file, such as script.py, which you can then easily run and rerun with the python3 script.py command. If you would still like to interactively inspect or alter values during execution, you can add an import pdb; pdb.set_trace() statement at the location of interest.
With NSO, data reads and writes normally happen inside a transaction. Transactions ensure consistency and avoid race conditions, where simultaneous access by multiple clients could result in data corruption, such as reading half-written data. To avoid this issue, NSO requires you to first start a transaction with a call to ncs.maapi.single_read_trans() or ncs.maapi.single_write_trans(), depending on whether you want to only read data or read and write data. Both of them require you to provide the following two parameters:
user: The username (string) of the user you wish to connect as
context: Method of access (string), allowing NSO to distinguish between CLI, web UI, and other types of access, such as Python scripts
These parameters specify security-related information that is used for auditing, access authorization, and so on. Please refer to for more details.
As transactions use up resources, it is important to clean up after you are done using them. Using a Python with code block will ensure that cleanup is automatically performed after a transaction goes out of scope. For example:
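A minimal sketch of such a block (the user name admin and context string python are illustrative):

```python
import ncs

# The transaction t is cleaned up automatically
# when the with block goes out of scope.
with ncs.maapi.single_read_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    # ... read data through root here ...
```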
In this case, the variable t stores the reference to a newly started transaction. Before you can actually access the data, you also need a reference to the root element in the data tree for this transaction. That is, the top element, under which all of the data is located. The ncs.maagic.get_root() function, with transaction t as a parameter, achieves this goal.
Once you have the reference to the root element, say in a variable named root, navigating the data model becomes straightforward. Accessing a property on root selects a child data node with the same name as the property. For example, root.nacm gives you access to the nacm container, used to define fine-grained access control. Since nacm is itself a container node, you can select one of its children using the same approach. So, the code root.nacm.enable_nacm refers to another node inside nacm, called enable-nacm. This node is a leaf, holding a value, which you can print out with the Python print() function. Doing so is conceptually the same as using the show running-config nacm enable-nacm command in the CLI.
There is a small difference, however. Notice that in the CLI the node is written enable-nacm, hyphenated, as this is the actual node name in YANG. Python identifiers, however, cannot contain the hyphen (minus) sign, so the Python code uses an underscore instead.
The following is the full source code that prints the value:
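A version along these lines (the user and context strings are illustrative):

```python
import ncs

with ncs.maapi.single_read_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    # Same data as 'show running-config nacm enable-nacm' in the CLI
    print(root.nacm.enable_nacm)
```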
As you can see in this example, it is necessary to import only the ncs module, which automatically imports all the submodules. Depending on your NSO instance, you might also notice that the value printed is True, without any quotation marks. As a convenience, the value gets automatically converted to the best-matching Python type, which in this case is a boolean value (True or False).
Moreover, if you start a read/write transaction instead of a read-only one, you can also assign a new value to the leaf. Of course, the same validation rules apply as using the CLI and you need to explicitly commit the transaction if you want the changes to persist. A call to the apply() method on the transaction object t performs this function. Here is an example:
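For instance, along these lines (the assigned value is only an illustration):

```python
import ncs

with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    root.nacm.enable_nacm = True  # assign a new value to the leaf
    t.apply()                     # commit, so the change persists
```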
You can access a YANG list node in much the same way as a leaf. However, working with a list resembles working with a Python dict more than a Python list, even though the name would suggest otherwise. The distinguishing feature is that YANG lists have keys that uniquely identify each list item, so lists are more naturally represented as a kind of dictionary in Python.
Let's say there is a list of customers defined in NSO, with a YANG schema such as:
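For illustration, such a model could look along these lines (a hypothetical schema, not part of the product):

```yang
container customers {
  list customer {
    key id;
    leaf id {
      type string;
    }
  }
}
```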
To simplify the code, you might want to assign the value of root.customers.customer to a new variable our_customers. Then you can easily access individual customers (list items) by their id. For example, our_customers['ACME'] would select the customer with id equal to ACME. You can check for the existence of an item in a list using the Python in operator, for example, 'ACME' in our_customers. Having selected a specific customer using the square bracket syntax, you can then access the other nodes of this item.
Compared to dictionaries, making changes to YANG lists is quite different. You cannot simply add arbitrary items, because they must obey the YANG schema rules. Instead, you call the create() method on the list object and provide the value for the key. This method creates and returns a new item in the list if it doesn't exist yet; otherwise, it returns the existing item. For item removal, use the Python del statement with the list object, specifying the item to delete. For example, del our_customers['ACME'] deletes the ACME customer entry.
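Putting this together, a sketch using the hypothetical customers model:

```python
import ncs

with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    our_customers = root.customers.customer
    # Returns a new item, or the existing one if 'ACME' is already there
    acme = our_customers.create('ACME')
    # Membership test works like with a dict
    print('ACME' in our_customers)
    # Remove the item again
    del our_customers['ACME']
    t.apply()
```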
In some situations, you might want to enumerate all of the list items. Here, the list object can be used with the Python for syntax, which iterates through each list item in turn. Note that this differs from standard Python dictionaries, which iterate through the keys. The following example demonstrates this behavior.
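A sketch of such iteration (again with the hypothetical customers model):

```python
import ncs

with ncs.maapi.single_read_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    # Unlike a Python dict, iterating a YANG list yields
    # the list items themselves, not the keys.
    for customer in root.customers.customer:
        print(customer.id)
```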
Now let's see how you can use this knowledge for network automation.
Make sure no previous NSO or netsim processes are running; use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
Leveraging one of the examples included with the NSO installation allows you to quickly gain access to an NSO instance with a few devices already onboarded. The getting-started/developing-with-ncs set of examples contains three simulated routers that you can configure.
Navigate to the 0-router-network directory with the following command.
You can prepare and start the routers by running the make and netsim commands from this directory.
With the routers running, you should also start the NSO instance that will allow you to manage them.
In case the ncs command reports an error about an address already in use, you have another NSO instance already running that you must stop first (ncs --stop).
Before you can use Python to configure the router, you need to know what to configure. The simplest way to find out how to configure the DNS on this type of router is by using the NSO CLI.
In the CLI, you can verify that the NSO is managing three routers and check their names with the following command:
To make sure that the NSO configuration matches the one deployed on routers, also perform a sync-from action.
Let's say you would like to configure the DNS server 192.0.2.1 on the ex1 router. To do this by hand, first enter the configuration mode.
As you won't be configuring ex1 manually at this point, exit the configuration mode.
Instead, you will create a Python script to do it, so exit the CLI as well.
You will place the script into the ex1-dns.py file.
In a text editor, create a new file and add the following text at the start.
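Based on the transaction pattern shown earlier, the opening lines can look like this (user and context strings are illustrative):

```python
import ncs

# Start a read/write transaction and get the root of the data tree
with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
```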
The root variable allows you to access configuration in the NSO, much like entering the configuration mode on the CLI does.
Next, you will need to navigate to the ex1 router. It makes sense to assign it to the ex1_device variable, which makes it more obvious what it refers to and easier to access in the script.
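For example (these lines go inside the with block of the script):

```python
# Select the ex1 device; ex1_config then corresponds to
# 'devices device ex1 config' on the CLI
ex1_device = root.devices.device['ex1']
ex1_config = ex1_device.config
```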
Save the script file as ex1-dns.py and run it with the python3 command.
You should see Done! printed out. Then start the NSO CLI to verify the configuration change.
Finally, you can check the configured DNS servers on ex1 by using the show running-config command.
The code in this chapter is intentionally kept simple to demonstrate the core concepts and lacks robustness in error handling. In particular, it is missing the retry mechanism in case of concurrency conflicts as described in .
Perhaps you've wondered about the unusual name of the Python ncs.maagic module? It is not a typo but a portmanteau of Management Agent API (MAAPI) and magic. The latter refers to so-called magic methods in Python, whose purpose is to allow custom code to play nicely with the Python language. An example you might have come across in the past is the __init__() method in a class, which gets called whenever you create a new object. This and similar methods are called magic because they are invoked automatically, behind the scenes (implicitly).
The NSO Python API makes extensive use of such magic methods in the ncs.maagic module. Magic methods help this module translate an object-based, user-friendly programming interface into low-level function calls. In turn, the high-level approach to navigating the data hierarchy with ncs.maagic objects is called the Python Maagic API.
Understand NSO deployment with an example setup.
This section shows examples of a typical deployment for a highly available (HA) setup. For a reference implementation of the tailf-hcc layer-2 upgrade deployment scenario described here, check the NSO example set under examples.ncs/development-guide/high-availability/hcc. The example covers the following topics:
Installation of NSO on all nodes in an HA setup
Initial configuration of NSO on all nodes
cd ~/Downloads
ls -l nso*.bin
-rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.installer.bin
-rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.signed.bin
sh nso-6.0.darwin.x86_64.signed.bin
# Output
Unpacking...
Verifying signature...
Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
Successfully downloaded and verified crcam2.cer.
Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
Successfully downloaded and verified innerspace.cer.
Successfully verified root, subca and end-entity certificate chain.
Successfully fetched a public key from tailf.cer.
Successfully verified the signature of nso-6.0.darwin.x86_64.installer.bin using tailf.cer
ls -l
# Output
-rw-r--r-- 1 user staff 1.8K Nov 29 06:05 README.signature
-rw-r--r-- 1 user staff 12K Nov 29 06:05 cisco_x509_verify_release.py
-rwxr-xr-x 1 user staff 199M Nov 29 05:55 nso-6.0.darwin.x86_64.installer.bin
-rw-r--r-- 1 user staff 256B Nov 29 06:05 nso-6.0.darwin.x86_64.installer.bin.signature
-rwxr-xr-x@ 1 user staff 199M Dec 15 11:45 nso-6.0.darwin.x86_64.signed.bin
-rw-r--r-- 1 user staff 1.4K Nov 29 06:05 tailf.cer
$ sh nso-VERSION.OS.ARCH.installer.bin $HOME/ncs-VERSION --local-install
sh nso-6.0.darwin.x86_64.installer.bin --local-install ~/nso-6.0
# Output
INFO Using temporary directory /var/folders/90/n5sbctr922336_
0jrzhb54400000gn/T//ncs_installer.93831 to stage NCS installation bundle
INFO Unpacked ncs-6.0 in /Users/user/nso-6.0
INFO Found and unpacked corresponding DOCUMENTATION_PACKAGE
INFO Found and unpacked corresponding EXAMPLE_PACKAGE
INFO Found and unpacked corresponding JAVA_PACKAGE
INFO Generating default SSH hostkey (this may take some time)
INFO SSH hostkey generated
INFO Environment set-up generated in /Users/user/nso-6.0/ncsrc
INFO NSO installation script finished
INFO Found and unpacked corresponding NETSIM_PACKAGE
INFO NCS installation complete
$ source $HOME/ncs-VERSION/ncsrc
$ ncs-setup --dest $HOME/ncs-run
$ cd $HOME/ncs-run
$ ncs
$ cd .../ncs-2.3.1
$ . ncsrc
$ cd .../ncs-2.3.2/examples.ncs/datacenter-qinq
$ ncs
$ cd $NCS_DIR/examples.ncs/data-center-qinq
$ ncs
$ ncs --stop
$ cd $NCS_DIR/examples.ncs/getting-started/1-simulated-cisco-ios
$ ncs
$ ncs --stop
$ ncs_cli -Cu admin
admin@ncs# license smart register idtoken YzIzMDM3MTgtZTRkNC00YjkxLTk2ODQt
OGEzMTM3OTg5MG
Registration process in progress.
Use the 'show license status' command to check the progress and result.
admin@ncs# show license all
...
<INFO> 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
Smart Licensing Global Notification:
type = "notifyRegisterSuccess",
agentID = "sa1",
enforceMode = "notApplicable",
allowRestricted = false,
failReasonCode = "success",
failMessage = "Successful."
<INFO> 21-Apr-2016::11:29:23.029 miosaterm confd[8226]:
Smart Licensing Entitlement Notification: type = "notifyEnforcementMode",
agentID = "sa1",
notificationTime = "Apr 21 11:29:20 2016",
version = "1.0",
displayName = "regid.2015-10.com.cisco.NSO-network-element",
requestedDate = "Apr 21 11:26:19 2016",
tag = "regid.2015-10.com.cisco.NSO-network-element",
enforceMode = "inCompliance",
daysLeft = 90,
expiryDate = "Jul 20 11:26:19 2016",
requestedCount = 8
......
<INFO> 13-Apr-2016::13:22:29.178 miosaterm confd[16260]:
Starting the NCS Smart Licensing Java VM
<INFO> 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
Smart Licensing evaluation time remaining: 90d 0h 0m 0s
...
<INFO> 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
Smart Licensing evaluation time remaining: 89d 23h 0m 0s
...
<INFO> 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
Smart Licensing Global Notification:
type = "notifyRegisterSuccess"
admin@ncs# show license status
Smart Licensing is ENABLED
Registration:
Status: REGISTERED
Smart Account: Network Services Orchestrator
Virtual Account: Default
Export-Controlled Functionality: Allowed
Initial Registration: SUCCEEDED on Apr 21 09:29:11 2016 UTC
Last Renewal Attempt: SUCCEEDED on Apr 21 09:29:16 2016 UTC
Next Renewal Attempt: Oct 18 09:29:16 2016 UTC
Registration Expires: Apr 21 09:26:13 2017 UTC
Export-Controlled Functionality: Allowed
License Authorization:
License Authorization:
Status: IN COMPLIANCE on Apr 21 09:29:18 2016 UTC
Last Communication Attempt: SUCCEEDED on Apr 21 09:26:30 2016 UTC
Next Communication Attempt: Apr 21 21:29:32 2016 UTC
Communication Deadline: Apr 21 09:26:13 2017 UTC
module ssh-authkey {
yang-version 1.1;
namespace "http://example.com/ssh-authkey";
prefix sa;
import tailf-common {
prefix tailf;
}
import tailf-aaa {
prefix aaa;
}
description
"List of SSH authorized public keys";
revision 2023-02-02 {
description
"Initial revision.";
}
augment "/aaa:aaa/aaa:authentication/aaa:users/aaa:user" {
list authkey {
key pubkey-data;
leaf pubkey-data {
type string;
}
}
}
}
container pubkey-dist {
list key-auth {
key "ne-name local-user";
uses ncs:nano-plan-data;
uses ncs:service-data;
ncs:servicepoint "distkey-servicepoint";
leaf ne-name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf local-user {
type leafref {
path "/ncs:devices/ncs:authgroups/ncs:group/ncs:umap/ncs:local-user";
require-instance false;
}
}
leaf remote-name {
type leafref {
path "/ncs:devices/ncs:authgroups/ncs:group/ncs:umap/ncs:remote-name";
require-instance false;
}
mandatory true;
}
leaf authgroup-name {
type leafref {
path "/ncs:devices/ncs:authgroups/ncs:group/ncs:name";
require-instance false;
}
mandatory true;
}
leaf passphrase {
// Leave unset for no passphrase
tailf:suppress-echo true;
type tailf:aes-256-cfb-128-encrypted-string {
length "10..max" {
error-message "The passphrase must be at least 10 characters long";
}
pattern ".*[a-z]+.*" {
error-message "The passphrase must have at least one lower case alpha";
}
pattern ".*[A-Z]+.*" {
error-message "The passphrase must have at least one upper case alpha";
}
pattern ".*[0-9]+.*" {
error-message "The passphrase must have at least one digit";
}
pattern ".*[<>~;:!@#/$%^&*=-]+.*" {
error-message "The passphrase must have at least one of these" +
" symbols: [<>~;:!@#/$%^&*=-]+";
}
pattern ".* .*" {
modifier invert-match;
error-message "The passphrase must have no spaces";
}
}
}
...
}
}
ncs:plan-outline distkey-plan {
description "Plan for distributing a public key";
ncs:component-type "dk:ne" {
ncs:state "ncs:init";
ncs:state "dk:generated" {
ncs:create {
// Request the generate-keys action
ncs:post-action-node "$SERVICE" {
ncs:action-name "generate-keys";
ncs:result-expr "result = 'true'";
ncs:sync;
}
}
ncs:delete {
// Request the delete-keys action
ncs:post-action-node "$SERVICE" {
ncs:action-name "delete-keys";
ncs:result-expr "result = 'true'";
}
}
}
ncs:state "dk:distributed" {
ncs:create {
// Invoke a Python program to distribute the authorized public key to
// the network element
ncs:nano-callback;
ncs:force-commit;
}
}
ncs:state "dk:configured" {
ncs:create {
// Invoke a Python program that in turn invokes a service template to
// configure NSO to use public key authentication with the network
// element
ncs:nano-callback;
// Request the connect action to test the public key authentication
ncs:post-action-node "/ncs:devices/device[name=$NE-NAME]" {
ncs:action-name "connect";
ncs:result-expr "result = 'true'";
}
}
}
ncs:state "ncs:ready";
}
}
ncs:service-behavior-tree distkey-servicepoint {
description "One component per distkey behavior tree";
ncs:plan-outline-ref "dk:distkey-plan";
ncs:selector {
// The network element name used with this component
ncs:variable "NE-NAME" {
ncs:value-expr "current()/ne-name";
}
// The unique component name
ncs:variable "NAME" {
ncs:value-expr "concat(current()/ne-name, '-', current()/local-user)";
}
// Component for setting up public key authentication
ncs:create-component "$NAME" {
ncs:component-type-ref "dk:ne";
}
}
}
container pubkey-dist {
list key-auth {
key "ne-name local-user";
...
action generate-keys {
tailf:actionpoint generate-keys;
output {
leaf result {
type boolean;
}
}
}
action delete-keys {
tailf:actionpoint delete-keys;
output {
leaf result {
type boolean;
}
}
}
}
}
class DistKeyApp(ncs.application.Application):
def setup(self):
# Nano service callbacks require a registration for a service point,
# component, and state, as specified in the corresponding data model
# and plan outline.
self.register_nano_service('distkey-servicepoint', # Service point
'dk:ne', # Component
'dk:distributed', # State
DistKeyServiceCallbacks)
self.register_nano_service('distkey-servicepoint', # Service point
'dk:ne', # Component
'dk:configured', # State
DistKeyServiceCallbacks)
# Side effect action that uses ssh-keygen to create the keyfiles
self.register_action('generate-keys', GenerateActionHandler)
# Action to delete the keys created by the generate keys action
self.register_action('delete-keys', DeleteActionHandler)
def teardown(self):
self.log.info('DistKeyApp FINISHED')
class GenerateActionHandler(Action):
@Action.action
def cb_action(self, uinfo, name, keypath, ainput, aoutput, trans):
'''Action callback'''
service = ncs.maagic.get_node(trans, keypath)
# Install the crypto keys used to decrypt the service passphrase leaf
# as input to the key generation.
with ncs.maapi.Maapi() as maapi:
_maapi.install_crypto_keys(maapi.msock)
# Decrypt the passphrase leaf for use when generating the keys
encrypted_passphrase = service.passphrase
decrypted_passphrase = _ncs.decrypt(str(encrypted_passphrase))
aoutput.result = True
# If it does not exist already, generate a private and public key
if os.path.isfile(f'./{service.local_user}_ed25519') == False:
result = subprocess.run(['ssh-keygen', '-N',
f'{decrypted_passphrase}', '-t', 'ed25519',
'-f', f'./{service.local_user}_ed25519'],
stdout=subprocess.PIPE, check=True,
encoding='utf-8')
if "has been saved" not in result.stdout:
aoutput.result = False
class DeleteActionHandler(Action):
@Action.action
def cb_action(self, uinfo, name, keypath, ainput, aoutput, trans):
'''Action callback'''
service = ncs.maagic.get_node(trans, keypath)
# Only delete the key files if no more network elements use this
# user's keys
cur = trans.cursor('/pubkey-dist/key-auth')
remove_key = True
while True:
try:
value = next(cur)
if value[0] != service.ne_name and value[1] == service.local_user:
remove_key = False
break
except StopIteration:
break
aoutput.result = True
if remove_key is True:
try:
os.remove(f'./{service.local_user}_ed25519.pub')
os.remove(f'./{service.local_user}_ed25519')
except OSError as e:
if e.errno != errno.ENOENT:
aoutput.result = False
class DistKeyServiceCallbacks(NanoService):
@NanoService.create
def cb_nano_create(self, tctx, root, service, plan, component, state,
proplist, component_proplist):
'''Nano service create callback'''
if state == 'dk:distributed':
# Distribute the public key to the network element's authorized
# keys list
with open(f'./{service.local_user}_ed25519.pub', 'r') as f:
pubkey_data = f.read()
config = root.devices.device[service.ne_name].config
users = config.aaa.authentication.users
users.user[service.local_user].authkey.create(pubkey_data)
elif state == 'dk:configured':
# Configure NSO to use a public key for authentication with
# the network element
template_vars = ncs.template.Variables()
template_vars.add('CWD', os.getcwd())
template = ncs.template.Template(service)
template.apply('distkey-configured', template_vars)
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs" tags="merge">
<authgroups>
<group>
<name>{authgroup-name}</name>
<umap>
<local-user>{local-user}</local-user>
<remote-name>{remote-name}</remote-name>
<public-key>
<private-key>
<file>
<name>{$CWD}/{local-user}_ed25519</name>
<passphrase>{passphrase}</passphrase>
</file>
</private-key>
</public-key>
</umap>
</group>
</authgroups>
<device>
<name>{ne-name}</name>
<authgroup>{authgroup-name}</authgroup>
</device>
</devices>
</config-template>
with ncs.maapi.single_write_trans('admin', 'system') as t:
root = ncs.maagic.get_root(t)
# Read a value that can change during this transaction
dns_server = root.mysvc_dns
# Now perform complex work... or time.sleep(10) for testing
# Finally, write the result
root.some_data = 'the result'
t.apply()
<INFO> 23-Aug-2022::03:31:17.029 linux-nso ncs[<0.18350.3>]: ncs writeset collector:
check conflict tid=3347 min=234 seq=237 wait=0ms against=[3346] elapsed=1ms
-> conflict on: /mysvc-dns read: <<"10.1.2.2">> (op: get_delem tid: 3347)
write: <<"10.1.1.138">> (op: write tid: 3346 user: admin) phase(s): work
write tids: 3346
Conflict detected (70): Transaction 3347 conflicts with transaction 3346 started by
user admin: /mysvc:mysvc-dns read-op get_delem write-op write in work phase(s)
admin@ncs# unhide debug
admin@ncs# show services scheduling conflict | notab
services scheduling conflict mysvc-servicepoint mysvc-servicepoint
type dynamic
first-seen 2022-08-27T17:15:10+00:00
inactive-after 2022-08-27T17:15:09+00:00
expires-after 2022-08-27T18:05:09+00:00
ttl-multiplier 1
admin@ncs#
list mysvc {
uses ncs:service-data;
ncs:servicepoint mysvc-servicepoint {
ncs:conflicts-with "mysvc-servicepoint";
ncs:conflicts-with "some-other-servicepoint";
}
// ...
}
<INFO> … check for read-write conflicts: conflict found
<INFO> … rebase transaction
…
<INFO> … rebase transaction: ok
<INFO> … retrying transaction after rebase
with ncs.maapi.single_write_trans('admin', 'system') as t:
if t.get_elem('/mysvc-use-dhcp') == True:
# do something
else:
# do something entirely different that breaks
# your network if mysvc-use-dhcp happens to be true
t.apply()
with ncs.maapi.single_write_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
# First read some data, then write some too.
# Finally, call apply.
t.apply()
def do_provisioning(t):
"""Function containing the actual logic"""
root = ncs.maagic.get_root(t)
# First read some data, then write some too.
# ...
# Finally, return True to signal apply() has to be called.
return True
# Need to replace single_write_trans() with a Maapi object
with ncs.maapi.Maapi() as m:
with ncs.maapi.Session(m, 'admin', 'python'):
m.run_with_retry(do_provisioning)
m.run_with_retry(lambda t: do_provisioning(t, one_param, another_param))
from ncs.maapi import retry_on_conflict
@retry_on_conflict()
def do_provisioning():
# This is the same code as before but in a function
with ncs.maapi.single_write_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
# First read some data, then write some too.
# ...
# Finally, call apply().
t.apply()
do_provisioning()
class MyAction(ncs.dp.Action):
@ncs.dp.Action.action
@retry_on_conflict()
def cb_action(self, uinfo, name, kp, input, output, trans):
with ncs.maapi.single_write_trans('admin', 'python') as t:
...
public class MyProgram {
public static void main(String[] arg) throws Exception {
Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("admin", InetAddress.getByName(null),
"system", new String[]{},
MaapiUserSessionFlag.PROTO_TCP);
NavuContext context = new NavuContext(maapi);
int tid = context.startRunningTrans(Conf.MODE_READ_WRITE);
// Your code here that reads and writes data.
// Finally, call apply.
context.applyClearTrans();
maapi.endUserSession();
socket.close();
}
}
public class MyProvisioningOp implements MaapiRetryableOp {
public boolean execute(Maapi maapi, int tid)
throws IOException, ConfException, MaapiException
{
// Create context for the provided, managed transaction;
// note the extra parameter compared to before and no calling
// context.startRunningTrans() anymore.
NavuContext context = new NavuContext(maapi, tid);
// Your code here that reads and writes data.
// Finally, return true to signal apply() has to be called.
return true;
}
}
public class MyProgram {
public static void main(String[] arg) throws Exception {
Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("admin", InetAddress.getByName(null),
"system", new String[]{},
MaapiUserSessionFlag.PROTO_TCP);
// Delegate work to MyProvisioningOp, with retry.
maapi.ncsRunWithRetry(new MyProvisioningOp());
// No more calling applyClearTrans() or friends,
// ncsRunWithRetry() does that for you.
maapi.endUserSession();
socket.close();
}
}
leaf provision-dns {
type boolean;
}
list mysvc {
container dns {
when "../../provision-dns";
// ...
}
}
def cb_create(self, tctx, root, service, proplist):
device = root.devices.device[service.device]
# Search device interfaces and CDB for mgmt IP
device_ip = find_device_ip(device)
# Find the best server to use for this device
ntp_servers = root.my_settings.ntp_servers
use_ntp_server = find_closest_server(device_ip, ntp_servers)
if service.do_ntp:
device.ntp.servers.append(use_ntp_server)
def cb_create(self, tctx, root, service, proplist):
device = root.devices.device[service.device]
if service.do_ntp:
# Search device interfaces and CDB for mgmt IP
device_ip = find_device_ip(device)
# Find the best server to use for this device
ntp_servers = root.my_settings.ntp_servers
use_ntp_server = find_closest_server(device_ip, ntp_servers)
device.ntp.servers.append(use_ntp_server)
for ntp_server in root.my_settings.ntp_servers:
# Only select active servers
if ntp_server.is_active:
# Do something



openssl command. Generate self-signed certificates for HTTPS.
find command. Used to find out if all required libraries are available.
which command. Used by the NSO package manager.
libpam.so.0. Pluggable Authentication Module library.
libexpat.so.1. EXtensible Markup Language parsing library.
libz.so.1 version 1.2.7.1 or higher. Data compression library.
Google Chrome
ncs-netsim(1): Command to create and manipulate a simulated network.
ncs-setup(1): Command to create an initial NSO setup.
ncs.conf: NSO daemon configuration file format.
Check whether the existing ncs.conf file can be used as-is or needs updating. For example, stronger encryption algorithms may require you to configure additional keying material.
Review the CHANGES file for information on what has changed.
If upgrading from a no longer supported software version, verify that the upgrade can be performed directly. In situations where the currently installed version is very old, you may have to upgrade to one or more intermediate versions before upgrading to the target version.
Update the /opt/ncs/current symbolic link.
If required, update the ncs.conf configuration file.
Update the packages in /var/opt/ncs/packages/ if recompilation is needed.
Start the NSO server process, instructing it to reload the packages.
Finally, start the NSO server and verify the restore was successful.
Disconnect the HA pair by disabling HA on the designated primary, temporarily promoting the designated secondary to provide the read-only service (and advertise the shared virtual IP address if it is used).
Upgrade the designated primary.
Disable HA on the designated secondary node, to allow the designated primary to become the actual primary in the next step.
Activate HA on the designated primary, which will assume its assigned (primary) role to provide the full service (and again advertise the shared IP if used). However, at this point, the system is without HA.
Upgrade the designated secondary node.
Activate HA on the designated secondary, which will assume its assigned (secondary) role, connecting HA again.
Verify that HA is operational and has converged.
Upgrade the third, non-failover secondary if one is used, and verify that it successfully rejoins the HA cluster.
Then navigate to the NSO copy of the ex1 configuration, which resides under the devices device ex1 config path, and use the ? and TAB keys to explore the available configuration options. You are looking for the DNS configuration.
...
Once you have found it, you see the full DNS server configuration path: devices device ex1 config sys dns server.
Alternatively, you can assign to ex1_config directly, without referring to ex1_device, like so:
This is the equivalent of using devices device ex1 config on the CLI.
For the last part, keep in mind the full configuration path you found earlier. You have to keep navigating to reach the server list node. You can do this through the sys and dns nodes on the ex1_config variable.
DNS configuration typically allows specifying multiple servers for redundancy and is therefore modeled as a list. You add a new DNS server with the create() method on the list object.
Having made the changes, do not forget to commit them with a call to apply() or they will be lost.
Alternatively, you can use the dry-run parameter with apply_params() to, for example, preview what will be sent to the device.
Lastly, add a simple print statement to notify you when the script is completed.
If you see the 192.0.2.1 address in the output, you have successfully configured this device using Python!

$ man ncs.conf
$ ncs --help
$ ncsc --help
$ sh nso-6.0.darwin.x86_64.installer.bin --help
# Output
This is the NCS installation script.
Usage: ./nso-6.0.darwin.x86_64.installer.bin [--local-install] LocalInstallDir
Installs NCS in the LocalInstallDir directory only.
This is convenient for test and development purposes.
Usage: ./nso-6.0.darwin.x86_64.installer.bin --system-install
[--install-dir InstallDir]
[--config-dir ConfigDir] [--run-dir RunDir] [--log-dir LogDir]
[--run-as-user User] [--keep-ncs-setup] [--non-interactive]
Does a system install of NCS, suitable for deployment.
Static files are installed in InstallDir/ncs-<vsn>.
The first time --system-install is used, the ConfigDir,
RunDir, and LogDir directories are also created and
populated for config files, run-time state files, and log files,
respectively, and an init script for start of NCS at system boot
and user profile scripts are installed. Defaults are:
InstallDir - /opt/ncs
ConfigDir - /etc/ncs
RunDir - /var/opt/ncs
LogDir - /var/log/ncs
By default, the system install will run NCS as the root user.
If the --run-as-user option is given, the system install will
instead run NCS as the given user. The user will be created if
it does not already exist.
If the --non-interactive option is given, the installer will
proceed with potentially disruptive changes (e.g. modifying or
removing existing files) without asking for confirmation.
ncs {TAB} {TAB}
# Output
ncs                      ncs-maapi            ncs-project        ncs-start-python-vm  ncs_cmd
ncs-backup               ncs-make-package     ncs-setup          ncs-uninstall        ncs_conf_tool
ncs-collect-tech-report  ncs-netsim           ncs-start-java-vm  ncs_cli              ncs_crypto_keys
ncs_load
ncsc
# systemctl start ncs
Starting ncs: .
# ncs-backup
# sh nso-6.3.linux.x86_64.installer.bin --system-install
# systemctl stop ncs
Stopping ncs: .
# cd /opt/ncs
# rm -f current
# ln -s ncs-6.3 current
# cd /var/opt/ncs/packages/
# rm -f *
# for pkg in /opt/ncs/packages/ncs-6.3-*; do ln -s $pkg; done
# systemctl start ncs
Starting ncs: ...
Starting ncs (via systemctl): Job for ncs.service failed
because a timeout was exceeded. See "systemctl status
ncs.service" and "journalctl -xe" for details. [FAILED]
# systemctl stop ncs
Stopping ncs: .
# cd /opt/ncs
# ls -l current
# ln -s ncs-VERSION current
# ncs --version
<switch to designated primary CLI>
admin@ncs# show high-availability status mode
high-availability status mode primary
admin@ncs# high-availability read-only mode true
<switch to designated secondary CLI>
admin@ncs# show high-availability status mode
high-availability status mode secondary
admin@ncs# high-availability read-only mode true
<switch to designated primary shell>
# ncs-backup
<switch to designated secondary shell>
# ncs-backup
<switch to designated primary CLI>
admin@ncs# high-availability disable
<switch to designated secondary CLI>
admin@ncs# high-availability be-primary
<switch to designated primary shell>
# <upgrade node>
# <set NCS_RELOAD_PACKAGES=true in `/etc/ncs/ncs.systemd.conf`>
# systemctl restart ncs
# <restore `/etc/ncs/ncs.systemd.conf`>
<switch to designated secondary CLI>
admin@ncs# high-availability disable
<switch to designated primary CLI>
admin@ncs# high-availability enable
<switch to designated secondary shell>
# <upgrade node>
# <set NCS_RELOAD_PACKAGES=true in `/etc/ncs/ncs.systemd.conf`>
# systemctl restart ncs
# <restore `/etc/ncs/ncs.systemd.conf`>
<switch to designated secondary CLI>
admin@ncs# high-availability enable
<config xmlns="http://tail-f.com/ns/config/1.0">
<high-availability xmlns="http://tail-f.com/ns/ncs">
<ha-node>
<id>n1</id>
<nominal-role>primary</nominal-role>
</ha-node>
<ha-node>
<id>n2</id>
<nominal-role>secondary</nominal-role>
<failover-primary>true</failover-primary>
</ha-node>
<settings>
<enable-failover>true</enable-failover>
<start-up>
<assume-nominal-role>true</assume-nominal-role>
<join-ha>true</join-ha>
</start-up>
</settings>
</high-availability>
</config>
#!/bin/bash
set -ex
vsn=$1
primary=$2
secondary=$3
installer_file=nso-${vsn}.linux.x86_64.installer.bin
pkg_vsn=$(echo $vsn | sed -e 's/^\([0-9]\+\.[0-9]\+\).*/\1/')
pkg_dir="packages-${pkg_vsn}"
function on_primary() { ssh $primary "$@" ; }
function on_secondary() { ssh $secondary "$@" ; }
function on_primary_cli() { ssh -p 2024 $primary "$@" ; }
function on_secondary_cli() { ssh -p 2024 $secondary "$@" ; }
function upgrade_nso() {
target=$1
scp $installer_file $target:
ssh $target "sh $installer_file --system-install --non-interactive"
ssh $target "rm -f /opt/ncs/current && \
ln -s /opt/ncs/ncs-${vsn} /opt/ncs/current"
}
function upgrade_packages() {
target=$1
do_pkgs=$(ls "${pkg_dir}/" || echo "")
if [ -n "${do_pkgs}" ] ; then
cd ${pkg_dir}
ssh $target 'rm -rf /var/opt/ncs/packages/*'
for p in ncs-${pkg_vsn}-*.gz; do
scp $p $target:/opt/ncs/packages/
ssh $target "ln -s /opt/ncs/packages/$p /var/opt/ncs/packages/"
done
cd -
fi
}
# Perform the actual procedure
on_primary_cli 'request high-availability read-only mode true'
on_secondary_cli 'request high-availability read-only mode true'
on_primary 'ncs-backup'
on_secondary 'ncs-backup'
on_primary_cli 'request high-availability disable'
on_secondary_cli 'request high-availability be-primary'
upgrade_nso $primary
upgrade_packages $primary
on_primary 'mv /etc/ncs/ncs.systemd.conf /etc/ncs/ncs.systemd.conf.bak'
on_primary 'echo "NCS_RELOAD_PACKAGES=true" > /etc/ncs/ncs.systemd.conf'
on_primary 'systemctl restart ncs'
on_primary 'mv /etc/ncs/ncs.systemd.conf.bak /etc/ncs/ncs.systemd.conf'
on_secondary_cli 'request high-availability disable'
on_primary_cli 'request high-availability enable'
upgrade_nso $secondary
upgrade_packages $secondary
on_secondary 'mv /etc/ncs/ncs.systemd.conf /etc/ncs/ncs.systemd.conf.bak'
on_secondary 'echo "NCS_RELOAD_PACKAGES=true" > /etc/ncs/ncs.systemd.conf'
on_secondary 'systemctl restart ncs'
on_secondary 'mv /etc/ncs/ncs.systemd.conf.bak /etc/ncs/ncs.systemd.conf'
on_secondary_cli 'request high-availability enable'
# ncs-backup
INFO Backup /var/opt/ncs/backups/ncs-6.3@2024-04-21T10:34:42.backup.gz created
successfully
# ls /opt/ncs/packages
ncs-6.3-router-nc-1.0 ncs-6.3-router-nc-1.0.2
# ncs_cli -C
admin@ncs# software packages install package router-nc-1.0.2 replace-existing
installed ncs-6.3-router-nc-1.0.2
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package router-nc-1.0.2
result true
}
primary@node1# software packages list
package {
name dummy-1.0.tar.gz
loaded
}
primary@node1# software packages fetch package-from-file \
$MY_PACKAGE_STORE/dummy-1.1.tar.gz
primary@node1# software packages install package dummy-1.1 replace-existing
primary@node1# packages ha sync and-reload { wait-commit-queue-empty }
# ncs-backup --restore
admin@ncs(config)# devices device ex1 config
dns_server_list = ex1_config.sys.dns.server
dns_server_list.create('192.0.2.1')
t.apply()
params = t.get_params()
params.dry_run_native()
result = t.apply_params(True, params)
print(result['device']['ex1'])
t.apply_params(True, t.get_params())
print('Done!')
with ncs.maapi.single_read_trans('admin', 'python') as t:
...
import ncs
with ncs.maapi.single_read_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
print(root.nacm.enable_nacm)
import ncs
with ncs.maapi.single_write_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
root.nacm.enable_nacm = True
t.apply()
container customers {
list customer {
key "id";
leaf id {
type string;
}
}
}
import ncs
with ncs.maapi.single_write_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
our_customers = root.customers.customer
new_customer = our_customers.create('ACME')
new_customer.status = 'active'
for c in our_customers:
print(c.id)
del our_customers['ACME']
t.apply()
$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network
$ make clean all && ncs-netsim start
$ ncs
$ ncs_cli -C -u admin
admin@ncs# show devices list
admin@ncs# devices sync-from
admin@ncs# config
admin@ncs(config)# abort
admin@ncs# exit
import ncs
with ncs.maapi.single_write_trans('admin', 'python') as t:
root = ncs.maagic.get_root(t)
ex1_device = root.devices.device['ex1']
$ python3 ex1-dns.py
$ ncs_cli -C -u admin
ex1_config = ex1_device.config
ex1_config = root.devices.device['ex1'].config
admin@ncs# show running-config devices device ex1 config sys dns server
HA failover
Upgrading NSO on all nodes in the HA cluster
Upgrading NSO packages on all nodes in the HA cluster
The deployment examples use both the legacy rule-based and recommended HA Raft setup. See High Availability for HA details. The HA Raft deployment consists of three nodes running NSO and a node managing them, while the rule-based HA deployment uses only two nodes.
Based on the Raft consensus algorithm, the HA Raft version provides the best fault tolerance, performance, and security and is therefore recommended.
For the HA Raft setup, the NSO nodes paris.fra, london.eng, and berlin.ger make up a cluster of one leader and two followers.
For the rule-based HA setup, the NSO nodes paris and london make up one HA pair — one primary and one secondary.
HA is usually not optional for a deployment. Data resides in CDB, a RAM database with a disk-based journal for persistence. Both HA variants can be set up to avoid the need for manual intervention in a failure scenario, where HA Raft does the best job of keeping the cluster up. See High Availability for details.
An NSO system installation on the NSO nodes is recommended for deployments. For System Installation details, see the System Install steps.
In this container-based example, Docker Compose uses a Dockerfile to build the container image and install NSO on multiple nodes, here containers. A shell script on the manager node uses an SSH client to access the NSO nodes and demonstrate HA failover; as an alternative, a Python script implements SSH and RESTCONF clients.
An admin user is created on the NSO nodes. Password-less sudo access is set up to enable the tailf-hcc server to run the ip command. The manager's SSH client uses public key authentication, while the RESTCONF client uses a token to authenticate with the NSO nodes.
The example creates two packages using the ncs-make-package command: dummy and inert. A third package, tailf-hcc, provides VIPs that point to the current HA leader/primary node.
The packages are compressed into a tar.gz format for easier distribution, but that is not a requirement.
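As a sketch only, compressing a package directory into a tar.gz archive for distribution could look like the following; the dummy package name, version, and YANG contents are placeholders, not taken from the example:

```shell
# Sketch: compress a package directory into a tar.gz archive for distribution.
# The "dummy" package name, version, and contents are placeholders.
set -e
workdir=$(mktemp -d)
mkdir -p "$workdir/dummy-1.0/src"
cat > "$workdir/dummy-1.0/src/dummy.yang" <<'EOF'
module dummy {
  namespace "http://example.com/dummy";
  prefix dummy;
}
EOF
tar -C "$workdir" -czf "$workdir/ncs-6.3-dummy-1.0.tar.gz" dummy-1.0
tar -tzf "$workdir/ncs-6.3-dummy-1.0.tar.gz"   # list the archive contents
```

The archive can then be copied to /var/opt/ncs/packages/ (or /opt/ncs/packages/ with a symlink) on each node, as shown in the upgrade script later in this section.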
This example uses a minimal Red Hat UBI distribution for hosting NSO with the following added packages:
NSO's basic dependency requirements are fulfilled by adding the Java Runtime Environment (JRE), OpenSSH, and OpenSSL packages.
The OpenSSH server is used for shell access and secure copy to the NSO Linux host for NSO version upgrade purposes. The NSO built-in SSH server provides CLI and NETCONF access to NSO.
The NSO services require Python.
To fulfill the tailf-hcc server dependencies, the iproute2 utilities and sudo packages are installed. See the Tail-f HCC Package documentation for details on dependencies.
The rsyslog package enables storing an NSO log file from several NSO logs locally and forwarding some logs to the manager.
The arp (from net-tools) and ping (from iputils) commands have been added for demonstration purposes.
The steps in the list below are performed as root. Docker Compose will build the container images, i.e., create the NSO installation as root.
The admin user will only need root access to run the ip command when tailf-hcc adds the Layer 2 VIP address to the leader/primary node interface.
The initialization steps are also performed as root for the nodes that make up the HA cluster:
Create the ncsadmin and ncsoper Linux user groups.
Create and add the admin and oper Linux users to their respective groups.
Perform a system installation of NSO that runs NSO as the admin user.
The admin user is granted access to run the ip command from the vipctl script as root using the sudo command as required by the tailf-hcc package.
The cmdwrapper NSO program gets access to run the scripts executed by the generate-token action for generating RESTCONF authentication tokens as the current NSO user.
Password authentication is set up for the read-only oper user for use with NSO only, which is intended for WebUI access.
The root user is set up for Linux shell access only.
The NSO installer, tailf-hcc package, application YANG modules, scripts for generating and authenticating RESTCONF tokens, and scripts for running the demo are all available to the NSO and manager containers.
admin user permissions are set for the NSO directories and files created by the system install, as well as for the root, admin, and oper home directories.
The ncs.crypto_keys are generated and distributed to all nodes.
Note: The ncs.crypto_keys file is highly sensitive. It contains the encryption keys for all encrypted CDB data, which often includes passwords for various entities, such as login credentials to managed devices.
Note: In an NSO System Install setup, not only the TLS certificates (HA Raft) or shared token (rule-based HA) need to match between the HA cluster nodes, but also the configuration for encrypted strings, by default stored in /etc/ncs/ncs.crypto_keys, needs to match between the nodes in the HA cluster. For rule-based HA, the tokens configured on the secondary nodes are overwritten with the encrypted token of type aes-256-cfb-128-encrypted-string from the primary node when the secondary connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to be re-established with a "Token mismatch, secondary is not allowed" error.
For HA Raft, TLS certificates are generated for all nodes.
The initial NSO configuration, ncs.conf, is updated and in sync (identical) on the nodes.
The SSH servers are configured to allow only SSH public key authentication (no password). The oper user can use password authentication with the WebUI but has read-only NSO access.
The oper user is denied access to the Linux shell.
The admin user can access the Linux shell and NSO CLI using public key authentication.
New keys for all users are distributed to the HA cluster nodes and the manager node when the HA cluster is initialized.
The OpenSSH server and the NSO built-in SSH server use the same private and public key pairs located under ~/.ssh/id_ed25519, while the manager public key is stored in the ~/.ssh/authorized_keys file on the NSO nodes.
Host keys are generated for all nodes to allow the NSO built-in SSH and OpenSSH servers to authenticate the server to the client.
Each HA cluster node has its own unique SSH host keys stored under ${NCS_CONFIG_DIR}/ssh_host_ed25519_key. The SSH client(s), here the manager, has the keys for all nodes in the cluster paired with the node's hostname and the VIP address in its /root/.ssh/known_hosts file.
The host keys, like those used for client authentication, are generated each time the HA cluster nodes are initialized. The host keys are distributed to the manager and nodes in the HA cluster before the NSO built-in SSH and OpenSSH servers are started on the nodes.
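A minimal sketch of the host-key generation step, assuming ssh-keygen is available; the hostname, VIP address, and paths are illustrative:

```shell
# Sketch: generate a unique ed25519 host key for a node and emit the
# known_hosts line for the manager. Hostname and VIP are illustrative.
set -e
keydir=$(mktemp -d)   # stands in for ${NCS_CONFIG_DIR} on the node
ssh-keygen -q -t ed25519 -N "" -f "$keydir/ssh_host_ed25519_key"
pubkey=$(cut -d' ' -f1-2 "$keydir/ssh_host_ed25519_key.pub")
printf 'paris.fra,192.0.2.100 %s\n' "$pubkey"   # line for /root/.ssh/known_hosts
```

Pairing each host key with both the node's hostname and the VIP address in known_hosts lets the manager verify the server regardless of which address it connects to.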
As NSO runs in containers, the environment variables are set to point to the system install directories in the Docker Compose .env file.
NSO runs as the non-root admin user and, therefore, the NSO system installation is done using the ./nso-${VERSION}.linux.${ARCH}.installer.bin --system-install --run-as-user admin --ignore-init-scripts options. By default, the NSO installation start script will create a systemd system service to run NSO as the admin user (default is the root user) when NSO is started using the systemctl start ncs command.
However, this example uses the --ignore-init-scripts option to skip installing systemd scripts as it runs in a container that does not support systemd.
The environment variables are copied to a .pam_environment file so that they are also set in user login sessions.
The OpenSSH sshd and rsyslog daemons are started.
The packages from the package store are added to the ${NCS_RUN_DIR}/packages directory before finishing the initialization part in the root context.
The NSO smart licensing token is set.
The NSO IPC socket is configured in ncs.conf to only listen to localhost 127.0.0.1 connections, which is the default setting.
By default, clients connecting to the NSO IPC socket are considered trusted, i.e., no authentication is required; using 127.0.0.1 as the /ncs-config/ncs-ipc-address IP address in ncs.conf prevents remote access. See Security Considerations and ncs.conf(5) in Manual Pages for more details.
/ncs-config/aaa/pam is set to enable PAM to authenticate users as recommended. All remote access to NSO must now be done using the NSO host's privileges. See ncs.conf(5) in Manual Pages for details.
Depending on your Linux distribution, you may have to change the /ncs-config/aaa/pam/service setting. The default value is common-auth. Check the file /etc/pam.d/common-auth and make sure it fits your needs. See ncs.conf(5) in Manual Pages for details.
Alternatively, or as a complement to the PAM authentication, users can be stored in the NSO CDB database or authenticated externally. See AAA infrastructure for details.
RESTCONF token authentication under /ncs-config/aaa/external-validation is enabled using a token_auth.sh script that was added earlier together with a generate_token.sh script. See ncs.conf(5) in Manual Pages for details.
The scripts allow users to generate a token for RESTCONF authentication through, for example, the NSO CLI and NETCONF interfaces that use SSH authentication or the Web interface.
The token provided to the user is added to a simple YANG list of tokens where the list key is the username.
The token list is stored in the NSO CDB operational data store and is only accessible from the node's local MAAPI and CDB APIs. See the HA Raft and rule-based HA upgrade-l2/manager-etc/yang/token.yang file in the examples.
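The token scripts themselves are not reproduced here. As an illustration only, the core of a generate-token step could boil down to the following Python sketch; the function name and token length are assumptions, not the example's actual implementation:

```python
# Sketch of RESTCONF token generation: a URL-safe random token per user.
# The function name and the 32-byte length are illustrative assumptions.
import secrets


def generate_token(nbytes: int = 32) -> str:
    """Return a cryptographically strong, URL-safe random token."""
    return secrets.token_urlsafe(nbytes)


# A token_auth-style validation step then only needs to compare the token
# presented by the RESTCONF client with the one stored for that username
# in the CDB operational token list.
if __name__ == "__main__":
    print(generate_token())
```

Using the secrets module (rather than random) matters here, since the token is an authentication credential.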
The NSO web server HTTPS interface should be enabled under /ncs-config/webui, along with /ncs-config/webui/match-host-name = true and /ncs-config/webui/server-name set to the hostname of the node, following security best practice. See ncs.conf(5) in Manual Pages for details.
Note: The SSL certificates that NSO generates are self-signed:
Thus, if this is a production environment and the JSON-RPC and RESTCONF interfaces using the web server are not used solely for internal purposes, the self-signed certificate must be replaced with a properly signed certificate. See ncs.conf(5) in Manual Pages under /ncs-config/webui/transport/ssl/cert-file and /ncs-config/restconf/transport/ssl/certFile for more details.
Disable /ncs-config/webui/cgi unless needed.
The NSO SSH CLI login is enabled under /ncs-config/cli/ssh/enabled. See ncs.conf(5) in Manual Pages for details.
The NSO CLI style is set to C-style, and the CLI prompt is modified to include the hostname under /ncs-config/cli/prompt. See ncs.conf(5) in Manual Pages for details.
NSO HA Raft is enabled under /ncs-config/ha-raft, and the rule-based HA under /ncs-config/ha. See ncs.conf(5) in Manual Pages for details.
Depending on your provisioned applications, you may want to turn /ncs-config/rollback/enabled off. Rollbacks do not work well with nano service reactive FASTMAP applications or if maximum transaction performance is a goal. If your application performs classical NSO provisioning, the recommendation is to enable rollbacks; otherwise, disable them. See ncs.conf(5) in Manual Pages for details.
The NSO System Install places an AAA aaa_init.xml file in the $NCS_RUN_DIR/cdb directory. Compared to a Local Install for development, no users are defined for authentication in the aaa_init.xml file, and PAM is enabled for authentication. NACM rules for controlling NSO access are defined in the file for users belonging to a ncsadmin user group and read-only access for a ncsoper user group. As seen in the previous sections, this example creates Linux root, admin, and oper users, as well as the ncsadmin and ncsoper Linux user groups.
PAM authenticates the users using SSH public key authentication without a passphrase for NSO CLI and NETCONF login. Password authentication is used for the oper user intended for NSO WebUI login and token authentication for RESTCONF login.
Before the NSO daemon is running, and there are no existing CDB files, the default AAA configuration in the aaa_init.xml is used. It is restrictive and is used for this demo with only a minor addition to allow the oper user to generate a token for RESTCONF authentication.
The NSO authorization system is group-based; thus, for the rules to apply to a specific user, the user must be a member of the group to which the restrictions apply. PAM performs the authentication, while the NSO NACM rules do the authorization.
Adding the admin user to the ncsadmin group and the oper user to the limited ncsoper group will ensure that the two users get properly authorized with NSO.
Not adding the root user to any group matching the NACM groups results in zero access, as no NACM rule will match, and the default in the aaa_init.xml file is to deny all access.
The NSO NACM functionality is based on the Network Configuration Access Control Model IETF RFC 8341 with NSO extensions augmented by tailf-acm.yang. See AAA infrastructure, for more details.
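As an illustration only (not the exact contents of the example's aaa_init.xml), group membership in the ietf-netconf-acm model is expressed like this:

```xml
<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <groups>
    <group>
      <name>ncsadmin</name>
      <user-name>admin</user-name>
    </group>
    <group>
      <name>ncsoper</name>
      <user-name>oper</user-name>
    </group>
  </groups>
</nacm>
```

When PAM is used for group resolution, membership in the corresponding Linux groups has the same effect, which is why the example adds the users to the ncsadmin and ncsoper Linux groups.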
The manager in this example logs into the different NSO hosts using the Linux user login credentials. This scheme has many advantages, mainly because all audit logs on the NSO hosts will show who did what and when. Therefore, the common bad practice of having a shared admin Linux user and NSO local user with a shared password is not recommended.
This example sets up one HA cluster using HA Raft or rule-based HA with the tailf-hcc server to manage virtual IP addresses. See NSO Rule-based HA and Tail-f HCC Package for details.
The NSO HA, together with the tailf-hcc package, provides three features:
All CDB data is replicated from the leader/primary to the follower/secondary nodes.
If the leader/primary fails, a follower/secondary takes over and starts to act as leader/primary. This is how HA Raft works and how the rule-based HA variant of this example is configured to handle failover automatically.
At failover, tailf-hcc sets up a virtual alias IP address on the leader/primary node only and uses gratuitous ARP packets to update all nodes in the network with the new mapping to the leader/primary node.
Nodes in other networks can be updated using the tailf-hcc layer-3 BGP functionality or a load balancer. See the NSO example set under examples.ncs/development-guide/high-availability.
See the NSO example set under examples.ncs/development-guide/high-availability/hcc for HA Raft and rule-based HA tailf-hcc Layer 3 BGP examples.
The HA Raft and rule-based HA upgrade-l2 examples also demonstrate HA failover, upgrading the NSO version on all nodes, and upgrading NSO packages on all nodes.
Depending on your installation, e.g., the size and speed of the managed devices and the characteristics of your service applications, some default values of NSO may have to be tweaked, particularly some of the timeouts.
Device timeouts. NSO has connect, read, and write timeouts for traffic between NSO and the managed devices. The default values may not be sufficient if devices are slow to commit or slow to deliver their full configuration. Adjust the timeouts under /devices/global-settings accordingly.
Service code timeouts. Some service applications can sometimes be slow. Adjusting the /services/global-settings/service-callback-timeout configuration might be applicable depending on the applications. However, the best practice is to change the timeout per service from the service code using the Java ServiceContext.setTimeout function or the Python data_set_timeout function.
There are quite a few different global settings for NSO. The two mentioned above often need to be changed.
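For example, the device timeouts mentioned above can be raised from the CLI; the values shown here are illustrative only:

```
admin@ncs(config)# devices global-settings connect-timeout 60
admin@ncs(config)# devices global-settings read-timeout 600
admin@ncs(config)# devices global-settings write-timeout 600
admin@ncs(config)# commit
```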
NSO uses Cisco Smart Licensing, which is described in detail in Cisco Smart Licensing. After registering your NSO instance(s) and receiving a token (following steps 1-6 in the Create a License Registration Token section of Cisco Smart Licensing), enter the token from your Cisco Smart Software Manager account on each host. Use the same token for all instances, and script entering the token as part of the initial NSO configuration or from the management node:
The NSO system installations performed on the nodes in the HA cluster also install defaults for logrotate. Inspect /etc/logrotate.d/ncs and ensure that the settings are what you want. Note that the NSO error logs, i.e., the files /var/log/ncs/ncserr.log*, are internally rotated by NSO and must not be rotated by logrotate.
The HA Raft and rule-based HA upgrade-l2 examples (referenced from examples.ncs/development-guide/high-availability/hcc/README) integrate with rsyslog to log the ncs, developer, upgrade, audit, netconf, snmp, and webui-access logs to syslog with the facility set to daemon in ncs.conf.
rsyslogd on the nodes in the HA cluster is configured to write the daemon facility logs to /var/log/daemon.log, and forward the daemon facility logs with the severity info or higher to the manager node's /var/log/ha-cluster.log syslog.
Use the audit-network-log for recording southbound traffic towards devices. Enable it by setting /ncs-config/logs/audit-network-log/enabled and /ncs-config/logs/audit-network-log/file/enabled to true in $NCS_CONFIG_DIR/ncs.conf. See ncs.conf(5) in Manual Pages for more information.
NED trace logs are a crucial tool for debugging NSO installations, but they are very verbose and intended for debugging only. Do not enable them in production.
Note that the NED trace logs include everything; even potentially sensitive data is logged, and no filtering is done. The NED trace is controlled through the CLI under /devices/global-settings/trace. It is also possible to control the NED trace on a per-device basis under /devices/device[name='x']/trace.
There are three different settings for trace output. For various historical reasons, the setting that makes the most sense depends on the device type.
For all CLI NEDs, use the raw setting.
For all ConfD and netsim-based NETCONF devices, use the pretty setting. This is because ConfD sends the NETCONF XML unformatted, while pretty means that the XML is formatted.
For Juniper devices, use the raw setting. Juniper devices sometimes send broken XML that cannot be formatted appropriately. However, their XML payload is already indented and formatted.
For generic NED devices - depending on the level of trace support in the NED itself, use either pretty or raw.
For SNMP-based devices, use the pretty setting.
Thus, it is usually not good enough to control the NED trace from /devices/global-settings/trace.
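For example, a global default can be combined with a per-device override; the device name and settings below are illustrative:

```
admin@ncs(config)# devices global-settings trace pretty
admin@ncs(config)# devices device ex1 trace raw
admin@ncs(config)# commit
```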
While there is a global log for, for example, compilation errors in /var/log/ncs/ncs-python-vm.log, logs from user application packages are written to separate files for each package, and the log file naming is ncs-python-vm-pkg_name.log. The level of logging from Python code is controlled on a per package basis. See Debugging of Python packages for more details.
User application Java logs are written to /var/log/ncs/ncs-java-vm.log. The level of logging from Java code is controlled per Java package. See Logging in Java VM for more details.
The internal NSO log resides at /var/log/ncs/ncserr.*. The log is written in a binary format. To view the internal error log, run the following command:
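Assuming the default log location, viewing the binary error log could look as follows; the exact file name varies with rotation:

```
# ncs --printlog /var/log/ncs/ncserr.log.1
```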
All large-scale deployments employ monitoring systems. There are plenty of good tools to choose from, open source and commercial. All good monitoring tools can script checks (using various protocols) against the monitored system. It is recommended to set up a special read-only Linux user without shell access, like the oper user earlier in this chapter. A few commonly used checks include:
At startup, check that NSO has been started using the $NCS_DIR/bin/ncs_cmd -c "wait-start 2" command.
Use the ssh command to verify SSH access to the NSO host and NSO CLI.
Check disk usage using, for example, the df utility.
For example, use curl or the Python requests library to verify that the RESTCONF API is accessible.
Check that the NETCONF API is accessible using, for example, the $NCS_DIR/bin/netconf-console tool with a hello message.
Verify the NSO version using, for example, the $NCS_DIR/bin/ncs --version or RESTCONF /restconf/data/tailf-ncs-monitoring:ncs-state/version.
Check if HA is enabled using, for example, RESTCONF /restconf/data/tailf-ncs-monitoring:ncs-state/ha.
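The checks above can be sketched as a small read-only probe. This example uses only the Python standard library; the port number (2024 for the NSO built-in SSH CLI) and the monitored path are assumptions to adapt to your deployment:

```python
# Sketch of a read-only NSO health probe. Port numbers and paths are
# illustrative assumptions, not taken from the deployment example.
import shutil
import socket


def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def disk_usage_percent(path: str = "/") -> float:
    """Percentage of disk used on the filesystem holding `path`."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.used / usage.total


if __name__ == "__main__":
    # 2024 is assumed to be the NSO built-in SSH CLI port.
    print("NSO SSH CLI reachable:", tcp_check("localhost", 2024))
    print("Disk used (%):", round(disk_usage_percent("/"), 1))
```

A real probe would add the RESTCONF and NETCONF checks listed above (for example, with curl or netconf-console) and report results to the monitoring system.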
RESTCONF can be used to view the NSO alarm table and subscribe to alarm notifications. NSO alarms are not events. Whenever an NSO alarm is created, a RESTCONF notification and SNMP trap are also sent, assuming that you have a RESTCONF client registered with the alarm stream or configured a proper SNMP target. Some alarms, like the rule-based HA ha-secondary-down alarm, require the intervention of an operator. Thus, a monitoring tool should also fetch the NSO alarm list.
Or subscribe to the ncs-alarms RESTCONF notification stream.
NSO metrics are organized into contexts, each containing counters, gauges, and rate-of-change gauges. There are sysadmin, developer, and debug contexts. Note that only the sysadmin context is enabled by default, as it is designed to be lightweight. Consult the YANG module tailf-ncs-metric.yang to learn the details of the different contexts.
Counters, gauges, and rate-of-change gauges can all be read through, for example, the CLI:
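A hypothetical CLI read of the sysadmin context could look as follows; consult tailf-ncs-metric.yang for the actual counter and gauge paths:

```
admin@ncs# show metric sysadmin counter
admin@ncs# show metric sysadmin gauge
```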
The presented configuration enables the built-in web server for the WebUI and RESTCONF interfaces. It is paramount for security that you only enable HTTPS access with /ncs-config/webui/match-host-name and /ncs-config/webui/server-name properly set.
The AAA setup described so far in this deployment document is the recommended AAA setup. To reiterate:
Have all users that need access to NSO authenticated through Linux PAM. This may then be through /etc/passwd. Avoid storing users in CDB.
Given the default NACM authorization rules, you should have three different types of users on the system.
Users with shell access are members of the ncsadmin Linux group and are considered fully trusted because they have full access to the system.
Users without shell access who are members of the ncsadmin Linux group have full access to the network. They have access to the NSO SSH shell and can execute RESTCONF calls, access the NSO CLI, make configuration changes, etc. However, they cannot manipulate backups or perform system upgrades unless such actions are exposed by NSO applications.
Users without shell access who are members of the ncsoper Linux group have read-only access. They can access the NSO SSH shell, read data using RESTCONF calls, etc. However, they cannot change the configuration, manipulate backups, or perform system upgrades.
If you have more fine-grained authorization requirements than read-write and read-only, additional Linux groups can be created and the NACM rules updated accordingly. See The aaa_init.xml Configuration from earlier in this chapter for how the reference example implements users, groups, and NACM rules to achieve the above.
The default aaa_init.xml file must not be used as-is before reviewing and verifying that every NACM rule in the file matches the desired authorization level.
For a detailed discussion of the configuration of authorization rules through NACM, see AAA infrastructure, particularly the section Authorization.
A considerably more complex scenario is when users require shell access to the host but are either untrusted or should not have any access to NSO at all. NSO listens to a so-called IPC socket configured through /ncs-config/ncs-ipc-address. This socket is typically limited to local connections and defaults to 127.0.0.1:4569 for security. The socket multiplexes several different access methods to NSO.
The main security-related point is that no AAA checks are performed on this socket. If you have access to the socket, you also have complete access to all of NSO.
To drive this point home: when you invoke the ncs_cli command (a small C program that connects to the socket and tells NSO who you are), NSO assumes that authentication has already been performed. There is even a documented flag, --noaaa, which tells NSO to skip all NACM rule checks for the session.
You must protect the socket to prevent untrusted Linux shell users from accessing the NSO instance this way. This is done using a file in the Linux file system: /etc/ncs/ipc_access, which is created and populated with random data at install time. Enable /ncs-config/ncs-ipc-access-check/enabled in ncs.conf and ensure that only trusted users can read the /etc/ncs/ipc_access file, for example, by changing group access to the file. See ncs.conf(5) in Manual Pages for details.
For an HA setup, HA Raft, based on the Raft consensus algorithm, provides the best fault tolerance, performance, and security, and is therefore recommended over the legacy rule-based HA variant. The raft-upgrade-l2 project, referenced from the NSO example set under examples.ncs/development-guide/high-availability/hcc together with this Deployment Example section, describes a reference implementation. See NSO HA Raft for more HA Raft details.
Build your own applications in NSO.
Services provide the foundation for managing the configuration of a network. But this is not the only aspect of network automation. A holistic solution must also consider various verification procedures, one-time actions, monitoring, and so on. This is quite different from managing configuration. NSO helps you implement such automation use cases through a generic application framework.
This section explores the concept of services as more general NSO applications. It gives an overview of the mechanisms for orchestrating network automation tasks that require more than just configuration provisioning.
You have seen two different ways in which you can make a configuration change on a network device. With the first, you make changes directly on the NSO copy of the device configuration. The Device Manager picks up the changes and propagates them to the affected devices.
The purpose of the Device Manager is to manage different devices uniformly. The Device Manager uses the Network Element Drivers (NEDs) to abstract away the different protocols and APIs towards the devices. The NED contains a YANG data model for a supported device. So, each device type requires an appropriate NED package that allows the Device Manager to handle all devices in the same, YANG-model-based way.
The second way to make configuration changes is through services. Here, the Service Manager adds a layer on top of the Device Manager to process the service request and enlists the help of service-aware applications to generate the device changes.
The following figure illustrates the difference between the two approaches.
The Device Manager and the Service Manager are tightly integrated into one transactional engine, using the CDB to store data. Another thing the two managers have in common is packages. Just as the Device Manager uses NED packages to support specific devices, the Service Manager relies on service packages to provide an application-specific mapping for each service type.
However, a network application can consist of more than just a configuration recipe. For example, an integrated service test action can verify the initial provisioning and simplify troubleshooting if issues arise. A simple test might run the ping command to verify connectivity. Or an application could only monitor the network and not produce any configuration at all. That is why NSO actually uses an approach where an application chooses what custom code to execute for specific NSO events.
NSO allows augmenting the base functionality of the system by delegating certain functions to applications. As the communication must happen on demand, NSO implements a system of callbacks. Usually, the application code registers the required callbacks on start-up, and then NSO can invoke each callback as needed. A prime example is a Python service, which registers the cb_create() function as a service callback that NSO uses to construct the actual configuration.
In a Python service skeleton, callback registration happens inside a class Main, found in main.py:
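A typical skeleton main.py looks like the following sketch (the service point name my-service-servicepoint and the class names are placeholders generated from your package name):

```python
import ncs
from ncs.application import Service

class ServiceCallbacks(Service):
    # Called by NSO to construct the configuration for a service instance.
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')

class Main(ncs.application.Application):
    def setup(self):
        # Runs when NSO starts the package; register callbacks here.
        self.log.info('Main RUNNING')
        self.register_service('my-service-servicepoint', ServiceCallbacks)

    def teardown(self):
        self.log.info('Main FINISHED')
```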
In this code, the register_service() method registers the ServiceCallbacks class to receive callbacks for a service. The first argument defines which service that is. In theory, a single class could even handle callbacks for multiple services, but that is not common practice.
On the other hand, it is also possible that no code registered a callback for a given service. This is quite often a result of a misspelling or a bug in the code that causes the application code to crash. In these situations, NSO presents an error if you try to use the service:
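The error is similar to the following transcript (the callpoint name depends on your service):

```
admin@ncs(config)# commit
Aborted: no registration found for callpoint my-service-servicepoint/service_create of type=external
```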
This error refers to the concept of a service point. Service points are declared in the service YANG model and allow NSO to distinguish ordinary data from services. They instruct NSO to invoke FASTMAP and the service callbacks when a service instance is being provisioned. That means the service skeleton YANG file also contains a service point definition, such as the following:
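For example, the skeleton's service list might contain a definition like this (names are placeholders from the generated skeleton):

```yang
list my-service {
  key name;

  uses ncs:service-data;
  ncs:servicepoint my-service-servicepoint;

  leaf name {
    type string;
  }
}
```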
The service point therefore links the definition in the model with custom code. Some methods in the code have names starting with cb_, for instance the cb_create() method, letting you quickly recognize them as callback implementations.
NSO implements additional callbacks for each service point that may be required in specific circumstances. Most of these callbacks perform work outside of the automatic change tracking, so you need to consider that before using them. A later section offers more details.
Besides services, other extensibility options in NSO also rely on callbacks and callpoints, a generalized version of a service point. Two notable examples are validation callbacks, which implement validation logic beyond what YANG statements support, and custom actions. A later section provides a comprehensive list and an overview of when to use each.
In summary, you implement custom behavior in NSO by providing the following three parts:
A YANG model directing NSO to use callbacks, such as a service point for services.
Registration of callbacks, telling NSO to call into your code at a given point.
The implementation of each callback with your custom logic.
This way, an application in NSO can implement all the required functionality for a given use case (configuration management and otherwise) by registering the right callbacks.
The most common way to implement non-configuration automation in NSO is using actions. An action represents a task or an operation that a user of the system can invoke on demand, such as downloading a file, resetting a device, or performing some test.
Like configuration elements, actions must also be defined in the YANG model. Each action is described by the action YANG statement, which specifies its inputs and outputs, if any. Inputs allow the caller to provide additional information to the action invocation, while outputs return information to the caller. Actions are a form of Remote Procedure Call (RPC) and have historically evolved from NETCONF RPCs. It is therefore unsurprising that you implement both in a similar manner in NSO.
Let's look at an example action definition:
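A definition matching the my-test example discussed below could look like this sketch (the container and leaf names are illustrative):

```yang
container server {
  action my-test {
    tailf:actionpoint my-test-action;
    input {
      leaf test-string {
        type string;
      }
    }
    output {
      leaf has-nso {
        type boolean;
      }
    }
  }
}
```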
The first thing to notice in the code is that, just like services use a service point, actions use an actionpoint. It is denoted by the tailf:actionpoint statement and tells NSO to execute a callback registered to this name. As discussed, the callback mechanism allows you to provide custom action implementation.
Correspondingly, your code needs to register a callback for this action point by calling register_action(), as demonstrated here:
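For instance, inside the setup() method of the application's Main class (my-test-action is the action point name declared in the YANG model):

```python
class Main(ncs.application.Application):
    def setup(self):
        self.log.info('Main RUNNING')
        # Tie the MyTestAction class to the my-test-action action point.
        self.register_action('my-test-action', MyTestAction)
```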
The MyTestAction class, referenced in the call, implements the actual action logic and should inherit from the ncs.dp.Action base class. The base class takes care of calling the cb_action() method when users invoke the action. The cb_action() method is where you put your own code. The following code shows a trivial implementation of an action that checks whether its input contains the string "NSO":
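A minimal sketch of such an implementation (note that Python Maagic maps hyphenated leaf names, such as test-string, to underscored attributes):

```python
from ncs.dp import Action

class MyTestAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output, trans):
        self.log.info('action name: ', name)
        # Check whether the input string contains "NSO".
        output.has_nso = 'NSO' in input.test_string
```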
The input and output arguments contain the input and output data, respectively, matching the definitions in the action YANG model. In the example, the result of a simple Python in string check is assigned to an output value.
The name argument has the name of the called action (such as my-test), to help you distinguish which action was called in the case where you would register the same class for multiple actions. Similarly, an action may be defined on a list item and the kp argument contains the full keypath (a tuple) to an instance where it was called.
Finally, the uinfo argument contains information on the user invoking the action, and the trans argument represents a transaction that you can use to access data other than the input. This transaction is read-only, as configuration changes should normally be made through services instead. Still, the action may need some data from NSO, such as the IP address of a device, which you can access by calling the ncs.maagic.get_root() function on trans and navigating to the relevant information.
Further details and the format of the arguments can be found in the NSO Python API reference.
The last thing to note in the above action code definition is the use of the decorator @Action.action. Its purpose is to set up the function arguments correctly, so variables such as input and output behave like other Python Maagic objects. This is no different from services, where decorators are required for the same reason.
No previous NSO or netsim processes are running. Use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
NSO local install with a fresh runtime directory has been created by the ncs-setup --dest ~/nso-lab-rundir or similar command.
The environment variable NSO_RUNDIR points to this runtime directory, for example set with the export NSO_RUNDIR=~/nso-lab-rundir command.
One of the most common uses of NSO actions is automating network and service tests but they are also a good choice for any other non-configuration task. Being able to quickly answer questions, such as how many network ports are available (unused) or how many devices currently reside in a given subnet, can greatly simplify the network planning process. Coding these computations as actions in NSO makes them accessible on-demand to a wider audience.
For this scenario, you will create a new package for the action; however, actions can also be placed in existing packages. A common example is adding a self-test action to a service package.
First, navigate to the packages subdirectory:
Create a package skeleton with the ncs-make-package command and the --action-example option. Name the package count-devices, like so:
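The two steps above can be performed as follows (the exact flags may vary slightly between NSO versions):

```shell
$ cd $NSO_RUNDIR/packages
$ ncs-make-package --service-skeleton python --action-example count-devices
```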
This command creates a YANG module file, where you will place a custom action definition. In a text or code editor open the count-devices.yang file, located inside count-devices/src/yang/. This file already contains an example action which you will remove. Find the following line (after module imports):
Delete this line and all the lines following it, to the very end of the file. The file should now resemble the following:
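After the deletion, the file might be similar to the following sketch (the namespace and prefix are generated by ncs-make-package and will differ; the imports shown are the ones used later):

```yang
module count-devices {
  namespace "http://example.com/count-devices";
  prefix count-devices;

  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }
}
```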
To model an action, you use the action YANG statement. It is part of the YANG standard from version 1.1 onward, which requires you to also declare yang-version 1.1 in the YANG model. So, add the following line at the start of the module, right before the namespace statement:
Note that in YANG version 1.0, actions used the NSO-specific tailf:action extension, which you may still find in some YANG models.
Now, go to the end of the file and add a custom-actions container with the count-devices action, using the count-devices-action action point. The input is an IP subnet and the output is the number of devices managed by NSO in this subnet.
Also, add the closing bracket for the module at the end:
Remember to finally save the file, which should now be similar to the following:
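Assembled, the module could resemble the following sketch (the namespace, prefix, and the uint16 result type are illustrative):

```yang
module count-devices {
  yang-version 1.1;
  namespace "http://example.com/count-devices";
  prefix count-devices;

  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }

  container custom-actions {
    action count-devices {
      tailf:actionpoint count-devices-action;
      input {
        leaf in-subnet {
          type inet:ip-prefix;
        }
      }
      output {
        leaf result {
          type uint16;
        }
      }
    }
  }
}
```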
The action code is implemented in a dedicated class that you will put in a separate file. Using an editor, create a new, empty file count_devices_action.py in the count-devices/python/count_devices/ subdirectory.
At the start of the file, import the packages that you will need later on and define the action class with the cb_action() method:
Then initialize the count variable to 0 and construct a reference to the NSO data root, since it is not part of the method arguments:
Using the root variable, you can iterate through the devices managed by NSO and find their (IPv4) address:
If the IP address comes from the specified subnet, increment the count:
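The membership check at this step is plain Python; here is a self-contained illustration using the standard ipaddress module (the sample addresses are hypothetical):

```python
import ipaddress

def in_subnet(address, subnet):
    # True if the IP address lies inside the given subnet.
    return ipaddress.ip_address(address) in ipaddress.ip_network(subnet)

# Hypothetical device addresses; count how many fall within 192.0.2.0/25.
addresses = ["192.0.2.1", "192.0.2.100", "192.0.2.200"]
count = sum(1 for a in addresses if in_subnet(a, "192.0.2.0/25"))
print(count)  # 2
```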
Lastly, assign the count to the result:
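Putting the steps together, count_devices_action.py might look like this sketch (it assumes device addresses are plain IPv4 addresses; resolve hostnames first if your setup uses them):

```python
import ipaddress
import ncs
from ncs.dp import Action

class CountDevicesAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output, trans):
        count = 0
        # The data root is not part of the arguments, so build it from trans.
        root = ncs.maagic.get_root(trans)
        subnet = ipaddress.ip_network(input.in_subnet)
        for device in root.devices.device:
            if ipaddress.ip_address(device.address) in subnet:
                count += 1
        output.result = count
```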
Your custom Python code is ready; however, you still need to link it to the count-devices action. Open the main.py from the same directory in a text or code editor and delete all the content already in there.
Next, create a class called Main that inherits from the ncs.application.Application base class. Add a single class method setup() that takes no additional arguments.
Inside the setup() method call the register_action() as follows:
This line instructs NSO to use the CountDevicesAction class to handle invocations of the count-devices-action action point. Also, import the CountDevicesAction class from the count_devices_action module.
The complete main.py file should then be similar to the following:
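The resulting file could look like this (module and class names follow the count-devices package):

```python
import ncs
from .count_devices_action import CountDevicesAction

class Main(ncs.application.Application):
    def setup(self):
        self.log.info('Main RUNNING')
        # Handle invocations of the count-devices-action action point.
        self.register_action('count-devices-action', CountDevicesAction)

    def teardown(self):
        self.log.info('Main FINISHED')
```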
With all of the code ready, you are one step away from testing the new action, but to do that, you will need to add some devices to NSO. So, first, add a couple of simulated routers to the NSO instance:
Before the packages can be loaded, you must compile them:
You can start the NSO now and connect to the CLI:
Finally, invoke the action:
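A possible session, from adding simulated devices to invoking the action, is sketched below (the NED name, device count, and subnet are illustrative and depend on your installation):

```shell
$ cd $NSO_RUNDIR
$ ncs-netsim create-network $NCS_DIR/packages/neds/router-nc-1.1 3 c --dest netsim
$ ncs-netsim ncs-xml-init > ncs-cdb/netsim-devices-init.xml
$ make -C packages/count-devices/src
$ ncs
$ ncs_cli -C -u admin
admin@ncs# custom-actions count-devices in-subnet 127.0.0.0/16
```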
You can use the show devices list command to verify that the result is correct. You can alter the address of any device and see how it affects the result. You can even use a hostname, such as localhost.
NSO supports a number of extension points for custom callbacks:
Each extension point in the list has a corresponding YANG extension that defines to which part of the data model the callbacks apply, as well as the individual name of the call point. The name is required during callback registration and helps distinguish between multiple uses of the extension. Each extension generally specifies multiple callbacks; however, you often need to implement only the main one, e.g., create for services or action for actions.
In addition, NSO supports some specific callbacks from internal systems, such as the transaction or the authorization engine, but these have very narrow use and are in general not recommended.
Services and actions are examples of something that happens directly as a result of a user (or other northbound agent) request. That is, a user takes an active role in starting service instantiation or invoking an action. Contrast this to a change that happens in the network and requires the orchestration system to take some action. In this latter case, the system monitors the notifications that the network generates, such as losing a link, and responds to the new data.
NSO provides out-of-the-box support for the automation of not only notifications but also changes to the operational and configuration data, using the concept of kickers. With kickers, you can watch for a particular change to occur in the system and invoke a custom action that handles the change.
The kicker system is described further in the NSO development documentation.
Services, actions, and other features all rely on callback registration. In Python code, the class responsible for registration derives from the ncs.application.Application. This allows NSO to manage the application code as appropriate, such as starting and stopping in response to NSO events. These events include package load or unload and NSO start or stop events.
While the Python package skeleton names the derived class Main, you can choose a different name if you also update the package-meta-data.xml file accordingly. This file defines a component with the name of the Python class to use:
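For the count-devices package, the relevant component definition might look like this (the component name is arbitrary):

```xml
<component>
  <name>main</name>
  <application>
    <python-class-name>count_devices.main.Main</python-class-name>
  </application>
</component>
```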
When starting the package, NSO reads the class name from package-meta-data.xml, starts the Python interpreter, and instantiates a class instance. The base Application class takes care of establishing communication with the NSO process and calling the setup and teardown methods. The two methods are a good place to do application-specific initialization and cleanup, along with any callback registrations you require.
The communication between the application process and NSO happens through a dedicated control socket, as described in the Administration documentation. This setup prevents a faulty application from bringing down the whole system along with it and enables NSO to support different application environments.
In fact, NSO can manage applications written in Java or Erlang in addition to those in Python. If you replace the python-class-name element of a component with java-class-name in the package-meta-data.xml file, NSO will instead try to run the specified Java class in the managed Java VM. If you wanted to, you could implement all of the same services and actions in Java, too; for example, you can compare equivalent Python and Java implementations in the NSO example set.
Regardless of the programming language you use, the high-level approach to automation with NSO does not change: you register and implement callbacks as part of your network application. Of course, the actual function calls (the API) and other specifics differ for each language; the Python, Java, and Erlang API documentation covers the details. Even so, the concepts of actions, services, and YANG modeling remain the same.
As you have seen, everything in NSO is ultimately tied to the YANG model, making YANG knowledge such a valuable skill for any NSO developer.
As your NSO application evolves, you will create newer versions of your application package, which will replace the existing one. If the application becomes sufficiently complex, you might even split it across multiple packages.
When you replace a package, NSO must redeploy the application code and potentially replace the package-provided part of the YANG schema. For the latter, NSO can perform the data migration for you, as long as the schema is backward compatible. The migration is automatic when you request a reload of the package with the packages reload or a similar command.
If your schema changes are not backward compatible, you can implement a data migration procedure, which NSO invokes when upgrading the schema. Among other things, this allows you to reuse and migrate data that is no longer present in the new schema. You specify the migration procedure as part of the package-meta-data.xml file, using a component of the upgrade type. See the Python API documentation and the examples.ncs/getting-started/developing-with-ncs/14-upgrade-service example (Java) for details.
Note that changing the schema in any way requires you to recompile the .fxs files in the package, which is typically done by running make in the package's src folder.
However, if the schema does not change, you can request that only the application code and templates be redeployed by using the packages package my-pkg redeploy command.
Next Steps
admin@nso-paris# license smart register idtoken YzY2Yj...
admin@nso-london# license smart register idtoken YzY2Yj...

$ ncs --printlog /var/log/ncs/ncserr.log.1

$ curl -ik -H "X-Auth-Token: TsZTNwJZoYWBYhOPuOaMC6l41CyX1+oDaasYqQZqqok=" \
  https://paris:8888/restconf/data/tailf-ncs-alarms:alarms

admin@ncs# show metric sysadmin counter session cli-total
metric sysadmin counter session cli-total 1

admin@ncs# show metric sysadmin gauge session cli-open
metric sysadmin gauge session cli-open 1

admin@ncs# show metric sysadmin gauge-rate session cli-open
NAME  RATE
-------------
1m    0.0
5m    0.2
15m   0.066

$ cat /etc/ncs/ipc_access
cat: /etc/ncs/ipc_access: Permission denied
$ sudo chown root:ncsadmin /etc/ncs/ipc_access
$ sudo chmod g+r /etc/ncs/ipc_access
$ ls -lat /etc/ncs/ipc_access
$ cat /etc/ncs/ipc_access

$ openssl x509 -in /etc/ncs/ssl/cert/host.cert -text -noout
Certificate:
Data:
Version: 1 (0x0)
Serial Number: 2 (0x2)
Signature Algorithm: sha256WithRSAEncryption
Issuer: C=US, ST=California, O=Internet Widgits Pty Ltd, CN=John Smith
Validity
Not Before: Dec 18 11:17:50 2015 GMT
Not After : Dec 15 11:17:50 2025 GMT
Subject: C=US, ST=California, O=Internet Widgits Pty Ltd
Subject Public Key Info:
.......

<prompt1>\u@nso-\H> </prompt1>
<prompt2>\u@nso-\H% </prompt2>
<c-prompt1>\u@nso-\H# </c-prompt1>
<c-prompt2>\u@nso-\H(\m)# </c-prompt2>

Data Provider (tailf:callpoint)
Languages: Java, Python (low-level API with experimental high-level API), Erlang
Defines callbacks for transparently accessing external data (data not stored in the CDB) or callbacks for special processing of data nodes (transforms, set, and transaction hooks). Requires careful implementation and an understanding of transaction intricacies. Rarely used in NSO.

Service (ncs:servicepoint)
Languages: Python, Java, Erlang
Transforms a list or container into a model for service instances. When the configuration of a service instance changes, NSO invokes the Service Manager and FASTMAP, which may call the service create and similar callbacks. See Developing a Simple Service for an introduction.

Action (tailf:actionpoint)
Languages: Python, Java, Erlang
Defines callbacks that run when an action or RPC is invoked. See Actions for an introduction.

Validation (tailf:validate)
Languages: Python, Java, Erlang
Defines callbacks for additional validation of data when the provided YANG functionality, such as must and unique statements, is insufficient. See the respective Python, Java, and Erlang API documentation for examples.
Start by setting up your system to install and run NSO.
To install NSO:
Fulfill at least the primary requirements.
If you intend to build and run NSO deployment examples, you also need to install additional applications listed under Additional Requirements.
Where requirements list a specific or higher version, there always exists a (small) possibility that a higher version introduces breaking changes. If in doubt whether the higher version is fully backwards compatible, always use the specific version.
To download the Cisco NSO installer and example NEDs:
Go to the Cisco's official Software Download site.
Search for the product "Network Services Orchestrator" and select the desired version.
There are two versions of the NSO installer: one for macOS and one for Linux systems. For System Install, choose the Linux version.
If your downloaded file is a signed.bin file, it means that it has been digitally signed by Cisco, and upon execution, you will verify the signature and unpack the installer.bin.
If you only have installer.bin, skip to the next step.
To unpack the installer:
In the terminal, list the binaries in the directory where you downloaded the installer, for example:
Use the sh command to run the signed.bin to verify the certificate and extract the installer binary and other files. An example output is shown below.
List the files to check if extraction was successful.
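For example (the file names are illustrative and depend on the downloaded version):

```shell
$ ls -l nso*.bin
$ sh nso-6.2.linux.x86_64.signed.bin
$ ls -l
```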
To run the installer:
Navigate to your Install Directory.
Run the installer with the --system-install option to perform a System Install. This option creates an installation of NSO that is suitable for production deployment.
For example:
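A hypothetical invocation (the installer file name depends on the downloaded version):

```shell
$ sudo sh nso-6.2.linux.x86_64.installer.bin --system-install
```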
The installation is configured for PAM authentication, with group assignment based on the OS group database (e.g. /etc/group file). Users that need access to NSO must belong to either the ncsadmin group (for unlimited access rights) or the ncsoper group (for minimal access rights).
To set up user access:
To create the ncsadmin group, use the OS shell command:
To create the ncsoper group, use the OS shell command:
To add an existing user to one of these groups, use the OS shell command:
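The three commands above can be run as root, for example (username is a placeholder):

```shell
# groupadd ncsadmin
# groupadd ncsoper
# usermod -a -G ncsadmin username
```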
To set environment variables:
Change to Super User privileges.
The installation program creates a shell script file in each NSO installation, which sets the environment variables needed to run NSO. With the --system-install option, these settings are applied to the shell by default. To set the variables explicitly, source ncs.sh or ncs.csh depending on your shell type.
Start NSO.
NSO starts at boot going forward.
Once you log on with the user that belongs to ncsadmin or ncsoper, you can directly access the CLI as shown below:
As part of the System Install, the NSO daemon ncs is automatically started at boot time. You do not need to create a Runtime Directory for System Install.
To conclude the NSO installation, a license registration token must be created using a Cisco Smart Software Manager (CSSM) account. This is because NSO uses Cisco Smart Licensing to make it easy to deploy and manage NSO license entitlements. Login credentials for the CSSM account are provided by your Cisco contact, and detailed instructions on how to create a registration token can be found in Cisco Smart Licensing. General licensing information covering licensing models, how licensing works, usage compliance, etc., is covered in the Cisco Software Licensing Guide.
To generate a license registration token:
When you have a token, start a Cisco CLI towards NSO and enter the token, for example:
Upon successful registration, NSO automatically requests a license entitlement for its own instance and for the number of devices it orchestrates and their NED types. If development mode has been enabled, only development entitlement for the NSO instance itself is requested.
Inspect the requested entitlements using the command show license all (or by inspecting the NSO daemon log). An example output is shown below.
Frequently Asked Questions (FAQs) about System Install.
Prepare
Install
Finalize
CLI command reference.
To get a full XML listing of the commands available in a running NSO instance, use the ncs option --cli-c-dump <file>. The generated file is only intended for documentation purposes and cannot be used as input to the ncsc compiler. The command show parser dump can be used to get a command listing.
Get started with service development using a simple example.
The device YANG models contained in the Network Element Drivers (NEDs) enable NSO to store device configurations in the CDB and expose a uniform API to the network for automation, such as by Python scripts. The concept of NSO services builds on top of this network API and adds the ability to store service-specific parameters with each service instance.
This section introduces the main service building blocks and shows you how to build one yourself.
Network automation includes provisioning and de-provisioning configuration, even though the de-provisioning part often doesn't get as much attention. It is nevertheless significant since leftover, residual configuration can cause hard-to-diagnose operational problems. Even more importantly, without proper de-provisioning, seemingly trivial changes may prove hard to implement correctly.
Consider the following example. You create a simple script that configures a DNS server on a router, by adding the IP address of the server to the DNS server list. This should work fine for initial provisioning. However, when the IP address of the DNS server changes, the configuration on the router should be updated as well.
Can you still use the same script in this case? Most likely not, since you need to remove the old server from the configuration and add the new one. The original script would just add the new IP address after the old one, resulting in both entries on the device. In turn, the device may experience slow connectivity as the system periodically retries the old DNS IP address and eventually times out.
The following figure illustrates this process, where a simple script first configures the IP address 192.0.2.1 (“.1”) as the DNS server, then later configures 192.0.2.8 (“.8”), resulting in a leftover old entry (“.1”).
In such a situation, the script could perhaps simply replace the existing configuration, by removing all existing DNS server entries before adding the new one. But is this a reliable practice? What if a device requires an additional DNS server that an administrator configured manually? It would be overwritten and lost.
In general, the safest approach is to keep track of the previous changes and only replace the parts that have changed. This, however, is a lot of work and nontrivial to implement yourself. Fortunately, NSO provides such functionality through the FASTMAP algorithm, which is used when deploying services.
The other major benefit of using NSO services for automation is the service interface definition in YANG, which specifies the name and format of the service parameters. Many new NSO users wonder why they should use a service YANG model when they could just use Python code or templates directly. While the benefits may be hard to see without prior experience, YANG allows you to write better, more maintainable code, which simplifies the solution in the long run.
Many, if not most, security issues and provisioning bugs stem from unexpected user input. You must always validate user input (service parameter values) and YANG compels you to think about that when writing the service model. It also makes it easy to write the validation rules by using a standardized syntax, specifically designed for this purpose.
Moreover, the separation of concerns into the user interface, validation, and provisioning code allows for better organization, which becomes extremely important as the project grows. It also gives NSO the ability to automatically expose the service functionality through its APIs for integration with other systems.
For these reasons, services are the preferred way of implementing network automation in NSO.
As you may already know, services are added to NSO with packages. Therefore, you need to create a package if you want to implement a service of your own. NSO ships with an ncs-make-package utility that makes creating packages effortless. Adding the --service-skeleton python option creates a service skeleton, that is, an empty service, which you can tailor to your needs. As the last argument, you must specify the package name, which in this case is the service name. The command then creates a new directory with that name and places all the required files in the appropriate subdirectories.
The package contains the two most important parts of the service:
the service YANG model and
the service provisioning code, also called the mapping logic.
Let's first look at the provisioning part. This is the code that performs the network configuration necessary for your service. The code often includes some parameters, for example, the DNS server IP address or addresses to use if your service is in charge of DNS configuration. So, we say that the code maps the service parameters into the device parameters, which is where the term mapping logic originates from. NSO, with the help of the NED, then translates the device parameters to the actual configuration. This simple tree-to-tree mapping describes how to create the service and NSO automatically infers how to update, remove, or re-deploy the service, hence the name FASTMAP.
How do you create the provisioning code and where do you place it? Is it similar to a stand-alone Python script? Indeed, the code is mostly the same. The main difference is that now you don't have to create a session and a transaction yourself because NSO already provides you with one. Through this transaction, the system tracks the changes to the configuration made by your code.
The package skeleton contains a directory called python. It holds a Python package named after your service. In the package, the ServiceCallbacks class (the main.py file) is used for provisioning code. The same file also contains the Main class, which is responsible for registering the ServiceCallbacks class as a service provisioning code with NSO.
Of the most interest is the cb_create() method of the ServiceCallbacks class:
NSO calls this method for service provisioning. Now, let's see how to evolve a stand-alone automation script into a service. Suppose you have Python code for DNS configuration on a router, similar to the following:
Taking into account the cb_create() signature and the fact that NSO manages the transaction for a service, you won't need the transaction and root variable setup. The NSO service framework already takes care of setting up the root variable with the right transaction. There is also no need to call apply() because NSO does that automatically.
You only have to provide the core of the code (the middle portion in the above stand-alone script) to the cb_create():
You can run this code by adding the service package to NSO and provisioning a service instance. It will achieve the same effect as the stand-alone script but with all the benefits of a service, such as tracking changes.
In practice, all services have some variable parameters. Most often parameter values change from service instance to service instance, as the desired configuration is a little bit different for each of them. They may differ in the actual IP address that they configure or in whether the switch for some feature is on or off. Even the DNS configuration service requires a DNS server IP address, which may be the same across the whole network but could change with time if the DNS server is moved elsewhere. Therefore, it makes sense to expose the variable parts of the service as service parameters. This allows a service operator to set the parameter value without changing the service provisioning code.
With NSO, service parameters are defined in the service model, written in YANG. The YANG module describing your service is part of the service package, located under the src/yang path, and customarily named the same as the package. In addition to the module-related statements (description, revision, imports, and so on), a typical service module includes a YANG list, named after the service. Having a list allows you to configure multiple service instances with slightly different parameter values. For example, in a DNS configuration service, you might have multiple service instances with different DNS servers. The reason is that some devices, such as those in the Demilitarized Zone (DMZ), might not have access to the internal DNS servers and would need to use a different set.
The service model skeleton already contains such a list statement. The following is another example, similar to the one in the skeleton:
Along with the description, the service specifies a key, name, to uniquely identify each service instance. This can be any free-form text, as denoted by its type (string). The statements starting with tailf: are NSO-specific extensions for customizing the user interface NSO presents for this service. After that come two lines, uses and ncs:servicepoint, which tell NSO this is a service and not just an ordinary list. At the end, there are two parameters defined, device and server-ip.
NSO then allows you to add the values for these parameters when configuring a service instance, as shown in the following CLI transcript:
Finally, your Python script can read the supplied values inside the cb_create() method via the provided service variable. This variable points to the currently-provisioning service instance, allowing you to use code such as service.server_ip for the value of the server-ip parameter.
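The server_ip versus server-ip spelling is not a typo: hyphens are not legal in Python identifiers, so the Python API exposes YANG node names with underscores instead. A one-line sketch of the convention:

```python
def python_name(yang_name):
    # YANG node names such as 'server-ip' appear as attributes with
    # hyphens mapped to underscores in the Python service code.
    return yang_name.replace('-', '_')

python_name('server-ip')   # 'server_ip'
python_name('dns-server')  # 'dns_server'
```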
No previous NSO or netsim processes are running. Use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
NSO Local Install with a fresh runtime directory has been created by the ncs-setup --dest ~/nso-lab-rundir or a similar command.
The environment variable NSO_RUNDIR points to this runtime directory, for example, set by running export NSO_RUNDIR=~/nso-lab-rundir.
The getting-started/developing-with-ncs set of examples contains three simulated routers that you can use for this scenario. The 0-router-network directory holds the data necessary for starting the routers and connecting them to your NSO instance.
First, change the current working directory:
From this directory, you can start a fresh set of routers by running the following make command:
The routers are now running. The required NED package and a CDB initialization file, ncs-cdb/ncs_init.xml, were also added to your NSO instance. The latter contains connection details for the routers and will be automatically loaded on the first NSO start.
In case you're not using a fresh working directory, you may need to use the ncs_load command to load the file manually. Older versions of the system may also be missing the above make target, which you can add to the Makefile yourself:
You create a new service package with the ncs-make-package command. Without the --dest option, the package is created in the current working directory. Normally you run the command without this option, as it is shorter. For NSO to find and load this package, it has to be placed (or referenced via a symbolic link) in the packages subfolder of the NSO running directory.
Change the current working directory before creating the package:
You need to provide two parameters to ncs-make-package. The first is the --service-skeleton python option, which selects the Python programming language for scaffolding code. The second parameter is the name of the service. As you are creating a service for DNS configuration, dns-config is a fitting name for it. Run the final, full command:
If you look at the file structure of the newly created package, you will see it contains a number of files.
The package-meta-data.xml describes the package and tells NSO where to find the code. Inside the python folder is a service-specific Python package, where you add your own Python code (to the main.py file). There is also a README file that you can update with the information relevant to your service. The src folder holds the source code that you must compile before you can use it with NSO. That's why there is also a Makefile that takes care of the compilation process. In the yang subfolder is the service YANG module. The templates folder can contain additional XML files, discussed later. Lastly, there's the test folder where you can put automated testing scripts, which won't be discussed here.
While you can always hard-code the desired parameters, such as the DNS server IP address, in the Python code, it means you have to change the code every time the parameter value (the IP address) changes. Instead, you can define it as an input parameter in the YANG file. Fortunately, the skeleton already has a leaf called dummy that you can rename and use for this purpose.
Open the dns-config.yang, located inside dns-config/src/yang/, in a text or code editor and find the following line:
Replace the word dummy with the word dns-server, save the file, and return to the shell. Run the make command in the dns-config/src folder to compile the updated YANG file.
In a text or code editor, open the main.py file, located inside dns-config/python/dns_config/. Find the following snippet:
Right after the self.log.info() call, read the value of the dns-server parameter into a dns_ip variable:
Mind the 8 spaces in front to make sure that the line is correctly aligned. After that, add the code that configures the ex1 router:
Here, you are using the dns_ip variable that contains the operator-provided IP address instead of a hard-coded value. Also, note that there is no need to check if the entry for this DNS server already exists in the list.
In the end, the cb_create() method should look like the following:
Save the file and let's see the service in action!
Start the NSO from the running directory:
Then, start the NSO CLI:
If you have started a fresh NSO instance, the packages are loaded automatically. Still, there's no harm in requesting a package reload anyway:
As you will be making changes on the simulated routers, make sure NSO has their current configuration with the devices sync-from command.
Now you can test out your service package by configuring a service instance. First, enter the configuration mode.
Configure a test instance and specify the DNS server IP address:
The easiest way to see configuration changes from the service code is to use the commit dry-run command.
The output tells you the new DNS server is being added in addition to an existing one already there. Commit the changes:
Finally, change the IP address of the DNS server:
With the help of commit dry-run observe how the old IP address gets replaced with the new one, without any special code needed for provisioning.
The DNS configuration example intentionally performs very little configuration, a single line really, to focus on the service concepts. In practice, services can become more complex in two different ways. First, the DNS configuration service takes the IP address of the DNS server as an input parameter, supplied by the operator. Instead, the provisioning code could leverage another system, such as an IP Address Management (IPAM), to get the required information. In such cases, you have to add additional logic to your service code to generate the parameters (variables) to be used for configuration.
Second, generating the configuration from the parameters can become more complex when it touches multiple subsystems or spans across multiple devices. An example would be a service that adds a new VLAN, configures an IP address and a DHCP server, and adds the new route to a routing protocol. Or perhaps the service has to be duplicated on two separate devices for redundancy.
An established approach to the second challenge is to use a templating system for configuration generation. Templates separate the process of constructing parameter values from how they are used, adding a degree of flexibility and decoupling. NSO uses XML-based configuration (config) templates, which you can invoke from provisioning code or link directly to services. In the latter case, you don't even have to write any Python code.
XML templates are snippets of configuration, similar to the CDB init files, but more powerful. Let's see how you could implement the DNS configuration service using a template instead of navigating the data model with Python.
While you are free to write an XML template by hand, it has to follow the target data model. Fortunately, the NSO CLI can do most of the hard work for you. First, you'll need a sample instance with the desired configuration. As you are configuring the DNS server on a router and the ex1 device already has one configured, you can simply reuse it. Otherwise, you could configure one by hand, using the CLI. You then create the template by displaying the existing configuration in XML format and saving it to a file, piping it through the display xml and save filters, as shown here:
The file structure of a package usually contains a templates folder and that is where the template belongs. When loading packages, NSO will scan this folder and process any .xml files it finds as templates.
Of course, a template with hard-coded values is of limited use, as it would always produce the exact same configuration. It becomes a lot more useful with variable substitution. In its simplest form, you define a variable value in the provisioning (Python) code and reference it from the XML template, by using curly braces and a dollar sign: {$VARIABLE}. Also, many users prefer to keep the variable name uppercased to make it stand out more from the other XML elements in the file. For example, in the template XML file for the DNS service, you would likely replace the IP address 192.0.2.1 with the variable {$DNS_IP} to control its value from the Python code.
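Conceptually, the substitution works like a simple search-and-replace over the {$NAME} placeholders. The following is a minimal stand-in in plain Python, not NSO's actual template engine, just to show the mechanics:

```python
import re

def render(template, variables):
    # Substitute each {$NAME} placeholder with its value from 'variables';
    # an undefined placeholder raises instead of passing through silently.
    def lookup(match):
        name = match.group(1)
        if name not in variables:
            raise KeyError('undefined template variable ' + name)
        return variables[name]
    return re.sub(r'\{\$([A-Za-z0-9_]+)\}', lookup, template)

xml = '<server><address>{$DNS_IP}</address></server>'
render(xml, {'DNS_IP': '192.0.2.8'})
# '<server><address>192.0.2.8</address></server>'
```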
You apply the template by creating a new ncs.template.Template object and calling its apply() method. This method takes the name of the XML template as the first parameter, without the trailing .xml extension, and an object of type ncs.template.Variables as the second parameter. Using the Variables object, you provide values for the variables in the template.
Variables in a template can take the more complex form of an XPath expression, which is where the parameter of the Template constructor comes into play. This parameter defines the root node (starting point) when evaluating XPath paths. Use the provided service variable, unless you specifically need a different value. It is what the so-called template-based services use as well.
Template-based services are no-code, pure template services that contain only a YANG model and an XML template. Since there is no code to set the variables, they must rely on XPath for the dynamic parts of the template. Such services still have a YANG data model with service parameters that XPath can access. For example, if you have a parameter leaf named dns-server in the service YANG file, you can refer to its value with {/dns-server} in the XML template.
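The idea of resolving {/dns-server} against the service instance can be sketched with a simple path walk over a dictionary standing in for the service's data tree (a toy model; NSO evaluates real XPath against CDB, with the service node as the root):

```python
def resolve(service_instance, path):
    # Walk a slash-separated path such as '/dns-server' relative to the
    # service instance, mimicking how a template XPath is evaluated with
    # the service node as its starting point.
    node = service_instance
    for part in path.strip('/').split('/'):
        node = node[part]
    return node

instance = {'name': 'test', 'dns-server': '192.0.2.1', 'device': ['ex1']}
resolve(instance, '/dns-server')  # '192.0.2.1'
```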
Likewise, you can use the same XPath in a template of a Python service. Then you don't have to add this parameter to the variables object but can still access its value in the template, saving you a little bit of Python code.
No previous NSO or netsim processes are running. Use the ncs --stop and ncs-netsim stop commands to stop them if necessary.
NSO Local Install with a fresh runtime directory has been created by the ncs-setup --dest ~/nso-lab-rundir or a similar command.
The environment variable NSO_RUNDIR points to this runtime directory, for example, set by running export NSO_RUNDIR=~/nso-lab-rundir.
The getting-started/developing-with-ncs set of examples contains three simulated routers that you can use for this scenario. The 0-router-network directory holds the data necessary for starting the routers and connecting them to your NSO instance.
First, change the current working directory:
From this directory, you can start a fresh set of routers by running the following make command:
The routers are now running. The required NED package and a CDB initialization file, ncs-cdb/ncs_init.xml, were also added to your NSO instance. The latter contains connection details for the routers and will be automatically loaded on the first NSO start.
In case you're not using a fresh working directory, you may need to use the ncs_load command to load the file manually. Older versions of the system may also be missing the above make target, which you can add to the Makefile yourself:
The DNS configuration service that you are implementing will have three parts: the YANG model, the service code, and the XML template. You will put all of these in a package named dns-config. First, navigate to the packages subdirectory:
Then, run the following command to set up the service package:
In case you are building on top of the previous showcase, the package folder may already exist and will be updated.
You can leave the YANG model as is for this scenario but you need to add some Python code that will apply an XML template during provisioning. In a text or code editor open the main.py file, located inside dns-config/python/dns_config/, and find the definition of the cb_create() function:
You will define one variable for the template, the IP address of the DNS server. To pass its value to the template, you have to create the Variables object and add each variable, along with its value. Replace the body of the cb_create() function with the following:
The template_vars object now contains a value for the DNS_IP template variable, to be used with the apply() method that you are adding next:
Here, the first argument to apply() defines the template to use. In particular, using dns-config-tpl, you are requesting the template from the dns-config-tpl.xml file, which you will be creating shortly.
This is all the Python code that is required. The final, complete cb_create method is as follows:
The most straightforward way to create an XML template is by using the NSO CLI. Return to the running directory and start the NSO:
The --with-package-reload option makes sure that NSO loads any added packages, saving you a packages reload command in the NSO CLI.
Next, start the NSO CLI:
As you are starting with a new NSO instance, first invoke the sync-from action.
Next, make sure that the ex1 router already has an existing entry for a DNS server in its configuration.
Pipe the command through the display xml and save CLI filters to save this configuration in an XML format. According to the Python code, you need to create a template file dns-config-tpl.xml. Use packages/dns-config/templates/dns-config-tpl.xml for the full file path.
At this point, you have created a complete template that will provision 10.2.3.4 as the DNS server on the ex1 device. The only problem is that the IP address is not the one you have specified in the Python code. To correct that, open the dns-config-tpl.xml file in a text editor and replace the line that reads <address>10.2.3.4</address> with the following:
The only static part left in the template now is the target device, and it's possible to parameterize that, too. The skeleton, created by the ncs-make-package command, already contains a device node in the service YANG file. It is there to allow the service operator to choose the target device to be configured.
One way to use the device service parameter is to read its value in the Python code and then set up the template parameters accordingly. However, there is a simpler way with XPath. In the template, replace the line that reads <name>ex1</name> with the following:
The XPath expression inside the curly braces instructs NSO to get the value for the device name from the service instance's data, namely the node called device. In other words, when configuring a new service instance, you have to add the device parameter, which selects the router for provisioning. The final XML template is then:
Remember to save the template file and return to the NSO CLI. Because you have updated the service code, you have to redeploy it for NSO to pick up the changes:
Alternatively, you could call the packages reload command, which does a full reload of all the packages.
Next, enter the configuration mode:
As you are using the device node in the service model for target router selection, configure a service instance for the ex2 router in the following way:
Finally, using the commit dry-run command, observe the ex2 router being configured with an additional DNS server.
As a bonus, because the service template uses an XPath expression that refers to a leaf-list, you can actually select multiple router devices in a single service instance and they will all be configured.
class Main(ncs.application.Application):
    def setup(self):
        # Service callbacks require a registration for a 'service point',
        # as specified in the corresponding data model.
        #
        self.register_service('my-svc-servicepoint', ServiceCallbacks)

Error: no registration found for callpoint my-svc-servicepoint/service_create of type=external

list my-svc {
description "This is an RFS skeleton service";
uses ncs:service-data;
ncs:servicepoint my-svc-servicepoint;
}

action my-test {
  tailf:actionpoint my-test-action;
  input {
    leaf test-string {
      type string;
    }
  }
  output {
    leaf has-nso {
      type boolean;
    }
  }
}

    def setup(self):
        self.register_action('my-test-action', MyTestAction)

class MyTestAction(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output, trans):
self.log.info('Action invoked: ', name)
        output.has_nso = 'NSO' in input.test_string

$ cd $NSO_RUNDIR/packages

$ ncs-make-package --service-skeleton python --action-example count-devices

description

module count-devices {
  namespace "http://example.com/count-devices";
  prefix count-devices;
  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }
  import tailf-ncs {
    prefix ncs;
  }

yang-version 1.1;

container custom-actions {
  action count-devices {
    tailf:actionpoint count-devices-action;
    input {
      leaf in-subnet {
        type inet:ipv4-prefix;
      }
    }
    output {
      leaf result {
        type uint16;
      }
    }
  }
}
}

module count-devices {
  yang-version 1.1;
  namespace "http://example.com/count-devices";
  prefix count-devices;

  import ietf-inet-types {
    prefix inet;
  }
  import tailf-common {
    prefix tailf;
  }
  import tailf-ncs {
    prefix ncs;
  }

  container custom-actions {
    action count-devices {
      tailf:actionpoint count-devices-action;
      input {
        leaf in-subnet {
          type inet:ipv4-prefix;
        }
      }
      output {
        leaf result {
          type uint16;
        }
      }
    }
  }
}

from ipaddress import IPv4Address, IPv4Network
import socket
import ncs
from ncs.dp import Action
class CountDevicesAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output, trans):
        count = 0
        root = ncs.maagic.get_root(trans)
        for device in root.devices.device:
            address = socket.gethostbyname(device.address)
            if IPv4Address(address) in IPv4Network(input.in_subnet):
                count = count + 1
        output.result = count

import ncs

class Main(ncs.application.Application):
    def setup(self):
        self.register_action('count-devices-action', CountDevicesAction)

import ncs
from count_devices_action import CountDevicesAction

class Main(ncs.application.Application):
    def setup(self):
        self.register_action('count-devices-action', CountDevicesAction)

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network
$ cp ncs-cdb/ncs_init.xml $NSO_RUNDIR/ncs-cdb/
$ cp -a packages/router $NSO_RUNDIR/packages/
$ cd $NSO_RUNDIR
$ make -C packages/router/src && make -C packages/count-devices/src
make: Entering directory 'packages/router/src'
< ... output omitted ... >
make: Leaving directory 'packages/router/src'
make: Entering directory 'packages/count-devices/src'
mkdir -p ../load-dir
mkdir -p java/src//
bin/ncsc `ls count-devices-ann.yang > /dev/null 2>&1 && echo "-a count-devices-ann.yang"` \
-c -o ../load-dir/count-devices.fxs yang/count-devices.yang
make: Leaving directory 'packages/count-devices/src'

$ ncs --with-package-reload && ncs_cli -C -u admin

admin@ncs# custom-actions count-devices in-subnet 127.0.0.0/16
result 3

<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
< ... output omitted ... >
<component>
<name>main</name>
<application>
<python-class-name>dns_config.main.Main</python-class-name>
</application>
</component>
</ncs-package>

cd ~/Downloads
ls -l nso*.bin
-rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.installer.bin
-rw-r--r--@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.signed.bin

sh nso-6.0.linux.x86_64.signed.bin
# Output
Unpacking...
Verifying signature...
Downloading CA certificate from http://www.cisco.com/security/pki/certs/crcam2.cer ...
Successfully downloaded and verified crcam2.cer.
Downloading SubCA certificate from http://www.cisco.com/security/pki/certs/innerspace.cer ...
Successfully downloaded and verified innerspace.cer.
Successfully verified root, subca and end-entity certificate chain.
Successfully fetched a public key from tailf.cer.
Successfully verified the signature of nso-6.0.linux.x86_64.installer.bin using tailf.cer

ls -l
# Output
-rw-r--r-- 1 user staff 1.8K Nov 29 06:05 README.signature
-rw-r--r-- 1 user staff 12K Nov 29 06:05 cisco_x509_verify_release.py
-rwxr-xr-x 1 user staff 199M Nov 29 05:55 nso-6.0.linux.x86_64.installer.bin
-rw-r--r-- 1 user staff 256B Nov 29 06:05 nso-6.0.linux.x86_64.installer.bin.signature
-rwxr-xr-x@ 1 user staff 199M Dec 15 11:45 nso-6.0.linux.x86_64.signed.bin
-rw-r--r-- 1 user staff 1.4K Nov 29 06:05 tailf.cer

$ sudo sh nso-VERSION.OS.ARCH.installer.bin --system-install

$ sudo sh nso-6.0.linux.x86_64.installer.bin --system-install

# echo 2 > /proc/sys/vm/overcommit_memory

# groupadd ncsadmin
# groupadd ncsoper

# usermod -a -G 'groupname' 'username'

$ sudo -s
# source /etc/profile.d/ncs.sh

# systemctl daemon-reload
# systemctl start ncs

$ ncs_cli -Cu admin
admin@ncs# license smart register idtoken
YzIzMDM3MTgtZTRkNC00YjkxLTk2ODQtOGEzMTM3OTg5MG
Registration process in progress.
Use the 'show license status' command to check the progress and result.

admin@ncs# show license all
...
<INFO> 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
Smart Licensing Global Notification:
type = "notifyRegisterSuccess",
agentID = "sa1",
enforceMode = "notApplicable",
allowRestricted = false,
failReasonCode = "success",
failMessage = "Successful."
<INFO> 21-Apr-2016::11:29:23.029 miosaterm confd[8226]:
Smart Licensing Entitlement Notification: type = "notifyEnforcementMode",
agentID = "sa1",
notificationTime = "Apr 21 11:29:20 2016",
version = "1.0",
displayName = "regid.2015-10.com.cisco.NSO-network-element",
requestedDate = "Apr 21 11:26:19 2016",
tag = "regid.2015-10.com.cisco.NSO-network-element",
enforceMode = "inCompliance",
daysLeft = 90,
expiryDate = "Jul 20 11:26:19 2016",
requestedCount = 8
...

...
<INFO> 13-Apr-2016::13:22:29.178 miosaterm confd[16260]:
Starting the NCS Smart Licensing Java VM
<INFO> 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
Smart Licensing evaluation time remaining: 90d 0h 0m 0s
...
<INFO> 13-Apr-2016::13:22:34.737 miosaterm confd[16260]:
Smart Licensing evaluation time remaining: 89d 23h 0m 0s
...

<INFO> 21-Apr-2016::11:29:18.022 miosaterm confd[8226]:
Smart Licensing Global Notification:
type = "notifyRegisterSuccess"

admin@ncs# show license status
Smart Licensing is ENABLED
Registration:
Status: REGISTERED
Smart Account: Network Services Orchestrator
Virtual Account: Default
Export-Controlled Functionality: Allowed
Initial Registration: SUCCEEDED on Apr 21 09:29:11 2016 UTC
Last Renewal Attempt: SUCCEEDED on Apr 21 09:29:16 2016 UTC
Next Renewal Attempt: Oct 18 09:29:16 2016 UTC
Registration Expires: Apr 21 09:26:13 2017 UTC
Export-Controlled Functionality: Allowed
License Authorization:
License Authorization:
Status: IN COMPLIANCE on Apr 21 09:29:18 2016 UTC
Last Communication Attempt: SUCCEEDED on Apr 21 09:26:30 2016 UTC
Next Communication Attempt: Apr 21 21:29:32 2016 UTC
Communication Deadline: Apr 21 09:26:13 2017 UTC

openssl command. Generate self-signed certificates for HTTPS.
find command. Used to find out if all required libraries are available.
which command. Used by the NSO package manager.
libpam.so.0. Pluggable Authentication Module library.
libexpat.so.1. EXtensible Markup Language parsing library.
libz.so.1 version 1.2.7.1 or higher. Data compression library.
Google Chrome
ncs-netsim(1): Command to create and manipulate a simulated network.
ncs-setup(1): Command to create an initial NSO setup.
ncs.conf: NSO daemon configuration file format.


To show the aaa settings for the admin user:
To show all users that have group ID 1000, omit the user ID and instead specify gid 1000:
and-quit: Commit to running and quit configure mode.
comment <text>: Associate a comment with the commit. The comment can later be seen when examining rollback files.
label <text>: Associate a label with the commit. The label can later be seen when examining rollback files.
override
replace
Configuration from file/terminal replaces the current configuration.
If this is the current configuration:
The shutdown value for the entry GigabitEthernet 0/0/0/0 should be deleted. As the configuration file is basically just a sequence of commands with comments in between, the configuration file should look like this:
The file can then be used with the command load merge FILENAME to achieve the desired results.
merge
override
Configuration from file/terminal overwrites the current configuration.
replace
Configuration from file/terminal replaces the current configuration.
fixed-number <number> to address an absolute rollback number or id <number> to address a relative number. For example, the latest commit has relative rollback id 0, the second-latest has id 1, and so on. The optional path argument allows subtrees to be rolled back while the rest of the configuration tree remains unchanged.
Instead of undoing all changes from rollback10001 to rollbackN, it is possible to undo only the changes stored in a specific rollback file. This may or may not work depending on which changes have been made to the configuration after the rollback was created. In some cases, applying the rollback file may fail, or the configuration may require additional changes to be valid. For example, to undo the changes recorded in rollback 10019, but not the changes in 10020-N, run the command rollback-files apply-rollback-file selective fixed-number 10019.
Example:
This command is only available if rollback has been enabled in ncs.conf.
eval
Evaluate an XPath expression.
must
Evaluate the expression as a YANG must expression.
when
Evaluate the expression as a YANG when expression.
list
Display the current set of commands.
tailf:action shutdown {
tailf:actionpoint actions;
input {
tailf:constant-leaf flags {
type uint64 {
range "1 .. max";
}
tailf:constant-value 42;
}
leaf timeout {
type xs:duration;
default PT60S;
}
leaf message {
type string;
}
container options {
leaf rebootAfterShutdown {
type boolean;
default false;
}
leaf forceFsckAfterReboot {
type boolean;
default false;
}
leaf powerOffAfterShutdown {
type boolean;
default true;
}
}
}
}

admin@ncs> shutdown timeout 10s message reboot options { \
forceFsckAfterReboot true }

admin@ncs# commit abort

$ man ncs.conf

$ ncs --help

$ ncsc --help

sh nso-6.0.linux.x86_64.installer.bin --help

# cat /proc/meminfo | grep "MemTotal\|SwapTotal"
MemTotal: 8039352 kB
SwapTotal: 1048572 kB

100 * ((8039352-1048572)/8039352) = ~86.9%

# echo 2 > /proc/sys/vm/overcommit_memory
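The ratio used in the next command follows from the formula 100 * (MemTotal - SwapTotal) / MemTotal, chosen so that the kernel's commit limit (SwapTotal + overcommit_ratio% * MemTotal when vm.overcommit_memory is 2) lands at the size of physical RAM. A quick sketch, in plain Python, to verify the arithmetic with the /proc/meminfo values shown above:

```python
def commit_limit_kb(mem_total_kb, swap_total_kb, ratio_percent):
    # With vm.overcommit_memory=2, the kernel caps committed memory at
    # CommitLimit = SwapTotal + overcommit_ratio% * MemTotal.
    return swap_total_kb + ratio_percent / 100.0 * mem_total_kb

def ratio_for_ram_only(mem_total_kb, swap_total_kb):
    # Pick the ratio so that CommitLimit equals physical RAM:
    # 100 * (MemTotal - SwapTotal) / MemTotal.
    return 100.0 * (mem_total_kb - swap_total_kb) / mem_total_kb

ratio = ratio_for_ram_only(8039352, 1048572)  # about 86.96, i.e. the ~86.9% above
```

Plugging the resulting ratio back into the commit-limit formula gives a limit equal to MemTotal, which is the point of the exercise.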
# echo 86.9 > /proc/sys/vm/overcommit_ratio

# cat /proc/meminfo | grep "MemTotal\|SwapTotal"
MemTotal: 16000000 kB
SwapTotal: 16000000 kB

# echo 2 > /proc/sys/vm/overcommit_memory
# echo 100 > /proc/sys/vm/overcommit_ratio

$ ncs_cli -C

def cb_create(self, tctx, root, service, proplist)

with ncs.maapi.single_write_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    ex1_device = root.devices.device['ex1']
    ex1_config = ex1_device.config
    dns_server_list = ex1_config.sys.dns.server
    dns_server_list.create('192.0.2.1')
    t.apply()

def cb_create(self, tctx, root, service, proplist):
    ex1_device = root.devices.device['ex1']
    ex1_config = ex1_device.config
    dns_server_list = ex1_config.sys.dns.server
    dns_server_list.create('192.0.2.1')

list my-svc {
description "This is an RFS skeleton service";
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint my-svc-servicepoint;
// Devices configured by this service instance
leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
// An example generic parameter
leaf server-ip {
type inet:ipv4-address;
}
}

admin@ncs(config)# my-svc instance1 ?
Possible completions:
check-sync Check if device config is according to the service
commit-queue
deep-check-sync Check if device config is according to the service
device
< ... output omitted ... >
server-ip
< ... output omitted ... >

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network

$ make showcase-clean-start
< ... output omitted ... >
DEVICE ex0 OK STARTED
DEVICE ex1 OK STARTED
DEVICE ex2 OK STARTED
make: Leaving directory 'examples.ncs/getting-started/developing-with-ncs/0-router-network'

showcase-clean-start:
	$(MAKE) clean all
	cp ncs-cdb/ncs_init.xml ${NSO_RUNDIR}/ncs-cdb/
	cp -a ../packages/router ${NSO_RUNDIR}/packages/
	ncs-netsim start

$ cd $NSO_RUNDIR/packages

$ ncs-make-package --service-skeleton python dns-config

dns-config/
+-- package-meta-data.xml
+-- python
| '-- dns_config
| +-- __init__.py
| '-- main.py
+-- README
+-- src
| +-- Makefile
| '-- yang
| '-- dns-config.yang
+-- templates
'-- test
+-- < ... output omitted ... >

leaf dummy {

$ make -C dns-config/src
make: Entering directory 'dns-config/src'
mkdir -p ../load-dir
mkdir -p java/src//
bin/ncsc `ls dns-config-ann.yang > /dev/null 2>&1 && echo "-a dns-config-ann.yang"` \
-c -o ../load-dir/dns-config.fxs yang/dns-config.yang
make: Leaving directory 'dns-config/src'

@Service.create
def cb_create(self, tctx, root, service, proplist):
    self.log.info('Service create(service=', service._path, ')')
    dns_ip = service.dns_server
    ex1_device = root.devices.device['ex1']
    ex1_config = ex1_device.config
    dns_server_list = ex1_config.sys.dns.server
    dns_server_list.create(dns_ip)

@Service.create
def cb_create(self, tctx, root, service, proplist):
    self.log.info('Service create(service=', service._path, ')')
    dns_ip = service.dns_server
    ex1_device = root.devices.device['ex1']
    ex1_config = ex1_device.config
    dns_server_list = ex1_config.sys.dns.server
    dns_server_list.create(dns_ip)

$ cd $NSO_RUNDIR; ncs

$ ncs_cli -C -u admin

admin@ncs# packages reload
reload-result {
package dns-config
result true
}
reload-result {
package router-nc-1.0
result true
}

admin@ncs# devices sync-from
sync-result {
device ex0
result true
}
sync-result {
device ex1
result true
}
sync-result {
device ex2
result true
}

admin@ncs# config

admin@ncs(config)# dns-config test dns-server 192.0.2.1

admin@ncs(config-dns-config-test)# commit dry-run
cli {
local-node {
data devices {
device ex1 {
config {
sys {
dns {
+ # after server 10.2.3.4
+ server 192.0.2.1;
}
}
}
}
}
+dns-config test {
+ dns-server 192.0.2.1;
+}
}
}

admin@ncs(config-dns-config-test)# commit

admin@ncs(config-dns-config-test)# dns-server 192.0.2.8

admin@ncs(config-dns-config-test)# commit dry-run
cli {
local-node {
data devices {
device ex1 {
config {
sys {
dns {
- server 192.0.2.1;
+ # after server 10.2.3.4
+ server 192.0.2.8;
}
}
}
}
}
dns-config test {
- dns-server 192.0.2.1;
+ dns-server 192.0.2.8;
}
}
}

admin@ncs# show running-config devices device ex1 config sys dns | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ex1</name>
<config>
<sys xmlns="http://example.com/router">
<dns>
<server>
<address>192.0.2.1</address>
</server>
</dns>
</sys>
</config>
</device>
</devices>
</config>
admin@ncs# show running-config devices device ex1 config sys dns | \
display xml | save template.xml

template_vars = ncs.template.Variables()
template_vars.add('VARIABLE', 'some value')
template = ncs.template.Template(service)
template.apply('template', template_vars)

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network

$ make showcase-clean-start
< ... output omitted ... >
DEVICE ex0 OK STARTED
DEVICE ex1 OK STARTED
DEVICE ex2 OK STARTED
make: Leaving directory 'examples.ncs/getting-started/developing-with-ncs/0-router-network'

showcase-clean-start:
$(MAKE) clean all
cp ncs-cdb/ncs_init.xml ${NSO_RUNDIR}/ncs-cdb/
cp -a ../packages/router ${NSO_RUNDIR}/packages/
ncs-netsim start

$ cd $NSO_RUNDIR/packages

$ ncs-make-package --build --service-skeleton python dns-config
bin/ncsc `ls dns-config-ann.yang > /dev/null 2>&1 && echo "-a dns-config-ann.yang"` \
-c -o ../load-dir/dns-config.fxs yang/dns-config.yang

@Service.create
def cb_create(self, tctx, root, service, proplist):
    ...
    template_vars = ncs.template.Variables()
    template_vars.add('DNS_IP', '192.0.2.1')
    template = ncs.template.Template(service)
    template.apply('dns-config-tpl', template_vars)

@Service.create
def cb_create(self, tctx, root, service, proplist):
    template_vars = ncs.template.Variables()
    template_vars.add('DNS_IP', '192.0.2.1')
    template = ncs.template.Template(service)
    template.apply('dns-config-tpl', template_vars)

$ cd $NSO_RUNDIR && ncs --with-package-reload

$ ncs_cli -C -u admin

admin@ncs# devices sync-from
sync-result {
device ex0
result true
}
sync-result {
device ex1
result true
}
sync-result {
device ex2
result true
}

admin@ncs# show running-config devices device ex1 config sys dns
devices device ex1
config
sys dns server 10.2.3.4
!
!
!

admin@ncs# show running-config devices device ex1 config sys dns \
| display xml | save packages/dns-config/templates/dns-config-tpl.xml

<address>{$DNS_IP}</address>

leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}

<name>{/device}</name>

<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<sys xmlns="http://example.com/router">
<dns>
<server>
<address>{$DNS_IP}</address>
</server>
</dns>
</sys>
</config>
</device>
</devices>
</config>

admin@ncs# packages package dns-config redeploy
result true

admin@ncs# config

admin@ncs(config)# dns-config dns-for-ex2 device ex2

admin@ncs(config-dns-config-dns-for-ex2)# commit dry-run

admin@ncs# config terminal
Entering configuration mode terminal

admin@ncs# file list /config
rollback10001
rollback10002
rollback10003
rollback10004
rollback10005

admin@ncs# file show /etc/skel/.bash_profile
# /etc/skel/.bash_profile
# This file is sourced by bash for login shells. The following line
# runs our .bashrc and is recommended by the bash info pages.
[[ -f ~/.bashrc ]] && . ~/.bashrc

admin@ncs# help job
Help for command: job
Job operations

admin@ncs# monitor start /var/log/messages
[ok]
[...]
admin@ncs# show jobs
JOB COMMAND
3 monitor start /var/log/messages
admin@ncs# job stop 3
admin@ncs# show jobs
JOB COMMAND

admin@ncs# who
Session User Context From Proto Date Mode
25 oper cli 192.168.1.72 ssh 12:10:40 operational
*24 admin cli 192.168.1.72 ssh 12:05:50 operational
admin@ncs# logout session 25
admin@ncs# who
Session User Context From Proto Date Mode
*24 admin cli 192.168.1.72 ssh 12:05:50 operational

admin@ncs# who
Session User Context From Proto Date Mode
25 oper cli 192.168.1.72 ssh 12:10:40 operational
*24 admin cli 192.168.1.72 ssh 12:05:50 operational
admin@ncs# logout user oper
admin@ncs# who
Session User Context From Proto Date Mode
*24 admin cli 192.168.1.72 ssh 12:05:50 operational

admin@ncs# send oper "I will reboot system in 5 minutes."

admin@ncs# show cli
autowizard false
complete-on-space true
display-level 99999999
history 100
idle-timeout 1800
ignore-leading-space false
output-file terminal
paginate true
prompt1 \h\M#
prompt2 \h(\m)#
screen-length 71
screen-width 80
service prompt config true
show-defaults false
terminal xterm-256color
timestamp disable

admin@ncs# show history
06-19 14:34:02 -- ping router
06-20 14:42:35 -- show running-config
06-20 14:42:37 -- who
06-20 14:42:40 -- show history
admin@ncs# show history 3
14:42:37 -- who
14:42:40 -- show history
14:42:46 -- show history 3

admin@ncs# show jobs
JOB COMMAND
3 monitor start /var/log/messages

admin@ncs# timecmd id
user = admin(501), gid=20, groups=admin, gids=12,20,33,61,79,80,81,98,100
Command executed in 0.00 sec
admin@ncs#

admin@ncs# who
Session User Context From Proto Date Mode
25 oper cli 192.168.1.72 ssh 12:10:40 operational
*24 admin cli 192.168.1.72 ssh 12:05:50 operational
admin@ncs#

admin@ncs(config)# devices template host_temp
admin@ncs(config-template-host_temp)# exit
admin@ncs(config)# copy cfg merge devices device ce0 config \
ios:ethernet to devices template host_temp config ios:ethernet
admin@ncs(config)# show configuration diff
+devices template host_temp
+ config
+ ios:ethernet cfm global
+ !
+!

admin@ncs# timecmd id
user = admin(501), gid=20, groups=admin, gids=12,20,33,61,79,80,81,98,100
Command executed in 0.00 sec
admin@ncs#

oper@ncs#
Message from admin@ncs at 13:16:41...
I will reboot system in 5 minutes.
EOF

admin@ncs# show running-config aaa authentication users user admin
aaa authentication users user admin
uid 1000
gid 1000
password $1$JA.1O3Tx$Zt1ycpnMlg1bVMqM/zSZ7/
ssh_keydir /var/ncs/homes/admin/.ssh
homedir /var/ncs/homes/admin
!

admin@ncs# show running-config aaa authentication users user * gid 1000
...

admin@ncs# show devices device ce0 module
NAME REVISION FEATURE DEVIATION
-----------------------------------------------------------
tailf-ned-cisco-ios 2015-03-16 - -
tailf-ned-cisco-ios-stats 2015-03-16 - -

devices device p1
config
cisco-ios-xr:interface GigabitEthernet 0/0/0/0
shutdown
exit
cisco-ios-xr:interface GigabitEthernet 0/0/0/1
shutdown
!
!

devices device p1
config
cisco-ios-xr:interface GigabitEthernet 0/0/0/0
no shutdown
exit
!
!

admin@ncs(config)# rollback-files apply-rollback-file fixed-number 10005

Create NETCONF NEDs.
Creating and installing a NETCONF NED consists of the following steps:
Make the device YANG data models available to NSO
Build the NED package from the YANG data models using NSO tools
Install the NED with NSO
Configure the device connection and notification events in NSO
Creating a NETCONF NED that uses the built-in NSO NETCONF client can be a pleasant experience with devices and nodes that strictly follow the specification for the NETCONF protocol and the YANG mappings to NETCONF. If the device does not, the smooth sailing will quickly come to a halt, and you are recommended to consult the support information in Administration and get help from the Cisco NSO NED team, who can diagnose, develop, and maintain NEDs that work around the special quirks of misbehaving devices.
Before NSO can manage a NETCONF-capable device, a corresponding NETCONF NED needs to be loaded. While no code needs to be written for such a NED, it must contain YANG data models for this kind of device. In some cases, the YANG models may be provided by the device's vendor; devices that implement RFC 6022 (YANG Module for NETCONF Monitoring) can also provide their YANG models using the functionality described in that RFC.
The NSO example under $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned implements two shell scripts that use different tools to build a NETCONF NED from a simulated hardware chassis system controller device.
netconf-console and ncs-make-package Tools

The netconf-console NETCONF client tool is a Python script that can be used for testing, debugging, and simple client duties. For example, it can make the device YANG models available to NSO using the RFC 6022 get-schema operation to download YANG modules, or the RFC 6241 get operation where the device implements the RFC 7895 YANG module library to provide information about all the YANG modules used by the NETCONF server. Type netconf-console -h for documentation.
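Conceptually, the schema download that netconf-console performs is a small RFC 6022 RPC. The sketch below shows the shape of such a request built with Python's standard library; it is an illustration only, not how netconf-console itself is implemented:

```python
import xml.etree.ElementTree as ET

NETCONF_BASE = "urn:ietf:params:xml:ns:netconf:base:1.0"
MONITORING = "urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"

def build_get_schema_rpc(identifier, message_id="1"):
    """Build an RFC 6022 <get-schema> request for one YANG module."""
    rpc = ET.Element(f"{{{NETCONF_BASE}}}rpc", {"message-id": message_id})
    get_schema = ET.SubElement(rpc, f"{{{MONITORING}}}get-schema")
    ET.SubElement(get_schema, f"{{{MONITORING}}}identifier").text = identifier
    # Ask for YANG source (the server may also offer other formats such as yin)
    ET.SubElement(get_schema, f"{{{MONITORING}}}format").text = "yang"
    return ET.tostring(rpc, encoding="unicode")

print(build_get_schema_rpc("ietf-hardware"))
```

The reply carries the requested module's YANG source, which is what ends up in the local dev-yang directory in the steps below.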
Once the required YANG models are downloaded or copied from the device, the ncs-make-package bash script tool can be used to create and build, for example, the NETCONF NED package. See ncs-make-package in Manual Pages and ncs-make-package -h for documentation.
The demo.sh script in the netconf-ned example uses the netconf-console and ncs-make-package combination to create, build, and install the NETCONF NED. When you know beforehand which models you need from the device, you often begin with this approach when encountering a new NETCONF device.
The NETCONF NED builder uses the functionality of the two previous tools to assist the NSO developer onboard NETCONF devices by fetching the YANG models from a device and building a NETCONF NED using CLI commands as a frontend.
The demo_nb.sh script in the netconf-ned example uses the NSO CLI NETCONF NED builder commands to create, build, and install the NETCONF NED. This tool can be beneficial for a device where the YANG models are required to cover the dependencies of the must-have models. Also, devices known to have behaved well with previous versions can benefit from using this tool and its selection profile and production packaging features.
netconf-console and ncs-make-package Combination

For a demo of the steps below, see README in the $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned example and run the demo.sh script.
List the YANG version 1.0 models the device supports using the NETCONF hello message.
List the YANG version 1.1 models supported by the device from the device yang-library.
The ietf-hardware.yang model is of interest to manage the device hardware. Use the netconf-console NETCONF get-schema operation to get the ietf-hardware.yang model.
The ietf-hardware.yang module imports a few YANG models.
Two of the imported YANG models are shipped with NSO.
Use the netconf-console NETCONF get-schema operation to get the iana-hardware.yang module.
The timestamp-hardware.yang module augments a node onto the ietf-hardware.yang model. This is not visible in the YANG library. Therefore, information on the augment dependency must be available, or all YANG models must be downloaded and checked for imports and augments of the ietf-hardware.yang model to make use of the augmented node(s).
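Since augments do not show up in the YANG library, one pragmatic option is to scan the downloaded sources for import and augment statements. A rough sketch follows; the regular expressions only catch simple top-level statements, and the timestamp-hardware fragment is a hypothetical stand-in for the real module:

```python
import re

def yang_dependencies(yang_source):
    """Extract the modules a YANG file imports and the paths it augments."""
    imports = re.findall(r'^\s*import\s+([\w.-]+)', yang_source, re.MULTILINE)
    augments = re.findall(r'^\s*augment\s+"([^"]+)"', yang_source, re.MULTILINE)
    return imports, augments

# Hypothetical fragment standing in for timestamp-hardware.yang
src = '''
module timestamp-hardware {
  import ietf-hardware { prefix hw; }
  augment "/hw:hardware/hw:component" {
    leaf last-change { type string; }
  }
}
'''
imports, augments = yang_dependencies(src)
print(imports)   # modules this file depends on
print(augments)  # paths it augments, revealing the augmented module
```

Running this over every downloaded file shows which modules touch ietf-hardware even though the augment relationship is invisible in the library listing.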
Create and build the NETCONF NED package from the device YANG models using the ncs-make-package script.
If you make any changes to, for example, the YANG models after creating the package above, you can rebuild the package using make -C nso-rundir/packages/devsim all.
Start NSO. NSO will load the new package. If the package was loaded previously, use the --with-package-reload option. See ncs in Manual Pages for details. If NSO is already running, use the packages reload CLI command.
As communication with the devices being managed by NSO requires authentication, a custom authentication group will likely need to be created with mapping between the NSO user and the remote device username and password, SSH public-key authentication, or external authentication. The example used here has a 1-1 mapping between the NSO admin user and the ConfD-enabled simulated device admin user for both username and password.
In the example below, the device name is set to hw0, and as the device here runs on the same host as NSO, the NETCONF interface IP address is 127.0.0.1 while the port is set to 12022 to not collide with the NSO northbound NETCONF port. The standard NETCONF port, 830, is used for production.
The default authentication group, as shown above, is used.
Fetch the public SSH host key from the device and sync the configuration covered by the ietf-hardware.yang from the device.
NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned/demo.sh example script for a demo.
For a demo of the steps below, see README in the $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned example and run the demo_nb.sh script.
As communication with the devices being managed by NSO requires authentication, a custom authentication group will likely need to be created with mapping between the NSO user and the remote device username and password, SSH public-key authentication, or external authentication.
The example used here has a 1-1 mapping between the NSO admin user and the ConfD-enabled simulated device admin user for both username and password.
In the example below, the device name is set to hw0, and as the device here runs on the same host as NSO, the NETCONF interface IP address is 127.0.0.1 while the port is set to 12022 to not collide with the NSO northbound NETCONF port. The standard NETCONF port, 830, is used for production.
The default authentication group, as shown above, is used.
Create a NETCONF NED Builder project called hardware for the device, here named hw0.
The NETCONF NED Builder is a developer tool that must be enabled first through the devtools true command. The NETCONF NED Builder feature is not expected to be used by the end users of NSO.
The cache directory above is where additional YANG and YANG annotation files can be added in addition to the ones downloaded from the device. Files added need to be configured with the NED builder to be included with the project, as described below.
The project argument for the netconf-ned-builder command requires both the project name and a version number for the NED being built. A version number often picked is the version number of the device software version to match the NED to the device software it is tested with. NSO uses the project name and version number to create the NED name, here hardware-nc-1.0. The device's name is linked to the device name configured for the device connection.
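The naming rule described above is simple enough to state as code; this hypothetical helper just mirrors the project-name-nc-version pattern that produces hardware-nc-1.0:

```python
def ned_name(project_name, version):
    """NSO derives the NETCONF NED name as <project>-nc-<version>."""
    return f"{project_name}-nc-{version}"

print(ned_name("hardware", "1.0"))  # hardware-nc-1.0
```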
Copying Manually to the Cache Directory:
After downloading the YANG data models and before building the NED with the NED builder, you need to register the YANG module with the NSO NED builder. For example, if you want to include a dummy.yang module with the NED, you first copy it to the cache directory and then, for example, create an XML file for use with the ncs_load command to update the NSO CDB operational datastore:
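If you register cached modules often, generating that XML payload can be scripted. A sketch that emits a registration file equivalent to the dummy.xml shown in the transcript below (the element names mirror the payload as it appears there):

```python
import xml.etree.ElementTree as ET

def module_registration_xml(family, version, name, revision):
    """Emit an ncs_load payload registering a manually cached YANG module."""
    return f"""<config xmlns="http://tail-f.com/ns/config/1.0">
  <netconf-ned-builder xmlns="http://tail-f.com/ns/ncs/netconf-ned-builder">
    <project>
      <family-name>{family}</family-name>
      <major-version>{version}</major-version>
      <module>
        <name>{name}</name>
        <revision>{revision}</revision>
        <location>NETCONF</location>
        <status>selected downloaded</status>
      </module>
    </project>
  </netconf-ned-builder>
</config>"""

payload = module_registration_xml("hardware", "1.0", "dummy", "2023-11-10")
ET.fromstring(payload)  # sanity check: the generated document is well-formed
print(payload)
```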
In some situations, you want to annotate the YANG data models that were downloaded from the device. For example, when an encrypted string is stored on the device, the encrypted value that is stored on the device will differ from the value stored in NSO if the two initialization vectors differ.
Say you have a YANG data model:
And create a YANG annotation module:
After downloading the YANG data models and before building the NED with the NED builder, you need to register the dummy-ann.yang annotation module, as was done above with the XML file for the dummy.yang module.
get-schema with the NED Builder

If the device supports get-schema requests, the device can be contacted directly to download the YANG data models. The hardware system example returns the below YANG source files when the NETCONF get-schema operation is issued to the device from NSO. Only a subset of the list is shown.
The fetch-ssh-host-key command fetches the public SSH host key from the device to set up NETCONF over SSH. The fetch-module-list command will look for existing YANG modules in the download-cache-path folder, YANG version 1.0 models in the device NETCONF hello message, and issue a get operation to look for YANG version 1.1 models in the device yang-library. The get-schema operation fetches the YANG modules over NETCONF and puts them in the download-cache-path folder.
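The YANG 1.0 part of that module list comes from the capability URIs in the hello message, which embed module= and revision= query parameters. A small illustration of that parsing with Python's standard library:

```python
from urllib.parse import urlparse, parse_qs

def modules_from_hello(capabilities):
    """Extract (module, revision) pairs from NETCONF hello capability URIs."""
    result = []
    for uri in capabilities:
        qs = parse_qs(urlparse(uri).query)
        if "module" in qs:
            result.append((qs["module"][0], qs.get("revision", [None])[0]))
    return result

caps = [
    "urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&revision=2013-07-15",
    "urn:ietf:params:xml:ns:netconf:base:1.0",  # no module= parameter: skipped
]
print(modules_from_hello(caps))
```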
After the list of YANG modules is fetched, the retrieved list of modules can be shown. Select the ones you want to download and include in the NETCONF NED.
When you select a module with dependencies on other modules, the modules it depends on are automatically selected, such as those listed below for the ietf-hardware module, including iana-hardware, ietf-inet-types, and ietf-yang-types. To select all available modules, use the wild card for both fields. Use the deselect command to exclude modules previously included from the build.
Before diving into more details, the principles of selecting the modules for inclusion in the NED are crucial steps in building the NED and deserve to be highlighted.
The best practice recommendation is to select only the modules necessary to perform the tasks for the given NSO deployment to reduce memory consumption, for example, for the sync-from command, and improve upgrade wall-clock performance.
For example, suppose the aim of the NSO installation is exclusively to manage BGP on the device, and the necessary configuration is defined in a separate module. In that case, only this module and its dependencies need to be selected. If several services are running within the NSO deployment, it will be necessary to include more data models in the single NED that may serve one or many devices. However, if the NSO installation is used to, for example, take a full backup of the device's configuration, all device modules need to be included with the NED.
Selecting a module will also require selecting the module's dependencies, namely, modules imported by the selected modules, modules that augment the selected modules with the required functionality, and modules known to deviate from the selected module in the device's implementation.
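Computing the dependency closure that the select command performs automatically can be sketched as a simple graph traversal; the import graph below is a hypothetical, hand-written stand-in for what would be extracted from the YANG sources:

```python
def select_with_dependencies(module, imports):
    """Return the module plus everything it transitively imports."""
    selected, stack = set(), [module]
    while stack:
        m = stack.pop()
        if m not in selected:
            selected.add(m)
            stack.extend(imports.get(m, []))
    return selected

# Hypothetical import graph mirroring the ietf-hardware example above
imports = {
    "ietf-hardware": ["iana-hardware", "ietf-inet-types", "ietf-yang-types"],
}
print(sorted(select_with_dependencies("ietf-hardware", imports)))
```

Augments and deviations are not captured by import statements alone, which is why they need the extra attention described above.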
Avoid selecting YANG modules that overlap where, for example, configuring one leaf will update another. Including both will cause NSO to get out of sync with the device after a NETCONF edit-config operation, forcing time-consuming sync operations.
An NSO NED is a package containing the device YANG data models. The NED package must first be built, then installed with NSO, and finally, the package must be loaded for NSO to communicate with the device via NETCONF using the device YANG data models as the schema for what to configure, state to read, etc.
After the files have been downloaded from the device, they must be built before being used. The following example shows how to build a NED for the hw0 device.
Warnings after building the NED can be found in the build-warning leaf under the module list entry. It is good practice to clean up build warnings in your YANG models.
A build error example:
The full compiler output for debugging purposes can be found in the compiler-output leaf under the project list entry. The compiler-output leaf is hidden by hide-group debug and may be accessed in the CLI using the unhide debug command if the hide-group is configured in ncs.conf. Example ncs.conf config:
For the ncs.conf configuration change to take effect, it must be either reloaded or NSO restarted. A reload using the ncs_cmd tool:
As the compilation will halt if an error is found in a YANG data model, it can be helpful to first check all YANG data models at once using a shell script plus the NSO yanger tool.
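The same batch check can be sketched in Python instead of a shell script; the yanger invocation here, including the -p search-path flag, is an assumption to adapt to your installation:

```python
import glob
import subprocess

def yanger_command(yang_file, search_path="dev-yang"):
    """Command line to syntax-check one YANG file (yanger flags assumed)."""
    return ["yanger", "-p", search_path, yang_file]

def check_all_yang(search_path="dev-yang"):
    """Run the checker over every .yang file, returning the failures."""
    failures = []
    for path in sorted(glob.glob(f"{search_path}/*.yang")):
        proc = subprocess.run(yanger_command(path, search_path),
                              capture_output=True, text=True)
        if proc.returncode != 0:
            failures.append((path, proc.stderr.strip()))
    return failures

print(yanger_command("dev-yang/ietf-hardware.yang"))
```

Checking all files up front surfaces every compliance problem in one pass, instead of discovering them one at a time as the NED build halts.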
As an alternative to debugging the NED building issues inside an NSO CLI session, the make-development-ned action creates a development version of NED, which can be used to debug and fix the issue in the YANG module.
YANG data models that do not compile due to YANG RFC compliance issues can either be updated in the cache folder directly or fixed on the device and re-uploaded through the get-schema operation, by removing them from the cache folder and repeating the previous process to rebuild the NED. YANG modules can also be deselected from the build if they are not needed for your use case.
A successfully built NED may be exported as a tar file using the export-ned action. The tar file name is constructed according to the naming convention below.
The user chooses the directory the file needs to be created in. The user must have write access to the directory. I.e., configure the NSO user with the same uid (id -u) as the non-root user:
When the NED package has been copied to the NSO run-time packages directory, the NED package can be loaded by NSO.
ned-id for the hw0 Device

When the NETCONF NED has been built for the hw0 device, the ned-id for hw0 needs to be updated before the NED can be used to manage the device.
NSO can now configure the device, state data can be read, actions can be executed, and notifications can be received. See the $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned/demo-nb.sh example script for a demo.
Installed NED packages can be removed from NSO by deleting them from the NSO project's packages folder and then deleting the device and the NETCONF NED project through the NSO CLI. To uninstall a NED built for the device hw0:
Handle tasks that require root privileges.
NSO requires some privileges to perform certain tasks. The following tasks may, depending on the target system, require root privileges.
Binding to privileged ports. The ncs.conf configuration file specifies which port numbers NSO should bind(2) to. If any of these port numbers are lower than 1024, NSO usually requires root privileges unless the target operating system allows NSO to bind to these ports as a non-root user.
If PAM is to be used for authentication, the program installed as $NCS_DIR/lib/ncs/priv/pam/epam acts as a PAM client. Depending on the local PAM configuration, this program may require root privileges. If PAM is configured to read the local passwd file, the program must either run as root or be setuid root. If the local PAM configuration instructs NSO to run, for example, pam_radius_auth, root privileges are possibly not required depending on the local PAM installation.
If the CLI is used and we want to create CLI commands that run executables, we may want to modify the permissions of the $NCS_DIR/lib/ncs/lib/core/confd/priv/cmdptywrapper program.
To be able to run an executable as root or as a specific user, we need to make cmdptywrapper setuid root, i.e.:

# chown root cmdptywrapper
# chmod u+s cmdptywrapper

Failing that, all programs will be executed as the user running the ncs daemon. Consequently, if that user is root, we do not have to perform the chmod operations above. The same applies to executables run via actions, but then we may want to modify the permissions of the $NCS_DIR/lib/ncs/lib/core/confd/priv/cmdwrapper program instead:

# chown root cmdwrapper
# chmod u+s cmdwrapper

NSO can be instructed to terminate NETCONF over cleartext TCP. This is useful for debugging since the NETCONF traffic can then be easily captured and analyzed. It is also useful if we want to provide some local proprietary transport mechanism that is not SSH. Cleartext TCP termination is not authenticated; the cleartext client simply tells NSO which user the session should run as. The idea is that authentication is already done by some external entity, such as an SSH server. If cleartext TCP is enabled, NSO must bind to localhost (127.0.0.1) for these connections.

Client libraries connect to NSO. For example, the CDB API is TCP-based, and a CDB client connects to NSO. We instruct NSO which address to use for these connections through the ncs.conf parameters /ncs-config/ncs-ipc-address/ip (default address 127.0.0.1) and /ncs-config/ncs-ipc-address/port (default port 4565).

NSO multiplexes different kinds of connections on the same socket (IP and port combination). The following programs connect on the socket:

The ncs_cli program.

Remote commands, such as ncs --reload.

CDB clients.

External database API clients.

MAAPI, the Management Agent API clients.

By default, all of the above are considered trusted. MAAPI clients and ncs_cli are expected to authenticate the user before connecting to NSO, whereas CDB clients and external database API clients are considered trusted and do not have to authenticate.

Thus, since the ncs-ipc-address socket allows full unauthenticated access to the system, it is important to ensure that the socket is not accessible from untrusted networks. It is also possible to restrict access to this socket by means of an access check; see the Administration documentation for details.
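Since these IPC parameters live in ncs.conf, a deployment or monitoring script may want to read them out rather than hard-code them. A sketch using Python's standard library; the ncs.conf XML namespace shown is an assumption to verify against your installation:

```python
import xml.etree.ElementTree as ET

# Assumed ncs.conf namespace; check the xmlns attribute of your ncs.conf
NS = "{http://tail-f.com/yang/tailf-ncs-config}"

def ipc_address(ncs_conf_xml):
    """Read /ncs-config/ncs-ipc-address/{ip,port}, falling back to defaults."""
    root = ET.fromstring(ncs_conf_xml)
    ipc = root.find(f"{NS}ncs-ipc-address")
    ip, port = "127.0.0.1", 4565  # documented defaults
    if ipc is not None:
        ip_el, port_el = ipc.find(f"{NS}ip"), ipc.find(f"{NS}port")
        if ip_el is not None:
            ip = ip_el.text
        if port_el is not None:
            port = int(port_el.text)
    return ip, port

conf = """<ncs-config xmlns="http://tail-f.com/yang/tailf-ncs-config">
  <ncs-ipc-address><ip>127.0.0.1</ip><port>4565</port></ncs-ipc-address>
</ncs-config>"""
print(ipc_address(conf))
```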
$ netconf-console --port $DEVICE_NETCONF_PORT --hello | grep "module="
<capability>http://tail-f.com/ns/aaa/1.1?module=tailf-aaa&revision=2023-04-13</capability>
<capability>http://tail-f.com/ns/common/query?module=tailf-common-query&revision=2017-12-15</capability>
<capability>http://tail-f.com/ns/confd-progress?module=tailf-confd-progress&revision=2020-06-29</capability>
...
<capability>urn:ietf:params:xml:ns:yang:ietf-yang-metadata?module=ietf-yang-metadata&revision=2016-08-05</capability>
<capability>urn:ietf:params:xml:ns:yang:ietf-yang-types?module=ietf-yang-types&revision=2013-07-15</capability>

$ netconf-console --port=$DEVICE_NETCONF_PORT --get -x /yang-library/module-set/module/name
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<module-set>
<name>common</name>
<module>
<name>iana-crypt-hash</name>
</module>
<module>
<name>ietf-hardware</name>
</module>
<module>
<name>ietf-netconf</name>
</module>
<module>
<name>ietf-netconf-acm</name>
</module>
<module>
...
<module>
<name>tailf-yang-patch</name>
</module>
<module>
<name>timestamp-hardware</name>
</module>
</module-set>
</yang-library>
</data>
</rpc-reply>

$ netconf-console --port=$DEVICE_NETCONF_PORT \
--get-schema=ietf-hardware > dev-yang/ietf-hardware.yang

$ cat dev-yang/ietf-hardware.yang | grep import
import ietf-inet-types {
import ietf-yang-types {
import iana-hardware {

$ find ${NCS_DIR} \
\( -name "ietf-inet-types.yang" -o -name "ietf-yang-types.yang" -o -name "iana-hardware.yang" \)
/path/to/nso/src/ncs/builtin_yang/ietf-inet-types.yang
/path/to/nso/src/ncs/builtin_yang/ietf-yang-types.yang

$ netconf-console --port=$DEVICE_NETCONF_PORT --get-schema=iana-hardware > \
dev-yang/iana-hardware.yang

$ netconf-console --port=$DEVICE_NETCONF_PORT --get-schema=timestamp-hardware > \
dev-yang/timestamp-hardware.yang

$ ncs-make-package --netconf-ned dev-yang --dest nso-rundir/packages/devsim --build \
--verbose --no-test --no-java --no-netsim --no-python --no-template --vendor "Tail-f" \
--package-version "1.0" devsim

$ ncs --cd ./nso-rundir

$ ncs_cli -u admin -C
# config
Entering configuration mode terminal
(config)# devices device hw0 address 127.0.0.1 port 12022 authgroup default
(config-device-hw0)# devices device hw0 trace pretty
(config-device-hw0)# state admin-state unlocked
(config-device-hw0)# device-type netconf ned-id devsim-nc-1.0
(config-device-hw0)# commit
Commit complete.

$ ncs_cli -u admin -C
# devices fetch-ssh-host-keys
fetch-result {
device hw0
result updated
fingerprint {
algorithm ssh-ed25519
value 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff
}
}
# devices device hw0 sync-from
result true

admin@ncs# show running-config devices authgroups group
devices authgroups group default
umap admin
remote-name admin
remote-password $9$xrr1xtyI/8l9xm9GxPqwzcEbQ6oaK7k5RHm96Hkgysg=
!
umap oper
remote-name oper
remote-password $9$Pr2BRIHRSWOW2v85PvRGvU7DNehWL1hcP3t1+cIgaoE=
!
!

# config
Entering configuration mode terminal
(config)# devices device hw0 address 127.0.0.1 port 12022 authgroup default
(config-device-hw0)# devices device hw0 trace pretty
(config-device-hw0)# state admin-state unlocked
(config-device-hw0)# device-type netconf ned-id netconf
(config-device-hw0)# commit

# devtools true
# config
(config)# netconf-ned-builder project hardware 1.0 device hw0 local-user admin vendor Tail-f
(config)# commit
(config)# end
# show netconf-ned-builder project hardware
netconf-ned-builder project hardware 1.0
download-cache-path /path/to/nso/examples.ncs/development-guide/ned-development/netconf-ned/nso-rundir/
state/netconf-ned-builder/cache/hardware-nc-1.0
ned-directory-path /path/to/nso/examples.ncs/development-guide/ned-development/netconf-ned/nso-rundir/
state/netconf-ned-builder/hardware-nc-1.0

$ cp dummy.yang $NCS_DIR/examples.ncs/development-guide/ned-development/netconf-ned/\
nso-rundir/state/netconf-ned-builder/cache/hardware-nc-1.0/
$ cat dummy.xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<netconf-ned-builder xmlns="http://tail-f.com/ns/ncs/netconf-ned-builder">
<project>
<family-name>hardware</family-name>
<major-version>1.0</major-version>
<module>
<name>dummy</name>
<revision>2023-11-10</revision>
<location>NETCONF</location>
<status>selected downloaded</status>
</module>
</project>
</netconf-ned-builder>
</config>
$ ncs_load -O -m -l dummy.xml
$ ncs_cli -u admin -C
# devtools true
# show netconf-ned-builder project hardware 1.0 module dummy 2023-11-10
SELECT BUILD BUILD
NAME REVISION NAMESPACE FEATURE LOCATION STATUS
-----------------------------------------------------------------------
dummy 2023-11-10 - - [ NETCONF ] selected,downloaded

module dummy {
namespace "urn:dummy";
prefix dummy;
revision 2023-11-10 {
description
"Initial revision.";
}
grouping my-grouping {
container my-container {
leaf my-encrypted-password {
type tailf:aes-cfb-128-encrypted-string;
}
}
}
}

module dummy-ann {
namespace "urn:dummy-ann";
prefix dummy-ann;
import tailf-common {
prefix tailf;
}
tailf:annotate-module "dummy" {
tailf:annotate-statement "grouping[name='my-grouping']" {
tailf:annotate-statement "container[name='my-container']" {
tailf:annotate-statement "leaf[name='my-encrypted-password']" {
tailf:ned-ignore-compare-config;
}
}
}
}
}

$ ncs_cli -u admin -C
# devtools true
# devices fetch-ssh-host-keys
fetch-result {
device hw0
result updated
fingerprint {
algorithm ssh-ed25519
value 00:11:22:33:44:55:66:77:88:99:aa:bb:cc:dd:ee:ff
}
}
# netconf-ned-builder project hardware 1.0 fetch-module-list
# show netconf-ned-builder project hardware 1.0 module
module iana-crypt-hash 2014-08-06
namespace urn:ietf:params:xml:ns:yang:iana-crypt-hash
feature [ crypt-hash-md5 crypt-hash-sha-256 crypt-hash-sha-512 ]
location [ NETCONF ]
module iana-hardware 2018-03-13
namespace urn:ietf:params:xml:ns:yang:iana-hardware
location [ NETCONF ]
module ietf-datastores 2018-02-14
namespace urn:ietf:params:xml:ns:yang:ietf-datastores
location [ NETCONF ]
module ietf-hardware 2018-03-13
namespace urn:ietf:params:xml:ns:yang:ietf-hardware
location [ NETCONF ]
module ietf-inet-types 2013-07-15
namespace urn:ietf:params:xml:ns:yang:ietf-inet-types
location [ NETCONF ]
module ietf-interfaces 2018-02-20
namespace urn:ietf:params:xml:ns:yang:ietf-interfaces
feature [ arbitrary-names if-mib pre-provisioning ]
location [ NETCONF ]
module ietf-ip 2018-02-22
namespace urn:ietf:params:xml:ns:yang:ietf-ip
feature [ ipv4-non-contiguous-netmasks ipv6-privacy-autoconf ]
location [ NETCONF ]
module ietf-netconf 2011-06-01
namespace urn:ietf:params:xml:ns:netconf:base:1.0
feature [ candidate confirmed-commit rollback-on-error validate xpath ]
location [ NETCONF ]
module ietf-netconf-acm 2018-02-14
namespace urn:ietf:params:xml:ns:yang:ietf-netconf-acm
location [ NETCONF ]
module ietf-netconf-monitoring 2010-10-04
namespace urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring
location [ NETCONF ]
...
module ietf-yang-types 2013-07-15
namespace urn:ietf:params:xml:ns:yang:ietf-yang-types
location [ NETCONF ]
module tailf-aaa 2023-04-13
namespace http://tail-f.com/ns/aaa/1.1
location [ NETCONF ]
module tailf-acm 2013-03-07
namespace http://tail-f.com/yang/acm
location [ NETCONF ]
module tailf-common 2023-10-16
namespace http://tail-f.com/yang/common
location [ NETCONF ]
...
module timestamp-hardware 2023-11-10
namespace urn:example:timestamp-hardware
location [ NETCONF ]

$ ncs_cli -u admin -C
# devtools true
# netconf-ned-builder project hardware 1.0 module ietf-hardware 2018-03-13 select
# netconf-ned-builder project hardware 1.0 module timestamp-hardware 2023-11-10 select
# show netconf-ned-builder project hardware 1.0 module status
NAME REVISION STATUS
-----------------------------------------------------
iana-hardware 2018-03-13 selected,downloaded
ietf-hardware 2018-03-13 selected,downloaded
ietf-inet-types 2013-07-15 selected,pending
ietf-yang-types 2013-07-15 selected,pending
timestamp-hardware 2023-11-10 selected,pending
Waiting for NSO to download the selected YANG models (see demo-nb.sh for details)
NAME REVISION STATUS
-----------------------------------------------------
iana-hardware 2018-03-13 selected,downloaded
ietf-hardware 2018-03-13 selected,downloaded
ietf-inet-types 2013-07-15 selected,downloaded
ietf-yang-types 2013-07-15 selected,downloaded
timestamp-hardware 2023-11-10 selected,downloaded

# devtools true
# netconf-ned-builder project hardware 1.0 build-ned
# show netconf-ned-builder project hardware 1.0 build-status
build-status success
# show netconf-ned-builder project hardware 1.0 module build-warning
% No entries found.
# show netconf-ned-builder project hardware 1.0 module build-error
% No entries found.
# unhide debug
# show netconf-ned-builder project hardware 1.0 compiler-output
% No entries found.

# netconf-ned-builder project cisco-iosxr 6.6 build-ned
Error: Failed to compile NED bundle
# show netconf-ned-builder project cisco-iosxr 6.6 build-status
build-status error
# show netconf-ned-builder project cisco-iosxr 6.6 module build-error
module openconfig-telemetry 2016-02-04
build-error at line 700: <error message>

<hide-group>
<name>debug</name>
</hide-group>

$ ncs_cmd -c reload

$ ls -1
check.sh
yang # directory with my YANG modules
$ cat check.sh
#!/bin/sh
for f in yang/*.yang
do
$NCS_DIR/bin/yanger -p yang $f
done

$ ncs_cli -u admin -C
# devtools true
(config)# netconf-ned-builder project hardware 1.0 make-development-ned in-directory /tmp
ned-path /tmp/hardware-nc-1.0
(config)# end
# exit
$ cd /tmp/hardware-nc-1.0/src
$ make clean all

ncs-<ncs-version>-<ned-family>-nc-<ned-version>.tar.gz

$ id -u
501
$ ncs_cli -u admin -C
# devtools true
# config
(config)# aaa authentication users user admin uid 501
(config-user-admin)# commit
Commit complete.
(config-user-admin)# end
# netconf-ned-builder project hardware 1.0 export-ned to-directory \
/path/to/nso/examples.ncs/development-guide/ned-development/netconf-ned/nso-rundir/packages
tar-file /path/to/nso/examples.ncs/development-guide/ned-development/netconf-ned/
nso-rundir/packages/ncs-6.2-hardware-nc-1.0.tar.gz

# packages reload
>>>> System upgrade is starting.
>>>> Sessions in configure mode must exit to operational mode.
>>>> No configuration changes can be performed until upgrade has completed.
>>>> System upgrade has completed successfully.
reload-result {
package hardware-nc-1.0
result true
}
# show packages | nomore
packages package hardware-nc-1.0
package-version 1.0
description "Generated by NETCONF NED builder"
ncs-min-version [ 6.2 ]
directory ./state/packages-in-use/1/hardware-nc-1.0
component hardware
ned netconf ned-id hardware-nc-1.0
ned device vendor Tail-f
oper-status up

$ ncs_cli -u admin -C
# show packages package hardware-nc-1.0 component hardware ned netconf ned-id
ned netconf ned-id hardware-nc-1.0
# config
(config)# devices device hw0 device-type netconf ned-id hardware-nc-1.0
(config-device-hw0)# commit
Commit complete.
(config-device-hw0)# end
# devices device hw0 sync-from
result true
# show running-config devices device hw0 config | nomore
devices device hw0
config
hardware component carbon
class module
parent slot-1-4-1
parent-rel-pos 1040100
alias dummy
asset-id dummy
uri [ urn:dummy ]
!
hardware component carbon-port-4
class port
parent carbon
parent-rel-pos 1040104
alias dummy-port
asset-id dummy
uri [ urn:dummy ]
!
...

$ ncs_cli -C -u admin
# devtools true
# config
(config)# no netconf-ned-builder project hardware 1.0
(config)# commit
Commit complete.
(config)# end
# packages reload
Error: The following modules will be deleted by upgrade:
hardware-nc-1.0: iana-hardware
hardware-nc-1.0: ietf-hardware
hardware-nc-1.0: hardware-nc
hardware-nc-1.0: hardware-nc-1.0
If this is intended, proceed with 'force' parameter.
# packages reload force
>>>> System upgrade is starting.
>>>> Sessions in configure mode must exit to operational mode.
>>>> No configuration changes can be performed until upgrade has completed.
>>>> System upgrade has completed successfully.

Run your Java code using the Java Virtual Machine (VM).
The NSO Java VM is the execution container for all Java classes supplied by deployed NSO packages.
The classes and other resources are structured in jar files, and the specific use of these classes is described in the component tag in the respective package-meta-data.xml file. As a framework, it also starts and controls other utilities for the use of these components. To accomplish this, a main class, com.tailf.ncs.NcsMain, implementing the Runnable interface, is started as a thread. This thread can be the main thread (running in a Java main()) or be embedded into another Java program.
When the NcsMain thread starts it establishes a socket connection towards NSO. This is called the NSO Java VM control socket. It is the responsibility of NcsMain to respond to command requests from NSO and pass these commands as events to the underlying finite state machine (FSM). The NcsMain FSM will execute all actions as requested by NSO. This includes class loading and instantiation as well as registration and start of services, NEDs, etc.
When NSO detects the control socket connection from the NSO Java VM, it starts an initialization process:
First, NSO sends an INIT_JVM request to the NSO Java VM. At this point, the NSO Java VM loads the schemas, i.e., retrieves all known YANG module definitions. The NSO Java VM responds when all modules are loaded.
Then, NSO sends a LOAD_SHARED_JARS request for each deployed NSO package. This request contains the URLs for the jars situated in the shared-jar directory in the respective NSO package. The classes and resources in these jars will be globally accessible for all deployed NSO packages.
The next step is to send a LOAD_PACKAGE request for each deployed NSO package. This request loads the classes and resources from the jars in the private-jar directory of the respective NSO package; these are accessible only within that package.
See for tips on customizing startup behavior and debugging problems when the Java VM fails to start.
The file tailf-ncs-java-vm.yang defines the java-vm container which, along with ncs.conf, is the entry point for controlling the NSO Java VM functionality. Study the content of the YANG model in the example below (The Java VM YANG model). For a full explanation of all the configuration data, look at the YANG file and man ncs.conf.
Many of the nodes beneath java-vm are by default invisible due to a hidden attribute. To make everything under java-vm visible in the CLI, two steps are required:
First, the following XML snippet must be added to ncs.conf:
Next, the unhide command may be used in the CLI session:
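The snippet in question declares a hide group named debug (the same snippet appears later in this section among the collected examples):

```xml
<!-- ncs.conf fragment: declare a hide group named "debug" so that
     hidden java-vm nodes can later be revealed with "unhide debug" -->
<hide-group>
  <name>debug</name>
</hide-group>
```

After NSO has reloaded its configuration, running unhide debug in a CLI session makes the hidden java-vm nodes visible.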
Each NSO package will have a specific java classloader instance that loads its private jar classes. These package classloaders will refer to a single shared classloader instance as its parent. The shared classloader will load all shared jar classes for all deployed NSO packages.
The purpose of this is first to keep integrity between packages which should not have access to each other's classes, other than the ones that are contained in the shared jars. Secondly, this way it is possible to hot redeploy the private jars and classes of a specific package while keeping other packages in a run state.
Should this class loading scheme not be desired, it is possible to suppress it by starting the NSO Java VM with the system property TAILF_CLASSLOADER set to false.
This will force NSO Java VM to use the standard Java system classloader. For this to work, all jar's from all deployed NSO packages need to be part of the classpath. The drawback of this is that all classes will be globally accessible and hot redeploy will have no effect.
There are four types of components that the NSO Java VM can handle:
The ned type. The NSO Java VM will handle NEDs of sub-type cli and generic which are the ones that have a Java implementation.
The callback type. These are any forms of callbacks that are defined by the DP API.
The application
In some situations, several NSO packages are expected to use the same code base, e.g., when third-party libraries are used or the code is structured with some common parts. Instead of duplicating jars in several NSO packages, it is possible to create a new NSO package, add these jars to its shared-jar directory, and let the package-meta-data.xml file contain no component definitions at all. The NSO Java VM loads these shared jars and makes them accessible from all other NSO packages.
Inside the NSO Java VM, each component type has a specific Component Manager. The responsibility of these Managers is to manage a set of component classes for each NSO package. The Component Manager acts as an FSM that controls when a component should be registered, started, stopped, etc.
For instance, the DpMuxManager controls all callback implementations (services, actions, data providers, etc). It can load, register, start, and stop such callback implementations.
NEDs can be of type netconf, snmp, cli, or generic. Only the cli and generic types are relevant for the NSO Java VM, because these are the ones that have a Java implementation. Normally these NED components come in self-contained, prefabricated NSO packages for some equipment or class of equipment. It is, however, possible to tailor-make NEDs for any protocol. For more information on this, see and in NED Development.
Callbacks are the collective name for a number of different functions that can be implemented in Java. One of the most important is the service callback, but actions, transaction control, and data provider callbacks are also in common use in an NSO implementation. For more on how to program callbacks using the DP API, see .
For programs that are none of the above types but still need to access NSO as a daemon process, it is possible to use the ApplicationComponent Java interface. The ApplicationComponent interface expects the implementing classes to implement an init(), a finish(), and a run() method.
The NSO Java VM starts each such class in a separate thread. The init() method is called before the thread is started. The run() method runs in the thread, like the run() method of the standard Java Runnable interface. The finish() method is called when the NSO Java VM wants the application thread to stop. It is the responsibility of the programmer to stop the application thread, i.e., to stop the execution in the run() method, when finish() is called. This matters because a thread that does not stop on finish() leaves the NSO Java VM hanging on a STOP_VM request.
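The cooperative-stop contract above can be sketched as follows. The ApplicationComponent interface here is a minimal local stand-in with the same methods as the NSO one (com.tailf.ncs.ApplicationComponent), declared locally only so the example compiles and runs without the NSO jars:

```java
// Minimal stand-in for com.tailf.ncs.ApplicationComponent so the sketch
// is self-contained; the real interface declares the same three methods.
interface ApplicationComponent extends Runnable {
    void init();
    void finish();
}

// Example component: run() loops until finish() asks it to stop.
public class PollerComponent implements ApplicationComponent {
    // volatile so the stop request made by finish() (called from the
    // NSO Java VM's thread) is visible to the worker thread in run()
    private volatile boolean stopRequested = false;

    @Override
    public void init() {
        // Called before the thread is started.
        stopRequested = false;
    }

    @Override
    public void run() {
        // Worker loop; real code would do its periodic work here.
        while (!stopRequested) {
            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                return; // treat interruption as a stop request
            }
        }
    }

    @Override
    public void finish() {
        // Must make run() return, or the VM may hang on STOP_VM.
        stopRequested = true;
    }
}
```

The key design point is that finish() only signals; the run() loop itself is responsible for observing the flag and returning promptly.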
An example of an application component implementation is found in .
User implementations typically need resources such as Maapi, Maapi transactions, Cdb, and Cdb sessions to fulfill their tasks. These resources can be instantiated and used directly in the user code, but that implies the user code must handle the connection and closing of the additional sockets used by these resources. The recommended alternative is to use the Resource manager, which can inject these resources into the user code: the programmer annotates the field that should refer to the resource rather than instantiating it.
This way the NSO Java VM and the Resource manager can keep control over used resources and also can intervene e.g. close sockets at forced shutdowns.
The Resource manager can handle two types of resources: MAAPI and CDB.
For both the Maapi and Cdb resource types a socket connection is opened towards NSO by the Resource manager. At a stop, the Resource manager will disconnect these sockets before ending the program. User programs can also tell the resource manager when its resources are no longer needed with a call to ResourceManager.unregisterResources().
The resource annotation has three attributes:
type defines the resource type.
scope defines if this resource should be unique for each instance of the Java class (Scope.INSTANCE) or shared between different instances and classes (Scope.CONTEXT). For CONTEXT scope the sharing is confined to the defining NSO package, i.e., a resource cannot be shared between NSO packages.
qualifier
When the NSO Java VM starts it will receive component classes to load from NSO. Note, that the component classes are the classes that are referred to in the package-meta-data.xml file. For each component class, the Resource Manager will scan for annotations and inject resources as specified.
However, the package jars can contain many classes in addition to the component classes. These are loaded at runtime, are unknown to the NSO Java VM, and are therefore not handled automatically by the Resource Manager. Such classes can also use resource injection, but need a specific call to the Resource Manager for the mechanism to take effect: before the resources are used for the first time, a call to ResourceManager.registerResources(...) forces the injection of the resources. If the same class is registered several times, the Resource manager detects this and avoids multiple resource injections.
The AlarmSourceCentral and AlarmSinkCentral, which are part of the NSO Alarm API, can be used to simplify reading and writing alarms. The NSO Java VM starts these centrals at initialization. User implementations can therefore expect this to be set up without having to handle the start and stop of either the AlarmSinkCentral or the AlarmSourceCentral. For more information on the alarm API, see .
As stated above, the NSO Java VM is executed in a thread implemented by NcsMain. This implies that somewhere a Java main() must be implemented that launches this thread. For NSO this is provided by the NcsJVMLauncher class. In addition, there is a script named ncs-start-java-vm that starts Java with NcsJVMLauncher.main(). This is the recommended way of launching the NSO Java VM and how it is set up in a default installation. If there is a need to run the NSO Java VM as an embedded thread inside another program, this can be done by instantiating the class NcsMain and starting this instance in a new thread.
However, with the embedding of the NSO Java VM comes the responsibility to manage the life cycle of the NSO Java VM thread. This thread cannot be started before NSO has started and is running or else the NSO Java VM control socket connection will fail. Also, running NSO without the NSO Java VM being launched will render runtime errors as soon as NSO needs NSO Java VM functionality.
To be able to control an embedded NSO Java VM from another supervising Java thread or program an optional JMX interface is provided. The main functionality in this interface is listing, starting, and stopping the NSO Java VM and its Component Managers.
NSO has extensive logging functionality. Log settings are typically very different for a production system compared to a development system. Furthermore, the logging of the NSO daemon and the NSO Java VM is controlled by different mechanisms. During development, we typically want to turn on the developer-log. The sample ncs.conf that comes with the NSO release has log settings suitable for development, while the ncs.conf created by a System Install are suitable for production deployment.
The NSO Java VM uses Log4j for logging and will read its default log settings from a provided log4j2.xml file in the ncs.jar. Following that, NSO itself has java-vm log settings that are directly controllable from the NSO CLI. We can do:
This will dynamically reconfigure the log level for package com.tailf.maapi to be at the level trace. Where the Java logs end up is controlled by the log4j2.xml file. By default, the NSO Java VM writes to stdout. If the NSO Java VM is started by NSO, as controlled by the ncs.conf parameter /java-vm/auto-start, NSO will pick up the stdout of the service manager and write it to:
(The details pipe command also displays default values)
The section /ncs-config/japi in ncs.conf contains a number of very important timeouts. See $NCS_DIR/src/ncs/ncs_config/tailf-ncs-config.yang and in Manual Pages for details.
new-session-timeout controls how long NSO will wait for the NSO Java VM to respond to a new session.
query-timeout controls how long NSO will wait for the NSO Java VM to respond to a request to get data.
connect-timeout controls how long NSO will wait for the NSO Java VM to initialize a DP connection after the initial socket connect.
Whenever any of these timeouts trigger, NSO will close the sockets from NSO to the NSO Java VM. The NSO Java VM will detect the socket close and exit. If NSO is configured to start (and restart) the NSO Java VM, the NSO Java VM will be automatically restarted. If the NSO Java VM is started by some external entity, if it runs within an application server, it is up to that entity to restart the NSO Java VM.
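As a sketch, these timeouts sit under /ncs-config/japi in ncs.conf. The element names follow the leaf names given above (as defined in tailf-ncs-config.yang); the xs:duration values here are illustrative, not recommendations:

```xml
<!-- ncs.conf fragment (sketch): Java-API timeouts.
     Values are illustrative assumptions, not recommended settings. -->
<japi>
  <new-session-timeout>PT30S</new-session-timeout>
  <query-timeout>PT120S</query-timeout>
  <connect-timeout>PT60S</connect-timeout>
</japi>
```

Raising these values trades faster failure detection for more tolerance of a slow or heavily loaded Java VM.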
When using the auto-start feature (the default), NSO starts the NSO Java VM as outlined at the beginning of this section. A number of settings in the java-vm YANG model (see $NCS_DIR/src/ncs/yang/tailf-ncs-java-vm.yang) control what happens when something goes wrong during startup.
The two timeout configurations connect-time and initialization-time are most relevant during startup. If the Java VM fails during the initial stages (during INIT_JVM, LOAD_SHARED_JARS, or LOAD_PACKAGE) either because of a timeout or because of a crash, NSO will log The NCS Java VM synchronization failed in ncs.log.
After logging, NSO will take action based on the synchronization-timeout-action setting:
log: NSO will log the failure, and if auto-restart is set to true, NSO will try to restart the Java VM.
log-stop (default): NSO will log the failure, and if the Java VM has not stopped already NSO will also try to stop it. No restart action is taken.
exit: NSO will log the failure, and then stop NSO itself.
If you have problems with the Java VM crashing during startup, a common pitfall is running out of memory (either total memory on the machine, or heap in the JVM). If you have a lot of Java code (or a loaded system) perhaps the Java VM did not start in time. Try to determine the root cause, check ncs.log and ncs-java-vm.log, and if needed increase the timeout.
For complex problems, for example with the class loader, try logging the internals of the startup:
Setting this will result in a lot more detailed information in ncs-java-vm.log during startup.
When the auto-restart setting is true (the default), it means that NSO will try to restart the Java VM when it fails (at any point in time, not just during startup). NSO will at most try three restarts within 30 seconds, i.e., if the Java VM crashes more than three times within 30 seconds NSO gives up. You can check the status of the Java VM using the java-vm YANG model. For example in the CLI:
The start-status can have the following values:
auto-start-not-enabled: Autostart is not enabled.
stopped: The Java VM has been stopped or is not yet started.
started: The Java VM has been started. See the leaf 'status' to check the status of the Java application code.
The status can have the following values:
not-connected: The Java application code is not connected to NSO.
initializing: The Java application code is connected to NSO, but not yet initialized.
running: The Java application code is connected and initialized.
NSO will send an INSTANTIATE_COMPONENT request for each component in each deployed NSO package. At this point, the NSO Java VM registers a start method for the respective component. NSO sends these requests in proper start-phase order, which implies that INSTANTIATE_COMPONENT requests can be sent in an order that mixes components from different NSO packages.
Lastly, NSO sends a DONE_LOADING request which indicates that the initialization process is finished. After this, the NSO Java VM is up and running.
The upgrade type. This component type is activated when deploying a new version of an NSO package and the NSO automatic CDB data upgrade is not sufficient. See Writing an Upgrade Package Component for more information.
failed: The Java VM has been terminated. If auto-restart is enabled, the Java VM restart has been disabled due to too frequent restarts.
timeout: The Java application connected to NSO, but failed to initialize within the stipulated timeout 'initialization-time'.

<hide-group>
<name>debug</name>
</hide-group>

admin@ncs(config)# unhide debug
admin@ncs(config)#

$ yanger -f tree tailf-ncs-java-vm.yang
submodule: tailf-ncs-java-vm (belongs-to tailf-ncs)
+--rw java-vm
+--rw stdout-capture
| +--rw enabled? boolean
| +--rw file? string
| +--rw stdout? empty
+--rw connect-time? uint32
+--rw initialization-time? uint32
+--rw synchronization-timeout-action? enumeration
+--rw exception-error-message
| +--rw verbosity? error-verbosity-type
+--rw java-logging
| +--rw logger* [logger-name]
| +--rw logger-name string
| +--rw level log-level-type
+--rw jmx!
| +--rw jndi-address? inet:ip-address
| +--rw jndi-port? inet:port-number
| +--rw jmx-address? inet:ip-address
| +--rw jmx-port? inet:port-number
+--ro start-status? enumeration
+--ro status? enumeration
+---x stop
| +--ro output
| +--ro result? string
+---x start
| +--ro output
| +--ro result? string
+---x restart
+--ro output
+--ro result? string

java -DTAILF_CLASSLOADER=false ...

package com.tailf.ncs;
/**
* User defined Applications should implement this interface that
* extends Runnable, hence also the run() method has to be implemented.
* These applications are registered as components of type
* "application" in a Ncs packages.
*
* Ncs Java VM will start this application in a separate thread.
* The init() method is called before the thread is started.
* The finish() method is expected to stop the thread. Hence stopping
* the thread is user responsibility
*
*/
public interface ApplicationComponent extends Runnable {
/**
* This method is called by the Ncs Java vm before the
* thread is started.
*/
public void init();
/**
* This method is called by the Ncs Java vm when the thread
* should be stopped. Stopping the thread is the responsibility of
* this method.
*/
public void finish();
}

@Resource(type=ResourceType.MAAPI, scope=Scope.INSTANCE)
public Maapi m;

package com.tailf.ncs.annotations;
/**
* ResourceType set by the Ncs ResourceManager
*/
public enum ResourceType {
MAAPI(1),
CDB(2);
}

package com.tailf.ncs.annotations;
/**
* Annotation class for resource injection. Attributes are type, scope, and qualifier.
*/
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
public @interface Resource {
public ResourceType type();
public Scope scope();
public String qualifier() default "DEFAULT";
}

package com.tailf.ncs.annotations;
/**
* Scope for resources managed by the Resource Manager
*/
public enum Scope {
/**
* Context scope implies that the resource is
* shared for all fields having the same qualifier in any class.
* The resource is shared also between components in the package.
* However sharing scope is confined to the package i.e sharing cannot
* be extended between packages.
* If the qualifier is not given it becomes "DEFAULT"
*/
CONTEXT(1),
/**
* Instance scope implies that all instances will
* get new resource instances. If the instance needs
* several resources of the same type they need to have
* separate qualifiers.
*/
INSTANCE(2);
}

MyClass myclass = new MyClass();
try {
ResourceManager.registerResources(myclass);
} catch (Exception e) {
LOGGER.error("Error injecting Resources", e);
}

NcsMain ncsMain = NcsMain.getInstance(host);
Thread ncsThread = new Thread(ncsMain);
ncsThread.start();

admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-trace
admin@ncs(config-logger-com.tailf.maapi)# commit
Commit complete.

admin@ncs(config)# show full-configuration java-vm stdout-capture
java-vm stdout-capture file /var/log/ncs/ncs-java-vm.log

admin@ncs(config)# java-vm java-logging logger com.tailf.ncs level level-all
admin@ncs(config-logger-com.tailf.ncs)# commit
Commit complete.

admin@ncs# show java-vm
java-vm start-status started
java-vm status running

Deploy NSO in a containerized setup using Cisco-provided images.
NSO can be deployed in your environment using a container, such as Docker. Cisco offers two pre-built images for this purpose that you can use to run NSO and build packages (see ).
Migration Information
If you are migrating from an existing NSO System Install to a container-based setup, follow the guidelines given below in .
Perform NSO system management and configuration.
NSO consists of a number of modules and executable components. These executable components are referred to by their command-line names, e.g., ncs, ncs-netsim, ncs_cli, etc. ncs refers to the executable, i.e., the running daemon.
When NSO is started, it reads its configuration file and starts all subsystems configured to start (such as NETCONF, CLI, etc.).
Run user code in NSO using packages.
All user code that needs to run in NSO must be part of a package. A package is basically a directory of files with a fixed file structure. A package consists of code, YANG modules, custom Web UI widgets, etc., that are needed to add an application or function to NSO. Packages are a controlled way to manage the loading and versions of custom applications.
A package is a directory where the package name is the same as the directory name. At the top level of this directory, a file called package-meta-data.xml must exist. The structure of that file is defined by the YANG model $NCS_DIR/src/ncs/yang/tailf-ncs-packages.yang. A package may also be a tar archive with the same directory layout. The tar archive can be either uncompressed with the suffix .tar, or gzip-compressed with the suffix .tar.gz or .tgz. The archive file should also follow one of two acceptable naming conventions. One, available since the introduction of CDM in NSO 5.1, is ncs-<ncs-version>-<package-name>-<package-version>.<suffix>, e.g., ncs-5.3-my-package-1.0.tar.gz.
Run a container image of a specific version of NSO and your packages which can then be distributed as one unit.
Deploy and distribute the same version across your production environment.
Use the Build Image containing the necessary environment for compiling NSO packages.
Cisco provides the following two NSO images based on Red Hat UBI.
Development Host
None or Local Install
Build Image
System Install
The Production Image is a production-ready NSO image for system-wide deployment and use. It is based on NSO System Install and is available from the Cisco Software Download site.
Use the pre-built image as the base image in the container file (e.g., Dockerfile) and mount your own packages (such as NEDs and service packages) to run a final image for your production environment (see examples below).
The Build Image is a separate standalone NSO image with the necessary environment and software for building packages. It is provided specifically to address the developer needs of building packages.
The image is available as a signed package (e.g., nso-VERSION.container-image-build.linux.ARCH.signed.bin) from the Cisco Software Download site. You can run the Build Image in different ways, and a simple tool for defining and running multi-container Docker applications is Docker Compose (see examples below).
The container provides the necessary environment to build custom packages. The Build Image adds a few Linux packages that are useful for development, such as Ant, JDK, net-tools, pip, etc. Additional Linux packages can be added using, for example, the dnf command. The dnf list installed command lists all the installed packages.
To fetch and extract NSO images:
On Cisco's official Software Download site, search for "Network Services Orchestrator". Select the relevant NSO version in the drop-down list, e.g., "Crosswork Network Services Orchestrator 6", and click "Network Services Orchestrator Software". Locate the binary, which is delivered as a signed package (e.g., nso-6.3.container-image-prod.linux.x86_64.signed.bin).
Extract the image and other files from the signed package, for example:
To run the images, make sure that your system meets the following requirements:
A system running Linux x86_64 or ARM64, or macOS x86_64 or Apple Silicon. Use Linux for production.
A container platform. Docker is the recommended platform and is used as an example in this guide for running NSO images. You may use another container runtime of your choice. Note that the commands in this guide are Docker-specific; if you use another container runtime, make sure to use the respective commands.
To check the Java (JDK) and Python versions included in the container, use the following command (where cisco-nso-prod:6.3 is the image you want to check):
{% code title="Example: Check Java and Python Versions of Container" %}
{% endcode %}
This section covers the necessary administrative information about the NSO Production Image.
If you have NSO installed as a System Install, you can migrate to the Containerized NSO setup by following the instructions in this section. Migrating your Network Services Orchestrator (NSO) to a containerized setup can provide numerous benefits, including improved scalability, easier version management, and enhanced isolation of services.
The migration process is designed to ensure a smooth transition from a System-Installed NSO to a container-based deployment. Detailed steps guide you through preparing your existing environment, exporting the necessary configurations and state data, and importing them into your new containerized NSO instance. During the migration, consider the container runtime you plan to use, as this impacts the migration process.
Before You Start
We recommend reading through this guide to understand better the expectations, requirements, and functioning aspects of a containerized deployment.
Verify the compatibility of your current system configurations with the containerized NSO setup. See System Requirements for more information.
Determine and install the container orchestration tool you plan to use (e.g., Docker, etc.).
Ensure that your current NSO installation is fully operational and backed up and that you have a clear rollback strategy in case any issues arise. Pay special attention to customizations and integrations that your current NSO setup might have, and verify their compatibility with the containerized version of NSO.
Have a contingency plan in place for quick recovery in case any issues are encountered during migration.
Migration Steps
Prepare:
Document your current NSO environment's specifics, including custom configurations and packages.
Perform a complete backup of your existing NSO instance, including configurations, packages, and data.
Set up the container environment and download/extract the NSO production image. See Downloading and Extracting the Images for details.
Migrate:
Stop the current NSO instance.
Save the run directory from the NSO instance in an appropriate place.
Use the same ncs.conf and High Availability (HA) setup previously used with your System Install. We assume that the ncs.conf follows the best practice and uses the NCS_DIR, NCS_RUN_DIR, NCS_CONFIG_DIR, and NCS_LOG_DIR variables for all paths. The ncs.conf can be added to a volume and mounted to /nso/etc in the container.
Add the run directory as a volume, mounted to /nso/run in the container and copy the CDB data, packages, etc., from the previous System Install instance.
Create a volume for the log directory.
Start the container. Example:
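A minimal sketch of such a start command, assuming Docker, the image tag cisco-nso-prod:6.3, and the volume names nso-etc, nso-run, and nso-log (the in-container log path may differ in your setup):

```bash
docker run -d --name cisco-nso \
  -v nso-etc:/nso/etc \
  -v nso-run:/nso/run \
  -v nso-log:/log \
  cisco-nso-prod:6.3
```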
Finalize:
Ensure that the containerized NSO instance functions as expected and validate system operations.
Plan and execute your cutover transition from the System-Installed NSO to the containerized version with minimal disruption.
Monitor the new setup thoroughly to ensure stability and performance.
The run-nso.sh script runs a check at startup to determine which ncs.conf file to use. The order of preference is as below:
The ncs.conf file specified in the Dockerfile (i.e., ENV $NCS_CONFIG_DIR /etc/ncs/) is used as the first preference.
The second preference is to use the ncs.conf file mounted in the /nso/etc directory.
If no ncs.conf file is found at either /etc/ncs or /nso/etc, the default ncs.conf file provided with the NSO image in /defaults is used.
If you need to perform operations before or after the ncs process is started in the Production container, you can use Python and/or Bash scripts to achieve this. Add the scripts to the $NCS_CONFIG_DIR/pre-ncs-start.d/ and $NCS_CONFIG_DIR/post-ncs-start.d/ directories to have the run-nso.sh script run them.
An admin user can be created on startup by the run script in the container. Three environment variables control the addition of an admin user:
ADMIN_USERNAME: Username of the admin user to add, default is admin.
ADMIN_PASSWORD: Password of the admin user to add.
ADMIN_SSHKEY: Private SSH key of the admin user to add.
As ADMIN_USERNAME already has a default value, only ADMIN_PASSWORD or ADMIN_SSHKEY needs to be set to create an admin user. For example:
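For instance, the following sketch creates the default admin user with a placeholder password (the container name and image tag are assumptions; choose a strong password for anything beyond local testing):

```bash
docker run -d --name cisco-nso -e ADMIN_PASSWORD=admin cisco-nso-prod:6.3
```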
This can be useful when starting up a container in CI for testing or development purposes. It is typically not required in a production environment where CDB already contains the required user accounts.
The default ncs.conf NSO configuration file does not enable any northbound interfaces, and no ports are exposed outside the container. To expose ports externally, enable the northbound interfaces and their ports in ncs.conf and publish the corresponding ports when starting the container.
The backup behavior of running NSO in vs. outside the container is largely the same, except that when running NSO in a container, the SSH and SSL certificates are not included in the backup produced by the ncs-backup script. This is different from running NSO outside a container where the default configuration path /etc/ncs is used to store the SSH and SSL certificates, i.e., /etc/ncs/ssh for SSH and /etc/ncs/ssl for SSL.
Take a Backup
Let's assume we start a production image container using:
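For example (the container name, volume names, and image tag are assumptions):

```bash
docker run -d --name cisco-nso \
  -v nso-etc:/nso/etc \
  -v nso-run:/nso/run \
  cisco-nso-prod:6.3
```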
To take a backup:
Run the ncs-backup command. The backup file is written to /nso/run/backups.
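Assuming the container name cisco-nso from above, this can be sketched as:

```bash
docker exec -it cisco-nso ncs-backup
```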
Restore a Backup
To restore a backup, NSO must be stopped, which poses a slight challenge: you likely only have access to the ncs-backup tool and the volume containing CDB and other run-time data from inside the NSO container, and shutting down NSO will terminate the NSO container.
To restore a backup:
Shut down the NSO container:
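For example, assuming the container name cisco-nso:

```bash
docker stop cisco-nso
```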
Run the ncs-backup --restore command from a new container. Start it with the same persistent shared volumes mounted, but with a different command: instead of running /run-nso.sh, which is the normal command of the NSO container, run the restore command.
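A sketch of such a one-off restore container (the volume names, image tag, and backup file name are assumptions):

```bash
docker run --rm \
  -v nso-etc:/nso/etc \
  -v nso-run:/nso/run \
  cisco-nso-prod:6.3 \
  ncs-backup --restore /nso/run/backups/<backup-file>
```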
Restoring an NSO backup moves the current run directory (/nso/run) to /nso/run.old and restores the run directory from the backup to the main run directory (/nso/run). After this is done, start the regular NSO container again as usual.
The NSO image /run-nso.sh script looks for an SSH host key named ssh_host_ed25519_key in the /nso/etc/ssh directory to be used by the NSO built-in SSH server for the CLI and NETCONF interfaces.
If an SSH host key exists (for a typical production setup it is stored in a persistent shared volume), it remains the same after restarts or upgrades of NSO. If no SSH host key exists, the script generates a private and public key pair.
In a high-availability (HA) setup, the host key is typically shared by all NSO nodes in the HA group and stored in a persistent shared volume. This is done to avoid fetching the public host key from the new primary after each failover.
NSO expects to find a TLS certificate and key at /nso/ssl/cert/host.cert and /nso/ssl/cert/host.key, respectively. Since the /nso path is usually on a persistent shared volume for production setups, the certificate remains the same across restarts or upgrades.
If no certificate is present, one is generated: a self-signed certificate valid for 30 days, which makes it usable in both development and staging environments. It is not meant for production; replace it with a properly signed certificate there, and preferably in test and staging environments as well. Simply generate one and place it at the path above, for example using the following, which is the command used to generate the temporary self-signed certificate:
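A sketch of such a command (the subject field is a placeholder; adjust it and the key parameters to your environment):

```bash
openssl req -new -newkey rsa:4096 -x509 -sha256 -days 30 -nodes \
  -out /nso/ssl/cert/host.cert -keyout /nso/ssl/cert/host.key \
  -subj "/CN=localhost"
```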
The database in NSO, called CDB, uses YANG models as the schema for the database. It is only possible to store data in CDB according to the YANG models that define the schema.
If the YANG models are changed, particularly if nodes are removed or renamed (a rename is the removal of one leaf and the addition of another), any data in CDB for those nodes is also removed. NSO normally warns about this when you attempt to load new packages; for example, the request packages reload command refuses to reload the packages if nodes in the YANG model have disappeared. You would then have to add the force argument, e.g., request packages reload force.
The base Production Image comes with a basic container health check. It uses ncs_cmd to get the state that NCS is currently in. Only the result status is observed to check if ncs_cmd was able to communicate with the ncs process. The result indicates if the ncs process is responding to IPC requests.
By default, the Linux kernel allows overcommit of memory. However, memory overcommit produces an unexpected and unreliable environment for NSO since the Linux Out Of Memory Killer, or OOM-killer, may terminate NSO without restarting it if the system is critically low on memory.
Also, when the OOM-killer terminates NSO, NSO will not produce a system dump file, and the debug information will be lost. Thus, it is strongly recommended to disable memory overcommit on Linux hosts running NSO production containers, with an overcommit ratio of at most 100%.
See Step - 4. Run the Installer in System Install for information on memory overcommit recommendations for a Linux system hosting NSO production containers.
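On a Linux host, disabling overcommit can be sketched as follows (illustrative values; consult the referenced section for the recommended settings):

```bash
# Use strict overcommit accounting instead of heuristic overcommit
sysctl -w vm.overcommit_memory=2
# Cap committed address space at 100% of RAM plus swap
sysctl -w vm.overcommit_ratio=100
```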
The /run-nso.sh script that starts NSO is executed as an ENTRYPOINT instruction, and the CMD instruction can be used to provide arguments to the entrypoint script. Another alternative is to use the EXTRA_ARGS variable to provide arguments. The /run-nso.sh script checks the EXTRA_ARGS variable before the CMD instruction.
An example using docker run with the CMD instruction:
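A sketch, where everything after the image name becomes the CMD and is passed as arguments to the entrypoint script (the argument shown is an assumption):

```bash
docker run -d --name cisco-nso cisco-nso-prod:6.3 --with-package-reload
```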
With the EXTRA_ARGS variable:
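The same sketch using the EXTRA_ARGS variable instead:

```bash
docker run -d --name cisco-nso -e EXTRA_ARGS='--with-package-reload' cisco-nso-prod:6.3
```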
An example using a Docker Compose file, compose.yaml, with the CMD instruction:
With the EXTRA_ARGS variable:
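Both variants can be sketched in a compose.yaml like this (the service name and argument are assumptions; use either command or EXTRA_ARGS, not both):

```yaml
services:
  nso:
    image: cisco-nso-prod:6.3
    # CMD-style arguments to the entrypoint:
    command: ["--with-package-reload"]
    # ...or, alternatively, via the EXTRA_ARGS variable:
    # environment:
    #   - EXTRA_ARGS=--with-package-reload
```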
This section provides examples that demonstrate the use of NSO images.
This example shows how to run the standalone NSO Production Image using the Docker CLI.
The instructions and CLI examples used in this example are Docker-specific. If you are using a non-Docker container runtime, you will need to fetch the NSO image from the Cisco software download site, load and run the image with packages and networking, and finally log in to the NSO CLI to run commands.
If you intend to run multiple images (i.e., both Production and Build), Docker Compose is a tool that simplifies defining and running multi-container Docker applications. See the example (Running the NSO Images using Docker Compose) below for detailed instructions.
Steps
Follow the steps below to run the Production Image using Docker CLI:
Start your container engine.
Next, load the image and run it. Navigate to the directory where you extracted the base image and load it. This will restore the image and its tag:
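For example (the tarball name follows the earlier download example; verify it against your extracted files):

```bash
docker load -i nso-6.3.container-image-prod.linux.x86_64.tar.gz
```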
Start a container from the image. Supply additional arguments to mount the packages and ncs.conf as separate volumes (-v flag), and publish ports for networking (-p flag) as needed. The container starts NSO using the /run-nso.sh script. To understand how the ncs.conf file is used, see ncs.conf File Configuration and Preference.
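A sketch of such a start command (the host paths, published port, and image tag are assumptions; publish only ports you have enabled in ncs.conf):

```bash
docker run -d --name cisco-nso \
  -v $(pwd)/ncs.conf:/nso/etc/ncs.conf \
  -v $(pwd)/packages:/nso/run/packages \
  -p 2024:2024 \
  cisco-nso-prod:6.3
```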
Overriding Environment Variables
Overriding basic environment variables (NCS_CONFIG_DIR, NCS_LOG_DIR, NCS_RUN_DIR, etc.) is not supported and therefore should be avoided. Using, for example, the NCS_CONFIG_DIR environment variable to mount a configuration directory will result in an error. Instead, to mount your configuration directory, do it appropriately in the correct place, which is under /nso/etc.
Finally, log in to NSO CLI to run commands. Open an interactive shell on the running container and access the NSO CLI.
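For example, assuming the container name cisco-nso:

```bash
docker exec -it cisco-nso bash
# Inside the container, access the NSO CLI as the admin user:
ncs_cli -u admin
```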
You can also use the docker exec -it cisco-nso ncs_cli -u admin command to access the CLI from the host's terminal.
This example describes how to upgrade your NSO to run a newer NSO version in the container. The overall upgrade process is outlined in the steps below. In the example below, NSO is to be upgraded from version 6.2 to 6.3.
To upgrade your NSO version:
Start a container with the docker run command. In the example below, it mounts the /nso directory in the container to the NSO-vol named volume to persist the data. Another option is using a bind mount of the directory on the host machine. At this point, the /cdb directory is empty.
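A sketch of such a start command for the old version (the container name and image tag are assumptions; NSO-vol is the named volume from the text):

```bash
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.2
```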
Perform a backup, either by running the docker exec command (make sure that the backup is placed somewhere we have mounted) or by creating a tarball of /data/nso on the host machine.
Stop NSO by issuing the following command, or by stopping the container itself, which runs the ncs stop command automatically.
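For example, assuming the container name cisco-nso:

```bash
docker exec -it cisco-nso ncs --stop
```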
Remove the old NSO.
Start a new container and mount the /nso directory in the container to the NSO-vol named volume. This time the /cdb folder is not empty, so instead of starting a fresh NSO, an upgrade will be performed.
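A sketch, assuming the container and volume names used above:

```bash
# Remove the old container first; the NSO-vol named volume persists
docker rm cisco-nso
# Start a container from the new image; existing CDB data triggers an upgrade
docker run -d --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.3
```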
At this point, you only have one container that is running the desired version 6.3 and you do not need to uninstall the old NSO.
This example covers the necessary information to demonstrate the use of NSO images to compile packages and run NSO. Using Docker Compose is not a requirement, but it is a simple tool for defining and running a multi-container setup where you want to run both the Production and Build images in an efficient manner.
The packages used in this example are taken from the examples.ncs/development-guide/nano-services/netsim-sshkey example:
distkey: A simple Python + template service package that automates the setup of SSH public key authentication between netsim (ConfD) devices and NSO using a nano service.
ne: A NETCONF NED package representing a netsim network element that implements a configuration subscriber Python application that adds or removes the configured public key, which the netsim (ConfD) network element checks when authenticating public key authentication clients.
A basic Docker Compose file is shown in the example below. It describes the containers running on a machine:
The Production container runs NSO.
The Build container builds the NSO packages.
A third example container runs the netsim device.
Note that the packages use a shared volume in this simple example setup. In a more complex production environment, you may want to consider a dedicated redundant volume for your packages.
Follow the steps below to run the images using Docker Compose:
Start the Build container. This starts the services in the Compose file with the profile build.
Copy the packages from the netsim-sshkey example and compile them in the NSO Build container. The easiest way to do this is with the docker exec command, which gives more control over what to build and in which order. You can also do this with a script to make it easier and less verbose. Normally, you populate the package directory from the host; here, we use the packages from an example.
Start the netsim container. This outputs the generated init.xml and ncs.conf files to the NSO Production container. The --wait flag instructs to wait until the health check returns healthy.
Start the NSO Production container.
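The steps above can be sketched as the following command sequence (the profile names are assumptions based on this example):

```bash
docker compose --profile build up -d
# ...compile the packages in the Build container using docker exec...
docker compose --profile netsim up --wait
docker compose --profile prod up --wait
```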
At this point, NSO is ready to run the service example to configure the netsim device(s). A bash script (demo.sh) that runs the above steps and showcases the netsim-sshkey example is given below:
This example describes how to upgrade NSO when using Docker Compose.
To upgrade to a new minor or major version, for example, from 6.2 to 6.3, follow the steps below:
Change the image version in the Compose file to the new version, here 6.3.
Run the docker compose up --profile build -d command to start the Build container with the new image.
Compile the packages using the Build container.
Run the docker compose up --profile prod --wait command to start the Production container with the new packages that were just compiled.
To upgrade to a new maintenance release version, for example, 6.3.1, follow the steps below:
Change the image version in the Compose file to the new version, here 6.3.1.
Run the docker compose up --profile prod --wait command.
Upgrading in this way does not require a recompile. Docker detects changes and upgrades the image in the container to the new version.
A System Install also creates an init script that starts NSO when the system boots and makes NSO start the service manager.

NSO is licensed using Cisco Smart Licensing. To register your NSO instance, you need to enter a token from your Cisco Smart Software Manager account. For more information on this topic, see Cisco Smart Licensing.
NSO is configured in the following two ways:
Through its configuration file, ncs.conf.
Through whatever data is configured at run-time over any northbound, for example, turning on trace using the CLI.
The configuration file ncs.conf is read at startup and can be reloaded. Below is an example of the most common settings. It is included here as an example and should be self-explanatory. See ncs.conf in Manual Pages for more information. Important configuration settings are:
load-path: where NSO should look for compiled YANG files, such as data models for NEDs or Services.
db-dir: the directory on disk that CDB uses for its storage and any temporary files being used. It is also the directory where CDB searches for initialization files. This should be a local disk and not NFS mounted for performance reasons.
Various log settings.
AAA configuration.
Rollback file directory and history length.
Enabling northbound interfaces like REST and WebUI.
Enabling High-Availability mode.
The ncs.conf file is described in the NSO Manual Pages. There are a large number of configuration items in ncs.conf; most of them have sane default values. The ncs.conf file is an XML file that must adhere to the tailf-ncs-config.yang model. If we start the NSO daemon directly, we must provide the path to the NCS configuration file, as in:
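For example:

```bash
ncs -c /etc/ncs/ncs.conf
```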
However, in a System Install, systemd is typically used to start NSO, and it will pass the appropriate options to the ncs command. Thus, NSO is started with the command:
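With systemd, this is typically:

```bash
systemctl start ncs
```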
It is possible to edit the ncs.conf file, and then tell NSO to reload the edited file without restarting the daemon as in:
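For example:

```bash
ncs --reload
```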
This command also tells NSO to close and reopen all log files, which makes it suitable to use from a system like logrotate.
In this section, some of the important configuration settings will be described and discussed.
NSO allows access through a number of different interfaces, depending on the use case. In the default configuration, clients can access the system locally through an unauthenticated IPC socket (with the ncs* family of commands, port 4569) and plain (non-HTTPS) HTTP web server (port 8080). Additionally, the system enables remote access through SSH-secured NETCONF and CLI (ports 2022 and 2024).
We strongly encourage you to review and customize the exposed interfaces to your needs in the ncs.conf configuration file. In particular, set:
/ncs-config/webui/match-host-name to true.
/ncs-config/webui/server-name to the hostname of the server.
If you decide to allow remote access to the web server, also make sure you use TLS-secured HTTPS instead of HTTP. Not doing so exposes you to security risks.
To additionally secure IPC access, refer to Restricting Access to the IPC port.
For more details on individual interfaces and their use, see Northbound APIs.
Let's look at all the settings that can be manipulated through the NSO northbound interfaces. NSO itself has a number of built-in YANG modules. These YANG modules describe the structure that is stored in CDB. Whenever we change anything under, say /devices/device, it will change the CDB, but it will also change the configuration of NSO. We call this dynamic configuration since it can be changed at will through all northbound APIs.
We summarize the most relevant parts below:
This is the most important YANG module that is used to control and configure NSO. The module can be found at: $NCS_DIR/src/ncs/yang/tailf-ncs.yang in the release. Everything in that module is available through the northbound APIs. The YANG module has descriptions for everything that can be configured.
tailf-common-monitoring2.yang and tailf-ncs-monitoring2.yang are two modules that are relevant to monitoring NSO.
NSO has a built-in SSH server which makes it possible to SSH directly into the NSO daemon. Both the NSO northbound NETCONF agent and the CLI need SSH. To configure the built-in SSH server we need a directory with server SSH keys - it is specified via /ncs-config/aaa/ssh-server-key-dir in ncs.conf. We also need to enable /ncs-config/netconf-north-bound/transport/ssh and /ncs-config/cli/ssh in ncs.conf. In a System Install, ncs.conf is installed in the "config directory", by default /etc/ncs, with the SSH server keys in /etc/ncs/ssh.
There are also configuration parameters that are more related to how NSO behaves when talking to the devices. These reside in devices global-settings.
Users are configured at the path aaa authentication users.
Access control, including group memberships, is managed using the NACM model (RFC 6536).
Adding a user includes the following steps:
Create the user: admin@ncs(config)# aaa authentication users user <user-name>.
Add the user to a NACM group: admin@ncs(config)# nacm groups <group-name> admin user-name <user-name>.
Verify/change access rules.
It is likely that the new user also needs access to work with device configuration. The mapping from NSO users and corresponding device authentication is configured in authgroups. So, the user needs to be added there as well.
If the last step is forgotten, you will see the following error:
This section describes how to monitor NSO. See also NSO Alarms.
Use the command ncs --status to get runtime information on NSO.
Checking the overall status of NSO can be done using the shell:
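For example:

```bash
ncs --status
```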
Or, in the CLI:
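For example:

```cli
admin@ncs# show ncs-state
```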
For details on the output see $NCS_DIR/src/yang/tailf-common-monitoring2.yang.
Below is an overview of the output:
daemon-status
The NSO daemon mode: starting, phase0, phase1, started, or stopping. The phase0 and phase1 modes are schema upgrade modes and will appear if you have upgraded any data models.
version
The NSO version.
smp
Number of threads used by the daemon.
ha
The High-Availability mode of the NCS daemon will show up here: secondary, primary, relay-secondary.
internal/callpoints
The next section is callpoints. Make sure that any validation points, etc., are registered. (The ncs-rfs-service-hook is an obsolete callpoint; ignore it.)
UNKNOWN: code tries to register a callpoint that does not exist in a data model.
NOT-REGISTERED: a loaded data model has a callpoint, but no code has registered it.
Of special interest are, of course, the service points: all your deployed service models should have a corresponding service-point.
internal/cdb
The cdb section is important. Look for any locks. This might be a sign that a developer has taken a CDB lock without releasing it. The subscriber section is also important. A design pattern is to register subscribers to wait for something to change in NSO and then trigger an action. Reactive FASTMAP is designed around that. Validate that all expected subscribers are OK.
loaded-data-models
The next section shows all namespaces and YANG modules that are loaded. If you, for example, are missing a service model, make sure it is loaded.
It is also important to look at the packages that are loaded. This can be done in the CLI with:
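For example:

```cli
admin@ncs# show packages
```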
NSO runs the following processes:
The daemon: ncs.smp: this is the NCS process running in the Erlang VM.
Java VM: com.tailf.ncs.NcsJVMLauncher: service applications implemented in Java run in this VM. There are several options on how to start the Java VM, it can be monitored and started/restarted by NSO or by an external monitor. See the ncs.conf(5) Manual Page and the java-vm settings in the CLI.
Python VMs: NSO packages can be implemented in Python. The individual packages can be configured to run in a VM each or share a Python VM. Use the show python-vm status current command to see current threads and show python-vm status start to see which threads were started at startup.
NSO has extensive logging functionality. Log settings are typically very different for a production system compared to a development system. Furthermore, the logging of the NSO daemon and the NSO Java VM/Python VM is controlled by different mechanisms. During development, we typically want to turn on the developer-log. The sample ncs.conf that comes with the NSO release has log settings suitable for development, while the ncs.conf created by a System Install is suitable for production deployment.
NSO logs to /logs in your running directory (this depends on your settings in ncs.conf). You might want the log files to be stored somewhere else. See man ncs.conf for details on how to configure the various logs. Below is a list of the most useful log files:
ncs.log : NCS daemon log. See Log Messages and Formats. Can be configured to Syslog.
ncserr.log.1, ncserr.log.idx, ncserr.log.siz: if the NSO daemon has a problem, these contain debug information relevant to support. The content can be displayed with ncs --printlog ncserr.log.
audit.log: central audit log covering all northbound interfaces. Can be configured to Syslog.
localhost:8080.access: all HTTP requests to the daemon. This is an access log for the embedded Web server. This file adheres to the Common Log Format, as defined by Apache and others. This log is not enabled by default and is not rotated, i.e. use logrotate(8). Can be configured to Syslog.
devel.log: the developer log is a debug log for troubleshooting user-written code. This log is enabled by default and is not rotated, i.e., use logrotate(8). It should be used in combination with the java-vm or python-vm logs: the user code logs in the VM logs and the corresponding library logs in devel.log. Disable this log in production systems. Can be configured to Syslog.
You can manage this log and set its logging level in ncs.conf.
ncs-java-vm.log, ncs-python-vm.log: logger for code running in Java or Python VM, for example, service applications. Developers writing Java and Python code use this log (in combination with devel.log) for debugging. Both Java and Python log levels can be set from their respective VM settings in, for example, the CLI.
netconf.log, snmp.log: Log for northbound agents. Can be configured to Syslog.
rollbackNNNNN: All NSO commits generate a corresponding rollback file. The maximum number of rollback files and file numbering can be configured in ncs.conf.
xpath.trace: XPATH is used in many places, for example, XML templates. This log file shows the evaluation of all XPATH expressions and can be enabled in the ncs.conf.
To debug XPATH for a template, use the pipe target debug in the CLI instead.
ned-cisco-ios-xr-pe1.trace (for example): if device trace is turned on a trace file will be created per device. The file location is not configured in ncs.conf but is configured when the device trace is turned on, for example in the CLI.
Progress trace log: when a transaction or action is applied, NSO emits specific progress events. These events can be displayed and recorded in a number of different ways, either in the CLI with the pipe target details on a commit, or by writing them to a log file.
Transaction error log: log for collecting information on failed transactions that lead to either a CDB boot error or a runtime transaction failure. The default is false (disabled). More information about the log is available in the Manual Pages (see logs/transaction-error-log).
Upgrade log: log containing information about CDB upgrades. The log is enabled by default and not rotated (i.e., use logrotate). With the NSO example set, the following examples populate the log in the logs/upgrade.log file: examples.ncs/development-guide/ned-upgrade/yang-revision, examples.ncs/development-guide/high-availability/upgrade-basic, examples.ncs/development-guide/high-availability/upgrade-cluster, and examples.ncs/getting-started/developing-with-ncs/14-upgrade-service. More information about the log is available in the Manual Pages (see logs/upgrade-log).
NSO can syslog to a local Syslog. See man ncs.conf for how to configure the Syslog settings. All Syslog messages are documented in Log Messages. The ncs.conf file also lets you decide which of the logs should go into Syslog: ncs.log, devel.log, netconf.log, snmp.log, audit.log, and the WebUI access log. It is also possible to integrate with rsyslog to log the NCS, developer, audit, netconf, SNMP, and WebUI access logs to syslog with the facility set to daemon in ncs.conf. For reference, see the upgrade-l2 example, located in examples.ncs/development-guide/high-availability/hcc.
Below is an example of Syslog configuration:
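A sketch of the relevant ncs.conf snippet (element names and placement should be verified against tailf-ncs-config.yang):

```xml
<logs>
  <syslog-config>
    <facility>daemon</facility>
  </syslog-config>
  <ncs-log>
    <enabled>true</enabled>
    <syslog>
      <enabled>true</enabled>
    </syslog>
  </ncs-log>
</logs>
```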
Log messages are described on the link below:
NSO generates alarms for serious problems that must be remedied. Alarms are available over all the northbound interfaces and exist at the path /alarms. NSO alarms are managed as any other alarms by the general NSO Alarm Manager, see the specific section on the alarm manager in order to understand the general alarm mechanisms.
The NSO alarm manager also presents a northbound SNMP view, alarms can be retrieved as an alarm table, and alarm state changes are reported as SNMP Notifications. See the "NSO Northbound" documentation on how to configure the SNMP Agent.
This is also documented in the example /examples.ncs/getting-started/using-ncs/5-snmp-alarm-northbound.
Alarms are described on the link below:
NSO can issue a unique Trace ID per northbound request, visible in logs and trace headers. This Trace ID can be used to follow the request from service invocation to configuration changes pushed to any device affected by the change. The Trace ID may either be passed in from an external client or generated by NSO.
Trace ID is enabled by default and can be turned off by adding the following snippet to ncs.conf:
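A sketch of the snippet (placement under /ncs-config/logs is an assumption; verify against tailf-ncs-config.yang):

```xml
<logs>
  <trace-id>false</trace-id>
</logs>
```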
Trace ID is propagated downwards in LSA setups and is fully integrated with commit queues.
Trace ID can be passed to NSO over NETCONF, RESTCONF, JSON-RPC, or CLI as a commit parameter.
If Trace ID is not given as a commit parameter, NSO will generate one. The generated Trace ID is an array of 16 random bytes, encoded as a 32-character hexadecimal string.
For RESTCONF requests, this generated Trace ID will be communicated back to the requesting client as an HTTP header called X-Cisco-NSO-Trace-ID. The trace-id query parameter can also be used with RPCs and actions to relay a trace-id from northbound requests.
For NETCONF, the Trace ID will be returned as an attribute called trace-id.
Trace ID will appear in relevant log entries and trace file headers in the form trace-id=....
This section describes a number of disaster scenarios and recommends various actions to take in the different disaster variants.
CDB keeps its data in four files A.cdb, C.cdb, O.cdb and S.cdb. If NSO is stopped, these four files can be copied, and the copy is then a full backup of CDB.
Furthermore, if none of these files exist in the configured CDB directory, CDB will attempt to initialize from all files in the CDB directory with the suffix .xml.
Thus, there exist two different ways to re-initiate CDB from a previously known good state, either from .xml files or from a CDB backup. The .xml files would typically be used to reinstall factory defaults whereas a CDB backup could be used in more complex scenarios.
If the S.cdb file has become inconsistent or has been removed, all commit queue items will be removed, and devices with changes not yet processed will be out of sync. For such an event, appropriate alarms are raised on the devices, and any service instance that has unprocessed device changes is set to the failed state.
When NSO starts and fails to initialize, the following exit codes can occur:
Exit codes 1 and 19 mean that an internal error has occurred. A text message should be in the logs, or if the error occurred at startup before logging had been activated, on standard error (standard output if NSO was started with --foreground --verbose). Generally, the message will only be meaningful to the NSO developers, and an internal error should always be reported to support.
Exit codes 2 and 3 are only used for the NCS control commands (see the section COMMUNICATING WITH NCS in the ncs(1) manual page in Manual Pages) and mean that the command failed due to timeout. Code 2 is used when the initial connect to NSO didn't succeed within 5 seconds (or the TryTime if given), while code 3 means that the NSO daemon did not complete the command within the time given by the --timeout option.
Exit code 10 means that one of the init files in the CDB directory was faulty in some way — further information in the log.
Exit code 11 means that the CDB configuration was changed in an unsupported way. This will only happen when an existing database is detected, which was created with another configuration than the current in ncs.conf.
Exit code 13 means that the schema change caused an upgrade, but for some reason, the upgrade failed. Details are in the log. The way to recover from this situation is either to correct the problem or to re-install the old schema (fxs) files.
Exit code 14 means that the schema change caused an upgrade, but for some reason the upgrade failed, corrupting the database in the process. This is rare and usually caused by a bug. To recover, either start from an empty database with the new schema, or re-install the old schema files and apply a backup.
Exit code 15 means that A.cdb or C.cdb is corrupt in a non-recoverable way. Remove the files and re-start using a backup or init files.
Exit code 16 means that CDB ran into an unrecoverable file error (such as running out of space on the device while performing journal compaction).
Exit code 20 means that NSO failed to bind a socket.
Exit code 21 means that some NSO configuration file is faulty. More information is in the logs.
Exit code 22 indicates an NSO installation-related problem, e.g., that the user does not have read access to some library files, or that some file is missing.
If the NSO daemon starts normally, the exit code is 0.
If the AAA database is broken, NSO will start but with no authorization rules loaded. This means that all write access to the configuration is denied. The NSO CLI can be started with the --noaaa flag (ncs_cli --noaaa), which allows full unauthorized access to the configuration.
NSO attempts to handle all runtime problems without terminating, e.g., by restarting specific components. However, there are some cases where this is not possible, described below. When NSO is started the default way, i.e. as a daemon, the exit codes will of course not be available, but see the --foreground option in the ncs(1) Manual Page.
Out of memory: If NSO is unable to allocate memory, it will exit by calling abort(3). This will generate an exit code as for reception of the SIGABRT signal; e.g., if NSO is started from a shell script, the script will see 134 as the exit code (128 + the signal number).
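The 128-plus-signal convention can be checked with a few lines of Python (the value 134 assumes SIGABRT is signal 6, as on Linux and most Unix-like systems):

```python
import signal

# POSIX shells report a process killed by a signal as 128 + signal number.
def shell_exit_code(sig: signal.Signals) -> int:
    return 128 + int(sig)

print(shell_exit_code(signal.SIGABRT))  # 134 where SIGABRT == 6
```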
Out of file descriptors for accept(2): If NSO fails to accept a TCP connection due to lack of file descriptors, it will log this and then exit with code 25. To avoid this problem, make sure that the process and system-wide file descriptor limits are set high enough, and if needed configure session limits in ncs.conf. The out-of-file descriptors issue may also manifest itself in that applications are no longer able to open new file descriptors.
On many Linux systems, the default limit is 1024. If we, for example, assume four northbound interface listeners (CLI, RESTCONF, SNMP, WebUI/JSON-RPC, or similar) plus a few hundred IPC connections, a reasonable estimate is 5 x 1024 == 5120 file descriptors. One might as well round up to the next power of two, 8192, to be on the safe side.
Several application issues can contribute to consuming extra file descriptors. In the scope of an NSO application that could, for example, be a script application that invokes a CLI command, or a callback daemon application that does not close its connection sockets as it should.
A commonly used command for changing the maximum number of open file descriptors is ulimit -n [limit]. Commands such as netstat and lsof can be useful to debug file descriptor-related issues.
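The same limit that ulimit -n reports can also be inspected and raised from code; a minimal sketch using Python's standard resource module (Unix-like systems only, and the target of 8192 is just the illustrative value from the sizing discussion above):

```python
import resource

# RLIMIT_NOFILE is the per-process cap on open file descriptors,
# the same limit `ulimit -n` shows for a shell.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft limit: {soft}, hard limit: {hard}")

# A process may adjust its soft limit up to the hard limit without privileges.
new_soft = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))
```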
When the system is updated, NSO executes a two-phase commit protocol towards the different participating databases including CDB. If a participant fails in the commit() phase although the participant succeeded in the preparation phase, the configuration is possibly in an inconsistent state.
When NSO considers the configuration to be in an inconsistent state, operations will continue. It is still possible to use NETCONF, the CLI, and all other northbound management agents. The CLI has a different prompt which reflects that the system is considered to be in an inconsistent state and also the Web UI shows this:
The MAAPI API has two interface functions that can be used to set and retrieve the consistency status: maapi_set_running_db_status() and maapi_get_running_db_status(), respectively. This API can thus be used to manually reset the consistency state. The alternative way to reset the state to consistent is to reload the entire configuration.
All parts of the NSO installation can be backed up and restored with standard file system backup procedures.
The most convenient way to do backup and restore is to use the ncs-backup command. In that case, the following procedure is used.
NSO Backup backs up the database (CDB) files, state files, config files, and rollback files from the installation directory. To take a complete backup (for disaster recovery), use:
The backup will be stored in the "run directory", by default /var/opt/ncs, as /var/opt/ncs/backups/[email protected].
For more information on backup, refer to the ncs-backup(1) in Manual Pages.
NSO Restore is performed if you would like to switch back to a previous good state or restore a backup.
It is always advisable to stop NSO before performing a restore.
First, stop NSO if it is not already stopped.
Restore the backup.
Select the backup to be restored from the available list of backups. The configuration and database with run-time state files are restored in /etc/ncs and /var/opt/ncs.
Start NSO.
NSO supports creating rollback files during the commit of a transaction, which allows the introduced changes to be rolled back. Rollbacks do not come without a cost and should be disabled if the functionality is not going to be used. Enabling rollbacks both increases the time it takes to commit a change and requires sufficient storage on disk.
Rollback files contain a set of headers and the data required to restore the changes that were made when the rollback was created. One of the header fields includes a unique rollback ID that can be used to address the rollback file independent of the rollback numbering format.
The use of rollbacks from the supported APIs and the CLI is documented in the documentation for the given API.
As described earlier, NSO is configured through the configuration file, ncs.conf. In that file, we have the following items related to rollbacks:
/ncs-config/rollback/enabled: If set to true, then a rollback file will be created whenever the running configuration is modified.
/ncs-config/rollback/directory: Location where rollback files will be created.
/ncs-config/rollback/history-size: The number of old rollback files to save.
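Put together, a sketch of what the corresponding ncs.conf fragment could look like (the directory path and history size below are illustrative values, not defaults stated by this document):

```xml
<rollback>
  <enabled>true</enabled>
  <directory>${NCS_RUN_DIR}/rollbacks</directory>
  <history-size>50</history-size>
</rollback>
```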
New users can face problems when they start to use NSO. If you face an issue, reach out to our support team regardless of whether your problem is listed here or not.
A useful tool in this regard is ncs-collect-tech-report, a Bash script that ships with the product. It collects all log files, a CDB backup, and several debug dumps into a TAR file. Note that it works only with a System Install.
Some noteworthy issues are covered here.
If you have trouble starting or running NSO, examples, or the clients you write, here are some troubleshooting tips.
A package file is named <package-name>-<package-version>.<suffix>, for example my-package-1.0.tar.gz.
package-name: should use letters and digits and may include underscores (_) or dashes (-), but no additional punctuation; digits may not immediately follow underscores or dashes.
package-version: should use numbers and dots (.).
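One reading of these naming rules can be captured in a small validation helper. The exact grammar is not spelled out here, so treat this regex as an approximation, not the authoritative rule:

```python
import re

# Letters and digits, optionally separated by '_' or '-';
# a digit may not immediately follow a separator.
_NAME_RE = re.compile(r'^[A-Za-z0-9]+(?:[_-][A-Za-z][A-Za-z0-9]*)*$')

def is_valid_package_name(name: str) -> bool:
    return _NAME_RE.fullmatch(name) is not None

print(is_valid_package_name("my-package"))   # True
print(is_valid_package_name("my-1package"))  # False: digit right after a dash
```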
Packages are composed of components. The following types of components are defined: NED, Application, Callback, and Upgrade.
The file layout of a package is:
The package-meta-data.xml defines several important aspects of the package, such as the name, dependencies on other packages, the package's components, etc. This will be thoroughly described later in this section.
When NSO starts, it needs to search for packages to load. The ncs.conf parameter /ncs-config/load-path defines a list of directories. At initial startup, NSO searches these directories for packages and copies the packages to a private directory tree in the directory defined by the /ncs-config/state-dir parameter in ncs.conf, and loads and starts all the packages found. All .fxs (compiled YANG files) and .ccl (compiled CLI spec files) files found in the directory load-dir in a package are loaded. On subsequent startups, NSO will by default only load and start the copied packages - see Loading Packages for different ways to get NSO to search the load path for changed or added packages.
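For reference, a sketch of the two ncs.conf settings involved (the directory values shown are illustrative, not defaults stated by this document):

```xml
<load-path>
  <dir>${NCS_RUN_DIR}/packages</dir>
</load-path>
<state-dir>${NCS_RUN_DIR}/state</state-dir>
```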
A package usually contains Java code, which is loaded by a class loader in the NSO Java VM. A package that contains Java code must compile it so that the compilation results are divided into .jar files: code that is supposed to be shared among multiple packages is compiled into one set of .jar files, and code that is private to the package itself into another set. The shared and the private jar files go into the shared-jar directory and the private-jar directory, respectively. By putting, for example, the code for a specific service in a private jar, NSO can dynamically upgrade the service without affecting any other service.
The optional webui directory contains the Web UI customization files.
The NSO example collection for developers contains a number of small self-contained examples. The collection resides at $NCS_DIR/examples.ncs/getting-started/developing-with-ncs. Each of these examples defines a package. Let's take a look at some of these packages. The example 3-aggregated-stats has a package ./packages/stats. The package-meta-data.xml file for that package looks like this:
The file structure in the package looks like this:
The package-meta-data.xml file defines the name of the package, additional settings, and one component. Its settings are defined by the $NCS_DIR/src/ncs/yang/tailf-ncs-packages.yang YANG model, where the package list name gets renamed to ncs-package. See the tailf-ncs-packages.yang module where all options are described in more detail. To get an overview, use the IETF RFC 8340-based YANG tree diagram.
A sample package configuration is taken from the $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-vrouter example:
Below is a brief list of the configurables in the tailf-ncs-packages.yang YANG model that applies to the metadata file. A more detailed description can be found in the YANG model:
name - the name of the package. All packages in the system must have unique names.
package-version - the version of the package. This is for administrative purposes only; NSO cannot simultaneously handle two versions of the same package.
ncs-min-version - the oldest known NSO version where the package works.
ncs-max-version - the latest known NSO version where the package works.
python-package - Python-specific package data.
vm-name - the Python VM name for the package. The default is the package name. Packages with the same vm-name run in the same Python VM. Applicable only when callpoint-model = threading.
directory - the path to the directory of the package.
templates - the templates defined by the package.
template-loading-mode - control if the templates are interpreted in strict or relaxed mode.
supported-ned-id - the list of ned-ids supported by this package. An example of the expected format taken from the $NCS_DIR/examples.ncs/development-guide/nano-services/netsim-vrouter example:
supported-ned-id-match - the list of regular expressions for ned-ids supported by this package. Ned-ids in the system that matches at least one of the regular expressions in this list are added to the supported-ned-id list. The following example demonstrates how all minor versions with a major number of 1 of the router-nc NED can be added to a package's list of supported ned-ids:
required-package - a list of names of other packages that are required for this package to work.
component - Each package defines zero or more components.
Each component in a package has a name. The names of all the components must be unique within the package. The YANG model for packages contains:
Lots of additional information can be found in the YANG module itself. The mandatory choice that defines a component must be one of ned, callback, application, or upgrade.
A Network Element Driver component is used southbound of NSO to communicate with managed devices (described in Network Element Drivers (NEDs)). The easiest NED to understand is the NETCONF NED, which is built into NSO.
There are four different types of NEDs:
NETCONF: used for NETCONF-enabled devices such as Juniper routers, ConfD-powered devices, or any device that speaks proper NETCONF and also has YANG models. Plenty of packages in the NSO example collection have NETCONF NED components, for example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/0-router-network/packages/router.
SNMP: Used for SNMP devices.
The example $NCS_DIR/examples.ncs/snmp-ned/basic has a package that has an SNMP NED component.
CLI: used for CLI devices. The package $NCS_DIR/packages/neds/cisco-ios is an example of a package that has a CLI NED component.
Generic: used for generic NED devices. The example $NCS_DIR/examples.ncs/generic-ned/xmlrpc-device has a package called xml-rpc which defines a NED component of type generic.
A CLI NED and a generic NED component must also come with additional user-written Java code, whereas a NETCONF NED and an SNMP NED have no Java code.
This defines a component with one or many Java classes that implement callbacks using the Java callback annotations.
If we look at the components in the stats package above, we have:
The Stats class here implements a read-only data provider. See DP API.
The callback type of component is used for a wide range of callback-type Java applications, where one of the most important are the Service Callbacks. The following list of Java callback annotations applies to callback components.
ServiceCallback to implement service-to-device mappings. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service. See Developing NSO Services for a thorough introduction to services.
ActionCallback to implement user-defined tailf:actions or YANG RPCs. See the example: $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/2-actions.
DataCallback to implement the data getters and setters for a data provider. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/3-aggregated-stats.
TransCallback to implement the transaction portions of a data provider callback. See the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/3-aggregated-stats.
DBCallback to implement an external database. See the example: $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/6-extern-db.
SnmpInformResponseCallback to implement an SNMP listener. See the example $NCS_DIR/examples.ncs/snmp-notification-receiver.
TransValidateCallback, ValidateCallback to implement a user-defined validation hook that gets invoked on every commit.
AuthCallback to implement a user hook that gets called whenever a user is authenticated by the system.
AuthorizationCallback to implement an authorization hook that allows/disallows users to do operations and/or access data. Note that this callback should normally be avoided since, by its nature, invoking a callback for every operation and/or data element impairs performance.
A package that has a callback component usually has some YANG code and then also some Java code that relates to that YANG code. By convention, the YANG and the Java code reside in a src directory in the component. When the source of the package is built, any resulting fxs files (compiled YANG files) must reside in the load-dir of the package, and any resulting Java compilation results must reside in the shared-jar and private-jar directories. Study the 3-aggregated-stats example to see how this is achieved.
Used to cover Java applications that do not fit into the callback type. Typically this is functionality that should be running in separate threads and work autonomously.
The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/1-cdb contains three components that are of type application. These components must also contain a java-class-name element. For application components, that Java class must implement the ApplicationComponent Java interface.
Used to migrate data for packages where the YANG model has changed and the automatic CDB upgrade is not sufficient. The upgrade component consists of a Java class with a main method that is expected to run once only.
The example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service illustrates user CDB upgrades using upgrade components.
NSO ships with a tool ncs-make-package that can be used to create packages. Package Development discusses in depth how to develop a package.
This use case applies if we have a set of YANG files that define a managed device. If we wish to develop an EMS solution for an existing device and that device has YANG files and also speaks NETCONF, we need to create a package for that device to be able to manage it. Assuming all YANG files for the device are stored in ./acme-router-yang-files, we can create a package for the router as:
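The command itself is not shown above; assuming the standard ncs-make-package flags, it could look like this (check ncs-make-package --help for your NSO version):

```shell
ncs-make-package --netconf-ned ./acme-router-yang-files acme
```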
The above command will create a package called acme in ./acme. The acme package can be used for two things: managing real acme routers, and as input to the ncs-netsim tool to simulate a network of acme routers.
In the first case, managing real acme routers, all we need to do is to put the newly generated package in the load-path of NSO, start NSO with package reload (see Loading Packages), and then add one or more acme routers as managed devices to NSO. The ncs-setup tool can be used to do this:
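A sketch of the ncs-setup invocation (flag names may vary slightly between NSO versions):

```shell
ncs-setup --package ./acme --dest ncs-project
```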
The above command generates a directory ./ncs-project which is suitable for running NSO. Assume we have an existing router at the IP address 10.2.3.4 and that we can log into that router over the NETCONF interface using the username bob, and password secret. The following session shows how to set up NSO to manage this router:
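The session itself is not included above; a sketch of what it could look like in the NSO CLI follows. The device name acme0 and authgroup name acme-bob are illustrative, and newer NSO versions also require a ned-id under device-type netconf:

```
$ cd ncs-project
$ ncs
$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices authgroups group acme-bob umap admin \
    remote-name bob remote-password secret
admin@ncs(config)# devices device acme0 address 10.2.3.4 \
    authgroup acme-bob device-type netconf
admin@ncs(config)# devices device acme0 state admin-state unlocked
admin@ncs(config)# commit
admin@ncs(config)# devices device acme0 ssh fetch-host-keys
admin@ncs(config)# devices device acme0 sync-from
```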
We can also use the newly generated acme package to simulate a network of acme routers. During development, this is especially useful. The ncs-netsim tool can create a simulated network of acme routers as:
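A sketch of the netsim invocation, creating and starting three simulated routers with the name prefix acme (both values are illustrative):

```shell
ncs-netsim --dir ./netsim create-network ./acme 3 acme
ncs-netsim --dir ./netsim start
```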
Finally, ncs-setup can be used to initialize an environment where NSO is used to manage all devices in an ncs-netsim network:
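Assuming the netsim network was created in ./netsim, this step could look like:

```shell
ncs-setup --netsim-dir ./netsim --dest ncs-project
```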
Similarly, if we have a device that has a set of MIB files, we can use ncs-make-package to generate a package for that device. An SNMP NED package can, similarly to a NETCONF NED package, be used to both manage real devices and also be fed to ncs-netsim to generate a simulated network of SNMP devices.
Assuming we have a set of MIB files in ./mibs, we can generate a package for a device with those mibs as:
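Again, the command is not shown above; with the standard ncs-make-package flags it could be:

```shell
ncs-make-package --snmp-ned ./mibs acme
```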
For CLI NEDs and Generic NEDs, we cannot (yet) generate the package. Probably the best option for such packages is to start with one of the examples. A good starting point for a CLI NED is $NCS_DIR/packages/neds/cisco-ios, and a good starting point for a Generic NED is the example $NCS_DIR/examples.ncs/generic-ned/xmlrpc-device.
ncs-make-package can also be used to generate empty skeleton packages for a data provider and a simple service, using the --service-skeleton and --data-provider-skeleton flags.
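For example, invocations could look like this (the package names myservice and mydp are illustrative; see ncs-make-package --help for the accepted skeleton types):

```shell
ncs-make-package --service-skeleton python myservice
ncs-make-package --data-provider-skeleton mydp
```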
Alternatively, one of the examples can be modified to provide a good starting point. For example, $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/4-rfs-service.
Learn about NSO SSH key management.
The SSH protocol uses public key technology for two distinct purposes:
Server Authentication: This use is a mandatory part of the protocol. It allows an SSH client to authenticate the server, i.e. verify that it is really talking to the intended server and not some man-in-the-middle intruder. This requires that the client has prior knowledge of the server's public keys, and the server proves its possession of one of the corresponding private keys by using it to sign some data. These keys are normally called 'host keys', and the authentication procedure is typically referred to as 'host key verification' or 'host key checking'.
Client Authentication: This use is one of several possible client authentication methods, i.e. it is an alternative to the commonly used password authentication. The server is configured with one or more public keys which are authorized for authentication of a user. The client proves possession of one of the corresponding private keys by using it to sign some data - i.e. the exact reverse of the server authentication provided by host keys. The method is called 'public key authentication' in SSH terminology.
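As a concrete illustration of client public key authentication, a key pair can be generated with standard OpenSSH tooling. The file names and key comment below are illustrative, not NSO defaults:

```shell
# Create a fresh directory and generate an Ed25519 key pair with no passphrase.
tmpdir=$(mktemp -d)
ssh-keygen -t ed25519 -N '' -C 'nso-client-key' -f "$tmpdir/id_ed25519"

# The .pub file holds the public key the server must be configured to accept;
# the private key stays with the client.
cat "$tmpdir/id_ed25519.pub"
```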
sh nso-6.3.container-image-prod.linux.x86_64.signed.bin

docker exec -it cisco-nso ncs-backup
INFO Backup /nso/run/backups/ncs-6.3@2024-11-03T11:31:07.backup.gz created successfully

docker stop cisco-nso
docker rm cisco-nso

docker run -it --rm -v NSO-vol:/nso -v NSO-log-vol:/log \
    --entrypoint ncs-backup cisco-nso-prod:6.3 \
    --restore /nso/run/backups/ncs-6.3@2024-11-03T11:31:07.backup.gz
Restore /etc/ncs from the backup (y/n)? y
Restore /nso/run from the backup (y/n)? y
INFO Restore completed successfully

docker run -itd --name cisco-nso \
    -p 8888:8888 \
    -e ADMIN_USERNAME=admin \
    -e ADMIN_PASSWORD=admin \
    cisco-nso-prod
docker run -itd --name cisco-nso \
    -v NSO-vol:/nso \
    -p 8888:8888 \
    -e ADMIN_USERNAME=admin \
    -e ADMIN_PASSWORD=admin \
    cisco-nso-prod

docker run -itd --name cisco-nso \
    -v NSO-vol:/nso \
    -v NSO-log-vol:/log \
    -p 8888:8888 \
    -e ADMIN_USERNAME=admin \
    -e ADMIN_PASSWORD=admin \
    cisco-nso-prod

docker run -itd --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.2
docker exec -it cisco-nso ncs-backup

docker compose --profile build up -d

docker exec -it build-nso-pkgs sh -c 'cp -r ${NCS_DIR}/examples.ncs/development-guide \
    /nano-services/netsim-sshkey/packages ${NCS_RUN_DIR}'

docker exec -it build-nso-pkgs sh -c 'for f in ${NCS_RUN_DIR}/packages/*/src; \
    do make -C "$f" all || exit 1; done'

docker run -itd --name cisco-nso -e ADMIN_PASSWORD=admin cisco-nso-prod:6.3

docker run -d --name cisco-nso -v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.3

openssl req -new -newkey rsa:4096 -x509 -sha256 -days 30 -nodes \
    -out /nso/ssl/cert/host.cert -keyout /nso/ssl/cert/host.key \
    -subj "/C=SE/ST=NA/L=/O=NSO/OU=WebUI/CN=Mr. Self-Signed"

docker run --name nso -itd cisco-nso-prod:6.3 --with-package-reload \
    --ignore-initial-validation

docker run --name nso \
    -e EXTRA_ARGS='--with-package-reload --ignore-initial-validation' \
    -itd cisco-nso-prod:6.3

services:
  nso:
    image: cisco-nso-prod:6.3
    container_name: nso
    command:
      - --with-package-reload
      - --ignore-initial-validation

services:
  nso:
    image: cisco-nso-prod:6.3
    container_name: nso
    environment:
      - EXTRA_ARGS=--with-package-reload --ignore-initial-validation

docker load -i nso-6.3.container-image-prod.linux.x86_64.tar.gz

docker run -itd --name cisco-nso \
    -v NSO-vol:/nso \
    -v NSO-log-vol:/log \
    --net=host \
    -e ADMIN_USERNAME=admin \
    -e ADMIN_PASSWORD=admin \
    cisco-nso-prod:6.3

docker exec -it cisco-nso bash
# ncs_cli -u admin
admin@ncs>

version: '1.0'
volumes:
NSO-1-rvol:
networks:
NSO-1-net:
services:
NSO-1:
image: cisco-nso-prod:6.3
container_name: nso1
profiles:
- prod
environment:
- EXTRA_ARGS=--with-package-reload
- ADMIN_USERNAME=admin
- ADMIN_PASSWORD=admin
networks:
- NSO-1-net
ports:
- "2024:2024"
- "8888:8888"
volumes:
- type: bind
source: /path/to/packages/NSO-1
target: /nso/run/packages
- type: bind
source: /path/to/log/NSO-1
target: /log
- type: volume
source: NSO-1-rvol
target: /nso
healthcheck:
test: ncs_cmd -c "wait-start 2"
interval: 5s
retries: 5
start_period: 10s
timeout: 10s
BUILD-NSO-PKGS:
image: cisco-nso-build:6.3
container_name: build-nso-pkgs
network_mode: none
profiles:
- build
volumes:
- type: bind
source: /path/to/packages/NSO-1
target: /nso/run/packages
EXAMPLE:
image: cisco-nso-prod:6.3
container_name: ex-netsim
profiles:
- example
networks:
- NSO-1-net
healthcheck:
test: test -f /nso-run-prod/etc/ncs.conf && ncs-netsim --dir /netsim is-alive ex0
interval: 5s
retries: 5
start_period: 10s
timeout: 10s
entrypoint: bash
command: -c 'rm -rf /netsim
&& mkdir /netsim
&& ncs-netsim --dir /netsim create-network /network-element 1 ex
&& PYTHONPATH=/opt/ncs/current/src/ncs/pyapi ncs-netsim --dir
/netsim start
&& mkdir -p /nso-run-prod/run/cdb
&& echo "<devices xmlns=\"http://tail-f.com/ns/ncs\">
<authgroups><group><name>default</name>
<umap><local-user>admin</local-user>
<remote-name>admin</remote-name><remote-password>
admin</remote-password></umap></group>
</authgroups></devices>"
> /nso-run-prod/run/cdb/init1.xml
&& ncs-netsim --dir /netsim ncs-xml-init >
/nso-run-prod/run/cdb/init2.xml
&& sed -i.orig -e "s|127.0.0.1|ex-netsim|"
/nso-run-prod/run/cdb/init2.xml
&& mkdir -p /nso-run-prod/etc
&& sed -i.orig -e "s|</cli>|<style>c</style>
</cli>|" -e "/<ssh>/{n;s|<enabled>false
</enabled>|
<enabled>true</enabled>|}" defaults/ncs.conf
&& sed -i.bak -e "/<local-authentication>/{n;s|
<enabled>false</enabled>|<enabled>true
</enabled>|}" defaults/ncs.conf
&& sed "/<ssl>/{n;s|<enabled>false</enabled>|
<enabled>true</enabled>|}" defaults/ncs.conf
> /nso-run-prod/etc/ncs.conf
&& mv defaults/ncs.conf.orig defaults/ncs.conf
&& tail -f /dev/null'
volumes:
- type: bind
source: /path/to/packages/NSO-1/ne
target: /network-element
- type: volume
source: NSO-1-rvol
      target: /nso-run-prod

systemctl stop ncs
ncs-backup --restore
systemctl start ncs

root@linux:/# ncs-collect-tech-report --full
tar: Skipping to next header
gzip: stdin: invalid compressed data--format violated

Internal error: Open failed: /lib/tls/libc.so.6: version
`GLIBC_2.3.4' not found (required by
.../lib/ncs/priv/util/syst_drv.so)

$ source /etc/profile.d/ncs.sh

$ ncs --status

$ ncs --check-callbacks

$ ncs --debug-dump mydump1

# strace -f -o mylog1.strace -s 1024 ncs ...

# ktrace -ad -f mylog1.ktrace ncs ...
# kdump -f mylog1.ktrace > mylog1.kdump

# truss -f -o mylog1.truss ncs ...

# ncs -c /etc/ncs/ncs.conf

# systemctl start ncs

# ncs --reload

ncs@ncs(config)#
Possible completions:
aaa AAA management, users and groups
cluster Cluster configuration
devices Device communication settings
java-vm Control of the NCS Java VM
nacm Access control
packages Installed packages
python-vm Control of the NCS Python VM
services Global settings for services, (the services themselves might be augmented somewhere else)
session Global default CLI session parameters
snmp Top-level container for SNMP related configuration and status objects.
snmp-notification-receiver Configure reception of SNMP notifications
software Software management
  ssh             Global SSH connection configuration

admin@ncs(config)# devices global-settings
Possible completions:
backlog-auto-run Auto-run the backlog at successful connection
backlog-enabled Backlog requests to non-responding devices
commit-queue
commit-retries Retry commits on transient errors
connect-timeout Timeout in seconds for new connections
ned-settings Control which device capabilities NCS uses
out-of-sync-commit-behaviour Specifies the behaviour of a commit operation involving a device that is out of sync with NCS.
read-timeout Timeout in seconds used when reading data
report-multiple-errors By default, when the NCS device manager commits data southbound and when there are errors, we only
report the first error to the operator, this flag makes NCS report all errors reported by managed
devices
trace Trace the southbound communication to devices
trace-dir The directory where trace files are stored
  write-timeout                  Timeout in seconds used when writing data

admin@ncs(config)# show full-configuration aaa authentication users user
aaa authentication users user admin
uid 1000
gid 1000
password $1$GNwimSPV$E82za8AaDxukAi8Ya8eSR.
ssh_keydir /var/ncs/homes/admin/.ssh
homedir /var/ncs/homes/admin
!
aaa authentication users user oper
uid 1000
gid 1000
password $1$yOstEhXy$nYKOQgslCPyv9metoQALA.
ssh_keydir /var/ncs/homes/oper/.ssh
homedir /var/ncs/homes/oper
!
...

admin@ncs(config)# show full-configuration nacm
nacm write-default permit
nacm groups group admin
user-name [ admin private ]
!
nacm groups group oper
user-name [ oper public ]
!
nacm rule-list admin
group [ admin ]
rule any-access
action permit
!
!
nacm rule-list any-group
group [ * ]
rule tailf-aaa-authentication
module-name tailf-aaa
path /aaa/authentication/users/user[name='$USER']
access-operations read,update
action permit
!

admin@ncs(config)# show full-configuration devices authgroups
devices authgroups group default
umap admin
remote-name admin
remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
umap oper
remote-name oper
remote-password $4$zp4zerM68FRwhYYI0d4IDw==
!
!

jim@ncs(config)# devices device c0 config ios:snmp-server community fee
jim@ncs(config-config)# commit
Aborted: Resource authgroup for jim doesn't exist

$ ncs --status

ncs# show ncs-state

admin> show packages
packages package cisco-asa
package-version 3.4.0
description "NED package for Cisco ASA"
ncs-min-version [ 3.2.2 3.3 3.4 4.0 ]
directory ./state/packages-in-use/1/cisco-asa
component upgrade-ned-id
upgrade java-class-name com.tailf.packages.ned.asa.UpgradeNedId
component ASADp
callback java-class-name [ com.tailf.packages.ned.asa.ASADp ]
component cisco-asa
ned cli ned-id cisco-asa
ned cli java-class-name com.tailf.packages.ned.asa.ASANedCli
ned device vendor Cisco

<syslog-config>
<facility>daemon</facility>
</syslog-config>
<ncs-log>
<enabled>true</enabled>
<file>
<name>./logs/ncs.log</name>
<enabled>true</enabled>
</file>
<syslog>
<enabled>true</enabled>
</syslog>
</ncs-log>

<trace-id>false</trace-id>

-- WARNING ------------------------------------------------------
Running db may be inconsistent. Enter private configuration mode and
install a rollback configuration or load a saved configuration.
------------------------------------------------------------------

# ncs-backup

<package-name>/package-meta-data.xml
load-dir/
shared-jar/
private-jar/
webui/
templates/
src/
doc/
netsim/

<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
<name>stats</name>
<package-version>1.0</package-version>
<description>Aggregating statistics from the network</description>
<ncs-min-version>3.0</ncs-min-version>
<required-package>
<name>router-nc-1.0</name>
</required-package>
<component>
<name>stats</name>
<callback>
<java-class-name>com.example.stats.Stats</java-class-name>
</callback>
</component>
</ncs-package>

|----package-meta-data.xml
|----private-jar
|----shared-jar
|----src
| |----Makefile
| |----yang
| | |----aggregate.yang
| |----java
| |----build.xml
| |----src
| |----com
| |----example
| |----stats
| |----namespaces
| |----Stats.java
|----doc
|----load-dir

$ yanger -f tree tailf-ncs-packages.yang
submodule: tailf-ncs-packages (belongs-to tailf-ncs)
+--ro packages
+--ro package* [name] <-- renamed to "ncs-package" in package-meta-data.xml
+--ro name string
+--ro package-version version
+--ro description? string
+--ro ncs-min-version* version
+--ro ncs-max-version* version
+--ro python-package!
| +--ro vm-name? string
| +--ro callpoint-model? enumeration
+--ro directory? string
+--ro templates* string
+--ro template-loading-mode? enumeration
+--ro supported-ned-id* union
+--ro supported-ned-id-match* string
+--ro required-package* [name]
| +--ro name string
| +--ro min-version? version
| +--ro max-version? version
+--ro component* [name]
+--ro name string
+--ro description? string
+--ro entitlement-tag? string
+--ro (type)
+--:(ned)
| +--ro ned
| +--ro (ned-type)
| | +--:(netconf)
| | | +--ro netconf
| | | +--ro ned-id? identityref
| | +--:(snmp)
| | | +--ro snmp
| | | +--ro ned-id? identityref
| | +--:(cli)
| | | +--ro cli
| | | +--ro ned-id identityref
| | | +--ro java-class-name string
| | +--:(generic)
| | +--ro generic
| | +--ro ned-id identityref
| | +--ro java-class-name string
| +--ro device
| | +--ro vendor string
| | +--ro product-family? string
| +--ro option* [name]
| +--ro name string
| +--ro value? string
+--:(upgrade)
| +--ro upgrade
| +--ro (type)
| +--:(java)
| | +--ro java-class-name? string
| +--:(python)
| +--ro python-class-name? string
+--:(callback)
| +--ro callback
| +--ro java-class-name* string
+--:(application)
+--ro application
+--ro (type)
| +--:(java)
| | +--ro java-class-name string
| +--:(python)
| +--ro python-class-name string
+--ro start-phase? enumeration

$ ncs_load -o -Fp -p /packages
<config xmlns="http://tail-f.com/ns/config/1.0">
<packages xmlns="http://tail-f.com/ns/ncs">
<package>
<name>router-nc-1.1</name>
<package-version>1.1</package-version>
<description>Generated netconf package</description>
<ncs-min-version>5.7</ncs-min-version>
<directory>./state/packages-in-use/1/router</directory>
<component>
<name>router</name>
<ned>
<netconf>
<ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
router-nc-1.1:router-nc-1.1</ned-id>
</netconf>
<device>
<vendor>Acme</vendor>
</device>
</ned>
</component>
<oper-status>
<up/>
</oper-status>
</package>
<package>
<name>vrouter</name>
<package-version>1.0</package-version>
<description>Nano services netsim virtual router example</description>
<ncs-min-version>5.7</ncs-min-version>
<python-package>
<vm-name>vrouter</vm-name>
<callpoint-model>threading</callpoint-model>
</python-package>
<directory>./state/packages-in-use/1/vrouter</directory>
<templates>vrouter-configured</templates>
<template-loading-mode>strict</template-loading-mode>
<supported-ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
router-nc-1.1:router-nc-1.1</supported-ned-id>
<required-package>
<name>router-nc-1.1</name>
<min-version>1.1</min-version>
</required-package>
<component>
<name>nano-app</name>
<description>Nano service callback and post-actions example</description>
<application>
<python-class-name>vrouter.nano_app.NanoApp</python-class-name>
<start-phase>phase2</start-phase>
</application>
</component>
<oper-status>
<up/>
</oper-status>
</package>
</packages>
</config>

....
list component {
key name;
leaf name {
type string;
}
...
choice type {
mandatory true;
case ned {
...
}
case callback {
...
}
case application {
...
}
case upgrade {
...
}
....
}
....

<component>
<name>stats</name>
<callback>
<java-class-name>
com.example.stats.Stats
</java-class-name>
</callback>
</component>

$ ncs-make-package --netconf-ned ./acme-router-yang-files acme
$ cd acme/src; make

$ ncs-setup --ned-package ./acme --dest ./ncs-project
$ cd ./ncs-project
$ ncs
$ ncs_cli -u admin
> configure
> set devices authgroups group southbound-bob umap admin \
remote-name bob remote-password secret
> set devices device acme1 authgroup southbound-bob address 10.2.3.4
> set devices device acme1 device-type netconf
> commit

$ ncs-netsim create-network ./acme 5 a --dir ./netsim
$ ncs-netsim start
DEVICE a0 OK STARTED
DEVICE a1 OK STARTED
DEVICE a2 OK STARTED
DEVICE a3 OK STARTED
DEVICE a4 OK STARTED
$

$ ncs-setup --netsim-dir ./netsim --dest ncs-project

$ ncs-make-package --snmp-ned ./mibs acme
$ cd acme/src; make

callpoint-model - A Python package runs Services, Nano Services, and Actions in the same OS process. If callpoint-model is set to multiprocessing, each gets a separate worker process. Running Services, Nano Services, and Actions in parallel can, depending on the application, improve performance at the cost of complexity. See The Application Component for details.
cli, netconf, rest, snmp, webui
All northbound agents, such as CLI, REST, NETCONF, and SNMP, are listed with their IP address and port. If you want to connect over REST, for example, you can find the port number here.
patches
Lists any installed patches.
upgrade-mode
If the node is in upgrade mode, it is not possible to get any information from the system over NETCONF. Existing CLI sessions can get system information.
These two usages are fundamentally independent, i.e., host key verification is done regardless of whether client authentication uses public key, password, or some other method. However, host key verification is of particular importance when client authentication is done via password, since failure to detect a man-in-the-middle attack in this case will divulge the cleartext password to the attacker.
NSO can act as an SSH server for northbound connections to the CLI or the NETCONF agent, and for connections from other nodes in an NSO cluster - cluster connections use NETCONF, and the server side setup used is the same as for northbound connections to the NETCONF agent. It is possible to use either the NSO built-in SSH server or an external server such as OpenSSH, for all of these cases. When using an external SSH server, host keys for server authentication and authorized keys for client/user authentication need to be set up per the documentation for that server, and there is no NSO-specific key management in this case.
When the NSO built-in SSH server is used, the setup is very similar to the one OpenSSH uses:
The private host key(s) must be placed in the directory specified by /ncs-config/aaa/ssh-server-key-dir in ncs.conf, and named either ssh_host_dsa_key (for a DSA key) or ssh_host_rsa_key (for an RSA key). The key(s) must be in PEM format (e.g., as generated by the OpenSSH ssh-keygen command) and must not be encrypted - protection can be achieved by file system permissions (not enforced by NSO). The corresponding public key(s) are typically stored in the same directory with a .pub extension to the file name, but they are not used by NSO. The NSO installation creates a DSA private/public key pair in the directory specified by the default ncs.conf.
The public keys that are authorized for authentication of a given user must be placed in the user's SSH directory. Refer to Public Key Login for details on how NSO searches for the keys to use.
NSO can act as an SSH client for connections to managed devices that use SSH (this is always the case for devices accessed via NETCONF, typically also for devices accessed via CLI), and for connections to other nodes in an NSO cluster. In all cases, a built-in SSH client is used. The $NCS_DIR/examples.ncs/getting-started/using-ncs/8-ssh-keys example in the NSO example collection has a detailed walk-through of the NSO functionality that is described in this section.
The level of host key verification can be set globally via /ssh/host-key-verification. The possible values are:
reject-unknown: The host key provided by the device or cluster node must be known by NSO for the connection to succeed.
reject-mismatch: The host key provided by the device or cluster node may be unknown, but it must not be different from the "known" key for the same key algorithm, for the connection to succeed.
none: No host key verification is done - the connection will never fail due to the host key provided by the device or cluster node.
The default is reject-unknown, and it is not recommended to use a different value, although it can be useful or needed in certain circumstances. E.g., none may be useful in a development scenario, and temporary use of reject-mismatch may be motivated until host keys have been configured for a set of existing managed devices.
The public host keys for a device that is accessed via SSH are stored in the /devices/device/ssh/host-key list. There can be several keys in this list, one each for the ssh-ed25519 (ED25519 key), ssh-dss (DSA key), and ssh-rsa (RSA key) key algorithms. In case a device has entries in its live-status-protocol list that use SSH, the host keys for those can be stored in the /devices/device/live-status-protocol/ssh/host-key list, in the same way as the device keys. However, if /devices/device/live-status-protocol/ssh does not exist, the keys from /devices/device/ssh/host-key are used for that protocol. The keys can be configured, e.g., via input directly in the CLI, but in most cases, it is preferable to use the actions described below to retrieve keys from the devices. These actions will also retrieve any live-status-protocol keys for a device.
The level of host key verification can also be set per device, via /devices/device/ssh/host-key-verification. The default is to use the global value (or default) for /ssh/host-key-verification, but any explicitly set value will override the global value. The possible values are the same as for /ssh/host-key-verification.
There are several actions that can be used to retrieve the host keys from a device and store them in the NSO configuration:
/devices/fetch-ssh-host-keys: Retrieve the host keys for all devices. Successfully retrieved keys are committed to the configuration.
/devices/device-group/fetch-ssh-host-keys: Retrieve the host keys for all devices in a device group. Successfully retrieved keys are committed to the configuration.
/devices/device/ssh/fetch-host-keys: Retrieve the host keys for one or more devices. In the CLI, range expressions can be used for the device name, e.g., '*' will retrieve keys for all devices. The action will commit the retrieved keys if possible, i.e., if the device entry is already committed. Otherwise (i.e., if the action is invoked from configure mode when the device entry has been created but not committed), the keys are written to the current transaction but not committed.
The fingerprints of the retrieved keys will be reported as part of the result from these actions, but it is also possible to ask for the fingerprints of already retrieved keys by invoking the /devices/device/ssh/host-key/show-fingerprint action (/devices/device/live-status-protocol/ssh/host-key/show-fingerprint for live-status protocols that use SSH).
This is very similar to the case of a connection to a managed device, it differs mainly in locations - and in the fact that SSH is always used for connection to a cluster node. The public host keys for a cluster node are stored in the /cluster/remote-node/ssh/host-key list, in the same way as the host keys for a device. The keys can be configured e.g. via input directly in the CLI, but in most cases, it will be preferable to use the action described below to retrieve keys from the cluster node.
The level of host key verification can also be set per cluster node, via /cluster/remote-node/ssh/host-key-verification. The default is to use the global value (or default) for /ssh/host-key-verification, but any explicitly set value will override the global value. The possible values are the same as for /ssh/host-key-verification.
The /cluster/remote-node/ssh/fetch-host-keys action can be used to retrieve the host keys for one or more cluster nodes. In the CLI, range expressions can be used for the node name, e.g. using '*' will retrieve keys for all nodes, etc. The action will commit the retrieved keys if possible, but if it is invoked from "configure mode" when the node entry has been created but not committed, the keys will be written to the current transaction, but not committed.
The fingerprints of the retrieved keys will be reported as part of the result from this action, but it is also possible to ask for the fingerprints of already retrieved keys by invoking the /cluster/remote-node/ssh/host-key/show-fingerprint action.
The private key used for public key authentication can be taken either from the SSH directory for the local user or from a list of private keys in the NSO configuration. The user's SSH directory is determined according to the same logic as for the server-side public keys that are authorized for authentication of a given user, see Public Key Login, but of course, different files in this directory are used, see below. Alternatively, the key can be configured in the /ssh/private-key list, using an arbitrary name for the list key. In both cases, the key must be in PEM format (e.g. as generated by the OpenSSH ssh-keygen command), and it may be encrypted or not. Encrypted keys configured in /ssh/private-key must have the passphrase for the key configured via /ssh/private-key/passphrase.
The specific private key to use is configured via the authgroup indirection and the umap selection mechanisms as for password authentication, just a different alternative. Setting /devices/authgroups/group/umap/public-key (or default-map instead of umap for users that are not in umap) without any additional parameters will select the default of using a file called id_dsa in the local user's SSH directory, which must have an unencrypted key. A different file name can be set via /devices/authgroups/group/umap/public-key/private-key/file/name. For an encrypted key, the passphrase can be set via /devices/authgroups/group/umap/public-key/private-key/file/passphrase, or /devices/authgroups/group/umap/public-key/private-key/file/use-password can be set to indicate that the password used (if any) by the local user when authenticating to NSO should also be used as a passphrase for the key. To instead select a private key from the /ssh/private-key list, the name of the key is set via /devices/authgroups/group/umap/public-key/private-key/name.
This is again very similar to the case of a connection to a managed device, since the same authgroup/umap scheme is used. Setting /cluster/authgroup/umap/public-key (or default-map instead of umap for users that are not in umap) without any additional parameters will select the default of using a file called id_dsa in the local user's SSH directory, which must have an unencrypted key. A different file name can be set via /cluster/authgroup/umap/public-key/private-key/file/name. For an encrypted key, the passphrase can be set via /cluster/authgroup/umap/public-key/private-key/file/passphrase, or /cluster/authgroup/umap/public-key/private-key/file/use-password can be set to indicate that the password used (if any) by the local user when authenticating to NSO should also be used as a passphrase for the key. To instead select a private key from the /ssh/private-key list, the name of the key is set via /cluster/authgroup/umap/public-key/private-key/name.
<supported-ned-id xmlns:router-nc-1.1="http://tail-f.com/ns/ned-id/router-nc-1.1">
router-nc-1.1:router-nc-1.1</supported-ned-id>

<supported-ned-id-match>router-nc-1.\d+:router-nc-1.\d+</supported-ned-id-match>

<developer-log>
<enabled>true</enabled>
<file>
<name>${NCS_LOG_DIR}/devel.log</name>
<enabled>false</enabled>
</file>
<syslog>
<enabled>true</enabled>
</syslog>
</developer-log>
<developer-log-level>trace</developer-log-level>

admin@ncs(config)# python-vm logging level level-info
admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-info

<xpathTraceLog>
<enabled>true</enabled>
<filename>${NCS_LOG_DIR}/xpath.trace</filename>
</xpathTraceLog>

admin@ncs(config)# commit | debug template

admin@ncs(config)# devices device r0 trace pretty

$ python
...
>>> import paramiko
>>>

servicepoints:
id=l3vpn-servicepoint daemonId=10 daemonName=ncs-dp-6-l3vpn:L3VPN
id=nsr-servicepoint daemonId=11 daemonName=ncs-dp-7-nsd:NSRService
id=vm-esc-servicepoint daemonId=12 daemonName=ncs-dp-8-vm-manager-esc:ServiceforVMstarting
id=vnf-catalogue-esc daemonId=13 daemonName=ncs-dp-9-vnf-catalogue-esc:ESCVNFCatalogueService

admin@ncs(config)# ssh host-key-verification reject-mismatch
admin@ncs(config)# commit
Commit complete.

admin@ncs# devices fetch-ssh-host-keys
fetch-result {
device c0
result unchanged
fingerprint {
algorithm ssh-dss
value 03:64:fc:b7:87:bd:34:5e:3b:6e:d8:71:4d:3f:46:76
}
}
fetch-result {
device h0
result unchanged
fingerprint {
algorithm ssh-dss
value 03:64:fc:b7:87:bd:34:5e:3b:6e:d8:71:4d:3f:46:76
}
}

admin@ncs# cluster remote-node * ssh fetch-host-keys
cluster remote-node ncs1 ssh fetch-host-keys
result updated
fingerprint {
algorithm ssh-dss
value 03:64:fc:b7:87:bd:34:5e:3b:6e:d8:71:4d:3f:46:76
}
cluster remote-node ncs2 ssh fetch-host-keys
result updated
fingerprint {
algorithm ssh-dss
value 03:64:fc:b7:87:bd:34:5e:3b:6e:d8:71:4d:3f:46:76
}
cluster remote-node ncs3 ssh fetch-host-keys
result updated
fingerprint {
algorithm ssh-dss
value 03:64:fc:b7:87:bd:34:5e:3b:6e:d8:71:4d:3f:46:76
}

admin@ncs(config)# devices authgroups group default umap admin
admin@ncs(config-umap-admin)# public-key private-key file name /home/admin/.ssh/id-dsa
admin@ncs(config-umap-admin)# public-key private-key file passphrase
(<AES encrypted string>): *********
admin@ncs(config-umap-admin)# commit
Commit complete.

admin@ncs(config)# cluster authgroup default umap admin
admin@ncs(config-umap-admin)# public-key private-key file name /home/admin/.ssh/id-dsa
admin@ncs(config-umap-admin)# public-key private-key file passphrase
(<AES encrypted string>): *********
admin@ncs(config-umap-admin)# commit
Commit complete.

The Compose file, typically named docker-compose.yaml, declares a volume called NSO-1-rvol. This is a named volume and is created automatically by Compose. Alternatively, you can create the volume yourself, in which case it must be declared as external in the Compose file. If an external volume does not exist, the container will not start.
The example netsim container will mount the network element NED in the packages directory. This package should be compiled. Note that the NSO-1-rvol volume is used by the example container to share the generated init.xml and ncs.conf files with the NSO Production container.
Use NSO's plug-and-play scripting mechanism to add new functionality to NSO.
A scripting mechanism can be used together with the CLI (scripting is not available for any other northbound interfaces). This section is intended for users who are familiar with UNIX shell scripting and/or programming. With the scripting mechanism, an end-user can add new functionality to NSO in a plug-and-play-like manner. No special tools are needed.
There are three categories of scripts:
command scripts: Used to add new commands to the CLI.
policy scripts: Invoked at validation time and may control the outcome of a transaction. Policy scripts have the mandate to cause a transaction to abort.
post-commit scripts: Invoked when a transaction has been committed. Post-commit scripts can for example be used for logging, sending external events etc.
The terms 'script' and 'scripting' used throughout this description refer to how functionality can be added without a requirement for integration using the NSO programming APIs. NSO will only run the scripts as UNIX executables. Thus they may be written as shell scripts, or by using another scripting language that is supported by the OS, e.g., Python, or even as compiled code. The scripts are run with the same user ID as NSO.
The examples in this section are written using shell scripts as the least common denominator, but they can be written in another suitable language, e.g., Python or C.
Scripts are stored in a directory tree with a predefined structure where there is a sub-directory for each script category:
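For instance, the layout under a script directory named scripts could be created as follows (a sketch; the sub-directory names follow the three script categories listed above):

```shell
# Create one sub-directory per script category under a script directory
# named "scripts" (the directory itself is what you point
# /ncs-config/scripts/dir at).
mkdir -p scripts/command scripts/policy scripts/post-commit
ls scripts
```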
For all script categories, it suffices to just add a valid script in the correct sub-directory to enable the script. See the details for each script category for how a valid script of that category is defined. Scripts with a name beginning with a dot character ('.') are ignored.
The directory path to the location of the scripts is configured with the /ncs-config/scripts/dir configuration parameter. It is possible to have several script directories. The sample ncs.conf file that comes with the NSO release specifies two script directories: ./scripts and ${NCS_DIR}/scripts.
All scripts are required to provide a formal description of their interface. When the scripts are loaded, NSO will invoke the scripts with (one of) the following as an argument depending on the script category.
--command
--policy
--post-commit
The script must respond by writing its formal interface description on stdout and exit normally. Such a description consists of one or more sections. Which sections are required, depends on the category of the script.
The sections do, however, have a common syntax. Each section begins with the keyword begin followed by the type of section. After that, one or more lines of settings follow. Each setting begins with a name, followed by a colon character (:), and then the value. The section ends with the keyword end. Empty lines and spaces may be used to improve readability.
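As a sketch of how simple this syntax is to consume, the following hypothetical helper (not part of NSO) extracts the value of a named setting from a section read on stdin:

```shell
# Hypothetical helper: print the value of setting "$1" from an interface
# description section on stdin. For simplicity it matches any
# "name: value" line and does not check the begin/end keywords.
get_setting() {
  sed -n "s/^[[:space:]]*$1:[[:space:]]*//p"
}

printf 'begin command\n  help: Demo command\nend\n' | get_setting help
# prints "Demo command"
```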
For examples see each corresponding section below.
Scripts are automatically loaded at startup and may also be manually reloaded with the CLI command script reload. The command takes an optional verbosity parameter which may have one of the following values:
diff: Shows info about those scripts that have been changed since the latest (re)load. This is the default.
all: Shows info about all scripts regardless of whether they have been changed or not.
errors: Shows info about those scripts that are erroneous, regardless of whether they have been changed or not. Typical errors are invalid file permissions and syntax errors in the interface description.
Yet another parameter may be useful when debugging the reload of scripts:
debug: Shows additional debug info about the scripts.
An example session reloading scripts:
Command scripts are used to add new commands to the CLI. The scripts are executed in the context of a transaction: when the script is run in oper mode, this is a read-only transaction; when it is run in config mode, it is a read-write transaction. In that context, the script may make use of the environment variables NCS_MAAPI_USID and NCS_MAAPI_THANDLE in order to attach to the active transaction. This makes it simple to use the ncs-maapi command (see the ncs-maapi manual page) for various purposes.
Each command script must be able to handle the argument --command and, when invoked, write a command section to stdout. If the CLI command is intended to take parameters, one param section per CLI parameter must also be emitted.
By default, command output is not paginated in the CLI; it is only paginated if piped to more.
command Section

The following settings can be used to define a command:
modes: Defines in which CLI mode(s) that the command should be available. The value can be oper, config or both (separated with space).
styles: Defines in which CLI styles the command should be available. The value can be one or more of c, i, and j (separated with space).
An example of a command section is:
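A minimal command section might look as follows (all setting values are illustrative assumptions, using the my script echo path from the cmdpath description):

```shell
# Emit an illustrative command section; the setting names come from the
# text, the values are assumptions.
emit_command_section() {
  cat <<'EOF'
begin command
  modes: oper config
  styles: c i j
  cmdpath: my script echo
  help: Echo the given parameters
end
EOF
}
emit_command_section
```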
param Section

Now let's look at various aspects of a parameter. These may affect both the parameter syntax for the end user in the CLI and what the command script gets as arguments.
The following settings can be used to customize each CLI parameter:
name: Optional name of the parameter. If provided, the CLI will prompt for this name before the value. By default, the name is not forwarded to the script. See flag and prefix.
type: The type of the parameter. By default, each parameter has a value, but by setting the type to void, the CLI will not prompt for a value. To be useful, the void type must be combined with name and either flag or prefix.
If the command takes a parameter to redirect the output to a file, a param section might look like this:
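A sketch of such a param section (the setting values, including the -f flag, are assumptions for illustration):

```shell
# Emit an illustrative param section for a file-redirect parameter.
emit_param_section() {
  cat <<'EOF'
begin param
  name: file
  presence: optional
  flag: -f
  help: Redirect command output to this file
end
EOF
}
emit_param_section
```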
command Example
Calling $NCS_DIR/examples.ncs/getting-started/using-ncs/7-scripting/scripts/command/echo.sh with the --command argument produces a command section and a couple of param sections:
In the complete example $NCS_DIR/examples.ncs/getting-started/using-ncs/7-scripting , there is a README file and a simple command script scripts/command/echo.sh.
Policy scripts are invoked at validation time before a change is committed. A policy script can reject the data, accept it, or accept it with a warning. If a warning is produced, it will be displayed for interactive users (e.g. through the CLI or Web UI). The user may choose to abort or continue to commit the transaction.
Policy scripts are typically assigned to individual leafs or containers. In some cases, it may be feasible to use a single policy script, e.g. on the top-level node of the configuration. In such a case, this script is responsible for the validation of all values and their relationships throughout the configuration.
All policy scripts are invoked on every configuration change. A policy script can be configured to depend on certain subtrees of the configuration, which can save time, but it is then very important that all dependencies are stated and kept up to date when the validation logic of the policy script changes. Otherwise, an update may be accepted even though a dependency should have caused it to be denied.
There can be multiple dependency declarations for a policy script. Each declaration consists of a dependency element specifying a configuration subtree that the validation code is dependent upon. If any element in any of the subtrees is modified, the policy script is invoked. A subtree is specified as an absolute path.
If there are no declared dependencies, the root of the configuration tree (/) is used, which means that the validation code is executed when any configuration element is modified. If dependencies are declared on a leaf element, an implicit dependency on the leaf itself is added.
Each policy script must handle the argument --policy and, when invoked, write a policy section to stdout. The script must also perform the actual validation when invoked with the argument --keypath.
policy Section

The following settings can be used to configure a policy script:
keypath: Mandatory. The keypath is the path to a node in the configuration data tree. The policy script will be associated with this node. The path must be absolute. A keypath can for example be /devices/device/c0. The script will be invoked if the configuration node, referred to by the keypath, is changed or if any node in the subtree under the node (if the node is a container or list) is changed.
dependency: Declaration of a dependency. The dependency must be an absolute key path. Multiple dependency settings can be declared. Default is /.
A policy that will be run for every change on or under /devices/device.
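Such a script's policy section could look like this (a sketch; keypath and dependency values follow the description above):

```shell
# Emit a policy section registering the script on /devices/device.
emit_policy_section() {
  cat <<'EOF'
begin policy
  keypath: /devices/device
  dependency: /devices/device
end
EOF
}
emit_policy_section
```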
When NSO has concluded that the policy script should be invoked to perform its validation logic, the script is invoked with the option --keypath. If the registered node is a leaf, its value will be given with the --value option. For example --keypath /devices/device/c0 or if the node is a leaf --keypath /devices/device/c0/address --value 127.0.0.1.
Once the script has performed its validation logic it must exit with a proper status.
The following exit statuses are valid:
0: Validation ok. Vote for commit.
1: When the outcome of the validation is dubious, it is possible for the script to issue a warning message. The message is extracted from the script output on stdout. An interactive user can choose to abort or continue to commit the transaction. Non-interactive users automatically vote for commit.
2: When the validation fails, it is possible for the script to issue an error message. The message is extracted from the script output on stdout. The transaction will be aborted.
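The exit statuses above can be sketched in a shell policy script like this; the trace-dir check and the allowed directories are made-up examples, not the logic of the check_dir.sh script:

```shell
# Hedged sketch of a policy script's validation step. The keypath/value
# arguments and the 0/1/2 statuses follow the text; which directories
# are acceptable is invented for illustration.
validate() {
  keypath=$1
  value=$2
  case "$value" in
    /var/log/*)
      # validation ok: vote for commit
      return 0 ;;
    /tmp/*)
      # dubious: print a warning, an interactive user decides
      echo "warning: $keypath: $value may not survive a reboot"
      return 1 ;;
    *)
      # validation failed: print an error, the transaction is aborted
      echo "$keypath: $value is not an allowed trace directory"
      return 2 ;;
  esac
}
```

In a real policy script, these statuses would be returned with exit rather than return.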
policy Example

A policy that denies changes to the configured trace-dir for a set of devices can use the check_dir.sh script.
Trying to change that parameter results in an aborted transaction.
In the complete example $NCS_DIR/examples.ncs/getting-started/using-ncs/7-scripting/ there is a README file and a simple policy script scripts/policy/check_dir.sh.
Post-commit scripts are run when a transaction has been committed, but before any locks have been released. The transaction hangs until the script returns. The script cannot change the outcome of the transaction. Post-commit scripts can, for example, be used for logging, sending external events, etc. The scripts run with the same user ID as NSO.
The script is invoked with --post-commit at script (re)load. In future releases, the post-commit section may be used to control post-commit script behavior.
At post-commit, the script is invoked without parameters. In that context, the script may make use of the environment variables NCS_MAAPI_USID and NCS_MAAPI_THANDLE in order to attach to the active (read-only) transaction.
This makes it simple to use the ncs-maapi command. Especially the command ncs-maapi --keypath-diff / may turn out to be useful, as it provides a listing of all updates within the transaction in a format that is easy to parse.
post-commit Section

All post-commit scripts must be able to handle the argument --post-commit and, when invoked, write an empty post-commit section to stdout:
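For example, a script could respond to --post-commit with just:

```shell
# The empty post-commit section a post-commit script writes to stdout
# when invoked with --post-commit.
emit_post_commit_section() {
  cat <<'EOF'
begin post-commit
end
EOF
}
emit_post_commit_section
```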
post-commit Example

Assume the administrator of a system wants to receive an email each time a change is made on the system. This can be done with a script such as mail_admin.sh:
If the admin then loads this script:
This configuration change will produce an email to [email protected] with the subject NCS Mailer and a corresponding body.
In the complete example $NCS_DIR/examples.ncs/getting-started/using-ncs/7-scripting/ , there is a README file and a simple post-commit script scripts/post-commit/show_diff.sh.
Learn about Cisco-provided NEDs and how to manage them.
This section provides necessary information on NED (Network Element Driver) administration, with a focus on Cisco-provided NEDs. If you plan to use NEDs not provided by Cisco, refer to the NED development documentation to build your own NED packages.
NED represents a key NSO component that makes it possible for the NSO core system to communicate southbound with network devices in most deployments. NSO has a built-in client that can be used to communicate southbound with NETCONF-enabled devices. Many network devices are, however, not NETCONF-enabled, and there exist a wide variety of methods and protocols for configuring network devices, ranging from simple CLI to HTTP/REST-enabled devices. For such cases, it is necessary to use a NED to allow NSO to communicate southbound with the network device.
Even for NETCONF-enabled devices, it is possible that the NSO built-in NETCONF client cannot be used, for instance, if the devices do not strictly follow the specification for the NETCONF protocol. In such cases, one must also use a NED to communicate seamlessly with the device. See the documentation on third-party YANG NEDs for more information.
docker run --rm cisco-nso-prod:6.3 sh -c "java -version && python --version"

docker container create --name temp -v NSO-evol:/nso/etc hello-world
docker cp ncs.conf temp:/nso/etc
docker rm temp

cd path-to-previous-run-dir
docker container create --name temp -v NSO-rvol:/nso/run hello-world
docker cp . temp:/nso/run
docker rm temp

docker volume create --name NSO-lvol

docker run -v NSO-rvol:/nso/run -v NSO-evol:/nso/etc -v NSO-lvol:/log -itd \
--name cisco-nso -e EXTRA_ARGS=--with-package-reload -e ADMIN_USERNAME=admin \
-e ADMIN_PASSWORD=admin cisco-nso-prod:6.3

docker run -d --name cisco-nso -v NSO-vol:/nso -v NSO-log-vol:/log cisco-nso-prod:6.3

<load-path>
<dir>${NCS_RUN_DIR}/packages</dir>
<dir>${NCS_DIR}/etc/ncs</dir>
...
</load-path>

docker exec -it cisco-nso ncs --stop
docker rm -f cisco-nso

docker run -itd --name cisco-nso -v NSO-vol:/nso cisco-nso-prod:6.3

docker compose --profile example up --wait

docker compose --profile prod up --wait

#!/bin/bash
set -eu # Abort the script if a command returns with a non-zero exit code or if
# a variable name is dereferenced when the variable hasn't been set
GREEN='\033[0;32m'
PURPLE='\033[0;35m'
NC='\033[0m' # No Color
printf "${GREEN}##### Reset the container setup\n${NC}";
docker compose --profile build down
docker compose --profile example down -v
docker compose --profile prod down -v
rm -rf ./packages/NSO-1/* ./log/NSO-1/*
printf "${GREEN}##### Start the build container used for building the NSO NED and service packages\n${NC}"
docker compose --profile build up -d
printf "${GREEN}##### Get the packages\n${NC}"
printf "${PURPLE}##### NOTE: Normally you populate the package directory from the host. Here, we use packages from an NSO example\n${NC}"
docker exec -it build-nso-pkgs sh -c 'cp -r ${NCS_DIR}/examples.ncs/development-guide/nano-services/netsim-sshkey/packages ${NCS_RUN_DIR}'
printf "${GREEN}##### Build the packages\n${NC}"
docker exec -it build-nso-pkgs sh -c 'for f in ${NCS_RUN_DIR}/packages/*/src;
do make -C "$f" all || exit 1; done'
printf "${GREEN}##### Start the simulated device container and setup the example\n${NC}"
docker compose --profile example up --wait
printf "${GREEN}##### Start the NSO prod container\n${NC}"
docker compose --profile prod up --wait
printf "${GREEN}##### Showcase the netsim-sshkey example from NSO on the prod container\n${NC}"
if [[ $# -eq 0 ]] ; then # Ask for input only if no argument was passed to this script
printf "${PURPLE}##### Press any key to continue or ctrl-c to exit\n${NC}"
read -n 1 -s -r
fi
docker exec -it nso1 sh -c 'sed -i.orig -e "s/make/#make/" ${NCS_DIR}/examples.ncs/development-guide/nano-services/netsim-sshkey/showcase.sh'
docker exec -it nso1 sh -c 'cd ${NCS_RUN_DIR};
${NCS_DIR}/examples.ncs/development-guide/nano-services/netsim-sshkey/showcase.sh 1'


cmdpath: The full CLI command path. For example, the command path my script echo implies that the command will be called my script echo in the CLI.
help: Command help text.
presence: Controls whether the parameter must be present in the CLI input. Can be set to optional or mandatory.
words: Controls the number of words that the parameter value may consist of. By default, the value must consist of just one word (possibly quoted if it contains spaces). If set to any, the parameter may consist of any number of words. This setting is only valid for the last parameter.
flag: Extra argument added before the parameter value. For example, if set to -f and the user enters logfile, the script will get -f logfile as arguments.
prefix: Extra string prepended to the parameter value (as a single word). For example, if set to --file= and the user enters logfile, the script will get --file=logfile as argument.
help: Parameter help text.
priority: An optional integer parameter specifying the order in which policy scripts are evaluated, where a lower value means higher priority. The default priority is 0.
call: This optional setting can only be used if the associated node, declared as keypath, is a list. If set to once, the policy script is only called once even if there exist many list entries in the data store. This is useful when there is a huge number of instances or when values assigned to each instance have to be validated against their siblings. Default is each.
scripts/
command/
policy/
post-commit/
admin@ncs# script reload all
$NCS_DIR/examples.ncs/getting-started/using-ncs/7-scripting/scripts:
ok
command:
add_user.sh: unchanged
echo.sh: unchanged
policy:
check_dir.sh: unchanged
post-commit:
show_diff.sh: unchanged
/opt/ncs/scripts: ok
command:
device_brief.sh: unchanged
device_brief_c.sh: unchanged
device_list.sh: unchanged
device_list_c.sh: unchanged
device_save.sh: unchanged
joe@io> example_command_script | more
begin command
modes: oper
styles: c i j
cmdpath: my script echo
help: Display a line of text
end
begin param
name: file
presence: optional
flag: -f
help: Redirect output to file
end
#!/bin/bash
set -e
while [ $# -gt 0 ]; do
case "$1" in
--command)
# Configuration of the command
#
# modes - CLI mode (oper config)
# styles - CLI style (c i j)
# cmdpath - Full CLI command path
# help - Command help text
#
# Configuration of each parameter
#
# name - (optional) name of the parameter
# more - (optional) true or false
# presence - optional or mandatory
# type - void - A parameter without a value
# words - any - Multi word param. Only valid for the last param
# flag - Extra word added before the parameter value
# prefix - Extra string prepended to the parameter value
# help - Command help text
cat << EOF
begin command
modes: config
styles: c i j
cmdpath: user-wizard
help: Add a new user
end
EOF
exit
;;
*)
break
;;
esac
shift
done
## Ask for user name
while true; do
echo -n "Enter user name: "
read user
if [ ! -n "${user}" ]; then
echo "You failed to supply a user name."
elif ncs-maapi --exists "/aaa:aaa/authentication/users/user{${user}}"; then
echo "The user already exists."
else
break
fi
done
## Ask for password
while true; do
echo -n "Enter password: "
read -s pass1
echo
if [ "${pass1:0:1}" == "$" ]; then
echo -n "The password must not start with $. Please choose a "
echo "different password."
else
echo -n "Confirm password: "
read -s pass2
echo
if [ "${pass1}" != "${pass2}" ]; then
echo "Passwords do not match."
else
break
fi
fi
done
groups=`ncs-maapi --keys "/nacm/groups/group"`
while true; do
echo "Choose a group for the user."
echo -n "Available groups are: "
for i in ${groups}; do echo -n "${i} "; done
echo
echo -n "Enter group for user: "
read group
if [ ! -n "${group}" ]; then
echo "You must enter a valid group."
else
for i in ${groups}; do
if [ "${i}" == "${group}" ]; then
# valid group found
break 2;
fi
done
echo "You entered an invalid group."
fi
echo
done
echo "Creating user"
ncs-maapi --create "/aaa:aaa/authentication/users/user{${user}}"
ncs-maapi --set "/aaa:aaa/authentication/users/user{${user}}/password" \
"${pass1}"
echo "Setting home directory to: /homes/${user}"
ncs-maapi --set "/aaa:aaa/authentication/users/user{${user}}/homedir" \
"/homes/${user}"
echo "Setting ssh key directory to: /homes/${user}/ssh_keydir"
ncs-maapi --set "/aaa:aaa/authentication/users/user{${user}}/ssh_keydir" \
"/homes/${user}/ssh_keydir"
ncs-maapi --set "/aaa:aaa/authentication/users/user{${user}}/uid" "1000"
ncs-maapi --set "/aaa:aaa/authentication/users/user{${user}}/gid" "100"
echo "Adding user to the ${group} group."
gusers=`ncs-maapi --get "/nacm/groups/group{${group}}/user-name"`
for i in ${gusers}; do
if [ "${i}" == "${user}" ]; then
echo "User already in group"
exit 0
fi
done
ncs-maapi --set "/nacm/groups/group{${group}}/user-name" "${gusers} ${user}"
$ ./echo.sh --command
begin command
modes: oper
styles: c i j
cmdpath: my script echo
help: Display a line of text
end
begin param
name: nolf
type: void
presence: optional
flag: -n
help: Do not output the trailing newline
end
begin param
name: file
presence: optional
flag: -f
help: Redirect output to file
end
begin param
presence: mandatory
words: any
help: String to be displayed
end
begin policy
keypath: /devices/device
dependency: /devices/global-settings
priority: 4
call: each
end
#!/bin/sh
usage_and_exit() {
cat << EOF
Usage: $0 -h
$0 --policy
$0 --keypath <keypath> [--value <value>]
-h display this help and exit
--policy display policy configuration and exit
--keypath <keypath> path to node
--value <value> value of leaf
Return codes:
0 - ok
1 - warning message is printed on stdout
2 - error message is printed on stdout
EOF
exit 1
}
while [ $# -gt 0 ]; do
case "$1" in
-h)
usage_and_exit
;;
--policy)
cat << EOF
begin policy
keypath: /devices/global-settings/trace-dir
dependency: /devices/global-settings
priority: 2
call: each
end
EOF
exit 0
;;
--keypath)
if [ $# -lt 2 ]; then
echo "<ERROR> --keypath <keypath> - path omitted"
usage_and_exit
else
keypath=$2
shift
fi
;;
--value)
if [ $# -lt 2 ]; then
echo "<ERROR> --value <value> - leaf value omitted"
usage_and_exit
else
value=$2
shift
fi
;;
*)
usage_and_exit
;;
esac
shift
done
if [ -z "${keypath}" ]; then
echo "<ERROR> --keypath <keypath> is mandatory"
usage_and_exit
fi
if [ -z "${value}" ]; then
echo "<ERROR> --value <value> is mandatory"
usage_and_exit
fi
orig="./logs"
dir=${value}
# dir=`ncs-maapi --get /devices/global-settings/trace-dir`
if [ "${dir}" != "${orig}" ] ; then
echo "/devices/global-settings/trace-dir: must retain its original value (${orig})"
exit 2
fi
admin@ncs(config)# devices global-settings trace-dir ./testing
admin@ncs(config)# commit
Aborted: /devices/global-settings/trace-dir: must retain its original
value (./logs)
begin post-commit
end
#!/bin/bash
set -e
if [ $# -gt 0 ]; then
case "$1" in
--post-commit)
cat << EOF
begin post-commit
end
EOF
exit 0
;;
*)
echo
echo "Usage: $0 [--post-commit]"
echo
echo " --post-commit Mandatory for post-commit scripts"
exit 1
;;
esac
else
file="mail_admin.log"
NCS_DIFF=$(ncs-maapi --keypath-diff /)
mail -s "NCS Mailer" [email protected] <<EOF
AutoGenerated mail from NCS
$NCS_DIFF
EOF
fi
admin@ncs# script reload debug
$NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/scripts:
ok
post-commit:
mail_admin.sh: new
--- Output from
$NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/scripts/post-commit/mail_admin.sh
--post-commit ---
1: begin post-commit
2: end
3:
---
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# devices global-settings trace-dir ./again
admin@ncs(config)# commit
Commit complete.
AutoGenerated mail from NCS
value set : /devices/global-settings/trace-dir
A NED (Network Element Driver) is the piece of code that enables NSO to communicate with and manage a particular type of device. NEDs are added to NSO as a special kind of package, called NED packages.
A NED package must provide a device YANG model as well as define means (protocol) to communicate with the device. The latter can either leverage the NSO built-in NETCONF and SNMP support or use a custom implementation. When a package provides custom protocol implementation, typically written in Java, it is called a CLI NED or a Generic NED.
Cisco provides and supports a number of such NEDs. A major category of these Cisco-provided NEDs is CLI NEDs, which communicate with a device through its CLI instead of a dedicated API.
This NED category is targeted at devices that use CLI as a configuration interface. Cisco-provided CLI NEDs are available for various network devices from different vendors. Many different CLI syntaxes are supported.
The driver element in a CLI NED implemented by the Cisco NSO NED team typically consists of the following three parts:
The protocol client, responsible for connecting to and interacting with the device. The protocols supported are SSH and Telnet.
A fast and versatile CLI parser (+ emitter), usually referred to as the turbo parser.
Various transform engines capable of converting data between NSO and device formats.
The YANG models in a CLI NED are developed and maintained by the Cisco NSO NED team. Usually, the models for a CLI NED are structured to mimic the CLI command hierarchy on the device.
A generic NED is typically used to communicate with non-CLI devices, such as devices using protocols like REST, TL1, Corba, SOAP, RESTCONF, or gNMI as a configuration interface. Even NETCONF-enabled devices in many cases require a generic NED to function properly with NSO.
The driver element in a Generic NED implemented by the Cisco NED team typically consists of the following parts:
The protocol client, responsible for interacting with the device.
Various transform engines capable of converting data between NSO and the device formats, usually JSON and/or XML transformers.
There are two types of Generic NEDs maintained by the Cisco NSO NED team:
NEDs with Cisco-owned YANG models. These NEDs have models developed and maintained by the Cisco NSO NED team.
NEDs targeted at YANG models from third-party vendors, also known as third-party YANG NEDs.
Generic NEDs belonging to the first category typically handle devices that are model-driven. For instance, devices using proprietary protocols based on REST, SOAP, Corba, etc. The YANG models for such NEDs are usually structured to mimic the messages used by the proprietary protocol of the device.
As the name implies, this NED category is used for cases where the device YANG models are not implemented, maintained, or owned by the Cisco NSO NED team. Instead, the YANG models are typically provided by the device vendor itself, or by organizations like IETF, IEEE, ONF, or OpenConfig.
This category of NEDs has some special characteristics that set them apart from all other NEDs developed by the Cisco NSO NED team:
Targeted for devices supporting model-driven protocols like NETCONF, RESTCONF, and gNMI.
Delivered from the software.cisco.com portal without any device YANG models included. There are several reasons for this, such as legal restrictions that prevent Cisco from re-distributing YANG models from other vendors, or the availability of several different version bundles for open-source YANG, like OpenConfig. The version used by the NED must match the version used by the targeted device.
The NEDs can be bundled with various fixes to solve shortcomings in the YANG models, the download sources, and/or in the device. These fixes are referred to as recipes.
Since the third-party NEDs are delivered without any device YANG models, there are additional steps required to make this category of NEDs operational:
The device models need to be downloaded and copied into the NED package source tree. This can be done by using a special (optional) downloader tool bundled with each third-party YANG NED, or in any custom way.
The NED must be rebuilt with the downloaded YANG models.
This procedure is thoroughly described in Managing Cisco-provided third-Party YANG NEDs.
Recipes
A third-party YANG NED can be bundled with up to three types of recipe modules. These recipes are used by the NED to solve various types of issues related to:
The source of the YANG files.
The YANG files.
The device itself.
The recipes represent the characteristics and the real value of a third-party YANG NED. Recipes are typically adapted for a certain bundle of YANG models and/or certain device types. This is why there exist many different third-party YANG NEDs, each one adapted for a specific protocol, a specific model package, and/or a specific device.
Download Recipes
When downloading the YANG files, it is first of all important to know which source to use. In some cases, the source is the device itself, for instance, when the device supports model download over NETCONF or, in rare cases, RESTCONF.
In other cases, the device does not support model download. This applies to all gNMI-enabled devices and most RESTCONF devices too. In this case, the source can be a public Git repository or an archive file provided by the device vendor.
Another important question is what YANG models and what versions to download. To make this task easier, third-party NEDs can be bundled with the download recipes. These are presets to be used with the downloader tool bundled with the NED. There can be several profiles, each representing a preset that has been verified to work by the Cisco NSO NED team. A profile can point out a certain source to download from. It can also limit the scope of the download so that only certain YANG files are selected.
YANG Recipes (YR)
Third-party YANG files can often contain various types of errors, ranging from real bugs that cause compilation errors to certain YANG constructs that are known to cause runtime issues in NSO. To ensure that the files can be built correctly, the third-party NEDs can be bundled with YANG recipes. These recipes patch the downloaded YANG files before they are built by the NSO compiler. This procedure is performed automatically by the make system when the NED is rebuilt after downloading the device YANG files. For more information, refer to Rebuilding the NED with a Unique NED ID.
Runtime Recipes (RR)
Devices enabled for NETCONF, RESTCONF, or gNMI sometimes deviate in their runtime behavior, which can make it impossible for NSO to interact properly with them. These deviations can be on any level in the runtime behavior, such as:
The configuration protocol is not properly implemented, i.e., the device lacks support for mandatory parts of, for instance, the RESTCONF RFC.
The device returns "dirty" configuration dumps, for instance, JSON or XML containing invalid elements.
Special quirks are required when applying new configuration on a device, possibly including additional transforms of the payload before it is relayed by the NED.
The device has aliasing issues, possibly caused by overlapping YANG models. If leaf X in model A is modified, the device will automatically modify leaf Y in model B as well.
A third-party YANG NED can be bundled with runtime recipes to solve these kinds of issues, if necessary. How this is implemented varies from NED to NED. In some cases, a NED has a fixed set of recipes that are always used. Alternatively, a NED can support several different recipes, which can be configured through a NED setting, referred to as a runtime profile. For example, a multi-vendor third-party YANG NED might have one runtime profile for each supported device type.
NED settings are YANG models augmented as configurations in NSO and control the behavior of the NED. These settings are augmented under:
/devices/global-settings/ned-settings
/devices/profiles/ned-settings
/devices/device/ned-settings
Most NEDs are instrumented with a large number of NED settings that can be used to customize the device instance configured in NSO. The README file in the respective NED contains more information on these.
Each managed device in NSO has a device type that informs NSO how to communicate with the device. When managing NEDs, the device type is either cli or generic. The other two device types, netconf and snmp, are used in NETCONF and SNMP packages and are further described in this guide.
In addition, a special NED ID identifier is needed. Simply put, this identifier is a handle in NSO pointing to the NED package. NSO uses the identifier when it is about to invoke the driver in a NED package. The identifier ensures that the driver of the correct NED package is called for a given device instance. For more information on how to set up a new device instance, see Configuring a device with the new Cisco-provided NED.
Each NED package has a NED ID, which is mandatory. The NED ID is a simple string that can have any format. For NEDs developed by the Cisco NSO NED team, the NED ID is formatted as <NED NAME>-<gen | cli>-<NED VERSION MAJOR>.<NED VERSION MINOR>.
Examples
onf-tapi_rc-gen-2.0
cisco-iosxr-cli-7.43
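As an aside, this fixed format makes a NED ID easy to take apart; below is a sketch using plain shell parameter expansion (an illustration, not an NSO tool):

```shell
# Split a NED ID of the form <NED NAME>-<gen|cli>-<MAJOR>.<MINOR>
# into its parts using shell parameter expansion.
ned_id="cisco-iosxr-cli-7.43"
version=${ned_id##*-}   # everything after the last '-'  -> 7.43
rest=${ned_id%-*}       # drop the version               -> cisco-iosxr-cli
type=${rest##*-}        # protocol part                  -> cli
name=${rest%-*}         # NED name                       -> cisco-iosxr
echo "$name $type $version"   # prints: cisco-iosxr cli 7.43
```

The same expansion applied to onf-tapi_rc-gen-2.0 yields the name onf-tapi_rc, the type gen, and the version 2.0.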
The NED ID for a certain NED package stays the same from one version to another, as long as no backward incompatible changes have been done to the YANG models. Upgrading a NED from one version to another, where the NED ID is the same, is simple as it only requires replacing the old NED package with the new one in NSO and then reloading all packages.
Upgrading a NED package from one version to another, where the NED ID is not the same (typically indicated by a change of major or minor number in the NED version), requires additional steps. The new NED package first needs to be installed side-by-side with the old one. Then, a NED migration needs to be performed. This procedure is thoroughly described in NED Migration.
The Cisco NSO NED team ensures that our CLI NEDs, as well as Generic NEDs with Cisco-owned models, have version numbers and NED IDs that indicate any possible backward incompatible YANG model changes. When a NED with such an incompatible change is released, the minor digit in the version is always incremented. The case is a bit different for our third-party YANG NEDs since it is up to the end user to select the NED ID to be used. This is further described in Managing Cisco-provided Third-Party YANG NEDs.
A NED is assigned a version number consisting of a sequence of numbers separated by dots. The first two numbers represent the major and minor version, and the third number represents the maintenance version.
For example, the number 5.8.1 indicates a maintenance release (1) for the minor release 5.8. Incompatible YANG model changes require either the major or minor version number to be changed. This means that any version within the 5.8.x series is backward compatible with the previous versions.
When a newer maintenance release with the same major/minor version replaces a NED release, NSO can perform a simple data model upgrade to handle stored instance data in the CDB (Configuration Database). This type of upgrade does not pose a risk of data loss.
However, when a NED is replaced by a new major/minor release, it becomes a NED migration. These migrations are complex because the YANG model changes can potentially result in the loss of instance data if not handled correctly.
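The compatibility rule above can be captured in a few lines of shell (an illustration, not an official NSO check): two NED versions allow a simple replace-and-reload upgrade only when their major and minor numbers match.

```shell
# Succeeds (exit 0) when two NED versions share the same major.minor,
# i.e. when a simple package replacement suffices; otherwise a NED
# migration is required.
ned_compatible() {
    [ "$(echo "$1" | cut -d. -f1,2)" = "$(echo "$2" | cut -d. -f1,2)" ]
}

ned_compatible 5.8.1 5.8.2 && echo "maintenance upgrade: replace and reload"
ned_compatible 5.8.1 5.9.0 || echo "major/minor change: NED migration required"
```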
This section describes the NED installation in NSO for Local and System installs. Consult the README.md supplied with the NED for the most up-to-date installation description.
This section describes how to install a NED package on a locally installed NSO. See Local Install Steps for more information.
Follow the instructions below to install a NED package:
Download the latest production-grade version of the NED from software.cisco.com using the URLs provided on your NED license certificates. All NED packages are files with the .signed.bin extension named using the following rule: ncs-<NSO VERSION>-<NED NAME>-<NED VERSION>.signed.bin. The NED package ncs-6.0-cisco-iosxr-7.43.signed.bin will be used in the example below. It is assumed the NED package has been downloaded into the directory named /tmp/ned-package-store. The environment variable NSO_RUNDIR needs to be configured to point to the NSO runtime directory. Example:
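A minimal sketch (both paths are placeholders to adjust for your environment):

```shell
# Example paths only; adjust to your environment.
export NSO_RUNDIR=~/nso-lab-rundir   # the NSO runtime directory (hypothetical path)
mkdir -p /tmp/ned-package-store      # download target for NED packages
```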
Unpack the NED package and verify its signature.
In case the signature cannot be verified (for instance, if internet access is unavailable), do as below instead:
The result of the unpacking is a tar.gz file with the same name as the .bin file.
Untar the tar.gz file. The result is a subdirectory named <NED NAME>-<NED MAJOR VERSION DIGIT>.<NED MINOR VERSION DIGIT>.
Install the NED into NSO, using the ncs-setup tool.
Finally, open an NSO CLI session and load the new NED package like below:
Alternatively, the tar.gz file can be installed directly into NSO. In this case, skip steps 3 and 4, and do as below instead:
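Pulled together, the local install procedure might look like this (a sketch; the exact options accepted by the signed.bin archive and the resulting directory name are described in the README supplied with the NED):

```
$ cd /tmp/ned-package-store
$ sh ncs-6.0-cisco-iosxr-7.43.signed.bin
$ tar -xzf ncs-6.0-cisco-iosxr-7.43.tar.gz
$ ncs-setup --package /tmp/ned-package-store/cisco-iosxr-cli-7.43 --dest $NSO_RUNDIR
$ ncs_cli -C -u admin
admin@ncs# packages reload
```

For the direct tar.gz alternative, the tar.gz file is instead typically placed in the packages directory under $NSO_RUNDIR before reloading.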
This section describes how to install a NED package on a system-installed NSO. See System Install Steps for more information.
Download the latest production-grade version of the NED from software.cisco.com using the URLs provided on your NED license certificates. All NED packages are files with the .signed.bin extension named using the following rule: ncs-<NSO_VERSION>-<NED NAME>-<NED VERSION>.signed.bin. The NED package ncs-6.0-cisco-iosxr-7.43.signed.bin will be used in the example below. It is assumed that the package has been downloaded into the directory named /tmp/ned-package-store.
Unpack the NED package and verify its signature.
In case the signature cannot be verified (for instance, if internet access is unavailable), do as below instead.
The result of the unpacking is a tar.gz file with the same name as the .bin file.
Perform an NSO backup before installing the new NED package.
Start an NSO CLI session.
Fetch the NED package.
Install the NED package (add the argument replace-existing if a previous version has been loaded).
Finally, load the NED package.
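In condensed form, the steps above might look like this (a sketch using the example file names; verify the exact command names against your NSO version and the NED README):

```
$ ncs-backup
$ ncs_cli -C -u admin
admin@ncs# software packages fetch package-from-file /tmp/ned-package-store/ncs-6.0-cisco-iosxr-7.43.tar.gz
admin@ncs# software packages install package cisco-iosxr-cli-7.43
admin@ncs# packages reload
```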
The basic steps for configuring a device instance using the newly installed NED package are described in this section. Only the most basic configuration steps are covered here.
Many NEDs require additional custom configuration to be operational. This applies in particular to Generic NEDs. Information about such additional configuration can be found in the files README.md and README-ned-settings.md bundled with the NED package.
The following info is necessary to proceed with the basic setup of a device instance in NSO:
NED ID of the new NED.
Connection information for the device to connect to (address and port).
Authentication information to the device (username and password).
For CLI NEDs, it is mandatory to specify the protocol to be used, either SSH or Telnet.
The following values will be used for this example:
NED ID: cisco-iosxr-cli-7.43
Address: 10.10.1.1
Port: 22
Protocol: ssh
User: cisco
Password: cisco
Do the CLI NED setup as below:
Start an NSO CLI session.
Enter the configuration mode.
Configure a new authentication group to be used for this device.
Configure the new device instance.
Next, check the README.md and README-ned-settings.md bundled with the NED package for further information on additional settings to make the NED fully operational.
Finally, commit the configuration.
In the case of SSH, run also:
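Put together, a session with the example values might look like this (a sketch; the authgroup and device names iosxr-auth and dev-1 are illustrative, and the final command fetches the device SSH host keys):

```
$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices authgroups group iosxr-auth default-map remote-name cisco remote-password cisco
admin@ncs(config)# devices device dev-1 address 10.10.1.1 port 22 authgroup iosxr-auth device-type cli ned-id cisco-iosxr-cli-7.43 protocol ssh
admin@ncs(config)# devices device dev-1 state admin-state unlocked
admin@ncs(config)# commit
admin@ncs(config)# devices device dev-1 ssh fetch-host-keys
```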
This example shows a simple setup of a generic NED.
The following values will be used for this example:
NED ID: onf-tapi_rc-gen-2.0
Address: 10.10.1.2
Port: 443
User: admin
Password: admin
Do the Generic NED setup as below:
Start an NSO CLI session.
Enter the configuration mode.
Configure a new authentication group to be used for this device.
Configure the new device instance.
Next, check the README.md and README-ned-settings.md bundled with the NED package for further information on additional settings to make the NED fully operational.
Finally, commit the configuration.
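Put together, a session with the example values might look like this (a sketch; the authgroup and device names tapi-auth and dev-1 are illustrative, and additional NED settings from README-ned-settings.md may be needed before the device is operational):

```
$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices authgroups group tapi-auth default-map remote-name admin remote-password admin
admin@ncs(config)# devices device dev-1 address 10.10.1.2 port 443 authgroup tapi-auth device-type generic ned-id onf-tapi_rc-gen-2.0
admin@ncs(config)# devices device dev-1 state admin-state unlocked
admin@ncs(config)# commit
```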
The third-party YANG NED type is a special category of the generic NED type targeted for devices supporting protocols like NETCONF, RESTCONF, and gNMI. As the name implies, this NED category is used for cases where the device YANG models are not implemented or maintained by the Cisco NSO NED Team. Instead, the YANG models are typically provided by the device vendor itself or by organizations like IETF, IEEE, ONF, or OpenConfig.
A third-party YANG NED package is delivered from the software.cisco.com portal without any device YANG models included. It is required that the models are first downloaded, followed by a rebuild and reload of the package, before the NED can become fully operational. This task needs to be performed by the NED user.
This section gives a brief instruction on how to download the device YANG models using the special downloader tool that is bundled with each third-party YANG NED. Each specific NED can contain specific requirements regarding downloading/rebuilding. Before proceeding, check the file README-rebuild.md bundled with the NED package. Furthermore, it is recommended to use a non-production NSO environment for this task.
Download and install the third-party YANG NED package into NSO, see Local Install of NED in NSO.
Configure a device instance as usual. See Cisco-provided Generic NED Setup for more information. The device name dev-1 will be used in this example.
Open an NCS CLI session (non-configure mode).
The installed NED is now basically empty. It contains no YANG models except some used by the NED internally. This can be verified with the following CLI commands:
The built-in downloader tool consists of a couple of NSO RPCs defined in one of the NED internal YANG files.
Start with checking the default local directory. This directory will be used as a target for the device YANG models to be downloaded.
This RPC will throw an error if the NED package was installed directly using the tar.gz file. See Local Install of NED in NSO for more information.
If this error occurs, it is necessary to unpack the NED package in some other directory and use that as a target for the download. In the example below it is /tmp/ned-package-store/onf-tapi_rc-2.0/src/yang.
Continue with listing the models supported by the connected device.
The size of the displayed list is device-dependent and so is the level of detail in each list entry. The only mandatory field is the name. Furthermore, not all devices are actually capable of advertising the models supported. If the currently connected device lacks this support, it is usually emulated by the NED instead. Check the README-rebuild.md for more information regarding this.
Next, list the download profiles currently supported by the device.
A download profile is a preset for the built-in download tool. Its purpose is to make the download procedure as easy as possible. A profile can, for instance, define a certain source from where the device YANG models will be downloaded. Another usage can be to limit the scope of the YANG files to download. For example, one profile to download the native device models, and another for the OpenConfig models. All download profiles are defined and verified by the Cisco NSO NED team. There is usually at least one profile available, otherwise, check the README-rebuild.md bundled in the NED package.
Finally, try downloading the YANG models using a profile. In case a non-default local directory is used as a target, it must be explicitly specified.
In case the default local directory is used, no further arguments are needed.
The tool will output a list with each file downloaded. It automatically scans each YANG file for dependencies and tries to download them as well.
Verify that the downloaded files have been stored properly in the configured target directory.
The NED must be rebuilt when the device YANG models have been downloaded and stored properly. Compiling third-party YANG files is often combined with various types of issues caused by bad or odd YANG constructs. Such issues typically cause compiler errors or unwanted runtime errors in NSO. A third-party YANG NED is configured to take care of all currently known build issues. It will automatically patch the problematic files such that they build properly for NSO. This is done using a set of YANG build recipes bundled with the NED package.
Before rebuilding the NED, it is important to know the path to the target directory used for the downloaded YANG files. This is the same as the local directory if the built-in NED downloader tool was used, see Downloading with the NED Built-in Download Tool.
This example uses the environment variable NED_YANG_TARGET_DIR to represent the target directory.
To rebuild the NED with the downloaded YANG file:
Enter the NED build directory, which is the parent directory to the target directory.
Run the make clean all command. The output from the make command can be massive, depending on the number of YANG files, etc. After this step, the NED is rebuilt with the device YANG models included. Messages in the build output about applied YANG recipes (patches) indicate that the NED has solved known issues with the YANG files.
This is the final step to make a third-party YANG NED operational. If the NED built-in YANG downloader tool was used together with no local-dir argument specified (i.e., the default), the only thing required is a package reload in NSO, which you can do by running the packages reload or the packages add command.
If another target directory was used for the YANG file download, it is necessary to first do a proper re-install of the NED package. See NED Installation in NSO.
A common use case is to have many different versions of a certain device type in the network. All devices can be managed by the same third-party YANG NED. However, each device will likely have its unique set of YANG files (or versions) which this NED has to be rebuilt for.
To set up NSO for this kind of scenario, some additional steps need to be taken:
Each flavor of the NED needs to be built in a separate source directory, i.e., untar the third-party YANG NED package at multiple locations.
Each flavor of the re-built NED must have its own unique NED-ID. This will make NSO allow multiple versions of the same NED package to co-exist.
The default NED ID for a third-party YANG NED typically looks like this: <NED NAME>-gen-<NED VERSION MAJOR DIGIT>.<NED VERSION MINOR DIGIT>
The NED build system allows for a customized NED ID by setting one or several of three make variables in any combination when rebuilding the NED:
NED_ID_SUFFIX
NED_ID_MAJOR
NED_ID_MINOR
Do as follows to build each flavor of the third-party YANG NED. Do it in iterations, one at a time:
Unpack the empty NED package as described in NED Installation in NSO.
Unpack the NED package again in a separate location. Rename the NED directory to something unique.
Configure a device instance using the installed NED, as described in Cisco-provided Generic NED Setup. Configure it to connect to the first variant of the device.
Follow the instructions in Downloading with the NED Built-in Download Tool to download the YANG files. Configure local-dir to point to the location used in step 2.
Rebuild the NED package from the location used in step 2. Use a suitable combination of NED_ID_SUFFIX, NED_ID_MAJOR, and NED_ID_MINOR.
Example 1:
This will result in the NED ID: onf-tapi_rc_tapi_v2.1.3-gen-2.0.
Example 2:
This will result in the NED ID: onf-tapi_rc-gen-2.1.3.
Install the newly built NED package into NSO, side by side with the original NED package. See NED Installation in NSO for further information.
Example:
Configure a new device instance using the newly installed NED package. Configure it to connect to the first variant of the device, as done in step 3.
Verify functionality by executing a sync-from on the configured device instance.
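The NED ID naming scheme used in the steps above can be sketched as a small illustration. The function below is hypothetical; it only reproduces the documented pattern <NED NAME><NED_ID_SUFFIX>-gen-<MAJOR>.<MINOR>, not the actual NED build system logic:

```python
# Hypothetical sketch of how the NED build system composes the NED ID
# from the make variables. MAJOR/MINOR default to the NED version digits.
def ned_id(name, version, suffix="", major=None, minor=None):
    ver_major, ver_minor = version.split(".")[:2]
    major = major if major is not None else ver_major
    minor = minor if minor is not None else ver_minor
    return f"{name}{suffix}-gen-{major}.{minor}"

# Example 1: NED_ID_SUFFIX=_tapi_v2.1.3 on a 2.0.x NED
print(ned_id("onf-tapi_rc", "2.0.3", suffix="_tapi_v2.1.3"))
# Example 2: NED_ID_MAJOR=2 NED_ID_MINOR=1.3
print(ned_id("onf-tapi_rc", "2.0.3", major="2", minor="1.3"))
```

With the suffix variant, the result is onf-tapi_rc_tapi_v2.1.3-gen-2.0; overriding major/minor yields onf-tapi_rc-gen-2.1.3, matching the examples in the text.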
The NSO procedure to upgrade a NED package to a newer version uses the following approach:
If there are no backward incompatible changes in the schemas (YANG models) of respective NEDs, simply replace the old NED with the new one and reload all packages in NSO.
In case there are backwards incompatible changes present in the schemas, some administration is required: the new NED needs to be installed side-by-side with the old NED, after which a NED migration must be performed to properly update the data in CDB using the new schemas. More information about NED migration is available in NED Migration.
Whether there are backward-incompatible differences between two versions of the same NED is determined by the NED ID. If the versions have the same NED ID, they are fully compatible; otherwise, the NED IDs will differ, typically in the major and/or minor number of the NED ID.
The third-party YANG NEDs add some extra complexity to the NED migration feature. This is because the device YANG models are not included in the NED package. It is up to the end user to select the YANG model versions to use and also to configure the NED ID. If the same NED, at a later stage, needs to be upgraded and rebuilt with newer versions of the YANG model, a decision has to be made regarding the NED ID: Is it safe to use the same NED ID, or should a new one be used?
Using a unique NED ID for each NED package is always the safe option. It minimizes the risk of data loss during package upgrade, etc. However, in some cases, it might be beneficial to use the same NED ID when upgrading a NED package since it minimizes the administration in NSO, i.e., simply replace the old NED package with the new one without any need of NED migration.
This kind of use case can occur when the firmware is upgraded on an NSO-controlled device. For example, assume that we have an optical device that supports the TAPI YANG models from the Open Networking Foundation. Current firmware supports version 2.1.3 of the TAPI bundle. The third-party YANG NED onf-tapi_rc has been rebuilt accordingly with TAPI version 2.1.3 and the default NED ID onf-tapi_rc-gen-2.0. This NED package is installed in NSO and a device instance named dev-1 is configured using it. Next, the optical device is upgraded with the new firmware that supports the TAPI bundle version 2.3.1 instead. The onf-tapi_rc NED needs to be upgraded accordingly. The question is what NED ID to use?
To upgrade a Cisco-provided third-party YANG NED to a newer version:
Unpack a fresh copy of the onf-tapi_rc NED package.
Download the TAPI models v2.3.1 from the TAPI public Git repository.
Rebuild the NED package with a temporary unique NED ID for this rebuild. Any unique NED ID works for this.
This will generate the NED ID: onf-tapi_rc-gen-2.3.1.
Install the new onf-tapi_rc NED package into NSO, side by side with the old one.
Now, execute a dry run of the NSO NED migration feature. This command generates a list of all schema differences found between the two packages, like below:
If the goal is to rebuild the new NED package again using the same NED ID as the old NED package, there are two things to look out for in the list:
Does the list contain any items with backward-compatible false?
If the answer is yes, is the affected schema node relevant for any use case, i.e., referenced by any service code running in NSO?
Any item listed as backward-compatible false can potentially result in data loss if the old NED is simply replaced with the new one. This might however be acceptable if the affected schema node is not relevant for any use case.
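The evaluation described above can be sketched as a small filter over the dry-run output. This is an illustration only; the dict shape below mimics the modified-path entries in the dry-run listing and is not an actual NSO API:

```python
# Hypothetical sketch: pick out the dry-run items that are both
# backward-incompatible and under a path touched by service code.
def blocking_changes(modified_paths, service_paths):
    return [
        p["path"] for p in modified_paths
        if not p["backward-compatible"]
        and any(p["path"].startswith(sp) for sp in service_paths)
    ]

paths = [
    {"path": "/tapi-common:context/a", "backward-compatible": False},
    {"path": "/tapi-common:context/b", "backward-compatible": True},
]
print(blocking_changes(paths, ["/tapi-common:context/a"]))  # only the incompatible, service-relevant path
```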
If you upgrade a managed device (such as installing a new firmware), the device data model can change in a significant way. If this is the case, you usually need to use a different and newer NED with an updated YANG model.
When the changes in the NED are not backward compatible, the NED is assigned a new ned-id to avoid breaking existing code. On the plus side, this allows you to use both versions of the NED at the same time, so some devices can use the new version and some can use the old one. As a result, there is no need to upgrade all devices at the same time. The downside is, NSO doesn't know the two NEDs are related and will not perform any upgrade on its own due to different ned-ids. Instead, you must manually change the NED of a managed device through a NED migration.
Migration is required when upgrading a NED and the ned-id changes, which is signified by a change in either the first or the second number in the NED package version. For example, if you're upgrading the existing router-nc-1.0.1 NED to router-nc-1.2.0 or router-nc-2.0.2, you must perform NED migration. On the other hand, upgrading to router-nc-1.0.2 or router-nc-1.0.3 retains the same ned-id, and you can upgrade the router-nc-1.0.1 package in place, directly replacing it with the new one. However, note that some third-party, non-Cisco packages may not adhere to this standard versioning convention. In that case, you must check the ned-id values to see whether migration is needed.
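For NED packages that follow the standard versioning convention, the rule above reduces to comparing the first two version numbers. The helper below is a sketch of that rule, not an NSO API:

```python
# Sketch of the documented rule: a change in the first or second number
# of the package version implies a new ned-id and hence a NED migration.
def migration_needed(old_version, new_version):
    old_mm = old_version.split(".")[:2]
    new_mm = new_version.split(".")[:2]
    return old_mm != new_mm

assert migration_needed("1.0.1", "1.2.0")      # minor changed: migrate
assert migration_needed("1.0.1", "2.0.2")      # major changed: migrate
assert not migration_needed("1.0.1", "1.0.3")  # patch only: in-place upgrade
```

Remember that packages not following this convention require checking the actual ned-id values instead.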
A potential issue with a new NED is that it can break an existing service or other packages that rely on it. To help service developers and operators verify or upgrade the service code, NSO provides additional migration tooling options for identifying the paths and service instances that may be impacted. Therefore, ensure that all the other packages are compatible with the new NED before you start migrating devices.
To prepare for the NED migration process, first load the new NED package into NSO with either the packages reload or packages add command. Then use the show packages command to verify that both NEDs, the new and the old, are present. Finally, you may perform the migration of devices either one by one or multiple at a time.
Depending on your operational policies, this may be done during normal operations and does not strictly require a maintenance window, as the migration only reads from and doesn't write to a network device. Still, it is recommended that you create an NSO backup before proceeding.
Note that changing a ned-id also affects device templates if you use them. To make existing device templates compatible with the new ned-id, you can use the copy action. It will copy the configuration used for one ned-id to another, as long as the schema nodes used haven't changed between the versions. The following example demonstrates the copy action usage:
For individual devices, use the /devices/device/migrate action, with the new-ned-id parameter. Without additional options, the command will read and update the device configuration in NSO. As part of this process, NSO migrates all the configuration and service meta-data. Use the dry-run option to see what the command would do and verbose to list all impacted service instances.
You may also use the no-networking option to prevent NSO from generating any southbound traffic towards the device. In this case, only the device configuration in the CDB is used for the migration but then NSO can't know if the device is in sync. Afterward, you must use the compare-config or the sync-from action to remedy this.
For migrating multiple devices, use the /devices/migrate action, which takes the same options. However, with this action, you must also specify the old-ned-id, which limits the migration to devices using the old NED. You can further restrict the action with the device parameter, selecting only specific devices.
It is possible for a NED migration to fail if the new NED is not entirely backward compatible with the old one and the device has an active configuration that is incompatible with the new NED version. In such cases, NSO will produce an error with the YANG constraint that is not satisfied. Here, you must first manually adjust the device configuration to make it compatible with the new NED, and then you can perform the migration as usual.
Depending on what changes are introduced by the migration and how these impact the services, it might be good to re-deploy the affected services before removing the old NED package. It is especially recommended in the following cases:
When the service touches a list key that has changed. As long as the old schema is loaded, NSO is able to perform an upgrade.
When a namespace that was used by the service has been removed. The service diffset, that is, the recorded configuration changes created by the service, will no longer be valid. The diffset is needed for the correct get-modifications output, deep-check-sync, and similar operations.
The YANG modeling language supports the notion of a module revision. It allows users to distinguish between different versions of a module, so the module can evolve over time. If you wish to use a new revision of a module for a managed device, for example, to access new features, you generally need to create a new NED.
When a model evolves quickly and you have many devices that require the use of a lot of different revisions, you will need to maintain a high number of NEDs, which are mostly the same. This can become especially burdensome during NSO version upgrades, when all NEDs may need to be recompiled.
When a YANG module is only updated in a backward-compatible way (following the upgrade rules in RFC6020 or RFC7950), the NSO compiler, ncsc, allows you to pack multiple module revisions into the same package. This way, a single NED with multiple device model revisions can be used, instead of multiple NEDs. Based on the capabilities exchange, NSO will then use the correct revision for communication with each device.
However, there is a major downside to this approach. While the exact revision is known for each communication session with the managed device, the device model in NSO does not have that information. For that reason, the device model always uses the latest revision. When pushing configuration to a device that only supports an older revision, NSO silently drops the unsupported parts. This may have surprising results, as the NSO copy can contain configuration that is not really supported on the device. Use the no-revision-drop commit parameter when you want to make sure you are not committing config that is not supported by a device.
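The revision-drop behavior described above can be illustrated with a toy model. This is not NSO code; it is a hypothetical sketch in which each node is tagged with the revision that introduced it, so that pushing to a device supporting only an older revision silently loses the newer nodes:

```python
# Illustration of revision drop: nodes introduced after the device's
# supported revision are silently omitted when pushing configuration.
def push_config(config, device_revision):
    return {
        node: value
        for node, (value, introduced_in) in config.items()
        if introduced_in <= device_revision  # ISO dates compare lexically
    }

config = {
    "mtu":      (1500, "2020-01-01"),
    "new-knob": (True, "2022-06-01"),  # only exists in the newer revision
}
print(push_config(config, "2020-01-01"))  # new-knob is silently dropped
```

The no-revision-drop commit parameter exists precisely to turn this silent drop into an error.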
If you still wish to use this functionality, you can create a NED package with the ncs-make-package --netconf-ned command as you would otherwise. However, the supplied source YANG directory should contain YANG modules with different revisions. The files should follow the module-or-submodule-name@revision-date.yang naming convention, as specified in RFC 6020. Some versions of the compiler require you to use the --no-fail-on-warnings option with the ncs-make-package command, or the build process may fail.
The examples.ncs/development-guide/ned-upgrade/yang-revision example shows how you can perform a YANG model upgrade. The original, 1.0 version of the router NED uses the initial revision of the router YANG model. First, it is updated to the version 1.0.1, which uses a newer, backward-compatible revision of the model, following a revision merge approach.
In the second part of the example, a further revision of the model introduces breaking changes; therefore, the version is increased to 1.1 and a different ned-id is assigned to the NED. In this case, you can't use revision merge and the usual NED migration procedure is required.
> export NSO_RUNDIR=~/nso-lab-rundir
> cd /tmp/ned-package-store
> chmod u+x ncs-6.0-cisco-iosxr-7.43.signed.bin
> ./ncs-6.0-cisco-iosxr-7.43.signed.bin
> cd /tmp/ned-package-store
> chmod u+x ncs-6.0-cisco-iosxr-7.43.signed.bin
> ./ncs-6.0-cisco-iosxr-7.43.signed.bin
> ./ncs-6.0-cisco-iosxr-7.43.signed.bin --skip-verification
> ncs_cli -C -u admin
admin@ncs# configure
Entering configuration mode terminal
admin@ncs(config)#
admin@ncs(config)# devices authgroup my-xrgroup default-map remote-name cisco remote-password cisco
admin@ncs(config)# devices device xrdev-1 address 10.10.1.1
admin@ncs(config)# devices device xrdev-1 port 22
admin@ncs(config)# devices device xrdev-1 device-type cli ned-id cisco-iosxr-cli-7.43 protocol ssh
admin@ncs(config)# devices device xrdev-1 state admin-state unlocked
admin@ncs(config)# devices device xrdev-1 authgroup my-xrgroup
> ncs_cli -C -u admin
admin@ncs# configure
Entering configuration mode terminal
admin@ncs(config)#
admin@ncs(config)# devices authgroup my-tapigroup default-map remote-name admin remote-password admin
admin@ncs(config)# devices device tapidev-1 address 10.10.1.2
admin@ncs(config)# devices device tapidev-1 port 443
admin@ncs(config)# devices device tapidev-1 device-type generic ned-id onf-tapi_rc-gen-2.0
admin@ncs(config)# devices device tapidev-1 state admin-state unlocked
admin@ncs(config)# devices device tapidev-1 authgroup my-tapigroup
> ncs_cli -C -u admin
> echo $NED_YANG_TARGET_DIR
/tmp/ned-package-store/onf-tapi_rc-2.0/src/yang
> cd $NED_YANG_TARGET_DIR/..
> make clean all
======== RUNNING YANG PRE-PROCESSOR (YPP) WITH THE FOLLOWING VARIABLES:
tools/ypp --var NCS_VER=6.0 --var NCS_VER_NUMERIC=6000000
--var SUPPORTS_CDM=YES --var SUPPORTS_ROLLBACK_FILES_OCTAL=YES
--var SUPPORTS_SHOW_STATS_PATH=YES \
\
--from=' NEDCOM_SECRET_TYPE' --to=' string' \
'tmp-yang/*.yang'
touch tmp-yang/ypp_ned
======== REMOVE PRESENCE STATEMENT ON CONTEXT TOP CONTAINER
tools/ypp --from="(presence \"Root container)" \
--to="//\g<1>" \
'tmp-yang/tapi-common.yang'
======== ADDING EXTRA ENUM WITH CORRECT SPELLING: NO_PROTECTION
tools/jypp --add-stmt=/typedef#protection-type/type::"enum NO_PROTECTION;" \
'tmp-yang/tapi-topology.yang' || true
======== ADDING EXTRA IDENTITIES USED BY CERTAIN TAPI DEVICES
tools/jypp --add-stmt=/::"identity DIGITAL_SIGNAL_TYPE_400GBASE-R { base DIGITAL_SIGNAL_TYPE; }" \
--add-stmt=/::"identity DIGITAL_SIGNAL_TYPE_GigE_CONV { base DIGITAL_SIGNAL_TYPE; }" \
--add-stmt=/::"identity DIGITAL_SIGNAL_TYPE_ETHERNET { base DIGITAL_SIGNAL_TYPE; }" \
'tmp-yang/tapi-dsr.yang' || true
> cd /tmp/ned-package-store
> chmod u+x ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> ./ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> tar xfz ncs-6.0-onf-tapi_rc-2.0.3.tar.gz
> ls -d */
onf-tapi_rc-2.0
> mv onf-tapi_rc-2.0 onf-tapi_rc-2.0-variant-1
> cd /tmp/ned-package-store
> chmod u+x ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> ./ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> tar xfz ncs-6.0-onf-tapi_rc-2.0.3.tar.gz
> ls -d */
onf-tapi_rc-2.0
> mv onf-tapi_rc-2.0 onf-tapi_rc-2.0-for-new-firmware
> ncs_cli -C -u admin
admin@ncs# devices device dev-1 rpc rpc-get-modules get-modules
profile onf-tapi-from-git remote { git { checkout v2.3.1 } }
local-dir /tmp/ned-package-store/onf-tapi_rc-2.0-for-new-firmware/src/yang
> cd /tmp/ned-package-store/onf-tapi_rc-2.0-for-new-firmware/src/yang
> make clean all NED_ID_MAJOR=2 NED_ID_MINOR=3.1
admin@ncs(config)# devices device dev-1 ned-settings onf-tapi_rc restconf profile vendor-xyz
> ncs-setup --package cisco-iosxr-7.43.tar.gz --dest $NSO_RUNDIR
> ncs_cli -C -u admin
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package onf-tapi_rc-gen-2.0
result true
}
admin@ncs#
admin@ncs(config)# devices template acme-ntp ned-id router-nc-1.0 copy ned-id router-nc-1.2





> ./ncs-6.0-cisco-iosxr-7.43.signed.bin --skip-verification
> ls *.tar.gz
ncs-6.0-cisco-iosxr-7.43.tar.gz
> tar xfz ncs-6.0-cisco-iosxr-7.43.tar.gz
> ls -d */
cisco-iosxr-7.43
> ncs-setup --package cisco-iosxr-7.43 --dest $NSO_RUNDIR
> ncs_cli -C -u admin
admin@ncs# packages reload
reload-result {
package cisco-iosxr-cli-7.43
result true
}
> ls *.tar.gz
ncs-6.0-cisco-iosxr-7.43.tar.gz
> $NCS_DIR/bin/ncs-backup
> ncs_cli -C -u admin
admin@ncs# software packages fetch package-from-file
/tmp/ned-package-store/ncs-6.0-cisco-iosxr-7.43.tar.gz
admin@ncs# software packages list
package {
name ncs-6.0-cisco-iosxr-7.43.tar.gz
installable
}
admin@ncs# software packages install cisco-iosxr-7.43
admin@ncs# software packages list
package {
name ncs-6.0-cisco-iosxr-7.43.tar.gz
installed
}
admin@ncs# packages reload
admin@ncs# software packages list
package {
name cisco-iosxr-cli-7.43
loaded
}
admin@ncs(config)# commit
admin@ncs(config)# devices device xrdev-1 ssh fetch-host-keys
admin@ncs(config)# commit
admin@ncs# devices device dev-1 connect
result true
info (admin) Connected to dev-1 - 127.0.0.1:7888
admin@ncs# show devices device dev-1 module
NAME REVISION FEATURE DEVIATION
-------------------------------------------------------------
ietf-restconf-monitoring 2017-01-26 - -
tailf-internal-rpcs 2022-07-08 - -
tailf-ned-onf-tapi_rc-stats 2022-10-17 - -
admin@ncs# devices device dev-1 rpc ?
Possible completions:
rpc-get-modules  rpc-list-modules  rpc-list-profiles  rpc-show-default-local-dir
admin@ncs# devices device dev-1 rpc rpc-show-default-local-dir show-default-local-dir
result /nso-lab-rundir/packages/onf-tapi_rc-2.0/src/yang
admin@ncs#
admin@ncs# devices device dev-1 rpc rpc-show-default-local-dir show-default-local-dir
Error: External error in the NED implementation for device nokia-srlinux-1: default
local directory does not exist (/nso-lab-rundir/packages/onf-tapi_rc-2.0/src/yang)
admin@ncs#
> cd /tmp/ned-package-store
> chmod u+x ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> ./ncs-6.0-onf-tapi_rc-2.0.3.signed.bin
> tar xfz ncs-6.0-onf-tapi_rc-2.0.3.tar.gz
> ls -d */
onf-tapi_rc-2.0
admin@ncs# devices device netsim-0 rpc rpc-list-modules list-modules
module {
name tapi-common
revision 2020-04-23
namespace urn:onf:otcc:yang:tapi-common
schema https://localhost:7888/restconf/tailf/modules/tapi-common/2020-04-23
}
module {
name tapi-connectivity
revision 2020-06-16
namespace urn:onf:otcc:yang:tapi-connectivity
schema https://localhost:7888/restconf/tailf/modules/tapi-connectivity/2020-06-16
}
module {
name tapi-dsr
revision 2020-04-23
namespace urn:onf:otcc:yang:tapi-dsr
schema https://localhost:7888/restconf/tailf/modules/tapi-dsr/2020-04-23
}
module {
name tapi-equipment
revision 2020-04-23
namespace urn:onf:otcc:yang:tapi-equipment
schema https://localhost:7888/restconf/tailf/modules/tapi-equipment/2020-04-23
}
...
admin@ncs# devices device dev-1 rpc rpc-list-profiles list-profiles
profile {
name onf-tapi-from-device
description Download the ONF TAPI YANG models. Download is done directly from device.
}
profile {
name onf-tapi-from-git
description Download the ONF TAPI YANG models. Download is done from the ONF TAPI github repo.
}
profile {
name onf-tapi
description Download the ONF TAPI YANG models. Download source must be specified explicitly.
}
admin@ncs# devices device dev-1 rpc rpc-get-modules get-modules profile onf-tapi-from-device local-dir /tmp/ned-package-store/onf-tapi_rc-2.0/src/yang
admin@ncs# devices device dev-1 rpc rpc-get-modules get-modules profile onf-tapi-from-device
result
Fetching modules:
tapi-common - urn:onf:otcc:yang:tapi-common (32875 bytes)
tapi-connectivity - urn:onf:otcc:yang:tapi-connectivity (40488 bytes)
fetching imported module tapi-path-computation
fetching imported module tapi-topology
tapi-dsr - urn:onf:otcc:yang:tapi-dsr (11172 bytes)
tapi-equipment - urn:onf:otcc:yang:tapi-equipment (33406 bytes)
tapi-eth - urn:onf:otcc:yang:tapi-eth (93152 bytes)
fetching imported module tapi-oam
tapi-notification - urn:onf:otcc:yang:tapi-notification (23864 bytes)
tapi-oam - urn:onf:otcc:yang:tapi-oam (30409 bytes)
tapi-odu - urn:onf:otcc:yang:tapi-odu (45327 bytes)
tapi-path-computation - urn:onf:otcc:yang:tapi-path-computation (19628 bytes)
tapi-photonic-media - urn:onf:otcc:yang:tapi-photonic-media (52848 bytes)
tapi-topology - urn:onf:otcc:yang:tapi-topology (43357 bytes)
tapi-virtual-network - urn:onf:otcc:yang:tapi-virtual-network (13278 bytes)
fetched and saved 12 yang module(s) to /tmp/ned-package-store/onf-tapi_rc-2.0/src/yang
> ls -l /tmp/ned-package-store/onf-tapi_rc-2.0/src/yang
total 616
-rw-r--r-- 1 nso-user staff 109607 Nov 11 13:15 tailf-common.yang
-rw-r--r-- 1 nso-user staff 32878 Nov 11 13:15 tapi-common.yang
-rw-r--r-- 1 nso-user staff 40503 Nov 11 13:15 tapi-connectivity.yang
-rw-r--r-- 1 nso-user staff 11172 Nov 11 13:15 tapi-dsr.yang
-rw-r--r-- 1 nso-user staff 33406 Nov 11 13:15 tapi-equipment.yang
-rw-r--r-- 1 nso-user staff 93152 Nov 11 13:15 tapi-eth.yang
-rw-r--r-- 1 nso-user staff 23864 Nov 11 13:15 tapi-notification.yang
-rw-r--r-- 1 nso-user staff 30409 Nov 11 13:15 tapi-oam.yang
-rw-r--r-- 1 nso-user staff 45327 Nov 11 13:15 tapi-odu.yang
-rw-r--r-- 1 nso-user staff 19628 Nov 11 13:15 tapi-path-computation.yang
-rw-r--r-- 1 nso-user staff 52848 Nov 11 13:15 tapi-photonic-media.yang
-rw-r--r-- 1 nso-user staff 43357 Nov 11 13:15 tapi-topology.yang
-rw-r--r-- 1 nso-user staff 13281 Nov 11 13:15 tapi-virtual-network.yang
> ncs_cli -C -u admin
admin@ncs# devices device dev-1 rpc rpc-get-modules get-modules profile
onf-tapi-from-device local-dir /tmp/ned-package-store/onf-tapi_rc-2.0-variant-1/src/yang
> make clean all NED_ID_SUFFIX=_tapi_v2.1.3
> make clean all NED_ID_MAJOR=2 NED_ID_MINOR=1.3
> cd /tmp/ned-package-store
> tar cfz onf-tapi_rc-2.0-variant-1.tar.gz onf-tapi_rc-2.0-variant-1
> ncs-setup --package onf-tapi_rc-2.0-variant-1.tar.gz --dest $NSO_RUNDIR
> ncs_cli -C -u admin
admin@ncs# packages reload
> cd /tmp/ned-package-store
> tar cfz onf-tapi_rc-2.0-variant-1.tar.gz onf-tapi_rc-2.0-variant-1
> ncs-setup --package onf-tapi_rc-2.0-variant-1.tar.gz --dest $NSO_RUNDIR
> ncs_cli -C -u admin
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package onf-tapi_rc-gen-2.0
result true
}
reload-result {
package onf-tapi_rc-gen-2.3.1
result true
}
admin@ncs# devices device dev-1 migrate new-ned-id onf-tapi_rc-gen-2.3.1 dry-run
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/vnw-constraint/service-layer
info leaf-list type stack has changed
backward-compatible false
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/vnw-constraint/requested-capacity/bandwidth-profile
info sub-tree has been deleted
backward-compatible false
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/vnw-constraint/latency-characteristic/queing-latency-characteristic
info sub-tree has been deleted
backward-compatible false
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/vnw-constraint
info min/max has been relaxed
backward-compatible true
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/vnw-constraint
info list key has changed; leaf 'local-id' has changed type
backward-compatible false
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/layer-protocol-name
info node is no longer mandatory
backward-compatible true
}
modified-path {
path /tapi-common:context/tapi-virtual-network:virtual-network-context/
virtual-nw-service/layer-protocol-name
info leaf-list type stack has changed
backward-compatible false
}
Develop your own NEDs to integrate unsupported devices in your network.
A Network Element Driver (NED) is a key NSO component that allows NSO to communicate southbound with network devices. The device YANG models contained in a NED enable NSO to store device configurations in the CDB and expose a uniform API to the network for automation. The YANG models can cover anything from a tiny subset of the device to the entire device. Typically, the YANG models in a NED represent the subset of the device's configuration data, state data, remote procedure calls, and notifications to be managed using NSO.
This guide provides information on NED development, focusing on building your own NED package. For a general introduction to NEDs, Cisco-provided NEDs, and NED administration, refer to the NED Administration in Administration.
A NED package allows NSO to manage a network device of a specific type. NEDs typically contain YANG models and the code specifying how NSO should configure the device and retrieve its status. When developing your own NED, there are four categories supported by NSO.
A NETCONF NED is used with NSO's built-in NETCONF client and requires no code, only YANG models. This NED is suitable for devices that strictly follow the specification for the NETCONF protocol and YANG mappings to NETCONF, targeting a standardized machine-to-machine interface.
A CLI NED targets devices that use a Cisco-style CLI as a human-to-machine configuration interface. Various YANG extensions are used to annotate the YANG model representation of the device, together with code converting data between NSO and device formats.
A generic NED is typically used to communicate with non-CLI devices, such as devices using protocols like REST, TL1, CORBA, SOAP, RESTCONF, or gNMI as a configuration interface. Even NETCONF-enabled devices often require a generic NED to function properly with NSO.
In summary, the NETCONF and SNMP NEDs use built-in NSO clients; the CLI NED is model-driven, whereas the generic NED requires a Java program to translate operations toward the device.
NSO differentiates between managed devices that can handle transactions and devices that cannot. This discussion applies regardless of NED type, i.e., NETCONF, SNMP, CLI, or generic.
NEDs for devices that cannot handle abort must indicate so in the reply of the newConnection() method, stating that the NED wants a reverse diff in case of an abort. Thus, NSO has two different ways to abort a transaction towards a NED: invoking the abort() method with or without a generated reverse diff.
For non-transactional devices, we have no other way of trying out a proposed configuration change than to send the change to the device and see what happens.
The table below shows the seven different data-related callbacks that could or must be implemented by all NEDs. It also differentiates between four different types of devices and what the NED must do in each callback for each device type.
The table below displays the device types:
INITIALIZE: The initialize phase is used to initialize a transaction. For instance, if locking or other transaction preparations are necessary, they should be performed here. This callback is not mandatory to implement if no NED-specific transaction preparations are needed.
UNINITIALIZE: If the transaction is not completed and the NED has done INITIALIZE, this method is called to undo the transaction preparations, that is, restoring the NED to the state before INITIALIZE. This callback is not mandatory to implement if no NED-specific preparations were performed in INITIALIZE.
PREPARE: In the prepare phase, the NEDs get exposed to all the changes that are destined for each managed device handled by each NED. It is the responsibility of the NED to determine the outcome here. If the NED replies successfully from the prepare phase, NSO assumes the device will be able to go through with the proposed configuration change.
ABORT: If any participants in the transaction reject the proposed changes, all NEDs will be invoked in the abort() method for each managed device the NED handles. It is the responsibility of the NED to make sure that whatever was done in the PREPARE phase is undone. For NEDs that indicate as a reply in newConnection() that they want the reverse diff, they will get the reverse data as a parameter here.
COMMIT: Once all NEDs that get invoked in commit(Timeout) reply OK, the transaction is permanently committed to the system. The NED may still reject the change in COMMIT. If any NED rejects the COMMIT, all participants will be invoked in REVERT. NEDs that support confirmed commit with a timeout (such as Cisco IOS XR) may choose to use the provided timeout to make REVERT easy to implement.
REVERT: This state is reached if any NED reports failure in the COMMIT phase. Similar to the ABORT state, the reverse diff is supplied to the NED if the NED has asked for that.
PERSIST: This state is reached at the end of a successful transaction. Here it's the responsibility of the NED to make sure that if the device reboots, the changes are still there.
The following state diagram depicts the different states the NED code goes through in the life of a transaction.
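The phases above can also be sketched as a simple state machine. This is an illustration only, under the assumption that UNINITIALIZE follows ABORT/REVERT to undo the preparations; real NEDs implement these phases as Java callbacks:

```python
# Sketch of the transaction phase sequences described above.
SUCCESS_PATH = ["INITIALIZE", "PREPARE", "COMMIT", "PERSIST"]

def transaction(prepare_ok, commit_ok):
    if not prepare_ok:
        # a participant rejected the proposed change: undo PREPARE, then INITIALIZE
        return ["INITIALIZE", "PREPARE", "ABORT", "UNINITIALIZE"]
    if not commit_ok:
        # a NED rejected COMMIT: all participants revert
        return ["INITIALIZE", "PREPARE", "COMMIT", "REVERT", "UNINITIALIZE"]
    return SUCCESS_PATH

print(transaction(True, True))  # the happy path ends in PERSIST
```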
NED devices have runtime data and statistics. The first part of being able to collect non-configuration data from a NED device is to model the statistics data we wish to gather. In normal YANG files, it is common to have the runtime data nested inside the configuration data. In gathering runtime data for NED devices, we have chosen to separate configuration data and runtime data. In the case of the archetypical CLI device, show running-config ... and friends are used to display the running configuration of the device, whereas various show ... commands are used to display runtime data, for example, show interfaces and show routes. There are different commands for different types of routers/switches and, in particular, different tabular output formats for different device types.
To expose runtime data from a NED controlled device, regardless of whether it's a CLI NED or a Generic NED, we need to do two things:
Write YANG models for the aspects of runtime data we wish to expose northbound in NSO.
Write Java NED code that is responsible for collecting that data.
The NSO NED for the Avaya 4k device contains a data model for some real statistics for the Avaya router and also the accompanying Java NED code. Let's start to take a look at the YANG model for the stats portion, we have:
It's a config false; list of counters per interface. We compile the NED stats module with the --ncs-compile-module flag or with the --ncs-compile-bundle flag. The same non-config module can contain both runtime data and commands and RPCs.
The config false; data from a module that has been compiled with the --ncs-compile-module flag will end up mounted under /devices/device/live-status tree. Thus running the NED towards a real router we have:
It is the responsibility of the NED code to populate the data in the live device tree. Whenever a northbound agent tries to read any data in the live device tree for a NED device, the NED code is invoked.
The NED code implements an interface called NedConnection. This interface contains:
This interface method is invoked by NSO in the NED. The Java code must return what is requested, but it may also return more. The Java code always needs to signal errors by invoking NedWorker.error() and success by invoking NedWorker.showStatsPathResponse(). The latter function indicates what is returned, and also how long it shall be cached inside NSO.
The reason for this design is that it is common for many show commands to work on for example an entire interface, or some other item in the managed device. Say that the NSO operator (or MAAPI code) invokes:
requesting a single leaf, the NED Java code can decide to execute any arbitrary show command towards the managed device, parse the output, and populate as much data as it wants. The Java code also decides how long NSO shall cache the data.
When the showStatsPath() is invoked, the NED should indicate the state/value of the node indicated by the path (i.e. if a leaf was requested, the NED should write the value of this leaf to the provided transaction handler (th) using MAAPI, or indicate its absence as described below; if a list entry or a presence container was requested then the NED should indicate presence or absence of the element, if the whole list is requested then the NED should populate the keys for this list). Often requesting such data from the actual device will give the NED more data than specifically requested, in which case the worker is free to write other values as well. The NED is not limited to populating the subtree indicated by the path, it may also write values outside this subtree. NSO will then not request those paths but read them directly from the transaction. Different timeouts can be provided for different paths.
If a leaf does not have a value or does not exist, the NED can indicate this by returning a TTL for the path to the leaf, without setting the value in the provided transaction. This has changed from earlier versions of NSO. The same applies to optional containers and list entries. If the NED populates the keys for a certain list (both when it is requested to do so or when it decided to do so because it has received this data from the device), it should set the TTL value for the list itself to indicate the time the set of keys should be considered up to date. It may choose to provide different TTL values for some or all list entries, but it is not required to do so.
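The TTL behavior described above can be pictured with a small self-contained sketch. The StatsCache class below is hypothetical (NSO's actual cache is internal); it only illustrates the idea that a reported path, or a reported absence, stays valid until its TTL expires, after which NSO would invoke showStatsPath() again.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of TTL-based caching of operational data.
// A null value models "known to be absent" - still cached with a TTL.
public class StatsCache {
    private static class Entry {
        final String value;
        final long expiresAtMs;
        Entry(String value, long expiresAtMs) {
            this.value = value;
            this.expiresAtMs = expiresAtMs;
        }
    }

    private final Map<String, Entry> cache = new HashMap<>();

    // The NED reported a value (or absence) for a path, valid for ttlMs.
    public void put(String path, String value, long ttlMs, long nowMs) {
        cache.put(path, new Entry(value, nowMs + ttlMs));
    }

    // True while the cached entry is fresh, i.e. the NED would not be
    // invoked again for this path.
    public boolean isFresh(String path, long nowMs) {
        Entry e = cache.get(path);
        return e != null && nowMs < e.expiresAtMs;
    }

    public String get(String path) {
        Entry e = cache.get(path);
        return e == null ? null : e.value;
    }
}
```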
One important task when implementing a NED of any type is to make it mimic the device's handling of default values as closely as possible. Network equipment can typically deal with default values in many different ways.
Some devices display default values on leafs even if they have not been explicitly set. Others use trimming: if a leaf is set to its default value, it is 'unset' and disappears from the device's configuration dump.
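The trimming behavior can be modeled in a few lines of plain Java (a hypothetical TrimmingDevice helper, for illustration only): any leaf whose value equals its default disappears from the configuration dump, which is exactly why NSO ends up out of sync unless it is told about this mode.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical model of a device that trims default values:
// leafs set to their default value do not show up in the dump.
public class TrimmingDevice {
    public static Map<String, String> dumpConfig(Map<String, String> config,
                                                 Map<String, String> defaults) {
        Map<String, String> dump = new LinkedHashMap<>();
        for (Map.Entry<String, String> leaf : config.entrySet()) {
            // Only leafs that differ from their default appear in the dump.
            if (!leaf.getValue().equals(defaults.get(leaf.getKey()))) {
                dump.put(leaf.getKey(), leaf.getValue());
            }
        }
        return dump;
    }
}
```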
It is the responsibility of the NED to make NSO aware of how the device handles default values. This is done by registering a special NED Capability entry with NSO. Two modes are currently supported: trim and report-all.
Example: A Device Trimming Default Values
This is the typical behavior of a Cisco IOS device. The simple YANG snippet below illustrates the behavior: a container with a boolean leaf whose default value is true.
Try setting the leaf to true in NSO and commit. Then compare the configuration:
The result shows that the configurations differ. The reason is that the device does not display the value of the leaf 'enabled': it has been trimmed since it holds its default value. NSO is now out of sync with the device.
To solve this issue, make the NED tell the NSO that the device is trimming default values. Register an extra NED Capability entry in the Java code.
Now, try the same operation again:
The NSO is now in sync with the device.
Example: A Device Displaying All Default Values
Some devices display default values for leafs even if they have not been explicitly set. The simple YANG code below will be used to illustrate this behavior. A list containing a key and a leaf with a default value.
Try creating a new list entry in NSO and commit. Then compare the configuration:
The result shows that the configurations differ. The NSO is out of sync. This is because the device displays the default value of the 'threshold' leaf even if it has not been explicitly set through the NSO.
To solve this issue, make the NED tell the NSO that the device is reporting all default values. Register an extra NED Capability entry in the Java code.
Now, try the same operation again:
The NSO is now in sync with the device.
The possibility to do a dry-run on a transaction is a feature in NSO that allows examining the changes that would be pushed out to the managed devices in the network. The output can be produced in different formats: cli, xml, and native. To produce a dry-run in the native output format, NSO needs to know the exact syntax used by the device, and the task of converting the commands or operations produced by NSO into device-specific output belongs to the corresponding NED. This is the purpose of the prepareDry() callback in the NED interface.
In order to be able to invoke a callback an instance of the NED object needs to be created first. There are two ways to instantiate a NED:
newConnection() callback that tells the NED to establish a connection to the device which can later be used to perform any action such as show configuration, apply changes, or view operational data as well as produce dry-run output.
Optional initNoConnect() callback that tells the NED to create an instance that would not need to communicate with the device, and hence must not establish a connection or otherwise communicate with the device. This instance will only be used to calculate dry-run output. It is possible for a NED to reject the initNoConnect() request if it is not able to calculate the dry-run output without establishing a connection to the device, for example, if a NED is capable of managing devices with different flavors of syntax and it is not known at the moment which syntax is used by this particular device.
The following state diagram displays NED states specific to the dry-run scenario.
Each managed device in NSO has a device type, which informs NSO how to communicate with the device. The device type is one of netconf, snmp, cli, or generic. In addition, a special ned-id identifier is needed.
NSO uses a technique called YANG Schema Mount, where all the data models from a device are mounted into the /devices tree in NSO. Each set of mounted data models is completely separated from the others (they are confined to a "mount jail"). This makes it possible to load different versions of the same YANG module for different devices. The functionality is called Common Data Models (CDM).
In most cases, there are many devices running the same software version in the network managed by NSO, thus using the exact same set of YANG modules. With CDM, all YANG modules for a certain device (or family of devices) are contained in a NED package (or just NED for short). If the YANG modules on the device are updated in a backward-compatible way, the NED is also updated.
However, if the YANG modules on the device are updated in an incompatible way in a new version of the device's software, it might be necessary to create a new NED package for the new set of modules. Without CDM, this would not be possible, since there would be two different packages that contained different versions of the same YANG module.
When a NED is built, its YANG modules are compiled to be mounted into the NSO YANG model. This device compilation of the device's YANG modules is performed via the ncsc tool provided by NSO.
The ned-id identifier is a YANG identity, which must be derived from one of the pre-defined identities in $NCS_DIR/src/ned/yang/tailf-ncs-ned.yang.
A YANG model for devices handled by NED code needs to extend the base identity and provide a new identity that can be configured.
The Java NED code registers the identity it handles with NSO.
Similar to how we import device models for NETCONF-based devices, we use the ncsc --ncs-compile-bundle command to import YANG models for NED-handled devices.
Once we have imported such a YANG model into NSO, we can configure the managed device in NSO to be handled by the appropriate NED handler (which is user Java code, more on that later).
When NSO needs to communicate southbound towards a managed device which is not of type NETCONF, it will look for a NED that has registered with the name of the identity, in the case above, the string "ios".
Thus, before NSO attempts to connect to a NED device, and before it tries to sync or manipulate the configuration of the device, user-provided Java NED code must have registered with the NSO service manager, indicating which Java class is responsible for the NED associated with the identity string, in this case "ios". This happens automatically when the NSO Java VM gets an instantiate-component request for an NSO package component of type ned.
The component Java class myNed needs to implement either of the interfaces NedGeneric or NedCli. Both interfaces require the NED class to implement the following:
The above three callbacks are used by the NSO Java VM to connect the NED Java class with NSO. They are invoked when the NSO Java VM receives the instantiate-component request.
The underlying NedMux will start a number of threads, and invoke the registered class with other data callbacks as transactions execute.
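As a standalone illustration of these three registration callbacks, the sketch below shows a minimal class in the shape the text describes. It deliberately does not implement the real com.tailf.ned NedCli/NedGeneric interfaces (which require much more); the class name MyNed and the returned values follow the Cisco IOS example used in this section.

```java
// Standalone sketch (not the real com.tailf.ned interfaces) of the three
// callbacks NSO uses to learn which transport style, YANG modules, and
// ned-id identity a NED class handles.
public class MyNed {
    // should return "cli" or "generic"
    public String type() {
        return "cli";
    }

    // Which YANG modules are covered by the class
    public String[] modules() {
        return new String[] { "tailf-ned-cisco-ios" };
    }

    // Which identity is implemented by the class
    public String identity() {
        return "ios";
    }
}
```

NSO matches the identity() string against the ned-id configured on the device to pick the responsible NED class.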
NSO has supported Junos devices from early on. The legacy Junos NED is NETCONF-based, but as Junos devices did not provide YANG modules in the past, complex NSO machinery translated Juniper's XML Schema Description (XSD) files into a single YANG module. This was an attempt to aggregate several Juniper device modules/versions.
Juniper nowadays provides YANG modules for Junos devices. Junos YANG modules can be downloaded from the device and used directly in NSO with the new juniper-junos_nc NED.
By downloading the YANG modules using juniper-junos_nc NED tools and rebuilding the NED, the NED can provide full coverage immediately when the device is updated instead of waiting for a new legacy NED release.
This guide describes how to replace the legacy juniper-junos NED and migrate NSO applications to the juniper-junos_nc NED using the NSO MPLS VPN example from the NSO examples collection as a reference.
Prepare the example:
Add the juniper-junos and juniper-junos_nc NED packages to the example.
Configure the connection to the Junos device.
Add the MPLS VPN service configuration to the simulated network, including the Junos device using the legacy juniper-junos NED.
Adapting the service to the juniper-junos_nc NED:
Un-deploy MPLS VPN service instances with no-networking.
Delete Junos device config with no-networking.
Set the Junos device to NETCONF/YANG compliant mode.
Switch the ned-id for the Junos device to the juniper-junos_nc NED package.
This guide uses the MPLS VPN example in Python from the NSO example set under $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/17-mpls-vpn-python to demonstrate porting an existing application to use the juniper-junos_nc NED. The simulated Junos device is replaced with a Junos vMX 21.1R1.11 container, but other NETCONF/YANG-compliant Junos versions also work.
The first step is to add the latest juniper-junos and juniper-junos_nc NED packages to the example's package directory. The NED tarballs must be available and downloaded from your account to the 17-mpls-vpn-python example directory. Replace the NSO_VERSION and NED_VERSION variables with the versions you use:
Build and start the example:
Replace the netsim device connection configuration in NSO with the configuration for connecting to the Junos device. Adjust the USER_NAME, PASSWORD, and HOST_NAME/IP_ADDR variables and the timeouts as required for the Junos device you are using with this example:
Open a CLI terminal or use NETCONF on the Junos device to verify that the rfc-compliant and yang-compliant modes are not yet enabled. Examples:
Or:
The rfc-compliant and yang-compliant nodes must not be enabled yet for the legacy Junos NED to work. If they are enabled, delete them in the Junos CLI or using NETCONF. A netconf-console example:
Back to the NSO CLI to upgrade the legacy juniper-junos NED to the latest version:
Turn off autowizard and complete-on-space to make it possible to paste configs:
The example service config for two MPLS VPNs where the endpoints have been selected to pass through the PE node PE2, which is a Junos device:
To verify that the traffic passes through PE2:
Toward the end of this lengthy output, observe that some config changes are going to the PE2 device using the http://xml.juniper.net/xnm/1.1/xnm legacy namespace:
Looks good. Commit to the network:
Now that the service's configuration is in place using the legacy juniper-junos NED to configure the PE2 Junos device, proceed to switch to the juniper-junos_nc NED for PE2 instead. The service template and Python code will need a few adaptations.
To keep the NSO service meta-data information intact when bringing up the service with the new juniper-junos_nc NED, first un-deploy the service instances in NSO, keeping only the configuration on the devices:
First, save the legacy Junos non-compliant mode device configuration to later diff against the compliant mode config:
Delete the PE2 configuration in NSO to prepare for retrieving it from the device in a NETCONF/YANG compliant format using the new NED:
Using the Junos CLI:
Or, using the NSO netconf-console tool:
The juniper-junos_nc NED is delivered without YANG modules so that it can be populated with the device-specific YANG modules. The YANG modules are retrieved directly from the Junos device:
See the juniper-junos_nc README for more options and details.
Build the YANG modules retrieved from the Junos device with the juniper-junos_nc NED:
Reload the packages to load the juniper-junos_nc NED with the added YANG modules:
The service must be updated to handle the difference between the Junos device's non-compliant and compliant configuration. The NSO service uses Python code to configure the Junos device using a service template. One way to find the required updates to the template and code is to check the difference between the non-compliant and compliant configurations for the parts covered by the template.
Checking the packages/l3vpn/templates/l3vpn-pe.xml service template Junos device part under the legacy http://xml.juniper.net/xnm/1.1/xnm namespace, you can observe that it configures interfaces, routing-instances, policy-options, and class-of-service.
You can save the NETCONF/YANG compliant Junos device configuration and diff it against the non-compliant configuration from the previously stored legacy.xml file:
Examining the difference between the configuration in the legacy.xml and new.xml files for the parts covered by the service template:
There is no longer a single namespace covering all configurations. The configuration is now divided into multiple YANG modules with a namespace for each.
The /configuration/policy-options/policy-statement/then/community node choice identity is no longer provided with a leaf named key1. Instead, the leaf name is choice-ident, and a choice-value leaf is set.
The /configuration/class-of-service/interfaces/interface/unit/shaping-rate/rate leaf is now a string that takes a number with an optional suffix (k, m, or g) instead of an int32 value.
To enable the template to handle a Junos device in NETCONF/YANG compliant mode, add the following to the packages/l3vpn/templates/l3vpn-pe.xml service template:
The Python file is changed to handle the new BW_SUFFIX variable, generating a string with a suffix instead of an int32:
Code that uses the function and sets the string to the service template:
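The conversion logic can be verified in isolation. The standalone sketch below mirrors the example's Python helper in Java (the BandwidthFormat class is hypothetical; it uses the same k/m/g suffixes as the service code): divide by 1000 while the value divides evenly, then append the suffix reached.

```java
// Hypothetical standalone mirror of the service's int32-to-suffix helper:
// 6000000 becomes "6m", 300000 becomes "300k", and a value that does not
// divide evenly by 1000 stays numeric, e.g. 6500 stays "6500".
public class BandwidthFormat {
    public static String toSuffixString(int val) {
        if (val == 0) {
            return "0";
        }
        String[] suffixes = { "", "k", "m", "g" };
        for (String suffix : suffixes) {
            if (val % 1000 != 0) {
                return val + suffix;
            }
            val /= 1000;
        }
        // int32 values always return inside the loop above
        return val + "g";
    }
}
```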
After making the changes to the service template and Python code, reload the updated package(s):
The service instances need to be re-deployed to own the device configuration again:
The service is now in sync with the device configuration stored in NSO CDB:
When re-deploying the service instances, any issues with the added service template section for the compliant Junos device configuration, such as the added namespaces and nodes, are discovered.
As there is no validation of the suffixed rate leaf string in the Junos device model, a value in the wrong format is not flagged until the configuration is pushed to the Junos device. Comparing the device configuration in NSO with the configuration on the device reveals such inconsistencies without having to test the configuration with the device:
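A simple pre-check in the service code could catch a malformed rate string before it reaches the device. The sketch below is hypothetical: the accepted grammar (digits plus an optional k/m/g suffix) is an assumption based on the suffixes used in this example, not Junos's authoritative format.

```java
import java.util.regex.Pattern;

// Hypothetical sanity check for the shaping-rate string: digits with an
// optional k/m/g suffix. The grammar is an assumption for this sketch,
// not taken from the Junos YANG model.
public class RateCheck {
    private static final Pattern RATE = Pattern.compile("\\d+[kmg]?");

    public static boolean isValid(String rate) {
        return rate != null && RATE.matcher(rate).matches();
    }
}
```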
If there are issues, correct them and redo the re-deploy no-networking for the service instances.
When all issues have been resolved, the service configuration is in sync with the device configuration, and the NSO CDB device configuration matches the configuration on the Junos device:
The NSO service instances are now in sync with the configuration on the Junos device using the juniper-junos_nc NED.
Learn service development in Java with Examples.
As using Java for service development may be somewhat more involved than Python, this section provides further examples and additional tips for setting up the development environment for Java.
The two examples, a simple VLAN service and a Layer 3 MPLS VPN service, are more elaborate but demonstrate the same techniques.
If you or your team primarily focuses on services implemented in Python, feel free to skip or only skim through this section.
NSO's built-in SNMP client can manage SNMP devices by supplying NSO with the MIBs, along with some additional declarative annotations and code to handle the communication with the device. Usually, this legacy protocol is used to read state data. Albeit limited, NSO also has support for configuring devices using SNMP.
Download the compliant YANG models, build, and reload the juniper-junos_nc NED package.
Sync from the Junos device to get the compliant Junos device config.
Update the MPLS VPN service to handle the difference between the non-compliant and compliant configurations belonging to the service.
Re-deploy the MPLS VPN service instances with no-networking to make the MPLS VPN service instances own the device configuration again.
| Callback | SNMP, Cisco IOS, NETCONF devices with startup+running | Devices that can abort, NETCONF devices without confirmed commit | Cisco XR type of devices | ConfD, Junos |
| --- | --- | --- | --- | --- |
| initialize() | NED code shall make the device go into config mode (if applicable) and lock (if applicable). | NED code shall start a transaction on the device. | NED code shall do the equivalent of configure exclusive. | Built in, NSO will lock. |
| uninitialize() | NED code shall unlock (if applicable). | NED code shall abort the transaction. | NED code shall abort the transaction. | Built in, NSO will unlock. |
| prepare(Data) | NED code shall send all data to the device. | NED code shall add Data to the transaction and validate. | NED code shall add Data to the transaction and validate. | Built in, NSO will edit-config towards the candidate, validate, and commit confirmed with a timeout. |
| abort(ReverseData \| null) | Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | Abort the transaction. | Abort the transaction. | Built in, discard-changes and close. |
| commit(Timeout) | Do nothing. | Commit the transaction. | Execute commit confirmed [Timeout] on the device. | Built in, commit confirmed with the timeout. |
| revert(ReverseData \| null) | Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | Either do the equivalent of copy startup to running, or apply the ReverseData to the device. | discard-changes. | Built in, discard-changes and close. |
| persist() | Either do the equivalent of copy running to startup, or nothing. | Either do the equivalent of copy running to startup, or nothing. | confirm. | Built in, commit confirm. |
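As a rough illustration of the callback ordering in the table above, the standalone sketch below (hypothetical, not the real NedConnection interface) records the call sequence for a transaction that commits successfully and for one where prepare fails and the change is aborted.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical trace of the NED transaction callbacks. A real NED would
// talk to the device in each step; here we only record the call order.
public class NedTransactionTrace {
    private final List<String> calls = new ArrayList<>();

    public void initialize() { calls.add("initialize"); }
    public void prepare()    { calls.add("prepare"); }
    public void commit()     { calls.add("commit"); }
    public void persist()    { calls.add("persist"); }
    public void abort()      { calls.add("abort"); }

    // Successful two-phase commit as driven by NSO.
    public List<String> runSuccess() {
        initialize(); prepare(); commit(); persist();
        return calls;
    }

    // prepare failed on some participant: NSO aborts the change.
    public List<String> runAborted() {
        initialize(); prepare(); abort();
        return calls;
    }
}
```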



module tailf-ned-avaya-4k-stats {
namespace 'http://tail-f.com/ned/avaya-4k-stats';
prefix avaya4k-stats;
import tailf-common {
prefix tailf;
}
import ietf-inet-types {
prefix inet;
}
import ietf-yang-types {
prefix yang;
}
container stats {
config false;
container interface {
list gigabitEthernet {
key "num port";
tailf:cli-key-format "$1/$2";
leaf num {
type uint16;
}
leaf port {
type uint16;
}
leaf in-packets-per-second {
type uint64;
}
leaf out-packets-per-second {
type uint64;
}
leaf in-octets-per-second {
type uint64;
}
leaf out-octets-per-second {
type uint64;
}
leaf in-octets {
type uint64;
}
leaf out-octets {
type uint64;
}
leaf in-packets {
type uint64;
}
leaf out-packets {
type uint64;
}
}
}
}
}

$ ncsc --ncs-compile-module avaya4k-stats.yang \
    --ncs-device-dir <dir>

admin@ncs# show devices device r1 live-status interfaces
live-status {
interface gigabitEthernet1/1 {
in-packets-per-second 234;
out-packets-per-second 177;
in-octets-per-second 4567;
out-octets-per-second 3561;
in-octets 12666;
out-octets 16888;
in-packets 7892;
out-packets 2892;
}
...

void showStatsPath(NedWorker w, int th, ConfPath path)
    throws NedException, IOException;

admin@host> show status devices device r1 live-status \
    interface gigabitEthernet1/1/1 out-octets
out-octets 340;

container aaa {
leaf enabled {
default true;
type boolean;
}
}

$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices device a0 config aaa enabled true
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# top devices device a0 compare-config
diff
devices {
device a0 {
config {
aaa {
- enabled;
}
}
}
}

NedCapability capas[] = new NedCapability[2];
capas[0] = new NedCapability(
"",
"urn:ios",
"tailf-ned-cisco-ios",
"",
"2015-01-01",
"");
capas[1] = new NedCapability(
"urn:ietf:params:netconf:capability:" +
"with-defaults:1.0?basic-mode=trim", // Set mode to trim
"urn:ietf:params:netconf:capability:" +
"with-defaults:1.0",
"",
"",
"",
    "");

$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices device a0 config aaa enabled true
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# top devices device a0 compare-config
admin@ncs(config)#

list interface {
key id;
leaf id {
type string;
}
leaf threshold {
default 20;
type uint8;
}
}

$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices device a0 config interface myinterface
admin@ncs(config)# commit
admin@ncs(config)# top devices device a0 compare-config
diff
devices {
device a0 {
config {
interface myinterface {
+ threshold 20;
}
}
}
}

NedCapability capas[] = new NedCapability[2];
capas[0] = new NedCapability(
"",
"urn:abc",
"tailf-ned-abc",
"",
"2015-01-01",
"");
capas[1] = new NedCapability(
"urn:ietf:params:netconf:capability:" +
"with-defaults:1.0?basic-mode=report-all", // Set mode to report-all
"urn:ietf:params:netconf:capability:" +
"with-defaults:1.0",
"",
"",
"",
    "");

$ ncs_cli -C -u admin
admin@ncs# config
admin@ncs(config)# devices device a0 config interface myinterface
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# top devices device a0 compare-config
admin@ncs(config)#

import tailf-ncs-ned {
prefix ned;
}
identity cisco-ios {
base ned:cli-ned-id;
}

admin@ncs# show running-config devices device r1
address 127.0.0.1
port 2025
authgroup default
device-type cli ned-id cisco-ios
state admin-state unlocked
...

// should return "cli" or "generic"
String type();
// Which YANG modules are covered by the class
String [] modules();
// Which identity is implemented by the class
String identity();

$ cd $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/17-mpls-vpn-python
$ cp ./ncs-NSO_VERSION-juniper-junos-NED_VERSION.tar.gz packages/
$ cd packages
$ tar xfz ../ncs-NSO_VERSION-juniper-junos_nc-NED_VERSION.tar.gz
$ cd -

$ make all start

$ ncs_cli -u admin -C
admin@ncs# config
admin@ncs(config)# devices authgroups group juniper umap admin remote-name USER_NAME \
remote-password PASSWORD
admin@ncs(config)# devices device pe2 authgroup juniper address HOST_NAME/IP_ADDR port 830
admin@ncs(config)# devices device pe2 connect-timeout 240
admin@ncs(config)# devices device pe2 read-timeout 240
admin@ncs(config)# devices device pe2 write-timeout 240
admin@ncs(config)# commit
admin@ncs(config)# end
admin@ncs# exit

$ ssh USER_NAME@HOST_NAME/IP_ADDR
junos> configure
junos# show system services netconf
ssh;$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR \
    --port=830 --get-config \
    --subtree-filter=- <<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
<system>
<services>
<netconf/>
</services>
</system>
</configuration>'
<rpc-reply xmlns:junos="http://xml.juniper.net/junos/21.1R0/junos"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
<system>
<services>
<netconf>
<ssh>
</ssh>
</netconf>
</services>
</system>
</configuration>
</data>
</rpc-reply>

$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
    --db=candidate \
    --edit-config=- <<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm"
xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<system>
<services>
<netconf>
<rfc-compliant nc:operation="remove"/>
<yang-compliant nc:operation="remove"/>
</netconf>
</services>
</system>
</configuration>'
$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR \
    --port=830 --commit

$ ncs_cli -u admin -C
admin@ncs# config
admin@ncs(config)# devices device pe2 ssh fetch-host-keys
admin@ncs(config)# devices device pe2 migrate new-ned-id juniper-junos-nc-NED_VERSION
admin@ncs(config)# devices sync-from
admin@ncs(config)# end

admin@ncs# autowizard false
admin@ncs# complete-on-space false

vpn l3vpn ikea
as-number 65101
endpoint branch-office1
ce-device ce1
ce-interface GigabitEthernet0/11
ip-network 10.7.7.0/24
bandwidth 6000000
!
endpoint branch-office2
ce-device ce4
ce-interface GigabitEthernet0/18
ip-network 10.8.8.0/24
bandwidth 300000
!
endpoint main-office
ce-device ce0
ce-interface GigabitEthernet0/11
ip-network 10.10.1.0/24
bandwidth 12000000
!
qos qos-policy GOLD
!
vpn l3vpn spotify
as-number 65202
endpoint branch-office1
ce-device ce5
ce-interface GigabitEthernet0/1
ip-network 10.2.3.0/24
bandwidth 10000000
!
endpoint branch-office2
ce-device ce3
ce-interface GigabitEthernet0/4
ip-network 10.4.5.0/24
bandwidth 20000000
!
endpoint main-office
ce-device ce2
ce-interface GigabitEthernet0/8
ip-network 10.0.1.0/24
bandwidth 40000000
!
qos qos-policy GOLD
!

admin@ncs(config)# commit dry-run outformat native
device {
name pe2
data <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<edit-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<target>
<candidate/>
</target>
<test-option>test-then-set</test-option>
<error-option>rollback-on-error</error-option>
<with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
<config>
<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
<interfaces>
<interface>
<name>xe-0/0/2</name>
<unit>
<name>102</name>
<description>Link to CE / ce5 - GigabitEthernet0/1</description>
<family>
<inet>
<address>
<name>192.168.1.22/30</name>
</address>
</inet>
</family>
<vlan-id>102</vlan-id>
</unit>
</interface>
</interfaces>
...

admin@ncs(config)# commit

admin@ncs(config)# vpn l3vpn * un-deploy no-networking

admin@ncs(config)# show full-configuration devices device pe2 config \
    configuration | display xml | save legacy.xml

admin@ncs(config)# no devices device pe2 config
admin@ncs(config)# commit no-networking
admin@ncs(config)# end
admin@ncs# exit

$ ssh USER_NAME@HOST_NAME/IP_ADDR
junos> configure
junos# set system services netconf rfc-compliant
junos# set system services netconf yang-compliant
junos# show system services netconf
ssh;
rfc-compliant;
yang-compliant;
junos# commit

$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
    --db=candidate \
    --edit-config=- <<<'<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
<system>
<services>
<netconf>
<rfc-compliant/>
<yang-compliant/>
</netconf>
</services>
</system>
</configuration>'
$ netconf-console -s plain -u USER_NAME -p PASSWORD --host=HOST_NAME/IP_ADDR --port=830 \
    --commit

$ ncs_cli -u admin -C
admin@ncs# config
admin@ncs(config)# devices device pe2 device-type generic ned-id juniper-junos_nc-gen-1.0
admin@ncs(config)# commit
admin@ncs(config)# end

$ ncs_cli -u admin -C
admin@ncs# devices device pe2 connect
admin@ncs# devices device pe2 rpc rpc-get-modules get-modules
admin@ncs# exit

$ make -C packages/juniper-junos_nc-gen-1.0/src

$ ncs_cli -u admin -C
admin@ncs# packages reload

admin@ncs# devices device pe2 sync-from

admin@ncs# show running-config devices device pe2 config configuration \
    | display xml | save new.xml

</interfaces>
</class-of-service>
</configuration>
+
+ <configuration xmlns="http://yang.juniper.net/junos/conf/root" tags="merge">
+ <interfaces xmlns="http://yang.juniper.net/junos/conf/interfaces">
+ <interface>
+ <name>{$PE_INT_NAME}</name>
+ <no-traps/>
+ <vlan-tagging/>
+ <per-unit-scheduler/>
+ <unit>
+ <name>{$VLAN_ID}</name>
+ <description>Link to CE / {$CE} - {$CE_INT_NAME}</description>
+ <vlan-id>{$VLAN_ID}</vlan-id>
+ <family>
+ <inet>
+ <address>
+ <name>{$LINK_PE_ADR}/{$LINK_PREFIX}</name>
+ </address>
+ </inet>
+ </family>
+ </unit>
+ </interface>
+ </interfaces>
+ <routing-instances xmlns="http://yang.juniper.net/junos/conf/routing-instances">
+ <instance>
+ <name>{/name}</name>
+ <instance-type>vrf</instance-type>
+ <interface>
+ <name>{$PE_INT_NAME}.{$VLAN_ID}</name>
+ </interface>
+ <route-distinguisher>
+ <rd-type>{/as-number}:1</rd-type>
+ </route-distinguisher>
+ <vrf-import>{/name}-IMP</vrf-import>
+ <vrf-export>{/name}-EXP</vrf-export>
+ <vrf-table-label>
+ </vrf-table-label>
+ <protocols>
+ <bgp>
+ <group>
+ <name>{/name}</name>
+ <local-address>{$LINK_PE_ADR}</local-address>
+ <peer-as>{/as-number}</peer-as>
+ <local-as>
+ <as-number>100</as-number>
+ </local-as>
+ <neighbor>
+ <name>{$LINK_CE_ADR}</name>
+ </neighbor>
+ </group>
+ </bgp>
+ </protocols>
+ </instance>
+ </routing-instances>
+ <policy-options xmlns="http://yang.juniper.net/junos/conf/policy-options">
+ <policy-statement>
+ <name>{/name}-EXP</name>
+ <from>
+ <protocol>bgp</protocol>
+ </from>
+ <then>
+ <community>
+ <choice-ident>add</choice-ident>
+ <choice-value/>
+ <community-name>{/name}-comm-exp</community-name>
+ </community>
+ <accept/>
+ </then>
+ </policy-statement>
+ <policy-statement>
+ <name>{/name}-IMP</name>
+ <from>
+ <protocol>bgp</protocol>
+ <community>{/name}-comm-imp</community>
+ </from>
+ <then>
+ <accept/>
+ </then>
+ </policy-statement>
+ <community>
+ <name>{/name}-comm-imp</name>
+ <members>target:{/as-number}:1</members>
+ </community>
+ <community>
+ <name>{/name}-comm-exp</name>
+ <members>target:{/as-number}:1</members>
+ </community>
+ </policy-options>
+ <class-of-service xmlns="http://yang.juniper.net/junos/conf/class-of-service">
+ <interfaces>
+ <interface>
+ <name>{$PE_INT_NAME}</name>
+ <unit>
+ <name>{$VLAN_ID}</name>
+ <shaping-rate>
+ <rate>{$BW_SUFFIX}</rate>
+ </shaping-rate>
+ </unit>
+ </interface>
+ </interfaces>
+ </class-of-service>
+ </configuration>
</config>
</device>
</devices>

# of the service. These functions can be useful e.g. for
# allocations that should be stored and existing also when the
# service instance is removed.
+
+ @staticmethod
+ def int32_to_numeric_suffix_str(val):
+ for suffix in ["", "k", "m", "g", ""]:
+ suffix_val = int(val / 1000)
+ if suffix_val * 1000 != val:
+ return str(val) + suffix
+ val = suffix_val
+
@ncs.application.Service.create
def cb_create(self, tctx, root, service, proplist):
# The create() callback is invoked inside NCS FASTMAP and must

tv.add('LOCAL_CE_NET', getIpAddress(endpoint.ip_network))
tv.add('CE_MASK', getNetMask(endpoint.ip_network))
+ tv.add('BW_SUFFIX', self.int32_to_numeric_suffix_str(endpoint.bandwidth))
tv.add('BW', endpoint.bandwidth)
tmpl = ncs.template.Template(service)
tmpl.apply('l3vpn-pe', tv)

$ ncs_cli -u admin -C
admin@ncs# packages reload

admin@ncs# vpn l3vpn * re-deploy no-networking

admin@ncs# vpn l3vpn * check-sync
vpn l3vpn ikea check-sync
in-sync true
vpn l3vpn spotify check-sync
in-sync true

admin@ncs# devices device pe2 compare-config

$ ncs_cli -u admin -C
admin@ncs# vpn l3vpn * re-deploy

In this example, you will create a simple VLAN service in Java. To illustrate the concepts, the device configuration is simplified from a networking perspective and uses only a single device type (Cisco IOS).
We will first look at the following preparatory steps:
Prepare a simulated environment of Cisco IOS devices: in this example, we start from scratch in order to illustrate the complete development process. We will not reuse any existing NSO examples.
Generate a template service skeleton package: use NSO tools to generate a Java-based service skeleton package.
Write and test the VLAN Service Model.
Analyze the VLAN service mapping to IOS configuration.
These steps are no different from defining services using templates. Next, we start working with the Java environment:
Configuring the start and stop of the Java VM.
First look at the Service Java Code: introduction to service mapping in Java.
Developing by tailing log files.
Developing using Eclipse.
We will start by setting up a run-time environment that includes simulated Cisco IOS devices and configuration data for NSO. Make sure you have sourced the ncsrc file.
Create a new directory that will contain the files for this example, such as:
Now, let's create a simulated environment with 3 IOS devices and an NSO that is ready to run with this simulated network:
Start the simulator and NSO:
Use the Cisco CLI towards one of the devices:
Use the NSO CLI to get the configuration:
Finally, set VLAN information manually on a device to prepare for the mapping later.
In the run-time directory you created:
Note the packages directory, cd to it:
Currently, there is only one package, the Cisco IOS NED.
We will now create a new package that will contain the VLAN service.
This creates a package with the following structure:
During the rest of this section, we will work with the vlan/src/yang/vlan.yang and vlan/src/java/src/com/example/vlan/vlanRFS.java files.
So, if a user wants to create a new VLAN in the network what should the parameters be? Edit the vlan/src/yang/vlan.yang according to below:
This simple VLAN service model says:
We give a VLAN a name, for example net-1.
The VLAN has an id from 1 to 4096.
The VLAN is attached to a list of devices and interfaces. In order to make this example as simple as possible the interface name is just a string. A more correct and useful example would specify this is a reference to an interface to the device, but for now it is better to keep the example simple.
The VLAN service list is augmented into the services tree in NSO. This specifies the path to reach VLANs in the CLI, REST, etc. There is no requirement on where the service must be added into NCS; if you want VLANs at the top level, simply remove the augment statement.
Make sure you keep the lines generated by the ncs-make-package:
The two lines tell NSO that this is a service. The first line expands to a YANG structure that is shared amongst all services. The second line connects the service to the Java callback.
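Put together, vlan.yang might look like the sketch below. The module name, namespace, prefix, and servicepoint name are placeholders for whatever ncs-make-package generated; the structure follows the description above, including the two generated service lines:

```yang
// Sketch of packages/vlan/src/yang/vlan.yang (names are assumptions)
module vlan {
  namespace "http://example.com/vlan";
  prefix vlan;

  import tailf-common { prefix tailf; }
  import tailf-ncs { prefix ncs; }

  augment /ncs:services {
    list vlan {
      key name;
      leaf name { type string; }

      uses ncs:service-data;            // generated: shared service structure
      ncs:servicepoint "vlan-servicepoint";  // generated: Java callback binding

      leaf vlan-id {
        type uint32 { range "1..4096"; }
      }
      list device-if {
        key device;
        leaf device {
          type leafref { path "/ncs:devices/ncs:device/ncs:name"; }
        }
        leaf interface { type string; }  // kept as a plain string for simplicity
      }
    }
  }
}
```

Removing the augment statement (and the /ncs:services prefixes) would place the vlan list at the top level instead.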
To build this service model, cd to packages/vlan/src and type make (assumes that you have the prerequisite make build system installed).
We can now test the service model by requesting NSO to reload all packages:
You can also stop and start NSO, but then you have to pass the --with-package-reload option when starting NSO. This is important: by default, NSO does not take changes in packages into account when restarting. When packages are reloaded, the state/packages-in-use is updated.
Now, create a VLAN service (nothing will happen yet, since we have not defined any mapping).
Now, let us connect the service to device configuration using Java mapping. Note that Java mapping is not required; templates are more straightforward and recommended, but we use Java here as a "Hello World" introduction to Java service programming in NSO. At the end, we will also show how to combine Java and templates: templates define a vendor-independent mapping of service attributes to device configuration, and Java acts as a thin layer in front of the templates for logic, call-outs to external systems, etc.
The default configuration of the Java VM is:
By default, NCS starts the Java VM by invoking the command $NCS_DIR/bin/ncs-start-java-vm, which in turn launches the JVM with the NcsJVMLauncher main class.
The class NcsJVMLauncher contains the main() method. The started Java VM will automatically retrieve and deploy all Java code for the packages defined in the load path of the ncs.conf file. No other specification than the package-meta-data.xml for each package is needed.
The verbosity of Java error messages can be controlled by:
For more details on the Java VM settings, see NSO Java VM.
The service model and the corresponding Java callback are bound by the servicepoint name. Look at the service model in packages/vlan/src/yang:
The corresponding generated Java skeleton (with one print "Hello World!" statement added):
Modify the generated code to include the print "Hello World!" statement in the same way. Re-build the package:
Whenever a package has changed, we need to tell NSO to reload the package. There are three ways:
Reload only the implementation of a specific package (this will not load any model changes): admin@ncs# packages package vlan redeploy.
Reload all packages including any model changes: admin@ncs# packages reload.
Restart NSO with the reload option: $ ncs --with-package-reload.
When that is done we can create a service (or modify an existing one) and the callback will be triggered:
Now, have a look at the logs/ncs-java-vm.log:
Tailing the ncs-java-vm.log is one way of developing. You can also start and stop the Java VM explicitly and see the trace in the shell. To do this, tell NSO not to start the VM by adding the following snippet to ncs.conf:
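The snippet in question likely corresponds to the /ncs-config/java-vm/auto-start setting; a sketch (element names assumed to match the ncs.conf schema) could be:

```xml
<!-- Tell NSO not to start the Java VM automatically -->
<java-vm>
  <auto-start>false</auto-start>
</java-vm>
```

With auto-start disabled, the VM is started manually, e.g. with $NCS_DIR/bin/ncs-start-java-vm, so the trace appears in your shell.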
Then, after restarting NSO or reloading the configuration, from the shell prompt:
So modifying or creating a VLAN service will now have the "Hello World!" string show up in the shell. You can modify the package, then reload/redeploy, and see the output.
To use a GUI-based IDE Eclipse, first generate an environment for Eclipse:
This generates two files, .classpath and .project. To add this directory to Eclipse, choose File -> New -> Java Project, uncheck Use default location, and enter the directory where the .classpath and .project files were generated.
We are immediately ready to run this code in Eclipse.
All we need to do is choose the main() routine in the NcsJVMLauncher class. The Eclipse debugger works now as usual, and we can, at will, start and stop the Java code.
Timeouts
A caveat worth mentioning here is that there exist a few timeouts between NSO and the Java code that will trigger when we are in the debugger. While developing with the Eclipse debugger and breakpoints, we typically want to disable these timeouts.
First, there are three timeouts in ncs.conf that matter. Set /ncs-config/japi/new-session-timeout, /ncs-config/japi/query-timeout, and /ncs-config/japi/connect-timeout to a large value (see the ncs.conf(5) man page for a detailed description of these values). If these timeouts trigger, NSO closes all sockets to the Java VM.
Edit the file and enter the following XML entry just after the Webui entry:
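The entry in question covers the three japi timeouts mentioned above; a sketch (PT1000S is just an example of a large value, and the exact element names are assumed to match the ncs.conf schema) could be:

```xml
<!-- Large JAPI timeouts so breakpoints do not kill the Java VM sockets -->
<japi>
  <new-session-timeout>PT1000S</new-session-timeout>
  <query-timeout>PT1000S</query-timeout>
  <connect-timeout>PT1000S</connect-timeout>
</japi>
```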
Now, restart ncs, and from now on start it as:
You can verify that the Java VM is not running by checking the package status:
Create a new project and start the launcher main in Eclipse:
You can start and stop the Java VM from Eclipse. Note that this is not needed, since the change cycle is: modify the Java code, run make in the src directory, and reload the package, all while NSO and the JVM are running.
Change the VLAN service and see the console output in Eclipse:
Another option is to have Eclipse connect to the running VM. Start the VM manually with the -d option.
Then you can set up Eclipse to connect to the NSO Java VM:
In order for Eclipse to show the NSO code when debugging, add the NSO Source Jars (add external Jar in Eclipse):
Navigate to the service create for the VLAN service and add a breakpoint:
Commit a change of a VLAN service instance and Eclipse will stop at the breakpoint:
So the problem at hand is that we have service parameters and a resulting device configuration. Previously, we showed how to do that with templates. The same principles apply in Java. The service model and the device models are YANG models in NSO irrespective of the underlying protocol. The Java mapping code transforms the service attributes to the corresponding configuration leafs in the device model.
The NAVU API lets the Java programmer navigate the service model and the device models as a DOM tree. Have a look at the create signature:
Two NAVU nodes are passed: the actual service instance, service, and the NSO root, ncsRoot.
We can have a first look at NAVU by analyzing the first try statement:
NAVU is a lazily evaluated DOM tree that represents the instantiated YANG model. So, knowing the NSO model, devices/device (container/list) corresponds to the list of managed devices, and it can be retrieved by ncsRoot.container("devices").list("device").
The service node can be used to fetch the values of the VLAN service instance:
vlan/name
vlan/vlan-id
vlan/device-if/device and vlan/device-if/interface
The first snippet that iterates the service model and prints to the console looks like below:
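The original snippet is Java using the NAVU API; as a stand-in, the traversal logic can be sketched in plain Python over an equivalent nested structure (data values mirror the CLI example, the function name is invented):

```python
# Plain-Python stand-in for the NAVU traversal: the service instance as
# nested data. In Java this is service.list("device-if").elements().
service = {
    "name": "net-0",
    "vlan-id": 844,
    "device-if": [
        {"device": "c0", "interface": "1/0"},
        {"device": "c1", "interface": "1/0"},
    ],
}

def print_service(svc):
    # Iterate the endpoints and render one line per device/interface pair
    lines = [f"vlan {svc['name']} id {svc['vlan-id']}"]
    for ep in svc["device-if"]:
        lines.append(f"  device {ep['device']} interface {ep['interface']}")
    return lines
```

The Java version walks the same shape with NAVU leaf() and elements() calls instead of dictionary lookups.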
The com.tailf.conf package contains Java Classes representing the YANG types like ConfUInt32.
Try it out in the following sequence:
Rebuild the Java Code: In packages/vlan/src type make.
Reload the Package: In the NSO Cisco CLI, do admin@ncs# packages package vlan redeploy.
Create or Modify a vlan Service: In NSO CLI, do admin@ncs(config)# services vlan net-0 vlan-id 844 device-if c0 interface 1/0, and commit.
Remember the service attribute is passed as a parameter to the create method. As a starting point, look at the first three lines:
To reach a specific leaf in the model use the NAVU leaf method with the name of the leaf as a parameter. This leaf then has various methods like getting the value as a string.
service.leaf("vlan-id") and service.leaf(vlan._vlan_id_) are two ways of referring to the vlan-id leaf of the service. The latter uses symbols generated by the compilation steps; with this alternative you get the benefit of compile-time checking. From the leaf, you can get the value typed according to the YANG model, ConfUInt32 in this case.
Line 3 shows an example of casting between types. In this case, we prepare the VLAN ID as a 16-bit unsigned integer for later use.
The next step is to iterate over the devices and interfaces. The NAVU elements() returns the elements of a NAVU list.
In order to write the mapping code, make sure you have an understanding of the device model. One good way of doing that is to create a corresponding configuration on one device and then display that with the pipe target display xpath. Below is a CLI output that shows the model paths for FastEthernet 1/0:
Another useful tool is to render a tree view of the model:
This can then be opened in a Web browser and model paths are shown to the right:
Now, we replace the print statements with setting real configuration on the devices.
Let us walk through the above code line by line. The device-name is a leafref. The deref method returns the object that the leafref refers to. The getParent() might surprise the reader. Look at the path for the leafref: /device/name/config/ios:interface/name. The name leafref is the key that identifies a specific interface. The deref returns that key, while we want a reference to the interface itself (/device/name/config/ios:interface); that is the reason for the getParent().
The next line sets the VLAN list on the device. Note that this follows the paths displayed earlier using the NSO CLI. The sharedCreate() call is important: it creates device configuration based on this service and records that other services might also create the same value ("shared"). Shared create maintains reference counters for the created configuration so that service deletion removes the configuration only when the last referring service is deleted. Finally, the interface name is used as a key to check whether the interface exists (containsNode()).
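The reference-counting behavior of sharedCreate() can be illustrated with a small toy model (plain Python, all names invented; the real mechanism lives inside FASTMAP):

```python
# Toy model of shared create: a per-path reference counter, so that
# configuration created by several services is only removed when the
# last referring service is deleted.
class SharedConfig:
    def __init__(self):
        self.refcount = {}

    def shared_create(self, path):
        # cf. NAVU sharedCreate(): create the node or bump its counter
        self.refcount[path] = self.refcount.get(path, 0) + 1

    def service_delete(self, path):
        # deleting one referring service decrements; config goes at zero
        self.refcount[path] -= 1
        if self.refcount[path] == 0:
            del self.refcount[path]

    def contains_node(self, path):
        # cf. containsNode(): does the configuration exist?
        return path in self.refcount
```

Two services creating the same VLAN entry leave it with a count of two; deleting one service leaves the entry in place for the other.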
The last step is to update the VLAN list for each interface. The code below adds an element to the VLAN leaf-list.
Note that the code uses the sharedCreate() functions instead of create(), as the shared variants are preferred and a best practice.
The above create method is all that is needed for create, read, update, and delete. NSO will automatically handle any changes, like changing the VLAN ID, adding an interface to the VLAN service, and deleting the service. This is handled by the FASTMAP engine, which renders any change based on the single definition of the create method.
The mapping strategy using only Java is illustrated in the following figure.
This strategy has some drawbacks:
Managing different device vendors. If we introduced more vendors in the network, this would have to be handled by the Java code. This can of course be factored into separate classes to keep the general logic clean and pass device details to vendor-specific classes, but it gets complex and will always require Java programmers to introduce new device types.
No clear separation of concerns and domain expertise. The general business logic for a service is one thing; detailed configuration knowledge of device types is another. The latter requires network engineers, while the former is normally handled by a separate team that deals with OSS integration.
Java and templates can be combined:
In this model, the Java layer focuses on required logic, but it never touches concrete device models from various vendors. The vendor-specific details are abstracted away using feature templates. The templates take variables as input from the service logic, and the templates in turn transform these into concrete device configuration. The introduction of a new device type does not affect the Java mapping.
This approach has several benefits:
The service logic can be developed independently of device types.
New device types can be introduced at runtime without affecting service logic.
Separation of concerns: network engineers are comfortable with templates; they look like configuration snippets, and these engineers have expertise in how configuration is applied to real devices. The people defining the service logic are often programmers who need to interface with other systems, etc.; this suits a Java layer.
Note that the logic layer does not understand device types; the templates dynamically apply the correct leg of the template depending on which device is touched.
From an abstraction point of view, we want a template that takes the following variables:
VLAN ID
Device and interface
So the mapping logic can just pass these variables to the feature template and it will apply it to a multi-vendor network.
Create a template as described before.
Create a concrete configuration on a device, or on several devices of different types.
Request NSO to display that as XML.
Replace values with variables.
This results in a feature template like below:
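The original template is not reproduced here; a sketch along those lines (element paths follow the Cisco IOS NED model as shown in the earlier CLI output, and the {$...} variable names are assumptions) might be:

```xml
<!-- Sketch of packages/vlan/templates/vlan.xml; variable names assumed -->
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{$DEVICE}</name>
      <config>
        <vlan xmlns="urn:ios">
          <vlan-list>
            <id>{$VLAN_ID}</id>
          </vlan-list>
        </vlan>
        <interface xmlns="urn:ios">
          <FastEthernet>
            <name>{$INTF}</name>
            <switchport>
              <trunk>
                <allowed>
                  <vlan>
                    <vlans>{$VLAN_ID}</vlans>
                  </vlan>
                </allowed>
              </trunk>
            </switchport>
          </FastEthernet>
        </interface>
      </config>
    </device>
  </config-template-note-see-below>
</config-template>
```

Note: the closing structure above mirrors the opening tags; each {$...} variable is supplied by the Java mapping code when the template is applied.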
This template only maps to Cisco IOS devices (the xmlns="urn:ios" namespace), but you can add "legs" for other device types at any point in time and reload the package.
The Java mapping logic for applying the template is shown below:
Note that the Java code has no clue about the underlying device type; it just passes the feature variables to the template. At run-time, you can update the template with mappings to other device types. The Java code stays untouched; if you modify an existing VLAN service instance to refer to the new device type, the commit will generate the corresponding configuration for that device.
The smart reader will ask: "Why do we have the Java layer at all? This could have been done as a pure template solution." That is true, but this simple Java layer leaves room for arbitrarily complex service logic before the template is applied.
The steps to build the solution described in this section are:
Create a run-time directory: $ mkdir ~/service-template; cd ~/service-template.
Generate a netsim environment: $ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c.
Generate the NSO runtime environment: $ ncs-setup --netsim-dir ./netsim --dest ./.
Create the VLAN package in the packages directory: $ cd packages; ncs-make-package --service-skeleton java vlan.
Create a template directory in the VLAN package: $ cd vlan; mkdir templates.
Save the above-described template in packages/vlan/templates.
Create the YANG service model according to the above: packages/vlan/src/yang/vlan.yang.
Update the Java code according to the above: packages/vlan/src/java/src/com/example/vlan/vlanRFS.java.
Build the package: in packages/vlan/src do make.
Start NSO.
This service shows a more elaborate service mapping. It is based on the examples.ncs/service-provider/mpls-vpn example.
MPLS VPNs are a type of Virtual Private Network (VPN) that achieves segmentation of network traffic using Multiprotocol Label Switching (MPLS), often found in Service Provider (SP) networks. The Layer 3 variant uses BGP to connect and distribute routes between sites of the VPN.
The figure below illustrates an example configuration for one leg of the VPN. Configuration items in bold are variables that are generated from the service inputs.
Sometimes the input parameters are enough to generate the corresponding device configurations. But in many cases, this is not enough. The service mapping logic may need to reach out to other data in order to generate the device configuration. This is common in the following scenarios:
Policies: it might make sense to define policies that can be shared between service instances. The policies, for example, QoS, have data models of their own (not service models) and the mapping code reads from that.
Topology Information: the service mapping might need to know connected devices, like which PE the CE is connected to.
Resources like VLAN IDs, and IP Addresses: these might not be given as input parameters. This can be modeled separately in NSO or fetched from an external system.
It is important to consider the above when designing the service model: What is input? What is available from other sources? This example illustrates how to define QoS policies "on the side". A reference to an existing QoS policy is passed as input. This is a much better principle than giving all QoS parameters to every service instance. Note that if you modify the QoS definitions that services refer to, this will not change the existing services; to make a service read the changed policies, you need to perform a re-deploy on the service.
This example also uses a list that maps every CE to a PE. This list needs to be populated before any service is created. The service model only has the CE as an input parameter, and the service mapping code performs a lookup in this list to get the PE. If the underlying topology changes, a service re-deploy will adapt the service to the changed CE-PE links. See more on topology below.
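The CE-to-PE lookup can be sketched language-neutrally (plain Python; the connection-list shape and all device and interface names are illustrative, not taken from the example):

```python
# Toy topology: a list of connections, each with two endpoints,
# mirroring the idea of the example's CE-PE mapping list.
topology = [
    {"endpoint-1": {"device": "ce0", "interface": "GigabitEthernet0/8"},
     "endpoint-2": {"device": "pe0", "interface": "GigabitEthernet0/0/0/3"}},
    {"endpoint-1": {"device": "ce1", "interface": "GigabitEthernet0/1"},
     "endpoint-2": {"device": "pe1", "interface": "GigabitEthernet0/0/0/1"}},
]

def find_peer(device):
    # Return the device on the other end of the first matching connection
    for conn in topology:
        if conn["endpoint-1"]["device"] == device:
            return conn["endpoint-2"]["device"]
        if conn["endpoint-2"]["device"] == device:
            return conn["endpoint-1"]["device"]
    return None
```

A service re-deploy re-runs this lookup, which is why a changed CE-PE link is picked up without changing the service configuration.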
NSO has a package to manage resources like VLAN and IP addresses as a pool within NSO. In this way the resources are managed within the transaction. The mapping code could also reach out externally to get resources. Nano services are recommended for this.
Using topology information in the instantiation of an NSO service is a common approach, but also an area with many misconceptions. Just as a service in NSO takes a black-box view of the configuration needed for that service in the network, NSO treats topologies the same way. It is of course common to reference topology information in the service, but it is highly desirable to have a decoupled and self-sufficient service that uses only the part of the topology that is interesting/needed for that specific service.
Other parts of the topology could either be handled by other services or just let the network state sort it out - it does not necessarily relate to the configuration of the network. A routing protocol will for example handle the IP path through the network.
It is highly desirable to not introduce unneeded dependencies towards network topologies in your service.
To illustrate this, let's look at a Layer 3 MPLS VPN service. A logical overview of an MPLS VPN with three endpoints could look something like this: CE routers connect to PE routers, which are connected to an MPLS core network containing a number of P routers.
In the service model, you only want to configure the CE devices to use as endpoints. In this case, topology information could be used to sort out which PE router each CE router is connected to. However, what type of topology do you need? Let's look at a more detailed picture of what the L1 and L2 topology could look like for one side of the picture above.
In pretty much all networks, there is an access network between the CE and PE routers. In the picture above, the CE routers are connected to local Ethernet switches in a local Ethernet access network, connected through optical equipment. The local Ethernet access network is connected to a regional Ethernet access network, which is connected to the PE router. The physical connections between the devices in this picture have most likely been simplified; in the real world, redundant cabling would be used. The picture above is, of course, only one example of what an access network can look like, and it is very likely that a service provider has several access technologies, for example Ethernet, ATM, or DSL-based access networks.
Depending on how you design the L3VPN service, the physical cabling or the exact traffic path taken in the Layer 2 Ethernet access network might not be that interesting, just as we make no assumptions about how traffic is transported over the MPLS core network. In both cases, we trust the underlying protocols handling state in the network: spanning tree in the Ethernet access network, and routing protocols like BGP in the MPLS cloud. Instead, it can make more sense to have a separate NSO service for the access network, both so that it can be reused (for example, by both L3VPNs and L2VPNs) and to avoid tightly coupling the L3VPN service to an access network that can differ (Ethernet, ATM, etc.).
Looking at the topology again from the L3VPN service perspective, if services assume that the access network is already provisioned or taken care of by another service, it could look like this.
The information needed to sort out what PE router a CE router is connected to as well as configuring both CE and PE routers is:
Interface on the CE router that is connected to the PE router, and IP address of that interface.
Interface on the PE router that is connected to the CE router, and IP address of that interface.
This section describes the creation of an MPLS L3VPN service in a multi-vendor environment by applying the concepts described above. The example discussed can be found in examples.ncs/service-provider/mpls-vpn. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
The goal of the NSO service is to set up an MPLS Layer3 VPN on a number of CE router endpoints using BGP as the CE-PE routing protocol. Connectivity between the CE and PE routers is done through a Layer2 Ethernet access network, which is out of the scope of this service. In a real-world scenario, the access network could for example be handled by another service.
In the example network, we can also assume that the MPLS core network already exists and is configured.
When designing service YANG models there are a number of things to take into consideration. The process usually involves the following steps:
Identify the resulting device configurations for a deployed service instance.
Identify what parameters from the device configurations are common and should be put in the service model.
Ensure that the scope of the service and the structure of the model work with the NSO architecture and service mapping concepts. For example, avoid unnecessary complexities in the code to work with the service parameters.
Ensure that the model is structured in a way so that integration with other systems north of NSO works well. For example, ensure that the parameters in the service model map to the needed parameters from an ordering system.
Steps 1 and 2: Device Configurations and Identifying Parameters:
Deploying an MPLS VPN in the network results in the following basic CE and PE configurations. The snippets below only include the Cisco IOS and Cisco IOS-XR configurations. In a real process, all applicable device vendor configurations should be analyzed.
The device configuration parameters that need to be uniquely configured for each VPN have been marked in bold.
Steps 3 and 4: Model Structure and Integration with other Systems:
When configuring a new MPLS l3vpn in the network we will have to configure all CE routers that should be interconnected by the VPN, as well as the PE routers they connect to.
However, when creating a new l3vpn service instance in NSO, ideally only the endpoints (CE routers) should be needed as parameters, so that a northbound order management system needs no knowledge of PE routers. This means a way of using topology information is needed to derive or compute which PE router each CE router is connected to. This makes the input parameters for a new service instance very simple. It also makes the entire service very flexible, since we can move CE and PE routers around without modifying the service configuration.
Resulting YANG Service Model:
The snippet above contains the l3vpn service model. The structure of the model is very simple. Every VPN has a name, an as-number, and a list of all the endpoints in the VPN. Each endpoint has:
A unique ID.
A reference to a device (a CE router in our case).
A pointer to the LAN local interface on the CE router. This is kept as a string since we want this to work in a multi-vendor environment.
LAN private IP network.
Bandwidth on the VPN connection.
To be able to derive the CE to PE connections we use a very simple topology model. Notice that this YANG snippet does not contain any service point, which means that this is not a service model but rather just a YANG schema letting us store information in CDB.
The model basically contains a list of connections, where each connection points out the device, interface, and IP address in each of the connections.
Since the mapping logic needs to look up which PE routers to configure using the topology model, a purely declarative, template-based mapping is not possible. Using Java and configuration templates together is the right approach.
The Java logic lets you set a list of parameters that can be consumed by the configuration templates. One huge benefit of this approach is that all the parameters set in the Java code are completely vendor-agnostic: when writing the code, no knowledge is needed of which kinds of devices or vendors exist in the network, creating an abstraction over vendor-specific configuration. This also means that, to create the configuration template, there is no need to know the service logic in the Java code. The configuration template can instead be created and maintained by subject-matter experts, the network engineers.
With this service mapping approach, it makes sense to modularize the service mapping by creating configuration templates on a per-feature level, creating an abstraction for each feature in the network. In this example, we will create the following templates:
CE router
PE router
This both makes services easier to create and maintain and produces components that are reusable from different services. This can, of course, be made even more granular, with templates for, for example, BGP or interface configuration if needed.
Since the configuration templates are decoupled from the service logic, it is also possible to create and add additional templates in a running NSO system. You can, for example, add a CE router from a new vendor to the Layer 3 VPN service in a running NSO system by creating only a new configuration template that uses the set of parameters from the service logic, without changing anything in the other logical layers.
The Java code part of the service mapping is very simple and follows these pseudo-code steps:
This section will go through the relevant parts of the Java code outlined by the pseudo-code above. The code starts by defining the configuration templates and reading the list of configured endpoints and the topology. The NAVU API is used for navigating the data models.
The next step is iterating over the VPN endpoints configured in the service, finding the connected PE router using small helper methods that navigate the configured topology.
The parameter dictionary is created from the TemplateVariables class and is populated with appropriate parameters.
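A stand-in sketch of populating the parameter dictionary (plain Python instead of Java; the class shape and the variable names are illustrative, not the exact ones from the example):

```python
# Toy stand-in for the example's TemplateVariables dictionary.
class TemplateVariables:
    def __init__(self):
        self.vars = {}

    def put_quoted(self, name, value):
        # store everything as strings, as a template engine would consume
        self.vars[name] = str(value)

def fill_endpoint_params(endpoint, pe_link):
    # endpoint: service input data; pe_link: looked-up topology data
    tv = TemplateVariables()
    tv.put_quoted("PE", pe_link["device"])
    tv.put_quoted("CE", endpoint["ce-device"])
    tv.put_quoted("LINK_PE_ADR", pe_link["ip-address"])
    tv.put_quoted("BW", endpoint["bandwidth"])
    return tv
```

Everything placed in the dictionary is vendor-neutral; the templates decide how each variable lands in a concrete device model.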
The last step after all parameters have been set is applying the templates for the CE and PE routers for this VPN endpoint.
The configuration templates are XML templates based on the structure of device YANG models. There is a very easy way to create the configuration templates for the service mapping if NSO is connected to a device with the appropriate configuration on it, using the following steps.
Configure the device with the appropriate configuration.
Add the device to NSO.
Sync the configuration to NSO.
Display the device configuration in XML format.
Save the XML output to a configuration template file and replace configured values with parameters.
The commands in NSO give the following output. To make the example simpler, only the BGP part of the configuration is used:
The final configuration template with the replaced parameters marked in bold is shown below. If a parameter starts with a $ sign, it is taken from the Java parameter dictionary; otherwise, it is a direct XPath reference to a value from the service instance.
Manage the life-cycle of network services.
NSO can also manage the life-cycle for services like VPNs, BGP peers, and ACLs. It is important to understand what is meant by service in this context:
NSO abstracts the device-specific details. The user only needs to enter attributes relevant to the service.
The service instance has configuration data itself that can be represented and manipulated.
A service instance configuration change is applied to all affected devices.
The following are the features that NSO uses to support service configuration:
Service Modeling: Network engineers can model the service attributes and the mapping to device configurations. For example, this means that a network engineer can specify a data model for VPNs with router interfaces, VLAN ID, VRF, and route distinguisher.
Service Life-cycle: Less sophisticated configuration management systems can only create an initial service instance in the network; they do not support changing or deleting it. With NSO, you can at any point in time modify service elements like the VLAN ID of a VPN, and NSO generates the corresponding changes to the network devices.
Service Instance: The NSO service instance has configuration data that can be represented and manipulated. At run-time, the service model updates all NSO northbound interfaces, so that a network engineer can view and manipulate the service instance over CLI, Web UI, REST, etc.
An example is the best way to illustrate how services are created and used in NSO. As described in the sections about devices and NEDs, NEDs come in packages. The same is true for services: whether you design the services yourself or use ready-made service applications, the service ends up in a package that is loaded into NSO.
The example examples.ncs/service-provider/mpls-vpn will be used to explain NSO Service Management features. This example illustrates Layer-3 VPNs in a service provider MPLS network. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers. The Layer-3 VPN service configures the CE/PE routers for all endpoints in the VPN with BGP as the CE/PE routing protocol. The layer-2 connectivity between CE and PE routers is expected to be done through a Layer-2 ethernet access network, which is out of scope for this example. The Layer-3 VPN service includes VPN connectivity as well as bandwidth and QOS parameters.
The service configuration only has references to CE devices for the endpoints in the VPN. The service mapping logic reads from a simple topology model, which is configuration data in NSO outside the actual service model, and derives which other network devices to configure.
The topology information has two parts:
The first part lists connections in the network and is used by the service mapping logic to find out which PE router to configure for an endpoint. The snippets below show the configuration output in the Cisco-style NSO CLI.
The second part lists devices for each role in the network and is, in this example, only used to dynamically render a network map in the Web UI.
The QoS configuration in service provider networks is complex and often requires many different variations. It is also often desirable to be able to deliver different levels of QoS. This example shows how a QoS policy configuration can be stored in NSO and referenced from VPN service instances. Three different levels of QoS policies are defined: GOLD, SILVER, and BRONZE, with different queuing parameters.
Three different traffic classes are also defined with a DSCP value that will be used inside the MPLS core network as well as default rules that will match traffic to a class.
Run the example as follows:
Make sure that you start clean, i.e., with no old configuration data present. If you have been running this or some other example before, make sure to stop any NSO or simulated network nodes (ncs-netsim) that you may have running. Output like 'connection refused (stop)' means no previous NSO was running, and 'DEVICE ce0 connection refused (stop)...' means no simulated network was running, which is good.
This will set up the environment and start the simulated network.
Before creating a new L3 VPN service, we must sync the configuration from all network devices and then enter configuration mode. (A hint for this complete section: keep the README file from the example at hand and cut and paste the CLI commands.)
Add another VPN.
A service package in NSO comprises two parts:
Service model: the attributes of the service, i.e., the input parameters given when creating the service. In this example: name, as-number, and end-points.
Mapping: the corresponding device configuration when the service is applied. The result of the mapping can be inspected with the commit dry-run outformat native command.
We show later how to define this; for now, assume that the job is done.
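As a sketch, inspecting the mapping result could look like the following in the CLI. The device name and native output shown here are illustrative, not the exact output of the example:

```
admin@ncs(config)# commit dry-run outformat native
native {
    device {
        name ce0
        data interface GigabitEthernet0/1
              ! device-native commands rendered by the mapping
              ...
    }
}
```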
When NSO applies services to the network, NSO stores the service configuration along with the resulting device configuration changes. This is the basis for the FASTMAP algorithm, which can automatically derive device configuration changes from a service change.
Example 1
Going back to the L3 VPN example above, any part of the volvo VPN instance can be modified.
A simple change, like changing the as-number of the service, results in many changes in the network. NSO performs these automatically.
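A sketch of such a change; the paths follow the example's l3vpn service model, and device output is omitted:

```
admin@ncs(config)# vpn l3vpn volvo as-number 65102
admin@ncs(config-l3vpn-volvo)# commit dry-run outformat native
admin@ncs(config-l3vpn-volvo)# commit
```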
Example 2
Let us look at a more challenging modification.
A common use case is, of course, to add a new CE device and add it as an endpoint to an existing VPN. Below is the sequence to add two new CE devices and add them to the VPNs. (In the CLI snippets below, we omit the prompt to enhance readability.)
First, we add them to the topology:
Note well that the above only updates NSO's local information about topology links; it has no effect on the network. The mapping for the L3 VPN services does a look-up in the topology connections to find the corresponding PE router.
Next, we add them to the VPNs:
Before we send anything to the network, let's look at the device configuration using a dry run. As you can see, both new CE devices are connected to the same PE router, but for different VPN customers.
Finally, commit the configuration to the network:
Next, we will show how NSO can be used to check if the service configuration in the network is up to date.
In a new terminal window, we connect directly to the device ce0 which is a Cisco device emulated by the tool ncs-netsim.
We will now reconfigure an edge interface that we previously configured using NSO.
Going back to the terminal with NSO, check the status of the network configuration:
The CLI sequence above performs three different comparisons:
Real device configuration versus device configuration copy in NSO CDB.
Expected device configuration from the service perspective and device configuration copy in CDB.
Expected device configuration from the service perspective and real device configuration.
Notice that the volvo service is out of sync with the service configuration. Use the check-sync outformat cli command to see what the problem is:
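The check can be run as follows (a sketch; the diff output would show the out-of-band change made on the device):

```
admin@ncs# vpn l3vpn volvo check-sync outformat cli
```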
Assume that a network engineer considers the real device configuration to be authoritative:
And then restore the service:
In the same way, as NSO can calculate any service configuration change, it can also automatically delete the device configurations that resulted from creating services:
It is important to understand the two diffs shown above. The first diff, the output of show configuration, shows the diff at the service level. The second diff shows the output generated by NSO to clean up the device configurations.
Finally, we commit the changes to delete the service.
Service instances live in the NSO data store, as does a copy of the device configurations. NSO maintains the relationships between the two.
Show the configuration for a service:
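For example, using the l3vpn service from this example (a sketch):

```
admin@ncs# show running-config vpn l3vpn volvo
```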
You can ask NSO to list all devices that are touched by a service and vice versa:
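A sketch of such a query; the modified devices list is operational data on the service instance, and the output shown is illustrative:

```
admin@ncs# show vpn l3vpn volvo modified devices
modified devices [ ce0 ce1 pe0 pe1 ]
```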
Note that operational mode in the CLI was used above. Every service instance has an operational attribute that is maintained by the transaction manager and shows which device configuration it created. Furthermore, every device configuration has backward pointers to the corresponding service instances:
The reference counter above ensures that NSO does not delete shared resources until the last service instance is deleted. The context-match search is helpful; it displays the path to all matching configuration items.
As described in , the commit queue can be used to increase transaction throughput. When the commit queue is enabled for service activation, services will have states reflecting outstanding commit queue items.
We will now commit a VPN service using the commit queue while one device is down.
This service is not fully provisioned in the network, since ce0 was down. It will stay in the queue until the device starts responding, or until an action is taken to remove the service or the queue item. The commit queue can be inspected. As shown below, we see that we are waiting for ce0. Inspecting the queue item shows the outstanding configuration.
The commit queue will continuously try to push the configuration towards the devices. The number of retry attempts, and the interval at which they occur, can be configured.
If we start ce0 and inspect the queue, we will see that the queue eventually becomes empty and that the commit-queue status for the service is cleared.
In some scenarios, it makes sense to remove the service configuration from the network but keep the representation of the service in NSO. This is called un-deploying a service.
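A sketch of un-deploying, and later re-activating, a service instance (output omitted):

```
admin@ncs# vpn l3vpn volvo un-deploy
admin@ncs# vpn l3vpn volvo re-deploy
```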
To have NSO deploy services across devices, two pieces are needed:
A service model in YANG: the service model defines the black-box view of a service, i.e., the input parameters given when creating the service. This YANG model renders an update of all NSO northbound interfaces, for example, the CLI.
Mapping, given the service input parameters, what is the resulting device configuration? This mapping can be defined in templates, code, or a combination of both.
The first step is to generate a skeleton package for a service (for details, see ). Create a directory, for example ~/my-sim-ios, similar to how it is done for the 1-simulated-cisco-ios/ example. Make sure that you have stopped any running NSO and netsim.
Navigate to the simulated ios directory and create a new package for the VLAN service model:
If the packages folder does not exist yet, such as when you have not run this example before, you will need to invoke the ncs-setup and ncs-netsim create-network commands as described in the 1-simulated-cisco-ios README file.
The next step is to create the template skeleton by using the ncs-make-package utility:
This results in a directory structure:
For now, let's focus on the src/yang/vlan.yang file.
If this is your first exposure to YANG, you can see that the modeling language is straightforward and easy to understand. See for more details and examples of YANG. The key concept in the generated skeleton above is that the two lines uses ncs:service-data and ncs:servicepoint "vlan" tell NSO that this is a service. The ncs:service-data grouping, together with the ncs:servicepoint YANG extension, provides the common definitions for a service. The two are defined in $NCS_DIR/src/ncs/yang/tailf-ncs-services.yang. So, if a user wants to create a new VLAN in the network, what should the parameters be? A very simple service model would look like the below (modify the src/yang/vlan.yang file):
This simple VLAN service model says:
We give the VLAN a name, for example net-1; this must be unique and is specified as the key.
The VLAN has an id from 1 to 4096.
The VLAN is attached to a list of devices and interfaces. To keep this example as simple as possible, the interface reference is selected by picking the type and then the name as a plain string.
A good thing with NSO is that, already at this point, you could load the service model into NSO and try whether it works well in the CLI, etc. Nothing would happen to the devices, since we have not defined the mapping, but this is normally the way to iterate on a model and test the CLI with the network engineers.
To build this service model cd to $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/packages/vlan/src and type make (assuming you have the make build system installed).
Go to the root directory of the simulated-ios example:
Start netsim, NSO, and the CLI:
When starting NSO above, we give NSO a parameter to reload all packages so that our newly added vlan package is included. Packages can also be reloaded without a restart. At this point, we have a service model for VLANs but no mapping from VLAN to device configurations. This is fine; we can try the service model and see if it makes sense. Create a VLAN service:
Committing service changes does not affect the devices since we have not defined the mapping. The service instance data will just be stored in NSO CDB.
Note that you get tab completion on the devices, since they are leafrefs to device names in CDB; the same goes for interface-type, since the types are enumerated in the model. However, the interface name is just a string, and you have to type the correct interface name. For service models where there is only one device type, as in this simple example, we could have used a reference to the IOS interface name according to the IOS model. However, that makes the service model dependent on the underlying device types: if another type is added, the service model needs to be updated, which is most often not desired. There are techniques to get tab completion even when the data type is a string, but they are omitted here for simplicity.
Make sure you delete the vlan service instance as above before moving on with the example.
Now it is time to define the mapping from service configuration to actual device configuration. The first step is to understand the actual device configuration. As an example, hard-wire the VLAN towards one device. This concrete device configuration is a boilerplate for the mapping; it shows the expected result of applying the service.
The concrete configuration above has the interface and VLAN hard-wired. This is what we will now turn into a template instead. It is always recommended to start like the above and create a concrete representation of the configuration the template shall create. Templates are device configurations where parts of the config are represented as variables. These templates are represented as XML files. Show the above as XML:
Now, we shall build that template. When the package was created a skeleton XML file was created in packages/vlan/templates/vlan.xml
We need to specify the right path to the devices. In our case, the devices are identified by /device-if/device-name (see the YANG service model).
For each of those devices, we need to add the VLAN and change the specified interface configuration. Copy the XML config from the CLI and replace it with variables:
Walking through the template can give a better idea of how it works. For every /device-if/device-name from the service model do the following:
Add the VLAN to the VLAN list; the merge tag tells the template to merge the data into an existing list (the default is to replace).
For every interface within that device, add the VLAN to the allowed VLANs and set the mode to trunk. The nocreate tag tells the template not to create the named interface if it does not exist.
It is important to understand that every path in the template above refers to paths from the service model in vlan.yang.
Request NSO to reload the packages:
Previously, we started NSO with a reload packages option; the above shows how to do the same without stopping and starting NSO.
We can now create services that will make things happen in the network. (Delete any dummy service from the previous step first). Create a VLAN service:
When working with templates in services, there is a useful debug option for commit that shows the template and XPath evaluation.
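For example, the option can be combined with a dry run (a sketch; the pipe flag prints each template line together with its XPath evaluation as the transaction is applied):

```
admin@ncs(config)# commit dry-run | debug template
```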
We can change the VLAN service:
It is important to understand what happens above. When the VLAN ID is changed, NSO calculates the minimal required changes to the configuration. The same holds for changing elements of the configuration, or even parameters of those elements. In this way, NSO does not need an explicit mapping to define a VLAN change or deletion, and NSO does not simply overwrite the old configuration with the new one. Adding an interface to the same service works the same way:
To clean up the configuration on the devices, run the delete command as shown below:
To make the VLAN service package complete, edit the package-meta-data.xml to reflect the purpose of the service model. This example showed how to use template-based mapping. NSO also allows for programmatic mapping, as well as a combination of the two approaches. The latter is very flexible when some logic needs to be attached to the service provisioning: the parts that can be expressed as templates remain device-agnostic, and the logic applies those templates.
FASTMAP is the NSO algorithm that renders any service change from the single definition of the service create. As seen above, the template or code only has to define how the service shall be created; NSO is then capable of deriving any change from that single definition.
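The idea can be sketched in a few lines of Python. This is an illustration only, not NSO's actual implementation: treat the create mapping as a pure function from service inputs to device configuration, re-run it with the new inputs, and diff the result against the previously stored output to obtain the minimal device operations.

```python
# Illustrative sketch of the FASTMAP idea (not NSO's implementation).

def create(service):
    # Hypothetical mapping: set one VLAN value per device in the service.
    return {(dev, "vlan"): service["vlan-id"] for dev in service["devices"]}

def diff(old, new):
    # Derive device operations from two rendered configurations.
    ops = []
    for key in sorted(old.keys() - new.keys()):
        ops.append(("delete", key, old[key]))   # config no longer rendered
    for key in sorted(new):
        if old.get(key) != new[key]:
            ops.append(("set", key, new[key]))  # new or changed config
    return ops

# Initial create, then a modification: only the delta reaches the devices.
old = create({"vlan-id": 1234, "devices": ["c0", "c1"]})
new = create({"vlan-id": 888, "devices": ["c0"]})
print(diff(old, new))
```

Here the service author only wrote `create`; the delete and modify behavior falls out of the diff, which mirrors why an NSO service only needs its create mapping defined.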
A limitation in the scenarios described so far is that the mapping definition could do all its work immediately, as a single atomic transaction. This is sometimes not possible. Typical examples are external allocation of resources such as IP addresses from an IPAM, spinning up VMs, and sequencing in general.
Nano services using Reactive FASTMAP handle these scenarios with an executable plan that the system can follow to provision the service. The general idea is to implement the service as several smaller (nano) steps or stages, using Reactive FASTMAP, and to provide a framework to safely execute actions with side effects.
The example in examples.ncs/development-guide/nano-services/netsim-sshkey implements key generation to files and service deployment of the key to set up network elements and NSO for public key authentication to illustrate this concept. The example is described in more detail in .
A very common situation when deploying NSO in an existing network is that the network already has services implemented. These services may have been deployed manually or through another provisioning system. The task is to introduce NSO and import the existing services into it. The goal is to use NSO to manage the existing services and to add additional instances of the same service type. This is a non-trivial problem, since existing services may have been introduced in various ways. Even if the service configuration has been done consistently, the task resembles the challenge of a general solution for rendering a corresponding C program from assembler.
One of the prerequisites for this to work is that it is possible to construct a list of the already existing services. Maybe such a list exists in an inventory system, an external database, or maybe just an Excel spreadsheet. It may also be the case that we can:
Import all managed devices into NSO.
Execute a full sync-from on the entire network.
Write a program, using Python/Maapi or Java/Maapi that traverses the entire network configuration and computes the services list.
The first thing we must do when reconciling existing services is to define the service YANG model. The second is to implement the service mapping logic in such a way that, given the service input parameters, running the service code results in exactly the configuration that is already in the existing network.
The basic principles for reconciliation are:
Read the device configuration to NSO using the sync-from action. This will get the device configuration that is a result of any existing services as well.
Instantiate the services according to the principles above.
Performing the above actions with the default behavior would not produce the correct reference counters, since NSO did not create the original configuration. The service activation can be run with dedicated flags to take this into account. See the NSO User Guide for a detailed process.
In many cases, a service activation solution like NSO is deployed in parallel with existing activation solutions. It is then desirable to make sure that NSO does not conflict with the device configuration rendered from the existing solution.
NSO has a commit flag that restricts the device configuration so that data NSO did not create is not overwritten: commit no-overwrite.
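The flag is given at commit time; if the device data the transaction touches has been changed out-of-band, the commit is rejected rather than overwriting it:

```
admin@ncs(config)# commit no-overwrite
```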
Some services need to be set up in stages where each stage can consist of setting up some device configuration and then waiting for this configuration to take effect before performing the next stage. In this scenario, each stage must be performed in a separate transaction which is committed separately. Most often an external notification or other event must be detected and trigger the next stage in the service activation.
NSO supports the implementation of such staged services with the use of Reactive FASTMAP patterns in nano services.
From the user's perspective, it is not important how a certain service is implemented. The implementation should not have an impact on how the user creates or modifies a service. However, knowledge about this can be necessary to explain the behavior of a certain service.
In short, the life-cycle of an RFM nano service is not only controlled by the direct create/set/delete operations. Instead, there are one or many implicit reactive-re-deploy requests on the service, triggered by external event detection. If the user examines an RFM service, e.g. using get-modifications, the device impact will grow over time after the initial create.
Nano services will autonomously reactive-re-deploy until all stages of the service are completed. This implies that a nano service is normally not complete when the initial create is committed. For the operator to understand that a nano service has run to completion, there must typically be some service-specific operational data that indicates this.
Plans are introduced to standardize the operational data that can show the progress of the nano service. This gives the user a standardized view of all nano services and can directly answer the question of whether a service instance has run to completion or not.
A plan consists of one or many component entries. Each component consists of two or more state entries, where each state can have the status not-reached, reached, or failed. A plan must have a component named self and can have other components, with arbitrary names that have meaning for the implementing nano service. A plan component must have a first state named init and a last state named ready. In between init and ready, a plan component can have additional states with arbitrary names.
The purpose of the self component is to describe the main progress of the nano service as a whole. Most importantly, the last state of the self component, named ready, must have the status reached if and only if the nano service as a whole has been completed. Other components and states are added to the plan if they have meaning for the specific nano service, i.e., for more fine-grained progress reporting.
A plan also defines an empty leaf failed, which is set if and only if any state in any component has its status set to failed. As such, this is an aggregation that makes it easy to verify whether an RFM service is progressing without problems.
The following is an illustration of using the plan to report the progress of a nano service:
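For instance, a plan for a hypothetical nano service instance could render in the CLI roughly as follows (the service path, component, and state names here are illustrative):

```
admin@ncs# show myvpn instance1 plan
NAME  STATE       STATUS
self  init        reached
self  ready       not-reached
vm    init        reached
vm    vm-spawned  not-reached
vm    ready       not-reached
```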
Plans were introduced to standardize the operational data that shows the progress of Reactive FASTMAP (RFM) nano services. This gives the user a standardized view of all nano services and can answer the question of whether a service instance has run to completion or not. To keep track of the progress of plans, Service Progress Monitoring (SPM) is introduced. The idea of SPM is that time limits are put on the progress of plan states. To do so, a policy and a trigger are needed.
A policy defines which plan components and states need to be in which status for the policy to be true. A policy also defines how long it may be false before being considered jeopardized, and how long it may be false before being considered violated. Further, it may define an action to be called when the policy is jeopardized, violated, or successful.
A trigger is used to associate a policy with a service and a component.
The following is an illustration of using an SPM to track the progress of an RFM service. In this case, the policy specifies that the self component's ready state must be reached for the policy to be true:
$ mkdir ~/vlan-service
$ cd ~/vlan-service
$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c
$ ncs-setup --netsim-dir ./netsim/ --dest ./
$ ncs-netsim start
DEVICE c0 OK STARTED
DEVICE c1 OK STARTED
DEVICE c2 OK STARTED
$ ncs
$ ncs-netsim cli-i c0
admin connected from 127.0.0.1 using console on ncs
c0> enable
c0# configure
Enter configuration commands, one per line. End with CNTL/Z.
c0(config)# show full-configuration
no service pad
no ip domain-lookup
no ip http server
no ip http secure-server
ip routing
ip source-route
ip vrf my-forward
bgp next-hop Loopback 1
!
...
$ ncs_cli -C -u admin
admin connected from 127.0.0.1 using console on ncs
admin@ncs# devices sync-from
sync-result {
device c0
result true
}
sync-result {
device c1
result true
}
sync-result {
device c2
result true
}
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# show full-configuration devices device c0 config
devices device c0
config
no ios:service pad
ios:ip vrf my-forward
bgp next-hop Loopback 1
!
ios:ip community-list 1 permit
ios:ip community-list 2 deny
ios:ip community-list standard s permit
no ios:ip domain-lookup
no ios:ip http server
no ios:ip http secure-server
ios:ip routing
...
admin@ncs(config)# devices device c0 config ios:vlan 1234
admin@ncs(config)# devices device c0 config ios:interface
FastEthernet 1/0 switchport mode trunk
admin@ncs(config-if)# switchport trunk allowed vlan 1234
admin@ncs(config-if)# top
admin@ncs(config)# show configuration
devices device c0
config
ios:vlan 1234
!
ios:interface FastEthernet1/0
switchport mode trunk
switchport trunk allowed vlan 1234
exit
!
!
admin@ncs(config)# commit
$ ls -F1
README.ncs
README.netsim
logs/
ncs-cdb/
ncs.conf
netsim/
packages/
scripts/
state/
$ cd packages
$ ls -l
total 8
cisco-ios -> .../packages/neds/cisco-ios
$ ncs-make-package --service-skeleton java vlan
$ ls
cisco-ios vlan
augment /ncs:services {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan-servicepoint";
leaf name {
type string;
}
leaf vlan-id {
type uint32 {
range "1..4096";
}
}
list device-if {
key "device-name";
leaf device-name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
type string;
}
}
}
}
uses ncs:service-data;
ncs:servicepoint "vlan-servicepoint";
$ cd packages/vlan/src/
$ make
$ ncs_cli -C -U admin
admin@ncs# packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
result Done
admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# commit
admin@ncs(config)# show full-configuration java-vm | details
java-vm stdout-capture enabled
java-vm stdout-capture file ./logs/ncs-java-vm.log
java-vm connect-time 60
java-vm initialization-time 60
java-vm synchronization-timeout-action log-stop
java-vm jmx jndi-address 127.0.0.1
java-vm jmx jndi-port 9902
java-vm jmx jmx-address 127.0.0.1
java-vm jmx jmx-port 9901
$ java com.tailf.ncs.NcsJVMLauncher
admin@ncs(config)# java-vm exception-error-message verbosity
Possible completions:
standard trace verbose
$ cd packages/vlan/src/
$ make
admin@ncs(config)# vlan net-0 vlan-id 888
admin@ncs(config-vlan-net-0)# commit
$ tail ncs-java-vm.log
...
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- REDEPLOY PACKAGE COLLECTION --> OK
<INFO> 03-Mar-2014::16:55:23.705 NcsMain JVM-Launcher: \
- REDEPLOY ["vlan"] --> DONE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- DONE COMMAND --> REDEPLOY_PACKAGE
<INFO> 03-Mar-2014::16:55:23.706 NcsMain JVM-Launcher: \
- READ SOCKET =>
Hello World!
<java-vm>
<auto-start>false</auto-start>
</java-vm>
$ ncs-start-java-vm
.....
.. all stdout from JVM
$ ncs-setup --eclipse-setup
<japi>
<new-session-timeout>PT1000S</new-session-timeout>
<query-timeout>PT1000S</query-timeout>
<connect-timeout>PT1000S</connect-timeout>
</japi>
$ ncs -c ./ncs.conf
admin@ncs# show packages package vlan
packages package vlan
package-version 1.0
description "Skeleton for a resource facing service - RFS"
ncs-min-version 3.0
directory ./state/packages-in-use/1/vlan
component RFSSkeleton
callback java-class-name [ com.example.vlan.vlanRFS ]
oper-status java-uninitialized
$ ncs-start-java-vm -d
Listening for transport dt_socket at address: 9000
NCS JVM STARTING
...
@ServiceCallback(servicePoint="vlan-servicepoint",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws DpCallbackException {
try {
// check if it is reasonable to assume that devices
// initially has been sync-from:ed
NavuList managedDevices =
ncsRoot.container("devices").list("device");
for (NavuContainer device : managedDevices) {
if (device.list("capability").isEmpty()) {
String mess = "Device %1$s has no known capabilities, " +
"has sync-from been performed?";
String key = device.getKey().elementAt(0).toString();
throw new DpCallbackException(String.format(mess, key));
}
}
admin@ncs% show devices device c0 config ios:interface
FastEthernet 1/0 | display xpath
/devices/device[name='c0']/config/ios:interface/
FastEthernet[name='1/0']/switchport/mode/trunk
/devices/device[name='c0']/config/ios:interface/
FastEthernet[name='1/0']/switchport/trunk/allowed/vlan/vlans [ 111 ]
$ pyang -f jstree tailf-ned-cisco-ios.yang -o ios.html
// The interface
NavuNode theIf = feIntfList.elem(feIntfName);
theIf.container("switchport").
sharedCreate().
container("mode").
container("trunk").
sharedCreate();
// Create the VLAN leaf-list element
theIf.container("switchport").
container("trunk").
container("allowed").
container("vlan").
leafList("vlans").
sharedCreate(vlanID16);
<!-- Feature Parameters -->
<!-- $DEVICE -->
<!-- $VLAN_ID -->
<!-- $INTF_NAME -->
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{$DEVICE}</name>
<config>
<vlan xmlns="urn:ios" tags="merge">
<vlan-list>
<id>{$VLAN_ID}</id>
</vlan-list>
</vlan>
<interface xmlns="urn:ios" tags="merge">
<FastEthernet tags="nocreate">
<name>{$INTF_NAME}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{$VLAN_ID}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
</interface>
</config>
</device>
</devices>
</config-template>
interface GigabitEthernet0/1.77
description Link to PE / pe0 - GigabitEthernet0/0/0/3
encapsulation dot1Q 77
ip address 192.168.1.5 255.255.255.252
service-policy output volvo
!
policy-map volvo
class class-default
shape average 6000000
!
!
interface GigabitEthernet0/11
description volvo local network
ip address 10.7.7.1 255.255.255.0
exit
router bgp 65101
neighbor 192.168.1.6 remote-as 100
neighbor 192.168.1.6 activate
network 10.7.7.0
!
vrf volvo
address-family ipv4 unicast
import route-target
65101:1
exit
export route-target
65101:1
exit
exit
exit
policy-map volvo-ce1
class class-default
shape average 6000000 bps
!
end-policy-map
!
interface GigabitEthernet 0/0/0/3.77
description Link to CE / ce1 - GigabitEthernet0/1
ipv4 address 192.168.1.6 255.255.255.252
service-policy output volvo-ce1
vrf volvo
encapsulation dot1q 77
exit
router bgp 100
vrf volvo
rd 65101:1
address-family ipv4 unicast
exit
neighbor 192.168.1.5
remote-as 65101
address-family ipv4 unicast
as-override
exit
exit
exit
exit
container vpn {
list l3vpn {
tailf:info "Layer3 VPN";
uses ncs:service-data;
ncs:servicepoint l3vpn-servicepoint;
key name;
leaf name {
tailf:info "Unique service id";
type string;
}
leaf as-number {
tailf:info "MPLS VPN AS number.";
mandatory true;
type uint32;
}
list endpoint {
key id;
leaf id {
tailf:info "Endpoint identifier";
type string;
}
leaf ce-device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf ce-interface {
mandatory true;
type string;
}
leaf ip-network {
tailf:info "private IP network";
mandatory true;
type inet:ip-prefix;
}
leaf bandwidth {
tailf:info "Bandwidth in bps";
mandatory true;
type uint32;
}
}
}
}
container topology {
list connection {
key name;
leaf name {
type string;
}
container endpoint-1 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
container endpoint-2 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
leaf link-vlan {
type uint32;
}
}
}
grouping connection-grouping {
leaf device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
type string;
}
leaf ip-address {
type tailf:ipv4-address-and-prefix-length;
}
}
READ topology
FOR EACH endpoint
USING topology
DERIVE connected-pe-router
READ ce-pe-connection
SET pe-parameters
SET ce-parameters
APPLY TEMPLATE l3vpn-ce
APPLY TEMPLATE l3vpn-pe
Template peTemplate = new Template(context, "l3vpn-pe");
Template ceTemplate = new Template(context,"l3vpn-ce");
NavuList endpoints = service.list("endpoint");
NavuContainer topology = ncsRoot.getParent().
container("http://com/example/l3vpn").
container("topology");
for(NavuContainer endpoint : endpoints.elements()) {
try {
String ceName = endpoint.leaf("ce-device").valueAsString();
// Get the PE connection for this endpoint router
NavuContainer conn =
getConnection(topology,
endpoint.leaf("ce-device").valueAsString());
NavuContainer peEndpoint = getConnectedEndpoint(
conn,ceName);
NavuContainer ceEndpoint = getMyEndpoint(
conn,ceName);
TemplateVariables vpnVar = new TemplateVariables();
vpnVar.putQuoted("PE",peEndpoint.leaf("device").valueAsString());
vpnVar.putQuoted("CE",endpoint.leaf("ce-device").valueAsString());
vpnVar.putQuoted("VLAN_ID", vlan.valueAsString());
vpnVar.putQuoted("LINK_PE_ADR",
getIPAddress(peEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_CE_ADR",
    getIPAddress(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_MASK",
    getNetMask(ceEndpoint.leaf("ip-address").valueAsString()));
vpnVar.putQuoted("LINK_PREFIX",
    getIPPrefix(ceEndpoint.leaf("ip-address").valueAsString()));
peTemplate.apply(service, vpnVar);
ceTemplate.apply(service, vpnVar);
admin@ncs# devices device ce1 sync-from
admin@ncs# show running-config devices device ce1 config \
ios:router bgp | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce1</name>
<config>
<router xmlns="urn:ios">
<bgp>
<as-no>65101</as-no>
<neighbor>
<id>192.168.1.6</id>
<remote-as>100</remote-as>
<activate/>
</neighbor>
<network>
<number>10.7.7.0</number>
</network>
</bgp>
</router>
</config>
</device>
</devices>
</config>

<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device tags="nocreate">
<name>{$CE}</name>
<config>
<router xmlns="urn:ios" tags="merge">
<bgp>
<as-no>{/as-number}</as-no>
<neighbor>
<id>{$LINK_PE_ADR}</id>
<remote-as>100</remote-as>
<activate/>
</neighbor>
<network>
<number>{$LOCAL_CE_NET}</number>
</network>
</bgp>
</router>
</config>
</device>
</devices>
</config-template>























References between Service Instances and Device Configuration: NSO maintains references between service instances and device configuration. This means that a VPN instance knows exactly which device configurations it created or modified. Every configuration stored in the CDB is mapped to the service instance that created it.

$ cp $NCS_DIR/etc/ncs/ncs.conf .

topology connection c0
endpoint-1 device ce0 interface GigabitEthernet0/8 ip-address 192.168.1.1/30
endpoint-2 device pe0 interface GigabitEthernet0/0/0/3 ip-address 192.168.1.2/30
link-vlan 88
!
topology connection c1
endpoint-1 device ce1 interface GigabitEthernet0/1 ip-address 192.168.1.5/30
endpoint-2 device pe1 interface GigabitEthernet0/0/0/3 ip-address 192.168.1.6/30
link-vlan 77
!

topology role ce
device [ ce0 ce1 ce2 ce3 ce4 ce5 ]
!
topology role pe
device [ pe0 pe1 pe2 pe3 ]
!

qos qos-policy GOLD
class BUSINESS-CRITICAL
bandwidth-percentage 20
!
class MISSION-CRITICAL
bandwidth-percentage 20
!
class REALTIME
bandwidth-percentage 20
priority
!
!
qos qos-policy SILVER
class BUSINESS-CRITICAL
bandwidth-percentage 25
!
class MISSION-CRITICAL
bandwidth-percentage 25
!
class REALTIME
bandwidth-percentage 10
!

qos qos-class BUSINESS-CRITICAL
dscp-value af21
match-traffic ssh
source-ip any
destination-ip any
port-start 22
port-end 22
protocol tcp
!
!
qos qos-class MISSION-CRITICAL
dscp-value af31
match-traffic call-signaling
source-ip any
destination-ip any
port-start 5060
port-end 5061
protocol tcp
!
!

ncs# top
!
vpn l3vpn ford
as-number 65200
endpoint main-office
ce-device ce2
ce-interface GigabitEthernet0/5
ip-network 192.168.1.0/24
bandwidth 10000000
!
endpoint branch-office1
ce-device ce3
ce-interface GigabitEthernet0/5
ip-network 192.168.2.0/24
bandwidth 5500000
!
endpoint branch-office2
ce-device ce5
ce-interface GigabitEthernet0/5
ip-network 192.168.7.0/24
bandwidth 1500000
!

ncs(config)# vpn l3vpn volvo as-number 65102
ncs(config-l3vpn-volvo)# commit dry-run outformat native
native {
device {
name ce0
data no router bgp 65101
router bgp 65102
neighbor 192.168.1.2 remote-as 100
neighbor 192.168.1.2 activate
network 10.10.1.0
!
...
ncs(config-l3vpn-volvo)# commit

top
!
topology connection c7
endpoint-1 device ce7 interface GigabitEthernet0/1 ip-address 192.168.1.25/30
endpoint-2 device pe3 interface GigabitEthernet0/0/0/2 ip-address 192.168.1.26/30
link-vlan 103
!
topology connection c8
endpoint-1 device ce8 interface GigabitEthernet0/1 ip-address 192.168.1.29/30
endpoint-2 device pe3 interface GigabitEthernet0/0/0/2 ip-address 192.168.1.30/30
link-vlan 104
!
ncs(config)# commit

top
!
vpn l3vpn ford
endpoint new-branch-office
ce-device ce7
ce-interface GigabitEthernet0/5
ip-network 192.168.9.0/24
bandwidth 4500000
!
vpn l3vpn volvo
endpoint new-branch-office
ce-device ce8
ce-interface GigabitEthernet0/5
ip-network 10.8.9.0/24
bandwidth 4500000
!

ncs(config)# commit dry-run outformat native
ncs(config)# commit

$ ncs-netsim cli-c ce0 enable
ce0# configure
Enter configuration commands, one per line. End with CNTL/Z.
ce0(config)# no policy-map volvo
ce0(config)# exit
ce0# exit

ncs# devices check-sync
sync-result {
device ce0
result out-of-sync
info got: c5c75ee593246f41eaa9c496ce1051ea expected: c5288cc0b45662b4af88288d29be8667
...
ncs# vpn l3vpn * check-sync
vpn l3vpn ford check-sync
in-sync true
vpn l3vpn volvo check-sync
in-sync true
ncs# vpn l3vpn * deep-check-sync
vpn l3vpn ford deep-check-sync
in-sync true
vpn l3vpn volvo deep-check-sync
in-sync false

ncs# vpn l3vpn volvo deep-check-sync outformat cli
cli devices {
devices {
device ce0 {
config {
+ ios:policy-map volvo {
+ class class-default {
+ shape {
+ average {
+ bit-rate 12000000;
+ }
+ }
+ }
+ }
}
}
}
}

ncs# devices device ce0 sync-from
result true

ncs# vpn l3vpn volvo re-deploy dry-run { outformat native }
native {
device {
name ce0
data policy-map volvo
class class-default
shape average 12000000
!
!
}
}
ncs# vpn l3vpn volvo re-deploy

ncs(config)# no vpn l3vpn ford
ncs(config)# commit dry-run
cli devices {
device ce7
config {
- ios:policy-map ford {
- class class-default {
- shape {
- average {
- bit-rate 4500000;
- }
- }
- }
- }
...

ncs(config)# commit

ncs(config)# show full-configuration vpn l3vpn
vpn l3vpn volvo
as-number 65102
endpoint branch-office1
ce-device ce1
ce-interface GigabitEthernet0/11
ip-network 10.7.7.0/24
bandwidth 6000000
!
...

ncs# show vpn l3vpn device-list
NAME DEVICE LIST
----------------------------------------
volvo [ ce0 ce1 ce4 ce8 pe0 pe2 pe3 ]
ncs# show devices device service-list
NAME SERVICE LIST
-------------------------------------
ce0 [ "/l3vpn:vpn/l3vpn{volvo}" ]
ce1 [ "/l3vpn:vpn/l3vpn{volvo}" ]
ce2 [ ]
ce3 [ ]
ce4 [ "/l3vpn:vpn/l3vpn{volvo}" ]
ce5 [ ]
ce6 [ ]
ce7 [ ]
ce8 [ "/l3vpn:vpn/l3vpn{volvo}" ]
p0 [ ]
p1 [ ]
p2 [ ]
p3 [ ]
pe0 [ "/l3vpn:vpn/l3vpn{volvo}" ]
pe1 [ ]
pe2 [ "/l3vpn:vpn/l3vpn{volvo}" ]
pe3 [ "/l3vpn:vpn/l3vpn{volvo}" ]

ncs(config)# show full-configuration devices device ce3 \
config | display service-meta-data
devices device ce3
config
...
/* Refcount: 1 */
/* Backpointer: [ /l3vpn:vpn/l3vpn:l3vpn[l3vpn:name='ford'] ] */
ios:interface GigabitEthernet0/2.100
/* Refcount: 1 */
description Link to PE / pe1 - GigabitEthernet0/0/0/5
/* Refcount: 1 */
encapsulation dot1Q 100
/* Refcount: 1 */
ip address 192.168.1.13 255.255.255.252
/* Refcount: 1 */
service-policy output ford
exit
ncs(config)# show full-configuration devices device ce3 config \
| display curly-braces | display service-meta-data
...
ios:interface {
GigabitEthernet 0/1;
GigabitEthernet 0/10;
GigabitEthernet 0/11;
GigabitEthernet 0/12;
GigabitEthernet 0/13;
GigabitEthernet 0/14;
GigabitEthernet 0/15;
GigabitEthernet 0/16;
GigabitEthernet 0/17;
GigabitEthernet 0/18;
GigabitEthernet 0/19;
GigabitEthernet 0/2;
/* Refcount: 1 */
/* Backpointer: [ /l3vpn:vpn/l3vpn:l3vpn[l3vpn:name='ford'] ] */
GigabitEthernet 0/2.100 {
/* Refcount: 1 */
description "Link to PE / pe1 - GigabitEthernet0/0/0/5";
encapsulation {
dot1Q {
/* Refcount: 1 */
vlan-id 100;
}
}
ip {
address {
primary {
/* Refcount: 1 */
address 192.168.1.13;
/* Refcount: 1 */
mask 255.255.255.252;
}
}
}
service-policy {
/* Refcount: 1 */
output ford;
}
}
ncs(config)# show full-configuration devices device ce3 config \
| display service-meta-data | context-match Backpointer
devices device ce3
/* Refcount: 1 */
/* Backpointer: [ /l3vpn:vpn/l3vpn:l3vpn[l3vpn:name='ford'] ] */
ios:interface GigabitEthernet0/2.100
devices device ce3
/* Refcount: 2 */
/* Backpointer: [ /l3vpn:vpn/l3vpn:l3vpn[l3vpn:name='ford'] ] */
ios:interface GigabitEthernet0/5

$ ncs-netsim stop ce0
DEVICE ce0 STOPPED

ncs(config)# show configuration
vpn l3vpn volvo
as-number 65101
endpoint branch-office1
ce-device ce1
ce-interface GigabitEthernet0/11
ip-network 10.7.7.0/24
bandwidth 6000000
!
endpoint main-office
ce-device ce0
ce-interface GigabitEthernet0/11
ip-network 10.10.1.0/24
bandwidth 12000000
!
!
ncs# commit commit-queue async
commit-queue-id 10777927137
Commit complete.
ncs(config)# *** ALARM connection-failure: Failed to connect to device ce0: connection refused: Connection refused

ncs# show devices commit-queue | notab
devices commit-queue queue-item 10777927137
age 1934
status executing
devices [ ce0 ce1 pe0 ]
transient ce0
reason "Failed to connect to device ce0: connection refused"
is-atomic true
ncs# show vpn l3vpn volvo commit-queue | notab
commit-queue queue-item 1498812003922

ncs# show full-configuration devices global-settings commit-queue | details
devices global-settings commit-queue enabled-by-default false
devices global-settings commit-queue atomic true
devices global-settings commit-queue retry-timeout 30
devices global-settings commit-queue retry-attempts unlimited

ncs# show devices commit-queue | notab
devices commit-queue queue-item 10777927137
age 3357
status executing
devices [ ce0 ce1 pe0 ]
transient ce0
reason "Failed to connect to device ce0: connection refused"
is-atomic true
ncs# show devices commit-queue | notab
devices commit-queue queue-item 10777927137
age 3359
status executing
devices [ ce0 ce1 pe0 ]
is-atomic true
ncs# show devices commit-queue
% No entries found.
ncs# show vpn l3vpn volvo commit-queue
% No entries found.
ncs# show devices commit-queue completed | notab
devices commit-queue completed queue-item 10777927137
when 2015-02-09T16:48:17.915+00:00
succeeded true
devices [ ce0 ce1 pe0 ]
completed [ ce0 ce1 pe0 ]
completed-services [ /l3vpn:vpn/l3vpn:l3vpn[l3vpn:name='volvo'] ]

ncs# vpn l3vpn volvo check-sync
in-sync false
ncs# vpn l3vpn volvo re-deploy
ncs# vpn l3vpn volvo check-sync
in-sync true

$ cd examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios/packages
$ ncs-make-package --service-skeleton template --root-container vlans --no-test vlan

vlan
package-meta-data.xml
src
templates

module vlan {
namespace "http://com/example/vlan";
prefix vlan;
import ietf-inet-types {
prefix inet;
}
import tailf-ncs {
prefix ncs;
}
container vlans {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan";
leaf name {
type string;
}
// may replace this with other ways of referring to the devices.
leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
// replace with your own stuff here
leaf dummy {
type inet:ipv4-address;
}
}
} // container vlans
}

augment /ncs:services {
list vlan {
key name;
uses ncs:service-data;
ncs:servicepoint "vlan";
leaf name {
type string;
}
leaf vlan-id {
type uint32 {
range "1..4096";
}
}
list device-if {
key "device-name";
leaf device-name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface-type {
type enumeration {
enum FastEthernet;
enum GigabitEthernet;
enum TenGigabitEthernet;
}
}
leaf interface {
type string;
}
}
}
}

$ make

$ cd $NCS_DIR/examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios
$ ncs-netsim start
$ ncs --with-package-reload
$ ncs_cli -C -u admin

admin@ncs(config)# services vlan net-0 vlan-id 1234 \
device-if c0 interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1234
device-if c0
interface-type FastEthernet
interface 1/0
!
!
admin@ncs(config)# services vlan net-0 vlan-id 1234 \
device-if c1 interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c1)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1234
device-if c0
interface-type FastEthernet
interface 1/0
!
device-if c1
interface-type FastEthernet
interface 1/0
!
!
admin@ncs(config)# commit dry-run outformat cli
cli {
local-node {
data services {
+ vlan net-0 {
+ vlan-id 1234;
+ device-if c0 {
+ interface-type FastEthernet;
+ interface 1/0;
+ }
+ device-if c1 {
+ interface-type FastEthernet;
+ interface 1/0;
+ }
+ }
}
}
}
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# no services vlan
admin@ncs(config)# commit
Commit complete.

admin@ncs(config)# devices device c0 config ios:vlan 1234
admin@ncs(config-vlan)# top
admin@ncs(config)# devices device c0 config ios:interface \
FastEthernet 10/10 switchport trunk allowed vlan 1234
admin@ncs(config-if)# top
admin@ncs(config)# show configuration
devices device c0
config
ios:vlan 1234
!
ios:interface FastEthernet10/10
switchport trunk allowed vlan 1234
exit
!
!
admin@ncs(config)# commit

admin@ncs(config)# show full-configuration devices device c0 \
config ios:vlan | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c0</name>
<config>
<vlan xmlns="urn:ios">
<vlan-list>
<id>1234</id>
</vlan-list>
</vlan>
</config>
</device>
</devices>
</config>
admin@ncs(config)# show full-configuration devices device c0 \
config ios:interface FastEthernet 10/10 | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c0</name>
<config>
<interface xmlns="urn:ios">
<FastEthernet>
<name>10/10</name>
<switchport>
<trunk>
<allowed>
<vlan>
<vlans>1234</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
</interface>
</config>
</device>
</devices>
</config>
admin@ncs(config)#

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<!--
Select the devices from some data structure in the service
model. In this skeleton the devices are specified in a leaf-list.
Select all devices in that leaf-list:
-->
<name>{/device}</name>
<config>
<!--
Add device-specific parameters here.
In this skeleton the service has a leaf "dummy"; use that
to set something on the device e.g.:
<ip-address-on-device>{/dummy}</ip-address-on-device>
-->
</config>
</device>
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device-if/device-name}</name>
<config>
<vlan xmlns="urn:ios">
<vlan-list tags="merge">
<id>{../vlan-id}</id>
</vlan-list>
</vlan>
<interface xmlns="urn:ios">
<?if {interface-type='FastEthernet'}?>
<FastEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</FastEthernet>
<?end?>
<?if {interface-type='GigabitEthernet'}?>
<GigabitEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</GigabitEthernet>
<?end?>
<?if {interface-type='TenGigabitEthernet'}?>
<TenGigabitEthernet tags="nocreate">
<name>{interface}</name>
<switchport>
<trunk>
<allowed>
<vlan tags="merge">
<vlans>{../vlan-id}</vlans>
</vlan>
</allowed>
</trunk>
</switchport>
</TenGigabitEthernet>
<?end?>
</interface>
</config>
</device>
</devices>
</config-template>

admin@ncs# packages reload
reload-result {
package cisco-ios
result true
}
reload-result {
package vlan
result true
}

admin@ncs(config)# services vlan net-0 vlan-id 1234 device-if c0 \
interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c0)# top
admin@ncs(config)# services vlan net-0 device-if c1 \
interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c1)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1234
device-if c0
interface-type FastEthernet
interface 1/0
!
device-if c1
interface-type FastEthernet
interface 1/0
!
!
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c0
data interface FastEthernet1/0
switchport trunk allowed vlan 1234
exit
}
device {
name c1
data vlan 1234
!
interface FastEthernet1/0
switchport trunk allowed vlan 1234
exit
}
}
admin@ncs(config)# commit
Commit complete.

admin@ncs(config)# commit | debug
Possible completions:
template Display template debug info
xpath Display XPath debug info
admin@ncs(config)# commit | debug template

admin@ncs(config)# services vlan net-0 vlan-id 1222
admin@ncs(config-vlan-net-0)# top
admin@ncs(config)# show configuration
services vlan net-0
vlan-id 1222
!
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c0
data no vlan 1234
vlan 1222
!
interface FastEthernet1/0
switchport trunk allowed vlan 1222
exit
}
device {
name c1
data no vlan 1234
vlan 1222
!
interface FastEthernet1/0
switchport trunk allowed vlan 1222
exit
}
}

admin@ncs(config)# services vlan net-0 device-if c2 interface-type FastEthernet interface 1/0
admin@ncs(config-device-if-c2)# top
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c2
data vlan 1222
!
interface FastEthernet1/0
switchport trunk allowed vlan 1222
exit
}
}
admin@ncs(config)# commit
Commit complete.

admin@ncs(config)# no services vlan net-0
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c0
data no vlan 1222
interface FastEthernet1/0
no switchport trunk allowed vlan 1222
exit
}
device {
name c1
data no vlan 1222
interface FastEthernet1/0
no switchport trunk allowed vlan 1222
exit
}
device {
name c2
data no vlan 1222
interface FastEthernet1/0
no switchport trunk allowed vlan 1222
exit
}
}
admin@ncs(config)# commit
Commit complete.

ncs# show vpn l3vpn volvo plan
NAME TYPE STATE STATUS WHEN
------------------------------------------------------------------------------------
self self init reached 2016-04-08T09:22:40
ready not-reached -
endpoint-branch-office l3vpn init reached 2016-04-08T09:22:40
qos-configured reached 2016-04-08T09:22:40
ready reached 2016-04-08T09:22:40
endpoint-head-office l3vpn init reached 2016-04-08T09:22:40
pe-created not-reached -
ce-vpe-topo-added not-reached -
vpe-p0-topo-added not-reached -
qos-configured not-reached -
ready not-reached -

ncs# show vpn l3vpn volvo service-progress-monitoring
JEOPARDY VIOLATION SUCCESS
NAME POLICY START TIME JEOPARDY TIME RESULT VIOLATION TIME RESULT STATUS TIME
---------------------------------------------------------------------------------------------------------------------------
self service-ready 2016-04-08T09:22:40 2016-04-08T09:22:40 - 2016-04-08T09:22:40 - running -

Learn about the NSO Python API and its usage.
The NSO Python library contains a variety of APIs for different purposes. In this section, we introduce these and explain their usage. The NSO Python modules come in two variants: the low-level APIs and the high-level APIs.
The low-level APIs are a direct mapping of the NSO C APIs, CDB, and MAAPI. These will follow the evolution of the C APIs. See man confd_lib_lib for further information.
The high-level APIs are an abstraction layer on top of the low-level APIs that makes them easier to use, improves code readability, and speeds up development for common use cases, such as service and action callbacks and common scripting towards NSO.
Scripting in Python is a very easy and powerful way of accessing NSO. This document has several examples of scripts showing various ways of accessing data and requesting actions in NSO.
The examples are directly executable with the Python interpreter after sourcing the ncsrc file in the NSO installation directory. This sets up the PYTHONPATH environment variable, which enables access to the NSO Python modules.
Edit a file and execute it directly on the command line like this:
The Python high-level MAAPI API provides an easy-to-use interface for accessing NSO. Its main targets are to encapsulate the sockets, transaction handles, data type conversions, and the possibility of using the Python with statement for proper resource cleanup.
The simplest way to access NSO is to use the single_transaction helper. It creates a MAAPI context and a transaction in one step.
This example shows its usage, connecting as user admin and python in the AAA context:
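A minimal sketch of this pattern, using the equivalent single_read_trans helper (a running local NSO instance is assumed, and the path and device name ce0 are illustrative):

```python
import ncs

# Connect as user "admin" in the "python" AAA context and start a
# read transaction in one step; the socket, user session, and
# transaction are all cleaned up when the with-block exits.
with ncs.maapi.single_read_trans('admin', 'python') as t:
    # The path below is an assumption and may differ in your setup.
    address = t.get_elem('/ncs:devices/device{ce0}/address')
    print(address)
```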
The example code here shows how to start a transaction but does not properly handle the case of concurrency conflicts when writing data. See for details.
When only reading data, always start a read transaction to read directly from the CDB datastore and data providers. Write transactions cache repeated reads done by the same transaction.
A common use case is to create a MAAPI context and reuse it for several transactions. This reduces the latency and increases the transaction throughput, especially for backend applications. For scripting the lifetime is shorter and there is no need to keep the MAAPI contexts alive.
This example shows how to keep a MAAPI connection alive between transactions:
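A sketch along those lines, assuming a local NSO instance (the device name and leaf are illustrative):

```python
import ncs

# One MAAPI connection and user session, reused for several transactions.
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        # First transaction: read device names.
        with m.start_read_trans() as t:
            root = ncs.maagic.get_root(t)
            for dev in root.devices.device:
                print(dev.name)
        # Second transaction on the same connection: write and apply.
        with m.start_write_trans() as t:
            root = ncs.maagic.get_root(t)
            root.devices.device['ce0'].description = 'managed by script'
            t.apply()
```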
Maagic is a module provided as part of the NSO Python APIs. It reduces the complexity of programming towards NSO, is used on top of the MAAPI high-level API, and addresses areas that require more programming. First, it helps in navigating the model, using standard Python object dot notation, giving very clear and easily read code. The context handlers remove the need to close sockets, user sessions, and transactions and the problems when they are forgotten and kept open. Finally, it removes the need to know the data types of the leafs, helping you to focus on the data to be set.
When using Maagic, you still do the same procedure of starting a transaction.
To use the Maagic functionality, you get access to a Maagic object either pointing to the root of the CDB:
In this case, it is a ncs.maagic.Node object with a ncs.maapi.Transaction backend.
From here, you can navigate in the model. In the table, you can see examples of how to navigate.
The table below lists Maagic object navigation.
You can also get a Maagic object from a keypath:
Maagic handles namespaces by prefixing the names of the elements. This is optional but recommended to avoid future side effects.
The syntax is to prefix the names with the namespace name followed by two underscores, e.g., ns_name__name.
Examples of how to use namespaces:
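A sketch of prefixed versus unprefixed access (the paths are illustrative and a running NSO instance is assumed):

```python
import ncs

with ncs.maapi.single_read_trans('admin', 'python') as t:
    root = ncs.maagic.get_root(t)
    # With the namespace prefix (recommended):
    dev = root.ncs__devices.device['ce0']
    # Without the prefix -- works until another loaded module
    # introduces a colliding top-level name:
    dev = root.devices.device['ce0']
```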
In cases where there is a name collision, the namespace prefix is required to access an entity from a module, except for the module that was first loaded. A namespace is always required for root entities when there is a collision. The module load order is found in the NCS log file: logs/ncs.log.
Reading data using Maagic is straightforward. You will just specify the leaf you are interested in and the data is retrieved. The data is returned in the nearest available Python data type.
For non-existing leafs, None is returned.
Writing data using Maagic is straightforward. You just specify the leaf you are interested in and assign a value. Any data type can be sent as input, as the str function is called on it, converting it to a string. The format depends on the data type. If the type validation fails, an Error exception is thrown.
Data is deleted the Python way, using the del function:
Some entities have a delete method; this is explained under the corresponding type.
The delete mechanism in Maagic is implemented using the __delattr__ method on the Node class. This means that executing the del function on a local or global variable will only delete the object from the Python local or global namespaces. E.g., del obj.
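The difference can be illustrated with plain Python, no NSO involved (Node here is a toy stand-in, not the real Maagic class):

```python
class Node:
    """Toy stand-in for a Maagic node: intercepts attribute deletion."""
    def __init__(self):
        self.deleted = []

    def __delattr__(self, name):
        # Called for "del obj.attr" -- the real Maagic Node would
        # delete the corresponding data in the transaction here.
        self.deleted.append(name)

obj = Node()
del obj.some_leaf     # routed to Node.__delattr__
print(obj.deleted)    # ['some_leaf']

alias = obj
del alias             # only unbinds the name "alias" from this scope;
print(obj.deleted)    # the object is untouched: ['some_leaf']
```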
Containers are addressed using standard Python dot notation: root.container1.container2.
A presence container is created using the create method:
Existence is checked with the exists or bool functions:
A presence container is deleted with the del or delete functions:
The case of a choice is checked by addressing the name of the choice in the model:
Changing a choice is done by setting a value in any of the other cases:
List elements are created using the create method on the List class:
The objects ce5 and o above are of type ListElement which is actually an ordinary container object with a different name.
Existence is checked with the exists or bool functions on the List class:
A list element is deleted with the Python del function:
To delete the whole list, use the Python del function or delete() on the list.
Unions are not handled in any specific way - you just read or write to the leaf and the data is validated according to the model.
Enumerations are returned as an Enum object, giving access to both the integer and string values.
Writing values to enumerations accepts both the string and integer values.
Leafrefs are read as regular leafs and the returned data type corresponds to the referred leaf.
Leafrefs are set as the leaf they refer to. The data type is validated as it is set. The reference is validated when the transaction is committed.
Identityrefs are read and written as string values. Writing an identityref without a prefix is possible, but doing so is error-prone and may stop working if another model is added which also has an identity with the same name. The recommendation is to always use a prefix when writing identityrefs. Reading an identityref will always return a prefixed string value.
Instance identifiers are read as XPath-formatted string values.
Instance identifiers are set as XPath-formatted strings. The string is validated as it is set. The reference is validated when the transaction is committed.
A leaf-list is represented by a LeafList object. This object behaves very much like a Python list. You may iterate it, check for the existence of a specific element using in, or remove specific items using the del operator. See examples below.
Binary values are read and written as byte strings.
Reading a bits leaf will give a Bits object back (or None if the bits leaf is non-existent). To get some useful information out of the Bits object, you can either use the bytearray() method to get a Python byte array object in return or the Python str() operator to get a space-separated string containing the bit names.
There are four ways of setting a bits leaf: using a string with space-separated bit names, using a byte array, using a Python binary string, or using a Bits object. Note that updating a Bits object does not change anything in the database - for that to happen, you need to assign it to the Maagic node.
An empty leaf is created using the create method. If the type empty leaf is part of a union, the leaf must be set to the C_EMPTY value instead.
If the type empty leaf is part of a union, then you read the leaf to see if empty is the current value. Otherwise, existence is checked with the exists or bool functions:
An empty leaf is deleted with the del or delete functions:
Requesting an action may not require an ongoing transaction and this example shows how to use Maapi as a transactionless back-end for Maagic.
This example shows how to request an action that requires an ongoing transaction. It is also valid to request an action that does not require an ongoing transaction.
Providing parameters to an action with Maagic is very easy: You request an input object, with get_input from the Maagic action object, and set the desired (or required) parameters as defined in the model specification.
If you have a leaf-list, you need to prepare the input parameters accordingly:
A common use case is to script the creation of devices. With the Python APIs, this is easily done without the need to generate set commands and execute them in the CLI.
This class is a helper to support service progress reporting using plan-data as part of a Reactive FASTMAP nano service. More info about plan-data is found in .
The interface of the PlanComponent is identical to the corresponding Java class and supports the setup of plans and setting the transition states.
See pydoc3 ncs.application.PlanComponent for further information about the Python class.
The pattern is to add an overall plan (self) for the service and separate plans for each component that builds the service.
When appending a new state to a plan, the initial state is set to ncs:not-reached. At the completion of a plan, the state is set to ncs:ready; in this case, when the service is completely set up:
The Python high-level API provides an easy way to implement an action handler for your modeled actions. The easiest way to create a handler is to use the ncs-make-package command. It creates some ready-to-use skeleton code.
The generated package skeleton:
This example action handler takes a number as input, doubles it, and returns the result.
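A sketch of what such a handler typically looks like; the actionpoint name "action" and the leaf names number and result are assumptions about the generated skeleton's YANG model:

```python
import ncs
from ncs.dp import Action

class DoubleAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output):
        self.log.info('action name: ', name)
        # "number" and "result" are assumed leaf names in the action's
        # input and output statements in the YANG model.
        output.result = input.number * 2

class Main(ncs.application.Application):
    def setup(self):
        self.log.info('Main RUNNING')
        # "action" is the assumed tailf:actionpoint name in the model.
        self.register_action('action', DoubleAction)

    def teardown(self):
        self.log.info('Main FINISHED')
```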
When debugging Python packages, refer to .
Test the action by doing a request from the NSO CLI:
The input and output parameters are the most commonly used parameters of the action callback method. They provide the access objects to the data provided to the action request and the returning result.
They are maagic.Node objects, which provide easy access to the modeled parameters.
The table below lists the action handler callback parameters:
The Python high-level API provides an easy way to implement a service handler for your modeled services. The easiest way to create a handler is to use the ncs-make-package command. It creates some skeleton code.
The generated package skeleton:
This example has some code added for the service logic, including a service template.
When debugging Python packages, refer to .
Add some service logic to the cb_create:
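A sketch of such service logic, assuming the template file mentioned below and a leaf named dummy in the service model:

```python
import ncs
from ncs.application import Service

class ServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')
        vars = ncs.template.Variables()
        # "dummy" is an assumed leaf in the service model.
        vars.add('DUMMY', service.dummy)
        template = ncs.template.Template(service)
        # The template name is the file name without its .xml suffix.
        template.apply('service.template', vars)
```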
Add a template to packages/pyservice/templates/service.template.xml:
The table below lists the service handler callback parameters:
The Python high-level API provides an easy way to implement a validation point handler. The easiest way to create a handler is to use the ncs-make-package command. It creates ready-to-use skeleton code.
The generated package skeleton:
This example validation point handler accepts all values except invalid.
When debugging Python packages, refer to .
Test the validation by setting the value to invalid and validating the transaction from the NSO CLI:
The table below lists the validation point handler callback parameters:
The Python low-level APIs are a direct mapping of the C-APIs. A C call has a corresponding Python function entry. From a programmer's point of view, it wraps the C data structures into Python objects and handles the related memory management when requested by the Python garbage collector. Any errors are reported as error.Error.
The low-level APIs will not be described in detail in this document, but you will find a few examples showing their usage in the coming sections.
See pydoc3 _ncs and man confd_lib_lib for further information.
This API is a direct mapping of the NSO MAAPI C API. See pydoc3 _ncs.maapi and man confd_lib_maapi for further information.
Note that additional care must be taken when using this API in service code, as it also exposes functions that do not perform reference counting (see ).
In the service code, you should use the shared_* set of functions, such as:
And, avoid the non-shared variants:
The following example is a script to read and de-crypt a password using the Python low-level MAAPI API.
This example is a script to do a check-sync action request using the low-level MAAPI API.
This API is a direct mapping of the NSO CDB C API. See pydoc3 _ncs.cdb and man confd_lib_cdb for further information.
Setting of operational data has historically been done using one of the CDB APIs (Python, Java, C). This example shows how to set a value and trigger subscribers for operational data using the Python low-level API.
When schemas are loaded, either upon direct request or automatically by methods and classes in the maapi module, they are statically cached inside the Python VM. This fact presents a problem if one wants to connect to several different NSO nodes with diverging schemas from the same Python VM.
Take, for example, the following program that connects to two different NSO nodes (with diverging schemas) and shows their NED IDs.
Running this program may produce output like this:
The output shows identities in string format for the active NEDs on the different nodes. Note that for lsa-2, the last three lines do not show the name of the identity but instead the representation of a _ncs.Value. The reason for this is that lsa-2 has different schemas which do not include these identities. Schemas for this Python VM were loaded and cached during the first call to ncs.maapi.single_read_trans() so no schema loading occurred during the second call.
The way to make the program above work as expected is to force the reloading of schemas by passing an optional argument to single_read_trans() like so:
Running the program with this change may produce something like this:
Now, this was just an example of what may happen when wrong schemas are loaded. Implications may be more severe though, especially if maagic nodes are kept between reloads. In such cases, accessing an "invalid" maagic object may in the best case result in undefined behavior making the program not work, but might even crash the program. So care needs to be taken to not reload schemas in a Python VM if there are dependencies to other parts in the same VM that need previous schemas.
Functions and methods that accept the load_schemas argument:
ncs.maapi.Maapi() constructor
ncs.maapi.single_read_trans()
ncs.maapi.single_write_trans()
multiprocessing.Process: When using multiprocessing in NSO, the default start method is now spawn instead of fork. With the spawn method, a new Python interpreter process is started, and all arguments passed to multiprocessing.Process must be picklable.
If you pass Python objects that reference low-level C structures (for example _ncs.dp.DaemonCtxRef or _ncs.UserInfo), Python will raise an error like:
This happens because self and uinfo contain low-level C references that cannot be serialized (pickled) and sent to the child process.
To fix this, avoid passing entire objects such as self or uinfo to the process. Instead, pass only simple or primitive data types (like strings, integers, or dictionaries) that can be pickled.
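A minimal, NSO-independent sketch of the rule: plain data survives pickling, while objects wrapping C-level state do not (here, threading.Lock stands in for things like _ncs.UserInfo or _ncs.dp.DaemonCtxRef):

```python
import pickle
import threading

def to_child(payload):
    # whatever is passed to multiprocessing.Process under the spawn
    # start method must survive this pickle round-trip
    return pickle.loads(pickle.dumps(payload))

# plain data (strings, ints, dicts): fine
echoed = to_child({"device": "ce0", "port": 830})

# an object wrapping C-level state: raises TypeError, just like
# passing self or uinfo from an NSO callback would
try:
    to_child({"lock": threading.Lock()})
    lock_was_picklable = True
except TypeError:
    lock_was_picklable = False
```

In an NSO callback, this means extracting the primitive fields you need (for example, `uinfo.username` as a string) before handing them to the child process.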
Design large and scalable NSO applications using LSA.
Layered Service Architecture (LSA) is a design approach for massively large and scalable NSO applications. Large service providers and enterprises can use it to manage services for millions of users, ranging over several hundred thousand managed devices. Such scale requires special consideration since a single NSO instance no longer suffices and LSA helps you address this challenge.
At some point, scaling up hits the law of diminishing returns. Effectively, adding more resources to the NSO server becomes prohibitively expensive. To further increase the throughput of the whole system, you can share the load across multiple instances, in a scale-out fashion.
You achieve this by splitting a service into a main, upper-layer part, and one or more lower-layer parts. The upper part controls and dispatches work to the lower parts. This is the same approach as using a customer-facing service (CFS) and a resource-facing service (RFS). However, here the CFS code (the upper-layer part) runs in a different NSO node than the RFS code (the lower-layer parts). What is more, the lower-layer parts can be spread across multiple NSO nodes.
input (ncs.maagic.Node): An object containing the parameters of the input section of the action YANG model.
output (ncs.maagic.Node): The object where to put the output parameters, as defined in the output section of the action YANG model.
proplist (list(tuple(str, str))): The opaque object for the service configuration, used to store hidden state information between invocations. It is updated by returning a modified list.
validationpoint (string): The validation point that triggered the validation.
MAAPI (Management Agent API): A northbound interface that is transactional and user-session-based. Using this interface, both configuration and operational data can be read. Configuration and operational data can be written and committed as one transaction. The API is complete in the sense that it is possible to write a new northbound agent using only this interface. It is also possible to attach to ongoing transactions to read uncommitted changes and/or modify data in these transactions.
Python low-level CDB API: A southbound interface that provides access to the CDB configuration database. Using this interface, configuration data can be read. In addition, operational data that is stored in CDB can be read and written. The interface has a subscription mechanism for subscribing to changes. A subscription is specified on a path that points to an element in a YANG model or an instance in the instance tree. Any change under this point will trigger the subscription. CDB also has functions to iterate through the configuration changes when a subscription has been triggered.
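As a sketch of the subscription mechanism (the path is a placeholder, and a running NSO is required before start() would actually connect), a subscriber built on the high-level wrapper around this API could look like this:

```python
def make_subscriber(path='/devices/device'):
    """Sketch: subscribe to configuration changes under `path`.

    Uses ncs.cdb.Subscriber, the high-level wrapper around the
    low-level CDB subscription calls; nothing connects until start().
    """
    import ncs
    import ncs.cdb

    class ChangeIter:
        def iterate(self, kp, op, oldval, newval, state):
            # invoked once per changed node when the subscription fires
            print('change at', str(kp))
            return ncs.ITER_RECURSE  # descend further into the change set

    sub = ncs.cdb.Subscriber()      # assumes NSO on the default address/port
    sub.register(path, ChangeIter())
    sub.start()                     # runs as a background thread
    return sub
```

The iterator's return value steers the traversal of the change set; returning a value such as ITER_CONTINUE instead would skip the subtree below the current node.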
Python low-level DP API: A southbound interface that enables callbacks, hooks, and transforms. This API makes it possible to provide the service callbacks that handle service-to-device mapping logic. Other common use cases are external data providers for operational data and action callback implementations. There are also transaction and validation callbacks, etc. Hooks are callbacks that fire when certain data is written; the hook is expected to make additional modifications of the data. Transforms are callbacks used when complete mediation between two different models is necessary.
Python high-level API: An API that resides on top of the MAAPI, CDB, and DP APIs. It provides schema model navigation and instance data handling (read/write). It uses a MAAPI context for data access and incorporates its functionality. It is used in service implementations, action handlers, and Python scripting.
root.devices: Container
root.devices.device: List
root.devices.device['ce0']: ListElement
root.devices.device['ce0'].device_type.cli: PresenceContainer
root.devices.device['ce0'].address: str
root.devices.device['ce0'].port: int
self (ncs.dp.Action): The action object.
uinfo (ncs.UserInfo): User information of the requester.
name (string): The tailf:action name.
kp (ncs.HKeypathRef): The keypath of the action.
self (ncs.application.Service): The service object.
tctx (ncs.TransCtxRef): Transaction context.
root (ncs.maagic.Node): An object pointing to the root with the current transaction context, using shared operations (create, set_elem, ...) for configuration modifications.
service (ncs.maagic.Node): An object pointing to the service with the current transaction context, using shared operations (create, set_elem, ...) for configuration modifications.
self (ncs.dp.ValidationPoint): The validation point object.
tctx (ncs.TransCtxRef): Transaction context.
kp (ncs.HKeypathRef): The keypath of the node being validated.
value (ncs.Value): Current value of the node being validated.
Each RFS node is responsible for its own set of managed devices, mounted under its /devices tree, and the upper-layer, CFS node only concerns itself with the RFS nodes. So, the CFS node only mounts the RFS nodes under its /devices tree, not managed devices directly. The main advantage of this architecture is that you can add many device RFS nodes that collectively manage a huge number of actual devices—much more than a single node could.
While it is tempting to design the system in the most scalable way from the start, it comes with a cost. Compared to a single, non-LSA setup, the automation system now becomes distributed across multiple nodes, with all the complexity that entails. For example, in a non-distributed system, the communication between different parts has mostly negligible latency and hardly ever fails. That is certainly not true anymore for distributed systems as we know them today, including LSA.
More practically, taking a service in NSO and deploying a single instance on an LSA system is likely to take longer and have a higher chance of failure compared to a non-LSA system, because additional network communication is involved.
Moreover, multiple NSO nodes present a higher operational complexity and administrative burden. There is no longer a “single pane of glass” view of all the individual devices. That's why you must weigh the benefits of the LSA approach against the scale at which you operate. When LSA starts making sense will depend on the type of devices you manage, the services you have, the geographical distribution of resources, and so on.
A distributed system can push the overall throughput way beyond what a single instance can do. But you will achieve a much better outcome by first focusing on eliminating the bottlenecks in the provisioning code, as discussed in Scaling and Performance Optimization. Only when that proves insufficient, consider deploying LSA.
LSA also addresses the memory limitations of NSO when device configurations become very large (individually or all together). If the NSO server is memory-constrained and more memory cannot be added, the LSA approach can be a solution.
Another challenge that LSA may help you overcome is scaling organizationally. When many teams share the same NSO instance, it can get hard to separate the different concerns and responsibilities. Teams may also have different cadences or preferences for upgrades, resulting in friction. With LSA, it becomes possible to create a clearer separation. The CFS node and the RFS nodes can have different release cycles (as long as the YANG upgrade rules are followed) and each can be upgraded independently. If a bug is found or a feature is missing in the RFS nodes, it can be fixed without affecting the CFS node, and vice versa.
To summarize, the major advantage of this architecture is scalability. The solution scales horizontally, both at the upper and the lower layer, thus catering for truly massive deployments, but at the expense of the increased complexity.
To take advantage of the scalability potential of LSA, your services must be designed in a layered fashion. Once the automation logic in NSO reaches a certain level of complexity, a stacked service design tends to emerge naturally. Often, you can extend it to LSA with relatively little change. The same is true for brand-new, green field designs.
In other situations, you might need to invest some additional effort to split and orchestrate the work across multiple groups of devices. Examples are existing monolithic services or stacked service designs that require all RFSs to access all devices.
If you are designing the service from scratch, you have the most freedom in choosing the partitioning of logic between CFS and RFS. The CFS must contain the YANG definition for the service and its configurable options that are available to the customer, perhaps through an order capture system north of the NSO. On the other hand, the RFS YANG models are internal to the service, that is, they are not used directly by the customer. So, you are free to design them in a way that makes the provisioning code as simple as possible.
As an example, you might have a VLAN provisioning service where the CFS lets users select if the hosts on the VLAN can access the internet. Then you can divide provisioning into, let's say, an RFS service that configures the VLAN and the appropriate IP subnet across the data center switches, and another RFS service that configures the firewall to allow the traffic from the subnet to reach the internet. This design clearly separates the provisioned devices into two groups: firewalls and data center switches. Each group can be managed by a separate lower-layer NSO.
Similar to a brand new design, an existing monolithic application that uses stacked services has already laid the groundwork for LSA-compatible design because of the existing division into two layers (upper and lower).
A possible complication, in this case, is when each existing RFS touches all of the affected devices, and that makes it hard to partition devices across multiple lower-layer NSO nodes. For example, if one RFS manages the VLAN interface (the VLAN ID and layer 2 settings) and another RFS manages the IP configuration for this interface, that configuration very likely happens on the same devices. The solution in this situation could be to partition RFS services based on the data center that they operate in, such as one lower-layer NSO node for one data center, another lower-layer NSO for another data center, and so on. If that is not possible, an alternative is to redesign each RFS and split their responsibilities differently.
The most complex, yet common case is when a single node NSO installation grows over time and you are faced with performance problems due to the new size. To leverage the LSA functionality, you must first split the service into upper- and lower-layer parts, which require a certain amount of effort. That is why the decision to use LSA should always be accompanied by a thorough analysis to determine what makes the system too slow. Sometimes, it is a result of a bad "must" expression in the service YANG code or similar. Fixing that is much easier than re-architecting the application.
Regardless of whether you start with a green field design or extend an existing application, you must tackle the problem of dispatching the RFS instantiation to the correct lower-layer NSO node.
Imagine a VPN application that uses a managed device on each site to securely connect to the private network. In a service provider network, this is usually done by the CPE. When a customer orders connectivity to an additional site (another leg of the VPN), the service needs to configure the site-local device (the CPE). As there will be potentially many such devices, each will be managed by one of the RFS nodes. However, the VPN service is managed centrally, through the CFS, which must:
Figure out which RFS node is responsible for the device for the new site (CPE).
Dispatch the RFS instantiation to that particular RFS node, making sure the device is properly configured.
NSO provides a mechanism to facilitate the second part, the actual dispatch, but the service logic must somehow select the correct RFS node. If the RFS nodes are geographically separated across different countries or different data centers, the CFS could simply infer or calculate the right RFS node based on service instance parameters, such as the physical location of the new site.
A more flexible alternative is to use dynamic mapping. It can be as simple as a list of 2-tuples that map a device name to an RFS node, stored in the CDB. The trade-off is that the list must be maintained. It is straightforward to automate the maintenance of the list though, for example through NETCONF notifications whenever /devices/device on the RFS nodes is manipulated or by explicitly asking the CFS node to query the RFS nodes for their list of devices.
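The lookup itself can be sketched as plain data (the device and node names below are hypothetical; in practice the table would live in CDB):

```python
# hypothetical dispatch table: device name -> RFS node that manages it,
# e.g. maintained as a list of 2-tuples in CDB
DISPATCH = [
    ('ce0', 'lower-nso-1'),
    ('ce1', 'lower-nso-1'),
    ('ce5', 'lower-nso-2'),
]

def rfs_node_for(device):
    """Return the name of the RFS node responsible for `device`."""
    for dev, node in DISPATCH:
        if dev == device:
            return node
    raise LookupError('no RFS node manages %s' % device)
```

The CFS mapping code would call such a lookup for each site and then write the RFS instance data under /devices/device{<node>}/config of the selected RFS node.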
Ultimately, the right approach to dispatch will depend on the complexity of your service and operational procedures.
Having designed a layered service with the CFS and RFS parts, the CFS must now communicate with the RFS that resides on a different node. You achieve that by adding the lower-layer (RFS) node as a managed device to the upper-layer (CFS) node. The CFS node must access the RFS data model on the lower-layer node, just like it accesses any other configuration on any managed device. But don't you need a NED to do this? Indeed, you do. That's why the RFS model needs to be specially compiled for the upper-layer node to use as part of a NED, not as a standalone service. A model compiled in this way is called 'device compiled'.
Let's then see how the LSA setup affects the whole service provisioning process. Suppose a new request arrives at the CFS node, such as a new service instance being created through RESTCONF by a customer order portal. The CFS runs the service mapping logic as usual; however, instead of configuring the network devices directly, the CFS configures the appropriate RFS nodes with the generated RFS service instance data. This is the dispatch logic in action.
As the configuration for the lower-layer nodes happens under the /devices/device tree, it is picked up and pushed to the relevant NSO instances by the NED. The NED sends the appropriate NETCONF edit-config RPCs, which trigger the RFS FASTMAP code at the RFS nodes. The RFS mapping logic constructs the necessary network configuration for each RFS instance and the RFS nodes update the actual network devices.
If the commit queue feature is not used, this entire sequence is serialized through the system as a whole. It means that if another northbound request arrives at the CFS node while the first request is being processed, the second request is synchronously queued at the CFS node, waiting for the currently running transaction to either succeed or fail.
If the code on the RFS nodes is reactive, it will likely return without much waiting, since reactive FASTMAP (RFM) applications are usually very fast during their first round of execution. But that will still have lower performance than using the commit queue, since the execution is eventually serialized when modifying devices. To maximize throughput, you also need to enable the commit queue functionality throughout the system.
The main benefit of LSA is that it scales horizontally at the RFS node layer. If one RFS node starts to become overloaded, it's easy to bring up an additional one, to share the load. Thus LSA caters to scalability at the level of the number of managed devices. However, each RFS node needs to host all the RFSs that touch the devices it manages under its /devices/device tree. There is still one, and only one, NSO node that directly manages a single device.
Dividing a provisioning application into upper and lower-layer services also increases the complexity of the application itself. For example, to follow the execution of a reactive or nano RFS, typically an additional NETCONF notification code must be written. The notifications have to be sent from the RFS nodes and received and processed by the CFS code. This way, if something goes wrong at the device layer, the information is relayed all the way to the top level of the system.
Furthermore, it is highly recommended that LSA applications enable the commit queue on all NSO nodes. If the commit queue is not enabled, the slowest device on the network will limit the overall throughput, significantly reducing the benefits of LSA.
Finally, if the two-layer approach proves to be insufficient due to requirements at the CFS node, you can extend it to three layers, with an additional layer of NSO nodes between the CFS and RFS layers.
This section describes a small LSA application, which exists as a running example in the examples.ncs/getting-started/developing-with-ncs/22-layered-service-architecture directory.
The application is a slight variation on the examples.ncs/getting-started/developing-with-ncs/4-rfs-service example where the YANG code has been split up into an upper-layer and a lower-layer implementation. The example topology (based on netsim for the managed devices, and NSO for the upper/lower layer NSO instances) looks like the following:
The upper layer of the YANG service data for this example looks like the following:
Instantiating one CFS we have:
The provisioning code for this CFS has to make a decision on where to instantiate what. In this example, the "what" is trivial: it is the accompanying RFS, whereas the "where" is more involved. The two underlying RFS nodes each manage three netsim routers, so, given the input, the CFS code must be able to determine which RFS node to choose. In this example, we have chosen to have an explicit map, thus on the upper-nso we also have:
So, we have a template CFS code that does the dispatching to the right RFS node.
This technique for dispatching is simple and easy to understand. Dispatching might be more complex: it might even be determined at execution time, depending on CPU load. It might be (as in this example) inferred from input parameters, or it might be computed.
The result of the template-based service is to instantiate the RFS at the RFS nodes.
First, let's have a look at what happened in the upper-nso. Look at the modifications but ignore the fact that this is an LSA service:
Just the dispatched data is shown. As ex0 and ex5 reside on different nodes, the service instance data has to be sent to both lower-nso-1 and lower-nso-2.
Now let's see what happened in the lower-nso. Look at the modifications and take into account that these are LSA nodes (this is the default):
Both the dispatched data and the modification of the remote service are shown. As ex0 and ex5 reside on different nodes, modifications of the rfs-vlan service appear on both lower-nso-1 and lower-nso-2.
The communication between the NSO nodes is of course NETCONF.
The YANG model at the lower layer, also known as the RFS layer, is similar to the CFS, but slightly different:
The task for the RFS provisioning code here is to actually provision the designated router. If we log into one of the lower layer NSO nodes, we can check the following.
To conclude this section: the trick to designing a good LSA application is to identify a good layering for the service data models. The upper (CFS) layer is what is exposed northbound, and thus requires a model that is as forward-looking as possible, since that model is what a system north of NSO integrates with. The lower-layer (RFS) models can be viewed as "internal system models", and they can be more easily changed.
In this section, we'll describe a lightly modified version of the example in the previous section. The application we describe here exists as a running example under: examples.ncs/getting-started/developing-with-ncs/24-layered-service-architecture-scaling
Sometimes it is desirable to be able to easily move devices from one lower LSA node to another. This makes it possible to easily expand or shrink the number of lower LSA nodes. Additionally, it is sometimes desirable to avoid HA pairs for replication but instead use a common store for all lower LSA devices, such as a distributed database, or a common file system.
The above is possible provided that the LSA application is structured in certain ways.
The lower LSA nodes only expose services that manipulate the configuration of a single device. We call these device RFSs, or dRFS for short.
All services are located in a way that makes it easy to extract them, for example in /drfs:dRFS/device
No RFS takes place on the lower LSA nodes. This avoids the complication with locking and distributed event handling.
The LSA nodes need to be set up with the proper NEDs and with auth groups such that a device can be moved without having to install new NEDs or update auth groups.
Provided that the above requirements are met, it is possible to move a device from one lower LSA node to another by extracting the configuration from the source node and installing it on the target node. This, of course, requires that the source node is still alive, which is normally the case when HA pairs are used.
An alternative to using HA-pairs for the lower LSA nodes is to extract the device configuration after each modification to the device and store it in some central storage. This would not be recommended when high throughput is required but may make sense in certain cases.
In the example application, there are two packages on the lower LSA nodes that provide this functionality. The package inventory-updater installs a database subscriber that is invoked every time any device configuration is modified, both in the preparation phase and in the commit phase of any such transaction. It extracts the device and dRFS configuration, including service metadata, during the preparation phase. If the transaction proceeds to a full commit, the package is again invoked and the extracted configuration is stored in a file in the directory db_store.
The other package is called device-actions. It provides three actions: extract-device, install-device, and delete-device. They are intended to be used by the upper LSA node when moving a device either from a lower LSA node or from db_store.
In the upper LSA node, there is one package for coordinating the movement, called move-device. It provides an action for moving a device from one lower LSA node to another. For example, when invoked to move device ex0 from lower-1 to lower-2 using the action
it goes through the following steps:
A partial lock is acquired on the upper-nso for the path /devices/device[name=lower-1]/config/dRFS/device[name=ex0] to avoid any changes to the device while the device is in the process of being moved.
The device and dRFS configuration are extracted in one of two ways:
Read the configuration from lower-1 using the action
Read the configuration from some central store; in our case, the file system, in the directory db_store.
The configuration will look something like this
Install the configuration on the lower-2 node. This can be done by running the action:
This will load the configuration and commit using the flags no-deploy and no-networking.
Delete the device from lower-1 by running the action
Update the mapping table.
Release the partial lock for /devices/device[name=lower-1]/config/dRFS/device[name=ex0].
Re-deploy all services that have touched the device. The services all have backpointers from /devices/device{lower-1}/config/dRFS/device{ex0}. They are re-deployed using the flags no-lsa and no-networking.
Finally, the action runs compare-config on lower-1 and lower-2.
With this infrastructure in place, it is fairly straightforward to implement actions for re-balancing devices among lower LSA nodes, as well as evacuating all devices from a given lower LSA node. The example contains implementations of those actions as well.
If we do not have the luxury of designing our NSO service application from scratch, but rather are faced with extending/changing an existing, already deployed application into the LSA architecture, we can use the techniques described in this section.
Usually, the reasons for rearchitecting an existing application are performance-related.
In the NSO example collection, one of the most popular real examples is the examples.ncs/service-provider/mpls-vpn code. That example contains an almost "real" VPN provisioning example whereby VPNs are provisioned in a network of CPE, PE, and P routers, according to this picture:
The service model in this example roughly looks like this:
There are several interesting observations on this model code related to the Layered Service Architecture.
Each instantiated service has a list of endpoints and CPE routers. These are modeled as a leafref into the /devices tree. This has to be changed if we wish to change this application into an LSA application since the /devices tree at the upper layer doesn't contain the actual managed routers. Instead, the /devices tree contains the lower layer RFS nodes.
There is no connectivity/topology information in the service model. Instead, the mpls-vpn example has topology information on the side, and that data is used by the provisioning code. That topology information for example contains data on which CE routers are directly connected to which PE router.
Remember from the previous section, that one of the additional complications of an LSA application is the dispatching part. The dispatching problem fits well into the pattern where we have topology information stored on the side and let the provisioning FASTMAP code use that data to guide the provisioning. One straightforward way would be to augment the topology information with additional data, indicating which RFS node is used to manage a specific managed device.
By far the easiest way to change an existing monolithic NSO application into the LSA architecture is to keep the service model at the upper layer and lower layer almost identical, only changing things like leafrefs pointing directly into the /devices tree, which obviously break.
In this example, the topology information is stored in a separate container share-data and propagated to the LSA nodes by means of service code.
The example examples.ncs/service-provider/mpls-vpn-layered-service-architecture does exactly this; the upper-layer data model in upper-nso/packages/l3vpn/src/yang/l3vpn.yang now looks like:
The ce-device leaf is now just a regular string, not a leafref.
So, instead of an NSO topology that looks like:
We want an NSO architecture that looks like this:
The task for the upper-layer FASTMAP code is then to instantiate a copy of itself on the right lower-layer NSO nodes. The upper-layer FASTMAP code must:
Determine which routers, (CE, PE, or P) will be touched by its execution.
Look in its dispatch table, which lower-layer NSO nodes are used to host these routers.
Instantiate a copy of itself on those lower-layer NSO nodes. One extremely efficient way to do that is to use the Maapi.copy_tree() method. The example contains code that looks like this:
Finally, we must make a minor modification to the lower-layer (RFS) provisioning code too. Originally, the FASTMAP code wrote all config for all routers participating in the VPN. With the LSA partitioning, each lower-layer NSO node is only responsible for the portion of the VPN that involves devices residing in its own /devices tree, so the provisioning code must be changed to ignore devices that do not reside there.
In addition to conceptual changes of splitting into upper- and lower-layer parts, migrating an existing monolithic application to LSA may also impact the models used. In the new design, the upper-layer node contains the (more or less original) CFS model as well as the device-compiled RFS model, which it requires for communication with the RFS nodes. In a typical scenario, these are two separate models. So, for example, they must each use a unique namespace.
To illustrate the different YANG files and namespaces used, the following text describes the process of splitting up an example monolithic service. Let's assume that the original service resides in a file, myserv.yang, and looks like the following:
In an LSA setting, we want to keep this module as close to the original as possible. We clearly want to keep the namespace, the prefix, and the structure of the YANG identical to the original, so as not to disturb any provisioning systems north of the original NSO. Thus, with only minor modifications, we want to run this module at the CFS node, but with the non-applicable leafrefs removed. At the CFS node, we would get:
Now, we want to run almost the same YANG module at the RFS node; however, the namespace must be changed. For the sake of the CFS node, we are going to NED-compile the RFS model, and NSO does not allow the same namespace to occur twice. Thus, for the RFS node, we get a YANG module, myserv-rfs.yang, that looks like the following:
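The module itself is not reproduced here, but the essential change can be sketched as follows (the module name, namespace URI, and prefix are hypothetical):

```yang
module myserv-rfs {
  // the namespace and prefix must differ from myserv.yang, so that the
  // device-compiled NED can be loaded alongside the CFS model
  namespace "http://example.com/myserv-rfs";
  prefix myserv-rfs;

  // ...body kept structurally identical to myserv.yang,
  // with the device leafrefs intact...
}
```

Everything below the module header stays as close to the original as possible, which is what makes the upper-layer copy_tree-style mapping straightforward.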
This file can, and should, keep the leafref as is.
The final and last file we get is the compiled NED, which should be loaded in the CFS node. The NED is directly compiled from the RFS model, as an LSA NED.
Thus, we end up with three distinct packages from the original one:
The original, slated for the CFS node, with leafrefs removed.
The modified original, slated for the RFS node, with the namespace and the prefix changed.
The NED, compiled from the RFS node code, slated for the CFS node.
The purpose of the upper CFS node is to manage all CFS services and to push the resulting service mappings to the RFS services. The lower RFS nodes are configured as devices in the device tree of the upper CFS node and the RFS services are created under the /devices/device/config accordingly. This is almost identical to the relation between a normal NSO node and the normal devices. However, there are differences when it comes to commit parameters and the commit queue, as well as some other LSA-specific features.
Such a design allows you to decide whether you will run the same version of NSO on all nodes or not. Since some differences arise between the two options, this document distinguishes a single-version deployment from a multi-version one.
Deployment of an LSA cluster where all the nodes have the same major version of NSO running is called a single version deployment. If the versions are different, then it is a multi-version deployment, since the packages on the CFS node must be managed differently.
The choice between the two deployment options depends on your functional needs. The single version is easier to maintain and is a good starting point but is less flexible. While it is possible to migrate from one to the other, the migration from a single version to a multi-version is typically easier than the other way around. Still, every migration requires some effort, so it is best to pick one approach and stick to it.
You can find working examples of both deployment types in the examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment and examples.ncs/getting-started/developing-with-ncs/28-lsa-multi-version-deployment folders, respectively.
The type of deployment does not affect the RFS nodes. In general, the RFS nodes act very much like ordinary standalone NSO instances but only support the RFS services.
Configure and set up the lower RFS nodes as you would a standalone node, by making sure the necessary NED and RFS packages are loaded and the managed network devices added. This requires you to have already decided on the distribution of devices to lower RFS nodes. The RFS packages are ordinary service packages.
The only LSA-specific requirement is that these nodes enable NETCONF communication northbound, as this is how the upper CFS node will interact with them. To enable NETCONF northbound, ensure that a configuration similar to the following is present in the ncs.conf of every RFS node:
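A minimal ncs.conf fragment that enables NETCONF northbound over SSH might look like the following sketch (port 2022 is the conventional default; adjust to your deployment):

```xml
<netconf-north-bound>
  <enabled>true</enabled>
  <transport>
    <ssh>
      <enabled>true</enabled>
      <port>2022</port>
    </ssh>
  </transport>
</netconf-north-bound>
```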
One thing to note is that you do not need to explicitly enable the commit queue on the RFS nodes, even if you intend to use LSA with the commit queue feature. The upper CFS node is aware of the LSA setup and will propagate the relevant commit flags to the lower RFS nodes automatically.
If you wish to enable the commit queue by default, that is, even for transactions originating on the RFS node (non-LSA), you are strongly encouraged to enable it globally, through the /devices/global-settings/commit-queue/enabled-by-default setting on all the RFS nodes and, importantly, the upper CFS node too. Otherwise, you may end up in a situation where only a part of the transaction runs through the commit queue. In that case, the rollback-on-error commit queue error option will not work correctly, as it can't roll back the full original transaction but just the part that went through the commit queue. This can result in an inconsistent network state.
Regardless of single or multi-version deployment, the upper CFS node has the lower RFS nodes configured as devices under the /devices/device tree. The CFS node communicates with these devices through NETCONF and must have the correct ned-id configured for each lower RFS node. The ned-id is set under /devices/device/device-type/netconf/ned-id, as for any NETCONF device.
The part that is specific to LSA is the actual ned-id used. This has to be ned:lsa-netconf or a ned-id derived from it. What is more, the ned-id depends on the deployment type. For a single-version deployment, you can use the lsa-netconf value directly. This ned-id is built-in (defined in tailf-ncs-ned.yang) and available in NSO without any additional packages.
So the configuration for the RFS device in the CFS node would look similar to:
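The following is a sketch in NSO CLI format, assuming a lower node named lower-nso-1 and an authgroup called default (both names are illustrative):

```cli
devices device lower-nso-1
 lsa-remote-node lower-nso-1
 authgroup       default
 device-type netconf ned-id lsa-netconf
 state admin-state unlocked
```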
Notice the use of the lsa-remote-node instead of the address (and port) as is usually done. This setting identifies the device as a lower-layer LSA node and instructs NSO to use connection information provided under cluster configuration.
The value of lsa-remote-node references a cluster remote-node, such as the following:
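For example (the address and username are illustrative placeholders for your environment):

```cli
cluster remote-node lower-nso-1
 address   10.0.0.11
 port      2022
 authgroup default
 username  admin
```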
As with the devices device entry, an authgroup value is required here; however, it refers to a cluster authgroup, not the device one. Both authgroups must be configured correctly for LSA to function.
Having added device and cluster configuration for all RFS nodes, you should update the SSH host keys for both the /devices/device and /cluster/remote-node paths. For example:
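Both paths provide a fetch-host-keys action for this purpose; a sketch for a node named lower-nso-1:

```cli
admin@ncs# devices device lower-nso-1 ssh fetch-host-keys
admin@ncs# cluster remote-node lower-nso-1 ssh fetch-host-keys
```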
Moreover, the RFS NSO nodes contain extra configuration that may not be visible to the CFS node, which can result in out-of-sync behavior. You are strongly encouraged to set the out-of-sync-commit-behaviour value to accept, with a command such as:
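For example, for a lower node named lower-nso-1:

```cli
admin@ncs(config)# devices device lower-nso-1 out-of-sync-commit-behaviour accept
admin@ncs(config)# commit
```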
At the same time you should also enable the /cluster/device-notifications, which will allow the CFS node to receive the forwarded device notifications from the RFS nodes, and /cluster/commit-queue, to enable the commit queue support for LSA. Without the latter, you will not be able to use the commit commit-queue async command, for example.
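A sketch of enabling both settings on the CFS node:

```cli
admin@ncs(config)# cluster device-notifications enabled
admin@ncs(config)# cluster commit-queue enabled
admin@ncs(config)# commit
```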
If you wish to enable the commit queue by default, you should do so by setting the /devices/global-settings/commit-queue/enabled-by-default on the CFS node. Do not use per device or per device group configuration, for the same reason you should avoid it on the RFS nodes.
If you plan a single-version deployment, the preceding steps are sufficient. For a multi-version deployment, on the other hand, there are two additional tasks to perform.
First, you will need to install the correct Cisco-NSO LSA NED package (or packages if you need to support more versions). Each NSO release includes these packages that are specifically tailored for LSA. They are used by the upper CFS node if the lower RFS nodes are running a different version than the CFS node itself. The packages are named cisco-nso-nc-X.Y where X.Y are the two most significant numbers of the NSO release (the major version) that the package supports. So, if your RFS nodes are running NSO 5.7.2, for example, you should use cisco-nso-nc-5.7.
These packages are found in the $NCS_DIR/packages/lsa directory. Each package contains the complete model of the ncs namespace for the corresponding NSO version, compiled as an LSA NED. Please always use the cisco-nso package included with the NSO version of the upper CFS node and not some older variant (such as the one from the lower RFS node) as it may not work correctly.
Second, installing the cisco-nso LSA NED package will make the corresponding ned-id available, such as cisco-nso-nc-5.7 (ned-id matches the package name). Use this ned-id for the RFS nodes instead of lsa-netconf. For example:
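Continuing the example of RFS nodes running NSO 5.7:

```cli
admin@ncs(config)# devices device lower-nso-1 device-type netconf ned-id cisco-nso-nc-5.7
admin@ncs(config)# commit
```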
This configuration allows the CFS node to communicate with a different NSO version but there are still some limitations. The upper CFS node must have the same or newer version than the managed RFS nodes. For all the currently supported versions of the lower node, the packages can be found in the $NCS_DIR/packages/lsa directory, but you may also be able to build an older one yourself.
In case you already have a single-version deployment using the lsa-netconf ned-id's, you can use the NED migrate procedure to switch to the new ned-id and multi-version deployment.
Besides adding managed lower-layer nodes, the upper-layer node also requires packages for the services. Obviously, you must add the CFS package, which is an ordinary service package, to the CFS node. But you must also provide the device-compiled RFS YANG models to allow provisioning of RFS services on the remote RFS nodes.
The process resembles the way you create and compile device YANG models in normal NED packages. The ncs-make-package tool provides the --lsa-netconf-ned option, where you specify the location of the RFS YANG model and the tool creates a NED package for you. This is a new package that is separate from the RFS package used in the RFS nodes, so you might want to name it differently to avoid confusion. The following text uses the -ned suffix.
Usually, you would also provide the --no-netsim, --no-java, and --no-python switches to the invocation, as the package is used with the NETCONF protocol and doesn't need any additional code. The --no-netsim option is required because netsim is not supported for these types of packages. For example:
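A sketch of such an invocation (the YANG path and the package name rfs-vlan-ned are illustrative; the -ned suffix distinguishes it from the RFS package itself):

```shell
ncs-make-package --no-netsim --no-java --no-python \
    --lsa-netconf-ned ./rfs-vlan/src/yang \
    rfs-vlan-ned
```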
In this case, there is no explicit --lsa-lower-nso option specified and ncs-make-package will by default be set up to compile the package for the single version deployment, tied to the lsa-netconf ned-id. That means the models in the NED can be used with devices that have a lsa-netconf ned-id configured.
To compile it for the multi-version deployment, which uses a different ned-id, you must select the target NSO version with the --lsa-lower-nso cisco-nso-nc-X.Y option, for example:
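A sketch targeting lower nodes running NSO 5.4 (paths and package name are illustrative):

```shell
ncs-make-package --no-netsim --no-java --no-python \
    --lsa-netconf-ned ./rfs-vlan/src/yang \
    --lsa-lower-nso cisco-nso-nc-5.4 \
    rfs-vlan-nc-5.4
```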
Depending on the RFS model, the package may fail to compile, even though the model compiles fine as a service. A typical error would indicate some node from a module, such as tailf-ncs, is not found. The reason is that the original RFS service YANG model has dependencies on other YANG models that are not included in the compilation process.
One solution to this problem is to remove the dependencies in the YANG model before compilation. Normally this can be solved by changing the datatype in the NED compiled copy of the YANG model, for example from leafref or instance-identifier to string. This is only needed for the NED compiled copy, the lower RFS node YANG model can remain the same. There will then be an implicit conversion between types, at runtime, in the communication between the upper CFS node and the lower RFS node.
An alternate solution, if you are doing a single version deployment and there are dependencies on the tailf-ncs namespace, is to switch to a multi-version deployment because the cisco-nso package includes this namespace (device compiled). Here, the NSO versions match but you are still using the cisco-nso-nc-X.Y ned-id and have to follow the instructions for the multi-version deployment.
Once both the CFS and device-compiled RFS service packages are ready, add them to the CFS node, then invoke a sync-from action to complete the setup process.
All the required setup steps for a single-version deployment are performed in the examples.ncs/getting-started/developing-with-ncs/22-lsa-single-version-deployment example, while examples.ncs/getting-started/developing-with-ncs/28-lsa-multi-version-deployment covers the steps for the multi-version one. The two are quite similar, but the multi-version deployment has additional steps, so it is the one described here.
First, build the example for manual setup.
Then configure the nodes in the cluster. This is needed so that the upper CFS node can receive notifications from the lower RFS node and prepare the upper CFS node to be used with the commit queue.
To be able to handle the lower NSO node as an LSA node, the correct version of the cisco-nso-nc package needs to be installed. In this example, 5.4 is used.
Create a link to the cisco-nso package in the packages directory of the upper CFS node:
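For example, assuming the upper CFS node's run directory layout used in the example:

```shell
cd upper-nso/packages
ln -sf ${NCS_DIR}/packages/lsa/cisco-nso-nc-5.4 .
```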
Reload the packages:
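In the CLI of the upper CFS node:

```cli
admin@upper-nso# packages reload
```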
Now that the cisco-nso-nc package is in place, configure the two lower NSO nodes and invoke sync-from on them:
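A sketch of the relevant CLI commands (device names follow the example; the cluster and authgroup configuration is assumed to be in place already):

```cli
admin@upper-nso(config)# devices device lower-nso-1 device-type netconf ned-id cisco-nso-nc-5.4
admin@upper-nso(config)# devices device lower-nso-2 device-type netconf ned-id cisco-nso-nc-5.4
admin@upper-nso(config)# commit
admin@upper-nso(config)# end
admin@upper-nso# devices device lower-nso-1 sync-from
admin@upper-nso# devices device lower-nso-2 sync-from
```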
Now, for example, the configured devices of the lower nodes can be viewed:
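For example, a sketch of inspecting the devices managed by lower-nso-1 through the device-compiled model (assuming the configuration has been synced):

```cli
admin@upper-nso# show running-config devices device lower-nso-1 config devices device
```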
Or, alarms inspected:
Now, create a NETCONF NED package on the upper CFS node that can be used towards the rfs-vlan service on the lower RFS node. In the shell terminal window, do the following:
The created NED is an lsa-netconf-ned based on the YANG files of the rfs-vlan service:
The version of the created NED reflects the NSO version of the lower node. The package is generated in the packages directory of the upper CFS node, under the name rfs-vlan-nc-5.4.
Install the cfs-vlan service on the upper CFS node. In the shell terminal window, do the following:
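A sketch of this step, assuming the example's directory layout (the package-store path is an assumption and may differ in your checkout):

```shell
ln -sf ../../package-store/cfs-vlan upper-nso/packages/cfs-vlan
```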
Reload the packages once more to get the cfs-vlan package. In the CLI terminal window, do the following:
Now that all packages are in place, a cfs-vlan service can be configured. The cfs-vlan service dispatches service data to the right lower RFS node, depending on the device names used in the service.
In the CLI terminal window, verify the service:
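A sketch of configuring a service instance and inspecting the resulting changes (the cfs-vlan parameters a-router, z-router, iface, unit, and vid follow the example's service model and may differ in your setup):

```cli
admin@upper-nso(config)# cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 77
admin@upper-nso(config)# commit dry-run outformat native
```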
As ex0 resides on lower-nso-1, that part of the configuration goes there, while the ex5 part goes to lower-nso-2.
Since an LSA deployment consists of multiple NSO nodes (or HA pairs of nodes), each can be upgraded to a newer NSO version separately. While that offers a lot of flexibility, it also makes upgrades more complex in many cases. For example, performing a major version upgrade on the upper CFS node only will make the deployment Multi-Version even if it was Single-Version before the upgrade, requiring additional action on your part.
In general, staying with the Single-Version Deployment is the simplest option and does not require any further LSA-specific upgrade action (except perhaps recompiling the packages). However, the main downside is that, at least for a major upgrade, you must upgrade all the nodes at the same time (otherwise, you no longer have a Single-Version Deployment).
If that is not feasible, the solution is to run a Multi-Version Deployment. Along with all of the requirements, the section Multi-Version Deployment describes a major difference from the Single Version variant: the upper CFS node uses a version-specific cisco-nso-nc-X.Y NED ID to refer to lower RFS nodes. That means, if you switch to a Multi-Version Deployment, or perform a major upgrade of the lower-layer RFS node, the ned-id should change accordingly. However, do not change it directly but follow the correct NED upgrade procedure described in the section called NED Migration. Briefly, the procedure consists of these steps:
Keep the currently configured ned-id for an RFS device and the corresponding packages. If upgrading the CFS node, you will need to recompile the packages for the new NSO version.
Compile and load the packages that are device-compiled with the new ned-id, alongside the old packages.
Use the migrate action on a device to switch over to the new ned-id.
The procedure requires you to have two versions of the device-compiled RFS service packages loaded in the upper CFS node when calling the migrate action: one version compiled by referencing the old (current) NED ID and the other one by referencing the new (target) NED ID.
To illustrate, suppose you currently have an upper-layer and a lower-layer node both running NSO 5.4. The nodes were set up as described in the Single Version Deployment option, with the upper CFS node using the tailf-ncs-ned:lsa-netconf NED ID for the lower-layer RFS node. The CFS node also uses the rfs-vlan-ned NED package for the rfs-vlan service.
Now you wish to upgrade the CFS node to NSO 5.7 but keep the RFS node on the existing version 5.4. Before upgrading the CFS node, you create a backup and recompile the rfs-vlan-ned package for NSO 5.7. Note that the package references the lsa-netconf ned-id, which is the ned-id configured for the RFS device in the CFS node's CDB. Then, you perform the CFS node upgrade as usual.
At this point, the CFS node is running the new 5.7 version and the RFS node is running 5.4. Since you now have a Multi-Version Deployment, you should migrate to the correct ned-id as well. Therefore, you prepare the rfs-vlan-nc-5.4 package, as described in the Multi-Version Deployment option, compile the package, and load it into the CFS node. Thanks to the NSO CDM feature, both packages, rfs-vlan-nc-5.4 and rfs-vlan-ned, can be used at the same time.
With the packages ready, you execute the devices device lower-nso-1 migrate new-ned-id cisco-nso-nc-5.4 command on the CFS node. The command configures the RFS device entry on CFS to use the new cisco-nso-nc-5.4 ned-id, as well as migrates the device configuration and service meta-data to the new model. Having completed the upgrade, you can now remove the rfs-vlan-ned if you wish.
Later on, you may decide to upgrade the RFS node to NSO 5.6. Again, you prepare the new rfs-vlan-nc-5.6 package for the CFS node in a similar way as before, now using the cisco-nso-nc-5.6 ned-id instead of cisco-nso-nc-5.4. Next, you perform the RFS node upgrade to 5.6 and finally migrate the RFS device on the CFS node to the cisco-nso-nc-5.6 ned-id, with the migrate action.
Likewise, you can return to the Single-Version Deployment by upgrading the RFS node to NSO 5.7, reusing the old rfs-vlan-ned package (or preparing it anew), and migrating back to the lsa-netconf ned-id.
All these ned-id changes stem from the fact that the upper-layer CFS node treats the lower-layer RFS node as a managed device, requiring the correct model, just like it does for any other device type. For the same reason, maintenance (bug fix or patch) NSO upgrades do not result in a changed ned-id, so for those no migration is necessary.
Simplify change management in your network using templates.
NSO comes with a flexible and powerful built-in templating engine, which is based on XML. The templating system simplifies how you apply configuration changes across devices of different types and provides additional validation against the target data model. Templates are a convenient, declarative way of updating structured configuration data and allow you to avoid lots of boilerplate code.
You will most often find this type of configuration templates used in services, which is why they are sometimes also called service templates. However, we mostly refer to them simply as XML templates, since they are defined in XML files.
NSO loads templates as part of a package, looking for XML files in the templates subdirectory. You then apply an XML template through API or by connecting it with a service through a service point, allowing NSO to use it whenever a service instance needs updating.
A template is an XML file with the config-template root element, residing in the http://tail-f.com/ns/config/1.0 namespace. The root element contains configuration elements according to the NSO YANG schema, as well as XML processing instructions.
Configuration element structure is very much like the one you would find in a NETCONF message since it uses the same encoding rules defined by YANG. Additionally, each element can specify a tags attribute that refines how the configuration is applied.
A typical template for configuring an NSO-managed device is:
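The following sketch illustrates the shape of such a template; the interface model under config is hypothetical and depends on the target device type, and /device-name and /intf-name are assumed leafs in the service model:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device tags="nocreate">
      <name>{/device-name}</name>
      <config tags="merge">
        <interface xmlns="urn:ios">
          <name>{/intf-name}</name>
          <description>Managed by NSO</description>
        </interface>
      </config>
    </device>
  </devices>
</config-template>
```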
The first line defines the root node. It contains elements that follow the same structure as that used by the CDB, in particular, the devices device <name> config path in the CLI. In the printout, two elements, device and config, also have a tags attribute.
You can write this structure by studying the YANG schema if you wish. However, a more typical approach is to start with manipulating NSO configuration by hand, such as through the NSO CLI or web UI. Then generate the XML structure with the help of NSO output filters. You can use commit dry-run outformat xml or show ... | display xml commands, or even the ncs_load utility. For a worked, step-by-step example, refer to the section .
Having the basic structure in place, you can then fine-tune the template by adding different processing instructions and tags, as well as replacing static values with variable references using the XPath syntax.
Note that a single template can configure multiple devices of different type, services, or any other configurable data in NSO; basically the same as you can do in a CLI commit. But a single, gigantic template can become a burden to maintain. That is why many developers prefer to split up bigger configurations into multiple feature templates, either by functionality or by device type.
Finally, the name of the file, without the .xml extension is the name of the template. The name allows you to reference the template from the code later on. Since all the template names reside in the same namespace, it is a good practice to use a common naming scheme, preferably <package name>-<feature>.xml to ensure template names are unique.
The NSO CLI features a templatize command that allows you to analyze a given configuration and find common configuration patterns. You can use these to, for example, create a configuration template for a service.
Suppose you have an existing interface configuration on a device:
Using the templatize command, you can search for patterns in this part of the configuration, which produces the following:
In this case, NSO finds a single pattern and creates the corresponding template. In general, NSO might produce a number of templates. As an example, try running the command within the examples.ncs/implement-a-service/dns-v3 environment.
The algorithm works by searching the data at the specified path. For any list it encounters, it compares every item in the list with its siblings. If the two items have the same structure but not necessarily the same actual values (for leafs), that part of the configuration can be made into a template. If the two list items use the same value for a leaf, the value is used directly in the generated template. Otherwise, a unique variable name is created and used in its place, as shown in the example.
However, templatize requires you to reference existing configurations in NSO. If such configuration is not readily available to you and you want to avoid manually creating sample configuration in NSO first, you can use the sample-xml-skeleton functionality of the yanger utility to generate sample XML data directly:
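A sketch of such an invocation, where my-service.yang is a placeholder for your own module:

```shell
yanger -f sample-xml-skeleton \
       --sample-xml-skeleton-path /devices/device/config \
       -p src/yang src/yang/my-service.yang
```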
You can replace the value of --sample-xml-skeleton-path with the path to the part of the configuration you want to generate.
In case the target data model contains submodules, or references other non-built-in modules, you must also tell yanger where to find additional modules with the -p parameter, such as adding -p src/yang/ to the invocation.
Some XML elements, notably those that represent leafs or leaf-lists, specify element text content as values that you wish to configure, such as:
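For example (hostname is a hypothetical leaf in the target model):

```xml
<hostname>rtr01</hostname>
```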
NSO converts the string value to the actual value type of the YANG model automatically when the template is applied.
Along with hard-coded, static content (rtr01), the value may also contain curly brackets ({...}), which the templating engine treats as XPath 1.0 expressions.
The simplest form of an XPath expression is a plain XPath variable:
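For example, with HOST_NAME being a variable you set when applying the template (hostname is again a hypothetical leaf):

```xml
<hostname>{$HOST_NAME}</hostname>
```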
A value can contain any number of {...} expressions and strings. The end result is the concatenation of all the strings and XPath expressions. For example, <description>Link to PE: {$PE} - {$PE_INT_NAME}</description> might evaluate to <description>Link to PE: pe0 - GigabitEthernet0/0/0/3</description> if you set PE to pe0 and PE_INT_NAME to GigabitEthernet0/0/0/3 when applying the template.
You set the values for variables in the code where you apply the template. However, if you set the value to an empty string, the corresponding statement is ignored (in this case you may use the XPath function string() to set a node to the actual empty string).
NSO also sets some predefined variables, which you can reference:
$DEVICE: The name of the current device. Cannot be overridden.
$TEMPLATE_NAME: The name of the current template. Cannot be overridden.
$SCHEMA_OPAQUE: Defined if the template is registered for a servicepoint (the top node in the template has servicepoint attribute) and the corresponding ncs:servicepoint
The {...} expression can also be any other valid XPath 1.0 expression. To address a reachable node, you might for example use:
Or, to select a leaf node, such as device:
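For example, assuming the service model contains an endpoint container with a device leaf, an expression like the following selects that leaf:

```xml
<name>{/endpoint/device}</name>
```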
NSO then uses the value of this leaf, say ce5, when constructing the value of the expression.
However, there are some special cases. If the result of the expression is a node-set (e.g. multiple leafs), and the target is a leaf list or a list's key leaf, the template configures multiple destination nodes. This handling allows you to set multiple values for a leaf list or set multiple list items.
Similarly, if the result is an empty node set, nothing is set (the set operation is ignored).
Finally, what nodes are reachable in the XPath expression, and how, depends on the root node and context used in the template. See .
The if, and the accompanying elif, else, processing instructions make it possible to apply parts of the template, based on a condition. For example:
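A sketch of such a conditional template; qos-class/priority refers to a leaf in the service model, while the bandwidth structure and percentage leafs are hypothetical device-model nodes:

```xml
<?if {qos-class/priority = 'realtime'}?>
  <bandwidth tags="merge">
    <priority-realtime>
      <percent>{qos-class/bandwidth}</percent>
    </priority-realtime>
  </bandwidth>
<?elif {qos-class/priority = 'critical'}?>
  <bandwidth tags="merge">
    <priority-critical>
      <percent>{qos-class/bandwidth}</percent>
    </priority-critical>
  </bandwidth>
<?else?>
  <bandwidth tags="merge">
    <percent-remaining>{qos-class/bandwidth}</percent-remaining>
  </bandwidth>
<?end?>
```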
The preceding template shows how to produce different configuration, for network bandwidth management in this case, when different qos-class/priority values are specified.
In particular, the sub-tree containing the priority-realtime tag will only be evaluated if qos-class/priority in the if processing instruction evaluates to the string 'realtime'.
The subtree under the elif processing instruction will be executed if the preceding if expression evaluated to false, i.e. qos-class/priority is not equal to the string 'realtime', but 'critical' instead.
The subtree under the else processing instruction will be executed when both the preceding if and elif expressions evaluated to false, i.e. qos-class/priority is not 'realtime' nor 'critical'.
In your own code you can of course use just a subset of these instructions, such as a simple if - end conditional evaluation. But note that every conditional evaluation must end with the end processing instruction, to allow nesting multiple conditionals.
The evaluation of the XPath statements used in the if and elif processing instructions follow the XPath standard for computing boolean values. In summary, the conditional expression will evaluate to false when:
The argument evaluates to an empty node-set.
The value of the argument is either an empty string or numeric zero.
The argument is of boolean type and evaluates to false, such as using the not(true()) function.
The foreach and for processing instructions allow you to avoid needless repetition: they iterate over a set of values and apply statements in a sub-tree several times. For example:
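A sketch of such a loop, assuming a tunnel list in the service model; the network leaf is referenced in the surrounding text, while netmask and next-hop are illustrative names:

```xml
<?foreach {/tunnel}?>
  <ip-route-forwarding-list tags="merge">
    <prefix>{network}</prefix>
    <mask>{netmask}</mask>
    <forwarding-address>{next-hop}</forwarding-address>
  </ip-route-forwarding-list>
<?end?>
```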
The printout shows the use of foreach to configure a set of IP routes (the list ip-route-forwarding-list) for a Cisco network router. If there is a tunnel list in the service model, the {/tunnel} expression selects all the items from the list. If this is a non-empty set, then the sub-tree containing ip-route-forwarding-list is evaluated once for every item in that node set.
For each iteration, the initial context is set to one node, that is, the node being processed in that iteration. The XPath function current() retrieves this initial context if needed. Using the context, you can access the node data with relative XPath paths, e.g. the {network} code in the example refers to /tunnel[...]/network for the current item.
foreach only supports a single XPath expression as its argument and the result needs to be a node-set, not a simple value. However, you may use XPath union operator to join multiple node sets in a single expression when required: {some-list-1 | some-leaf-list-2}.
Similarly, for is a processing instruction that uses a variable to control the iteration, in line with traditional programming languages. For example, the following template disables the first four (0-3) interfaces on a Cisco router:
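A sketch of such a loop (the GigabitEthernet structure is illustrative of a Cisco-style device model):

```xml
<?for i=0; {$i < 4}; i={$i + 1}?>
  <interface xmlns="urn:ios">
    <GigabitEthernet>
      <name>0/{$i}</name>
      <shutdown/>
    </GigabitEthernet>
  </interface>
<?end?>
```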
In this example, three semicolon-separated clauses follow the for keyword:
The first clause is the initial step executed before the loop is entered the first time. The format of the clause is that of a variable name followed by an equals sign and an expression. The latter may combine literal strings and XPath expressions surrounded by {}. The expression is evaluated in the same way as the XML tag contents in templates. This clause is optional.
The second clause is the progress condition. The loop will execute as long as this condition evaluates to true, using the same rules as the if processing instruction. The format of this clause is an XPath expression surrounded by {}. This clause is mandatory. The third clause is the step, executed after each iteration of the loop. It has the same variable-assignment format as the initial step and is likewise optional.
The foreach and for expressions make the loop explicit, which is why they are the first choice for most programmers. Alternatively, under certain circumstances, the template invokes an implicit loop, as described in .
The most common use-case for templates is to produce new configuration but other behavior is possible too. This is accomplished by setting the tags attribute on XML elements.
NSO supports the following tags values, colloquially referred to as “tags”:
merge: Merge with a node if it exists, otherwise create the node. This is the default operation if no operation is explicitly set.
replace: Replace a node if it exists, otherwise create the node.
create: Creates a node. The node must not already exist. An error is raised if the node exists.
nocreate: Merge with a node, but only if it already exists. If it does not exist, the node is not created and the sub-tree under it is ignored, without raising an error.
delete: Delete the node, if it exists.
Tags merge and nocreate are inherited by their sub-nodes until a new tag is introduced.
Tags create and replace are not inherited and only apply to the node they are specified on. Children of the nodes with create or replace tags have merge behavior.
Tag delete applies only to the current node; any children (except keys specifying the list/leaf-list entry to delete) are ignored.
For ordered-by-user lists and leaf lists, where item order is significant, you can use the insert attribute to specify where in the list, or leaf-list, the node should be inserted. You specify whether the node should be inserted first or last in the node-set, or before or after a specific instance.
For example, if you have a list of rules, such as ACLs, you may need to ensure a particular order:
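For example, a template might insert a permit rule at the top of a hypothetical, ordered-by-user rules list keyed by name:

```xml
<rules>
  <rule insert="first">
    <name>{/service/rule-name}</name>
    <action>permit</action>
  </rule>
</rules>
```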
However, it is not uncommon that multiple services manage the same ordered-by-user list or leaf-list. The relative order of elements inserted by these services might not matter, but there may still be constraints on element positions that need to be fulfilled.
Following the ACL rules example, suppose that initially the list contains only the "deny-all" rule:
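In a hypothetical CLI rendering of such a model, the initial configuration would be:

```cli
rules rule deny-all
 action deny
```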
There are services that prepend permit rules to the beginning of the list using the insert="first" operation. If there are two services creating one entry each, say 10.0.0.0/8 and 192.168.0.0/24 respectively, then the resulting configuration looks like this:
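In a hypothetical CLI rendering, the resulting configuration would be:

```cli
rules rule 192.168.0.0/24
 action permit
rules rule 10.0.0.0/8
 action permit
rules rule deny-all
 action deny
```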
Note that the rule for the second service comes first because it was configured last and inserted as the first item in the list.
If you now try to check-sync the first service (10.0.0.0/8), it will report as out-of-sync, and re-deploying it would move the 10.0.0.0/8 rule first. But what you really want is to ensure the deny-all rule comes last. This is when the guard attribute comes in handy.
If both the insert and guard attributes are specified on a list entry in a template, then the template engine first checks whether the list entry already exists in the resulting configuration between the target position (as indicated by the insert attribute) and the position of an element indicated by the guard attribute:
If the element exists and fulfills this constraint, then its position is preserved. If a template list entry results in multiple configuration list entries, then all of them need to exist in the configuration in the same order as calculated by the template, and all of them need to fulfill the guard constraint in order for their position to be preserved.
If the list entry/entries do not exist, are not in the same order, or do not fulfill the constraint, then the list is reordered as instructed by the insert statement.
So, in the ACL example, the template can specify the guard as follows:
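For example, using a hypothetical rules list keyed by name, with deny-all as the guard:

```xml
<rules>
  <rule insert="first" guard="deny-all">
    <name>{/service/rule-name}</name>
    <action>permit</action>
  </rule>
</rules>
```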
A guard can be specified literally (e.g. guard="deny-all" if "name" is the key of the list) or using an XPath expression (e.g. guard="{$LASTRULE}"). If the guard evaluates to a node-set consisting of multiple elements, then only the first element in this node-set is considered as the guard. The constraint defined by the guard is evaluated as follows:
If the guard evaluates to an empty node-set (i.e. the node indicated by the guard does not exist in the target configuration), then the constraint is not fulfilled.
If insert="first", then the constraint is fulfilled if the element exists in the configuration before the element indicated by the guard.
If insert="last", then the constraint is fulfilled if the element exists in the configuration after the element indicated by the guard.
Templates support macros - named XML snippets that facilitate reuse and simplify complex templates. When you call a previously defined macro, the templating engine inserts the macro data, expanded with the values of the supplied arguments. The following example demonstrates the use of a macro.
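The following sketch defines a macro with one mandatory and one defaulted parameter, then expands it twice; the interface structure and parameter names are illustrative:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <?macro ifaceDescription name text='Managed by NSO'?>
    <interface>
      <name>{$name}</name>
      <description>{$text}</description>
    </interface>
  <?endmacro?>
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device-name}</name>
      <config>
        <?expand ifaceDescription name='eth0'?>
        <?expand ifaceDescription name='eth1' text='Uplink'?>
      </config>
    </device>
  </devices>
</config-template>
```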
When using macros, be mindful of the following:
A macro must be a valid chunk of XML, or a simple string without any XML markup. So, a macro cannot contain only start-tags or only end-tags, for example.
Each macro is defined between the <?macro?> and <?endmacro?> processing instructions, immediately following the <config-template> tag in the template.
A macro definition takes a name and an optional list of parameters. Each parameter may define a default value.
When reporting errors in a template using macros, the line numbers for the macro invocations are also included, so that the actual location of the error can be traced. For example, an error message might resemble service.xml:19:8 Invalid parameters for processing instruction set. - meaning that there was a macro expansion on line 19 in service.xml and an error occurred at line 8 in the file defining that macro.
When the evaluation of a template starts, the XPath context node and root node are both set to either the service instance data node (with a template-only service) or the node specified with the API call to apply the template (usually the service instance data node as well).
The root node is used as the starting point for evaluating absolute paths starting with / and puts a limit on where you can navigate with ../.
You can access data outside the current root node subtree by dereferencing a leafref type leaf or by changing the root node from within the template.
To change the root node within the template, use the set-root-node XML processing instruction. The instruction takes an XPath expression as a parameter and this expression is evaluated in a special context, where the root node is the root of the datastore. This makes it possible to change to a node outside the current evaluation context.
For example: <?set-root-node {/}?> changes the accessible tree to the whole data store. Note that, as with all processing instructions, the effect of set-root-node only applies until the closing tag of the parent element.
The context node refers to the node that is used as the starting point for navigation with relative paths, such as ../device or device.
You can change the current context node using the set-context-node or other context-related processing instructions. For example: <?set-context-node {..}?> changes the context node to the parent of the current context node.
There is a special case where NSO automatically changes the evaluation context as it progresses through and applies the template, which makes it easier to work with lists. There are two conditions required to trigger this special case:
The value being set in the template is the key of a list.
The XPath expression used for this key evaluates to a node set, not a value.
To illustrate, consider the following example.
Suppose you are using the template to configure interfaces on a device. Target device YANG model defines the list of interfaces as:
You also use a service model that allows configuring multiple links:
The context-changing mechanism allows you to configure the device interface with the specified address using the template:
The /links/link[0]/intf-name expression evaluates to a node and the evaluation context node is changed to the parent of this node, /links/link[0], because the name leaf being set is a list key. Now you can refer to /links/link[0]/intf-addr with a simple relative path {intf-addr}.
The true power and usefulness of context changing becomes evident when used together with XPath expressions that produce node sets with multiple nodes. You can create a template that configures multiple interfaces with their corresponding addresses (note the use of link instead of link[0]):
The first expression returns a node set possibly including multiple leafs. NSO then configures multiple list items (interfaces), based on their name. The context change mechanism triggers as well, making {intf-addr} refer to the corresponding leaf in the same link definition. Alternatively, you can achieve the same outcome with a loop (see ).
However, in some situations, you may not desire to change the context. You can avoid it by making the XPath expression return a value instead of a node/node-set. The simplest way is to use the XPath string() function, for example:
When a device makes itself known to NSO, it presents a list of capabilities (see ), which includes the YANG modules that the particular device supports. Since each YANG module defines a unique XML namespace, this information can be used in a template.
Hence, a template may include configuration for many diverse devices. The templating system streamlines this by applying only those pieces of the template that have a namespace matching the one advertised by the device (see ).
Additionally, the system performs validation of the template against the specified namespace when loading the template as part of the package load sequence, allowing you to detect a lot of the errors at load time instead of at run time.
In case the namespace matching is insufficient, such as when you want to check for a particular version of a NED, you can use special processing instructions if-ned-id or if-ned-id-match. See for details and for an example.
However, strict validation against the currently loaded schema may become a problem for developing generic, reusable templates that should run in different environments with different sets of NEDs and NED versions loaded. For example, an NSO instance having fewer NED versions than the template is designed for may result in some elements not being recognized, while having more NED versions may introduce ambiguities.
In order to allow templates to be reusable while at the same time keeping as many errors as possible detectable at load time, NSO has a concept of supported-ned-ids. This is a set of NED IDs the package developer declares in the package-meta-data.xml file, indicating all NEDs the XML templates contained in this package are designed to support. This gives NSO a hint on how to interpret the template.
Namely, if a package declares a list of supported-ned-ids, then the templates in this package are interpreted as if no other ned-ids are loaded in the system. If such a template is applied to a device whose ned-id is outside the supported list, a run-time error is generated, because this ned-id was not considered when the template was loaded. This allows NSO to ignore ambiguities in the data model introduced by additional NEDs that were not considered during template development.
If a package declares a list of supported-ned-ids and the runtime system does not have one or more declared NEDs loaded, then the template engine uses the so-called relaxed loading mode, which means it ignores any unknown namespaces and <?if-ned-id?> clauses containing exclusively unknown ned-ids, assuming that these parts of the template are not applicable in the current running system. Note, however, that <supported-ned-id-match> in the current implementation only filters the list of currently loaded NEDs and does not result in relaxed loading mode.
Because relaxed loading mode performs less strict validation and potentially prevents some errors from being detected, the package developer should always make sure to test in the system with all the supported ned-ids loaded, i.e. when the loading mode is strict. The loading mode can be verified by looking at the value of template-loading-mode leaf for the corresponding package under /packages/package list.
If the package does not declare any supported-ned-ids, then the templates are loaded in strict mode, using the full set of currently loaded NED IDs. This may make the package less reusable between different systems, but is usually fine in environments where the package is intended to be used in runtime systems fully under the control of the package developer.
When applying the template via API, you typically pass parameters to a template through variables, as described in and . One limitation of this mechanism is that a variable can only hold one string value. Yet, sometimes there is a need to pass not just a single value, but a list, map, or even more complex data structures from API to the template.
One way to achieve this is to use smaller templates, such as invoking the template repeatedly, one by one for each list item (or perhaps pair-by-pair in the case of a map). However, there are certain disadvantages to this approach. One of them is the performance: every invocation of the template from the API requires a context switch between the user application process and the NSO core process, which can be costly. Another disadvantage is that the logic is split between Java or Python code and the template, which makes it harder to understand and implement.
An alternative approach described in this section involves modeling the required auxiliary data as operational data and populating it in the code, before applying the template. For a service, the service callback code in Java or Python first populates the auxiliary data and then passes control to the template, which handles the main service configuration logic. The auxiliary data is accessible in the template, by means of XPath, just like any other service input data.
There are different approaches to modeling the auxiliary data. It can reside in the service tree, as it is private to the service instance, either integrated into the existing data tree or as a separate subtree under the service instance. It can also be located outside of the service instance; however, it is important to keep in mind that operational data cannot be shared by multiple services, because no refcounters or backpointers are stored on operational data.
After the service is deployed, the auxiliary leafs remain in the database, which facilitates debugging because they can be seen via all northbound interfaces. If this is not the intention, they can be hidden with the help of the tailf:hidden statement. Because operational data is also a part of the FASTMAP diff, these values are deleted when the service is deleted and need to be recomputed when the service is re-deployed. This also means that in most cases there should be no need to write any additional code to clean up this data.
One example of a task that is hard to solve in the template by native XPath functions is converting a network prefix into a network mask or vice versa. Below is a snippet of a data model that is part of a service input data and contains a list of interfaces along with IP addresses to be configured on those interfaces. If the input IP address contains a prefix, but the target device accepts an IP address with a network mask instead, then you can use an auxiliary operational leaf to pass the mask (calculated from the prefix) to the template.
The code that calls the template needs to populate the mask. For example, using the Python Maagic API in a service:
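To make the mask computation itself concrete, here is a minimal Python sketch using only the standard library ipaddress module. The leaf names in the commented usage (ip_address and the auxiliary mask leaf) are hypothetical placeholders, not taken from the model above:

```python
import ipaddress

def mask_from_prefix(addr_with_prefix):
    """Split a CIDR-style address like '10.0.0.1/24' into the plain
    address and the equivalent dotted-quad network mask."""
    iface = ipaddress.ip_interface(addr_with_prefix)
    return str(iface.ip), str(iface.netmask)

# In a service cb_create, the result could be written to an auxiliary
# operational leaf before applying the template, e.g. (hypothetical names):
#   addr, mask = mask_from_prefix(str(iface.ip_address))
#   iface.mask = mask

mask_from_prefix('10.0.0.1/24')  # returns ('10.0.0.1', '255.255.255.0')
```

Once the auxiliary leaf is populated this way, the template can read it with a plain XPath reference, just like any other input data.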
The corresponding iface-template might then be as simple as:
The archetypical use case for XML templates is service provisioning and NSO allows you to directly invoke a template for a service, without writing boilerplate code in Python or Java. You can take advantage of this feature by configuring the servicepoint attribute on the root config-template element. For example:
Adding the attribute registers this template for the given servicepoint, defined in the YANG service model. Without any additional attributes, the registration corresponds to the standard create service callback.
In a similar manner, you can register templates for each state of a nano service, using componenttype and state attributes. The section contains examples.
Services also have pre- and post-modification callbacks, further described in , which you can also implement with templates. Simply put, pre- and post-modification templates are applied before and after applying the main service template.
These pre- and post-modification templates can only be used in classic (non-nano) services when the create callback is implemented as a template. That is, they cannot be used together with create callbacks implemented in Java or Python. If you want to mix the two approaches for the same service, consider using nano services.
To define a template as pre- or post-modification, appropriately configure the cbtype attribute, along with servicepoint. The cbtype attribute supports these three values:
pre-modification
create
post-modification
The $OPERATION variable is set internally by NSO in pre- and post-modification templates to contain the service operation, i.e., create, update, or delete, that triggered the callback. The $OPERATION variable can be used together with template conditional statements (see ) to apply different parts of the template depending on the triggering operation. Note that the service data is not available in the pre- or post-modification callbacks when $OPERATION = 'delete' since the service has been deleted already in the transaction context where the template is applied.
You can request additional information when applying templates in order to understand what is going on. When applying or committing a template in the CLI, the debug pipe command enables debug information:
The debug xpath option outputs all XPath evaluations for the transaction, and is not limited to the XPath expressions inside templates.
The debug template option outputs XPath expression results from the template, under which context expressions are evaluated, what operation is used, and how it affects the configuration, for all templates that are invoked. You can narrow it down to only show debugging information for a template of interest:
Additionally, the template and xpath debugging can be combined:
For XPath evaluation, you can also inspect the XPath trace log if it is enabled (e.g. with tail -f logs/xpath.trace). XPath tracing is configured in the ncs.conf configuration file and is enabled by default for the examples.
Another option to help you get the XPath selections right is to use the NSO CLI show command with the xpath display flag to find out the correct path to an instance node. This shows the name of the key elements and also the namespace changes.
When using more complex expressions, the ncs_cmd utility can be used to experiment with and debug them from a command shell. Although it does not print results in the same form as template XPath selections, it is still of great use when debugging XPath expressions. The following example selects FastEthernet interface names on the device c0:
The following text walks through the output of the debug template command for a dns-v3 example service, found in examples.ncs/implement-a-service/dns-v3. To try it out for yourself, start the example with make demo and configure a service instance:
The XML template used in the service is simple but non-trivial:
Applying the template produces a substantial amount of output. Let's interpret it piece by piece. The output starts with:
The templating engine found the foreach in the dns-template.xml file at line 4. In this case, it is the only foreach block in the file but in general, there might be more. The {/target-device} expression is evaluated using the /dns[name='instance1'] context, resulting in the complete /dns[name='instance1']/target-device path. Note that the latter is based on the root node (not shown in the output), not the context node (which happens to be the same as the root node at the start of template evaluation).
NSO found two nodes in the leaf-list for this expression, which you can verify in the CLI:
Next comes:
The template starts with the first iteration of the loop with the c1 value. Since the node was an item in a leaf-list, the context refers to the actual value. If instead, it was a list, the context would refer to a single item in the list.
This line signifies the system “applied” line 6 in the template, selecting the c1 device for further configuration. The line also informs you that the device (the item in the /devices/device list with this name) exists.
The template then evaluates the if condition, resulting in processing of the lines 10 and 11 in the template:
The last line shows how a new value is added to the target leaf-list, that was not there (non-existing) before.
As the if statement matched, the else part does not apply and a new iteration of the loop starts, this time with the c2 value.
Now the same steps take place for the other, c2, device:
Finally, the template processing completes as there are no more nodes in the loop, and NSO outputs the new dry-run configuration:
The NSO template engine supports a number of XML processing instructions that allow for more dynamic templates:
The variable value in both set and for processing instructions are evaluated in the same way as the values within XML tags in a template (see ). So, it can be a mix of literal values and XPath expressions surrounded by {...}.
The variable value is always stored as a string, so any XPath expression will be converted to literal using the XPath string() function. Namely, if the expression results in an integer or a boolean, then the resulting value would be a string representation of the integer or boolean. If the expression results in a node set, then the value of the variable is a concatenated string of values of nodes in this node set.
It is important to keep in mind that while in some cases XPath converts the literal to another type implicitly (for example, in an expression {$x < 3} a value x='1' is converted to integer 1 implicitly), in other cases an explicit conversion is needed. For example, using the expression {$x > $y}, if x='9' and y='11', the result of the expression is true due to alphabetic order as both variables are strings. In order to compare the values as numbers, an explicit conversion of at least one argument is required: {number($x) > $y}.
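Plain Python exhibits the same pitfall, which makes it easy to experiment with outside of NSO. This is an analogy only; in templates, the comparison is performed by the XPath engine:

```python
x, y = '9', '11'

# Lexicographic comparison of strings: '9' sorts after '1',
# so the comparison is true even though 9 < 11 numerically.
assert (x > y) is True

# Explicitly converting to numbers gives the expected result,
# mirroring {number($x) > $y} in the template.
assert (int(x) > int(y)) is False
```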
This section lists a few useful functions, available in XPath expressions. The list is not exhaustive; please refer to the , , and NSO-specific extensions in Manual Pages for a full list.
Get started with the NSO CLI.
The NSO CLI (command line interface) provides a unified CLI towards the complete network. The NSO CLI is a northbound interface to the NSO representation of the network devices and network services. Do not confuse this with a cut-through CLI that reaches the devices directly. Although the network might be a mix of vendors and device interfaces with different CLI flavors, NSO provides one northbound CLI.
Starting the CLI:
Like many CLIs, there is an operational mode and a configuration mode. Show commands display different data in those modes. A show in configuration mode displays network configuration data from the NSO configuration database, the CDB. Show in operational mode shows live values from the devices and any operational data stored in the CDB. The CLI starts in operational mode. Note that different prompts are used for the modes (these can be changed in
$ python3 script.py

import ncs

with ncs.maapi.single_write_trans('admin', 'python') as t:
    t.set_elem2('Kilroy was here', '/ncs:devices/device{ce0}/description')
    t.apply()

with ncs.maapi.single_read_trans('admin', 'python') as t:
    desc = t.get_elem('/ncs:devices/device{ce0}/description')
    print("Description for device ce0 = %s" % desc)

import ncs
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        # The first transaction
        with m.start_read_trans() as t:
            address = t.get_elem('/ncs:devices/device{ce0}/address')
            print("First read: Address = %s" % address)
        # The second transaction
        with m.start_read_trans() as t:
            address = t.get_elem('/ncs:devices/device{ce1}/address')
            print("Second read: Address = %s" % address)

with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        with m.start_write_trans() as t:
            # Read/write/request ...

root = ncs.maagic.get_root(t)

node = ncs.maagic.get_node(t, '/ncs:devices/device{ce0}')

# The examples are equal unless there is a namespace collision.
# For the ncs namespace it would look like this:
root.ncs__devices.ncs__device['ce0'].ncs__address
# equals
root.devices.device['ce0'].address

# This example has three namespaces referring to a leaf, value, with the same
# name and this load order: /ex/a:value=11, /ex/b:value=22 and /ex/c:value=33
root.ex.value # returns 11
root.ex.a__value # returns 11
root.ex.b__value # returns 22
root.ex.c__value # returns 33

dev_name = root.devices.device['ce0'].name # 'ce0'
dev_address = root.devices.device['ce0'].address # '127.0.0.1'
dev_port = root.devices.device['ce0'].port # 10022

root.devices.device['ce0'].name = 'ce0'
root.devices.device['ce0'].address = '127.0.0.1'
root.devices.device['ce0'].port = 10022
root.devices.device['ce0'].port = '10022' # Also valid
# This will raise an Error exception
root.devices.device['ce0'].port = 'netconf'

del root.devices.device['ce0'] # List element
del root.devices.device['ce0'].name # Leaf
del root.devices.device['ce0'].device_type.cli # Presence container

pc = root.container.presence_container.create()

root.container.presence_container.exists() # Returns True or False
bool(root.container.presence_container) # Returns True or False

del root.container.presence_container
root.container.presence_container.delete()

ne_type = root.devices.device['ce0'].device_type.ne_type
if ne_type == 'cli':
    # Handle CLI
    pass
elif ne_type == 'netconf':
    # Handle NETCONF
    pass
elif ne_type == 'generic':
    # Handle generic
    pass
else:
    # Don't handle
    pass

root.devices.device['ce0'].device_type.netconf.create()
str(root.devices.device['ce0'].device_type.ne_type) # Returns 'netconf'

# Single value key
ce5 = root.devices.device.create('ce5')
# Multiple values key
o = root.container.list.create('foo', 'bar')

'ce0' in root.devices.device # Returns True or False

# Single value key
del root.devices.device['ce5']
# Multiple values key
del root.container.list['foo', 'bar']

# use Python's del function
del root.devices.device
# use List's delete() method
root.container.list.delete()

str(root.devices.device['ce0'].state.admin_state) # May return 'unlocked'
root.devices.device['ce0'].state.admin_state.string # May return 'unlocked'
root.devices.device['ce0'].state.admin_state.value # May return 1

root.devices.device['ce0'].state.admin_state = 'locked'
root.devices.device['ce0'].state.admin_state = 0
# This will raise an Error exception
root.devices.device['ce0'].state.admin_state = 3 # Not a valid enum

# /model/device is a leafref to /devices/device/name
dev = root.model.device # May return 'ce0'

# /model/device is a leafref to /devices/device/name
root.model.device = 'ce0'

# Read
root.devices.device['ce0'].device_type.cli.ned_id # May return 'ios-id:cisco-ios'
# Write when identity cisco-ios is unique throughout the system (not recommended)
root.devices.device['ce0'].device_type.cli.ned_id = 'cisco-ios'
# Write with unique identity
root.devices.device['ce0'].device_type.cli.ned_id = 'ios-id:cisco-ios'

# /model/iref is an instance-identifier
root.model.iref # May return "/ncs:devices/ncs:device[ncs:name='ce0']"

# /model/iref is an instance-identifier
root.model.iref = "/ncs:devices/ncs:device[ncs:name='ce0']"

# /model/ll is a leaf-list with the type string
# read a LeafList object
ll = root.model.ll
# iteration
for item in root.model.ll:
    do_stuff(item)

# check if the leaf-list exists (i.e. is non-empty)
if root.model.ll:
    do_stuff()

if root.model.ll.exists():
    do_stuff()

# check if the leaf-list contains a specific item
if 'foo' in root.model.ll:
    do_stuff()
# length
len(root.model.ll)
# create a new item in the leaf-list
root.model.ll.create('bar')
# set the whole leaf-list in one operation
root.model.ll = ['foo', 'bar', 'baz']
# remove a specific item from the list
del root.model.ll['bar']
root.model.ll.remove('baz')
# delete the whole leaf-list
del root.model.ll
root.model.ll.delete()
# get the leaf-list as a Python list
root.model.ll.as_list()

# Read
root.model.bin # May return '\x00foo\x01bar'
# Write
root.model.bin = b'\x00foo\x01bar'

# read a bits leaf - a Bits object may be returned (None if non-existent)
root.model.bits
# get a bytearray
root.model.bits.bytearray()
# get a space separated string with bit names
str(root.model.bits)

# set a bits leaf using a string of space separated bit names
root.model.bits = 'turboMode enableEncryption'
# set a bits leaf using a Python bytearray
root.model.bits = bytearray(b'\x11')
# set a bits leaf using a Python binary string
root.model.bits = b'\x11'
# read a bits leaf, update the Bits object and set it
b = root.model.bits
b.clr_bit(0)
root.model.bits = b

pc = root.container.empty_leaf.create()

root.container.empty_leaf.exists() # Returns True or False
bool(root.container.empty_leaf) # Returns True or False

del root.container.empty_leaf
root.container.empty_leaf.delete()

import ncs
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        root = ncs.maagic.get_root(m)
        output = root.devices.check_sync()
        for result in output.sync_result:
            print('sync-result {')
            print(' device %s' % result.device)
            print(' result %s' % result.result)
            print('}')

import ncs
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        with m.start_read_trans() as t:
            root = ncs.maagic.get_root(t)
            output = root.devices.check_sync()
            for result in output.sync_result:
                print('sync-result {')
                print(' device %s' % result.device)
                print(' result %s' % result.result)
                print('}')

import ncs
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        root = ncs.maagic.get_root(m)
        input = root.action.double.get_input()
        input.number = 21
        output = root.action.double(input)
        print(output.result)

import ncs
with ncs.maapi.Maapi() as m:
    with ncs.maapi.Session(m, 'admin', 'python'):
        root = ncs.maagic.get_root(m)
        input = root.leaf_list_action.llist.get_input()
        input.args = ['testing action']
        output = root.leaf_list_action.llist(input)
        print(output.result)

import argparse
import ncs

def parseArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('--name', help="device name", required=True)
    parser.add_argument('--address', help="device address", required=True)
    parser.add_argument('--port', help="device port", type=int, default=22)
    parser.add_argument('--desc', help="device description",
                        default="Device created by maagic_create_device.py")
    parser.add_argument('--auth', help="device authgroup", default="default")
    return parser.parse_args()

def main(args):
    with ncs.maapi.Maapi() as m:
        with ncs.maapi.Session(m, 'admin', 'python'):
            with m.start_write_trans() as t:
                root = ncs.maagic.get_root(t)
                print("Setting device '%s' configuration..." % args.name)

                # Get a reference to the device list
                device_list = root.devices.device
                device = device_list.create(args.name)
                device.address = args.address
                device.port = args.port
                device.description = args.desc
                device.authgroup = args.auth
                dev_type = device.device_type.cli
                dev_type.ned_id = 'cisco-ios-cli-3.0'
                device.state.admin_state = 'unlocked'
                print('Committing the device configuration...')
                t.apply()
                print("Committed")

            # This transaction is no longer valid
            #
            # fetch-host-keys and sync-from do not require a transaction;
            # continue using the Maapi object
            #
            root = ncs.maagic.get_root(m)
            device = root.devices.device[args.name]
            print("Fetching SSH keys...")
            output = device.ssh.fetch_host_keys()
            print("Result: %s" % output.result)
            print("Syncing configuration...")
            output = device.sync_from()
            print("Result: %s" % output.result)
            if not output.result:
                print("Error: %s" % output.info)

if __name__ == '__main__':
    main(parseArgs())

class PlanComponent(object):
    """Service plan component.

    The usage of this class is in conjunction with a nano service that
    uses a reactive FASTMAP pattern.

    With a plan the service states can be tracked and controlled.
    A service plan can consist of many PlanComponent's.

    This is operational data that is stored together with the service
    configuration.
    """

    def __init__(self, service, name, component_type):
        """Initialize a PlanComponent."""

    def append_state(self, state_name):
        """Append a new state to this plan component.

        The state status will be initialized to 'ncs:not-reached'.
        """

    def set_reached(self, state_name):
        """Set state status to 'ncs:reached'."""

    def set_failed(self, state_name):
        """Set state status to 'ncs:failed'."""

    def set_status(self, state_name, status):
        """Set state status."""

self_plan = PlanComponent(service, 'self', 'ncs:self')
self_plan.append_state('ncs:init')
self_plan.append_state('ncs:ready')
self_plan.set_reached('ncs:init')
route_plan = PlanComponent(service, 'router', 'myserv:router')
route_plan.append_state('ncs:init')
route_plan.append_state('myserv:syslog-initialized')
route_plan.append_state('myserv:ntp-initialized')
route_plan.append_state('myserv:dns-initialized')
route_plan.append_state('ncs:ready')
route_plan.set_reached('ncs:init')

self_plan.set_reached('ncs:ready')

$ cd packages
$ ncs-make-package --service-skeleton python pyaction \
      --component-class action.Action \
      --action-example

$ tree pyaction
pyaction/
+-- README
+-- doc/
+-- load-dir/
+-- package-meta-data.xml
+-- python/
| +-- pyaction/
| +-- __init__.py
| +-- action.py
+-- src/
| +-- Makefile
| +-- yang/
| +-- action.yang
+-- templates/

# -*- mode: python; python-indent: 4 -*-
from ncs.application import Application
from ncs.dp import Action
# ---------------
# ACTIONS EXAMPLE
# ---------------
class DoubleAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output):
        self.log.info('action name: ', name)
        self.log.info('action input.number: ', input.number)
        output.result = input.number * 2

class LeafListAction(Action):
    @Action.action
    def cb_action(self, uinfo, name, kp, input, output):
        self.log.info('action name: ', name)
        self.log.info('action input.args: ', input.args)
        output.result = [w.upper() for w in input.args]

# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS.
# ---------------------------------------------
class Action(Application):
    def setup(self):
        self.log.info('Worker RUNNING')
        self.register_action('action-action', DoubleAction)
        self.register_action('llist-action', LeafListAction)

    def teardown(self):
        self.log.info('Worker FINISHED')

admin@ncs> request action double number 21
result 42
[ok][2016-04-22 10:30:39]

$ cd packages
$ ncs-make-package --service-skeleton python pyservice \
      --component-class service.Service

$ tree pyservice
pyservice/
+-- README
+-- doc/
+-- load-dir/
+-- package-meta-data.xml
+-- python/
| +-- pyservice/
| +-- __init__.py
| +-- service.py
+-- src/
| +-- Makefile
| +-- yang/
| +-- service.yang
+-- templates/

# -*- mode: python; python-indent: 4 -*-
from ncs.application import Application
from ncs.application import Service
import ncs.template
# ------------------------
# SERVICE CALLBACK EXAMPLE
# ------------------------
class ServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')

        # Add this service logic >>>>>>>
        vars = ncs.template.Variables()
        vars.add('MAGIC', '42')
        vars.add('CE', service.device)
        vars.add('INTERFACE', service.unit)
        template = ncs.template.Template(service)
        template.apply('pyservice-template', vars)
        self.log.info('Template is applied')
        dev = root.devices.device[service.device]
        dev.description = "This device was modified by %s" % service._path
        # <<<<<<<<< service logic

    @Service.pre_modification
    def cb_pre_modification(self, tctx, op, kp, root, proplist):
        self.log.info('Service premod(service=', kp, ')')

    @Service.post_modification
    def cb_post_modification(self, tctx, op, kp, root, proplist):
        self.log.info('Service postmod(service=', kp, ')')

# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS.
# ---------------------------------------------
class Service(Application):
    def setup(self):
        self.log.info('Worker RUNNING')
        self.register_service('service-servicepoint', ServiceCallbacks)

    def teardown(self):
        self.log.info('Worker FINISHED')

<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device tags="nocreate">
      <name>{$CE}</name>
      <config tags="merge">
        <interface xmlns="urn:ios">
          <FastEthernet>
            <name>0/{$INTERFACE}</name>
            <description>The maagic: {$MAGIC}</description>
          </FastEthernet>
        </interface>
      </config>
    </device>
  </devices>
</config-template>

$ cd packages
$ ncs-make-package --service-skeleton python pyvalidation \
      --component-class validation.ValidationApplication \
      --disable-service-example --validation-example

$ tree pyaction
pyaction/
+-- README
+-- doc/
+-- load-dir/
+-- package-meta-data.xml
+-- python/
| +-- pyaction/
| +-- __init__.py
| +-- validation.py
+-- src/
| +-- Makefile
| +-- yang/
| +-- validation.yang
+-- templates/

# -*- mode: python; python-indent: 4 -*-
import ncs
from ncs.dp import ValidationError, ValidationPoint
# ---------------
# VALIDATION EXAMPLE
# ---------------
class Validation(ValidationPoint):
@ValidationPoint.validate
def cb_validate(self, tctx, keypath, value, validationpoint):
self.log.info('validate: ', str(keypath), '=', str(value))
if value == 'invalid':
raise ValidationError('invalid value')
return ncs.CONFD_OK
# ---------------------------------------------
# COMPONENT THREAD THAT WILL BE STARTED BY NCS.
# ---------------------------------------------
class ValidationApplication(ncs.application.Application):
def setup(self):
# The application class sets up logging for us. It is accessible
# through 'self.log' and is an ncs.log.Log instance.
self.log.info('ValidationApplication RUNNING')
# When using validation points, this is how we register them:
#
self.register_validation('pyvalidation-valpoint', Validation)
# If we registered any callback(s) above, the Application class
# took care of creating a daemon (related to the service/action point).
# When this setup method is finished, all registrations are
# considered done and the application is 'started'.
def teardown(self):
# When the application is finished (which would happen if NCS went
# down, packages were reloaded or some error occurred) this teardown
# method will be called.
self.log.info('ValidationApplication FINISHED')

admin@ncs% set validation validate-value invalid
admin@ncs% validate
Failed: 'validation validate-value': invalid value
[ok][2016-04-22 10:30:39]

shared_apply_template
shared_copy_tree
shared_create
shared_insert
shared_set_elem
shared_set_elem2
shared_set_values

load_config()
load_config_cmds()
load_config_stream()
apply_template()
copy_tree()
create()
insert()
set_elem()
set_elem2()
set_object()
set_values()

import socket
import _ncs
from _ncs import maapi
sock_maapi = socket.socket()
maapi.connect(sock_maapi,
ip='127.0.0.1',
port=_ncs.NCS_PORT)
maapi.load_schemas(sock_maapi)
maapi.start_user_session(
sock_maapi,
'admin',
'python',
[],
'127.0.0.1',
_ncs.PROTO_TCP)
maapi.install_crypto_keys(sock_maapi)
th = maapi.start_trans(sock_maapi, _ncs.RUNNING, _ncs.READ)
path = "/devices/authgroups/group{default}/umap{admin}/remote-password"
encrypted_password = maapi.get_elem(sock_maapi, th, path)
decrypted_password = _ncs.decrypt(str(encrypted_password))
maapi.finish_trans(sock_maapi, th)
maapi.end_user_session(sock_maapi)
sock_maapi.close()
print("Default authgroup admin password = %s" % decrypted_password)

import socket
import _ncs
from _ncs import maapi
sock_maapi = socket.socket()
maapi.connect(sock_maapi,
ip='127.0.0.1',
port=_ncs.NCS_PORT)
maapi.load_schemas(sock_maapi)
_ncs.maapi.start_user_session(
sock_maapi,
'admin',
'python',
[],
'127.0.0.1',
_ncs.PROTO_TCP)
ns_hash = _ncs.str2hash("http://tail-f.com/ns/ncs")
results = maapi.request_action(sock_maapi, [], ns_hash, "/devices/check-sync")
for result in results:
v = result.v
t = v.confd_type()
if t == _ncs.C_XMLBEGIN:
print("sync-result {")
elif t == _ncs.C_XMLEND:
print("}")
elif t == _ncs.C_BUF:
tag = result.tag
print(" %s %s" % (_ncs.hash2str(tag), str(v)))
elif t == _ncs.C_ENUM_HASH:
tag = result.tag
text = v.val2str((ns_hash, '/devices/check-sync/sync-result/result'))
print(" %s %s" % (_ncs.hash2str(tag), text))
maapi.end_user_session(sock_maapi)
sock_maapi.close()

import socket
import _ncs
from _ncs import cdb
sock_cdb = socket.socket()
cdb.connect(
sock_cdb,
type=cdb.DATA_SOCKET,
ip='127.0.0.1',
port=_ncs.NCS_PORT)
cdb.start_session2(sock_cdb, cdb.OPERATIONAL, cdb.LOCK_WAIT | cdb.LOCK_REQUEST)
path = "/operdata/value"
cdb.set_elem(sock_cdb, _ncs.Value(42, _ncs.C_UINT32), path)
new_value = cdb.get(sock_cdb, path)
cdb.end_session(sock_cdb)
sock_cdb.close()
print("/operdata/value is now %s" % new_value)

import ncs
def print_ned_ids(port):
with ncs.maapi.single_read_trans('admin', 'system', db=ncs.OPERATIONAL, port=port) as t:
dev_ned_id = ncs.maagic.get_node(t, '/devices/ned-ids/ned-id')
for id in dev_ned_id.keys():
print(id)
if __name__ == '__main__':
print('=== lsa-1 ===')
print_ned_ids(4569)
print('=== lsa-2 ===')
print_ned_ids(4570)

$ python3 read_nedids.py
=== lsa-1 ===
{ned:lsa-netconf}
{ned:netconf}
{ned:snmp}
{cisco-nso-nc-5.5:cisco-nso-nc-5.5}
=== lsa-2 ===
{ned:lsa-netconf}
{ned:netconf}
{ned:snmp}
{"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<211668964'...>]"}
{"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<151824215'>]"}
{"[<_ncs.Value type=C_IDENTITYREF(44) value='idref<208856485'...>]"}

with ncs.maapi.single_read_trans('admin', 'system', db=ncs.OPERATIONAL, port=port,
                                 load_schemas=ncs.maapi.LOAD_SCHEMAS_RELOAD) as t:

=== lsa-1 ===
{ned:lsa-netconf}
{ned:netconf}
{ned:snmp}
{cisco-nso-nc-5.5:cisco-nso-nc-5.5}
=== lsa-2 ===
{ned:lsa-netconf}
{ned:netconf}
{ned:snmp}
{cisco-asa-cli-6.13:cisco-asa-cli-6.13}
{cisco-ios-cli-6.72:cisco-ios-cli-6.72}
{router-nc-1.0:router-nc-1.0}

TypeError: cannot pickle '<object>' object

import ncs
import _ncs
from ncs.dp import Action
from multiprocessing import Process
import multiprocessing
def child(uinfo, self):
print(f"uinfo: {uinfo}, self: {self}")
class DoAction(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output, trans):
t1 = multiprocessing.Process(target=child, args=(uinfo, self))
t1.start()
class Main(ncs.application.Application):
def setup(self):
self.log.info('Main RUNNING')
self.register_action('sleep', DoAction)
def teardown(self):
self.log.info('Main FINISHED')

import ncs
import _ncs
from ncs.dp import Action
from multiprocessing import Process
import multiprocessing
def child(usid, th, action_point):
print(f"uinfo: {usid}, th: {th}, action_point: {action_point}")
class DoAction(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output, trans):
usid = uinfo.usid
th = uinfo.actx_thandle
action_point = self.actionpoint
t1 = multiprocessing.Process(target=child, args=(usid,th,action_point,))
t1.start()
class Main(ncs.application.Application):
def setup(self):
self.log.info('Main RUNNING')
self.register_action('sleep', DoAction)
def teardown(self):
self.log.info('Main FINISHED')

container dRFS {
list device {
key name;
leaf name {
type string;
}
}
}

request device-action extract-device name ex0

public Properties create(
....
NavuContainer lowerLayerNSO = ....
Maapi maapi = service.context().getMaapi();
int tHandle = service.context().getMaapiHandle();
NavuNode dstVpn = lowerLayerNSO.container("config").
container("l3vpn", "vpn").
list("l3vpn").
sharedCreate(serviceName);
ConfPath dst = dstVpn.getConfPath();
ConfPath src = service.getConfPath();
maapi.copy_tree(tHandle, true, src, dst);

module cfs-vlan {
...
list cfs-vlan {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint cfs-vlan;
leaf a-router {
type leafref {
path "/dispatch-map/router";
}
mandatory true;
}
leaf z-router {
type leafref {
path "/dispatch-map/router";
}
mandatory true;
}
leaf iface {
type string;
mandatory true;
}
leaf unit {
type int32;
mandatory true;
}
leaf vid {
type uint16;
mandatory true;
}
}
}

admin@upper-nso% show cfs-vlan
cfs-vlan v1 {
a-router ex0;
z-router ex5;
iface eth3;
unit 3;
vid 77;
}

admin@upper-nso% show dispatch-map
dispatch-map ex0 {
rfs-node lower-nso-1;
}
dispatch-map ex1 {
rfs-node lower-nso-1;
}
dispatch-map ex2 {
rfs-node lower-nso-1;
}
dispatch-map ex3 {
rfs-node lower-nso-2;
}
dispatch-map ex4 {
rfs-node lower-nso-2;
}
dispatch-map ex5 {
rfs-node lower-nso-2;
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="cfs-vlan">
<devices xmlns="http://tail-f.com/ns/ncs">
<!-- Do this for the two leafs a-router and z-router -->
<?foreach {a-router|z-router}?>
<device>
<!--
Pick up the name of the rfs-node from the dispatch-map
and do not change the current context thus the string()
-->
<name>{string(deref(current())/../rfs-node)}</name>
<config>
<vlan xmlns="http://com/example/rfsvlan">
<!-- We do not want to change the current context here either -->
<name>{string(/name)}</name>
<!-- current() is still a-router or z-router -->
<router>{current()}</router>
<iface>{/iface}</iface>
<unit>{/unit}</unit>
<vid>{/vid}</vid>
<description>Interface owned by CFS: {/name}</description>
</vlan>
</config>
</device>
<?end?>
</devices>
</config-template>

admin@upper-nso% request cfs-vlan v1 get-modifications no-lsa
cli {
local-node {
data devices {
device lower-nso-1 {
config {
+ rfs-vlan:vlan v1 {
+ router ex0;
+ iface eth3;
+ unit 3;
+ vid 77;
+ description "Interface owned by CFS: v1";
+ }
}
}
device lower-nso-2 {
config {
+ rfs-vlan:vlan v1 {
+ router ex5;
+ iface eth3;
+ unit 3;
+ vid 77;
+ description "Interface owned by CFS: v1";
+ }
}
}
}
}
}

admin@upper-nso% request cfs-vlan v1 get-modifications
cli {
local-node {
.....
}
lsa-service {
service-id /devices/device[name='lower-nso-1']/config/rfs-vlan:vlan[name='v1']
data devices {
device ex0 {
config {
r:sys {
interfaces {
+ interface eth3 {
+ enabled;
+ unit 3 {
+ enabled;
+ description "Interface owned by CFS: v1";
+ vlan-id 77;
+ }
+ }
}
}
}
}
}
}
lsa-service {
service-id /devices/device[name='lower-nso-2']/config/rfs-vlan:vlan[name='v1']
data devices {
device ex5 {
config {
r:sys {
interfaces {
+ interface eth3 {
+ enabled;
+ unit 3 {
+ enabled;
+ description "Interface owned by CFS: v1";
+ vlan-id 77;
+ }
+ }
}
}
}
}
}
}

admin@upper-nso% set cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 78
[ok][2016-10-20 16:52:45]
[edit]
admin@upper-nso% commit dry-run outformat native
native {
device {
name lower-nso-1
data <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
message-id="1">
<edit-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<target>
<running/>
</target>
<test-option>test-then-set</test-option>
<error-option>rollback-on-error</error-option>
<with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
<config>
<vlan xmlns="http://com/example/rfsvlan">
<name>v1</name>
<vid>78</vid>
<private>
<re-deploy-counter>-1</re-deploy-counter>
</private>
</vlan>
</config>
</edit-config>
</rpc>
}
...........
....

module rfs-vlan {
...
list vlan {
key name;
leaf name {
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint "rfs-vlan";
leaf router {
type string;
}
leaf iface {
type string;
mandatory true;
}
leaf unit {
type int32;
mandatory true;
}
leaf vid {
type uint16;
mandatory true;
}
leaf description {
type string;
mandatory true;
}
}
}

admin@lower-nso-1> show configuration vlan
vlan v1 {
router ex0;
iface eth3;
unit 3;
vid 77;
description "Interface owned by CFS: v1";
}
[ok][2016-10-20 17:01:08]
admin@lower-nso-1> request vlan v1 get-modifications
cli {
local-node {
data devices {
device ex0 {
config {
r:sys {
interfaces {
+ interface eth3 {
+ enabled;
+ unit 3 {
+ enabled;
+ description "Interface owned by CFS: v1";
+ vlan-id 77;
+ }
+ }
}
}
}
}
}
}
request move-device move src-nso lower-1 dest-nso lower-2 device-name ex0

list l3vpn {
description "Layer3 VPN";
key name;
leaf name {
type string;
}
leaf route-distinguisher {
description "Route distinguisher/target identifier unique for the VPN";
mandatory true;
type uint32;
}
list endpoint {
key "id";
leaf id {
type string;
}
leaf ce-device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf ce-interface {
mandatory true;
type string;
}
....
leaf as-number {
tailf:info "CE Router as-number";
type uint32;
}
}
container qos {
leaf qos-policy {
......

list l3vpn {
description "Layer3 VPN";
key name;
leaf name {
type string;
}
leaf route-distinguisher {
description "Route distinguisher/target identifier unique for the VPN";
mandatory true;
type uint32;
}
list endpoint {
key "id";
leaf id {
type string;
}
leaf ce-device {
mandatory true;
type string;
}
.......

module myserv {
namespace "http://example.com/myserv";
prefix ms;
.....
list srv {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint vlanspnt;
leaf router {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
.....
}
}

module myserv {
namespace "http://example.com/myserv";
prefix ms;
.....
list srv {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint vlanspnt;
leaf router {
type string;
.....
}
}

module myserv-rfs {
namespace "http://example.com/myserv-rfs";
prefix ms-rfs;
.....
list srv {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint vlanspnt;
leaf router {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
.....
}
}

$ ncs-make-package --lsa-netconf-ned /path/to-rfs-yang myserv-rfs-ned

<netconf-north-bound>
<enabled>true</enabled>
<transport>
<ssh>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>2022</port>
</ssh>
</transport>
</netconf-north-bound>

admin@upper-nso% show devices device | display-level 4
device lower-nso-1 {
lsa-remote-node lower-nso-1;
authgroup default;
device-type {
netconf {
ned-id lsa-netconf;
}
}
state {
admin-state unlocked;
}
}

admin@upper-nso% show cluster remote-node
remote-node lower-nso-1 {
address 127.0.2.1;
authgroup default;
}admin@upper-nso% request devices device lower-nso-* ssh fetch-host-keys
admin@upper-nso% request cluster remote-node lower-nso-* ssh fetch-host-keys

admin@upper-nso% set devices device lower-nso-* out-of-sync-commit-behaviour accept

admin@upper-nso% show devices device | display-level 4
device lower-nso-1 {
lsa-remote-node lower-nso-1;
authgroup default;
device-type {
netconf {
ned-id cisco-nso-nc-5.7;
}
}
state {
admin-state unlocked;
}
}

ncs-make-package --no-netsim --no-java --no-python \
--lsa-netconf-ned ./path/to/rfs/src/yang \
myrfs-service-ned

ncs-make-package --no-netsim --no-java --no-python \
--lsa-netconf-ned ./path/to/rfs/src/yang \
--lsa-lower-nso cisco-nso-nc-5.7 \
myrfs-service-ned

$ make clean manual
$ make start-manual
$ make cli-upper-nso

> configure
% set cluster device-notifications enabled
% set cluster remote-node lower-nso-1 authgroup default username admin
% set cluster remote-node lower-nso-1 address 127.0.0.1 port 2023
% set cluster remote-node lower-nso-2 authgroup default username admin
% set cluster remote-node lower-nso-2 address 127.0.0.1 port 2024
% set cluster commit-queue enabled
% commit
% request cluster remote-node lower-nso-* ssh fetch-host-keys

$ ln -sf ${NCS_DIR}/packages/lsa/cisco-nso-nc-5.4 upper-nso/packages

% exit
> request packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package cisco-nso-nc-5.4
result true
}

> configure
Entering configuration mode private
% set devices device lower-nso-1 device-type netconf ned-id cisco-nso-nc-5.4
% set devices device lower-nso-1 authgroup default
% set devices device lower-nso-1 lsa-remote-node lower-nso-1
% set devices device lower-nso-1 state admin-state unlocked
% set devices device lower-nso-2 device-type netconf ned-id cisco-nso-nc-5.4
% set devices device lower-nso-2 authgroup default
% set devices device lower-nso-2 lsa-remote-node lower-nso-2
% set devices device lower-nso-2 state admin-state unlocked
% commit
Commit complete.
% request devices fetch-ssh-host-keys
fetch-result {
device lower-nso-1
result updated
fingerprint {
algorithm ssh-ed25519
value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
}
}
fetch-result {
device lower-nso-2
result updated
fingerprint {
algorithm ssh-ed25519
value 4a:c6:5d:91:6d:4a:69:7a:4e:0d:dc:4e:51:51:ee:e2
}
}
% request devices sync-from
sync-result {
device lower-nso-1
result true
}
sync-result {
device lower-nso-2
result true
}

% show devices device config devices device | display xpath | display-level 5
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex0']
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex1']
/devices/device[name='lower-nso-1']/config/ncs:devices/device[name='ex2']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex3']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex4']
/devices/device[name='lower-nso-2']/config/ncs:devices/device[name='ex5']

% run show devices device lower-nso-1 live-status alarms summary
live-status alarms summary indeterminates 0
live-status alarms summary criticals 0
live-status alarms summary majors 0
live-status alarms summary minors 0
live-status alarms summary warnings 0

$ ncs-make-package --no-netsim --no-java --no-python \
--lsa-netconf-ned package-store/rfs-vlan/src/yang \
--lsa-lower-nso cisco-nso-nc-5.4 \
--package-version 5.4 --dest upper-nso/packages/rfs-vlan-nc-5.4 \
--build rfs-vlan-nc-5.4

$ ln -sf ../../package-store/cfs-vlan upper-nso/packages

% exit
> request packages reload
>>> System upgrade is starting.
>>> Sessions in configure mode must exit to operational mode.
>>> No configuration changes can be performed until upgrade has completed.
>>> System upgrade has completed successfully.
reload-result {
package cfs-vlan
result true
}
reload-result {
package cisco-nso-nc-5.4
result true
}
reload-result {
package rfs-vlan-nc-5.4
result true
}
> configure
Entering configuration mode private

% set cfs-vlan v1 a-router ex0 z-router ex5 iface eth3 unit 3 vid 77
% commit dry-run
.....
local-node {
data devices {
device lower-nso-1 {
config {
services {
+ vlan v1 {
+ router ex0;
+ iface eth3;
+ unit 3;
+ vid 77;
+ description "Interface owned by CFS: v1";
+ }
}
}
}
device lower-nso-2 {
config {
services {
+ vlan v1 {
+ router ex5;
+ iface eth3;
+ unit 3;
+ vid 77;
+ description "Interface owned by CFS: v1";
+ }
}
}
}
}
.....









devices {
device ex0 {
address 127.0.0.1;
port 12022;
ssh {
...
/* Refcount: 1 */
/* Backpointer: [ /drfs:dRFS/drfs:device[drfs:name='ex0']/rfs-vlan:vlan[rfs-vlan:name='v1'] ] */
interface eth3 {
...
}
...
}
}
dRFS {
device ex0 {
vlan v1 {
private {
...
}
}
}
request device-action install-device name ex0 config <cfg>

request device-action delete-device name ex0

dispatch-map ex0 {
rfs-node lower-nso-2;
}

$OPERATION: Defined if the template is registered for a servicepoint with the cbtype attribute set to pre-/post-modification (see Service Callpoints and Templates). Contains the requested service operation: create, update, or delete.
nocreate: Merge with a node if it exists. If it does not exist, it will not be created.
delete: Delete the node.
If insert="after", then the constraint is fulfilled if the element exists in the configuration before the element indicated by the guard, but after the element indicated by the value attribute.
If insert="before", then the constraint is fulfilled if the element exists in the configuration after the element indicated by the guard, but before the element indicated by the value attribute.
Here, GbEth is the name of the macro. This macro takes three parameters, name, ip, and mask. The parameters name and mask have default values, and ip does not.
The default value for mask is a fixed string, while the one for name by default gets its value through an XPath expression.
A macro can be expanded in another location in the template using the <?expand?> processing instruction. As shown in the example (line 29), the <?expand?> instruction takes the name of the macro to expand, and an optional list of parameters and their values.
The parameters in the macro definition are replaced with the values given during expansion. If a parameter is not given any value during expansion, the default value is used. If there is no default value in the definition, not supplying a value causes an error.
Macro definitions cannot be nested - that is, a macro definition cannot contain another macro definition. But a macro definition can have <?expand?> instructions to expand another macro within this macro (line 17 in the example).
The macro expansion and the parameter replacement work on just strings - there is no schema validation or XPath evaluation at this stage. A macro expansion just inserts the macro definition at the expansion site.
Macros can be defined in multiple files, and macros defined in the same package are visible to all templates in that package. This means that a template file could have just the definitions of macros, and another file in the same package could use those macros.
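As an illustration, a macro definition and its expansion could be sketched as follows (the GbEth macro and its parameters follow the description above; the device and interface structure is hypothetical):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <!-- name defaults to an XPath expression, mask to a fixed string;
       ip has no default and must be supplied at expansion -->
  <?macro GbEth name={string(/name)} ip mask='255.255.255.0'?>
  <GigabitEthernet xmlns="urn:ios">
    <name>$name</name>
    <description>ip $ip mask $mask</description>
  </GigabitEthernet>
  <?endmacro?>
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device}</name>
      <config>
        <interface xmlns="urn:ios">
          <!-- mask is omitted, so its default value is used -->
          <?expand GbEth name='0/1' ip='10.1.1.1'?>
        </interface>
      </config>
    </device>
  </devices>
</config-template>
```

Because expansion is pure string insertion, the expanded text is only schema-validated after the whole template has been assembled.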
Allows you to manipulate the current context node used to evaluate XPath expressions in the template. The expression is evaluated within the current XPath context and must evaluate to exactly one node in the data tree.
Store both the current context node and the root node of the XPath accessible tree with name being the key to access it later. It is possible to switch to this context later using switch-context with the name. Multiple contexts can be stored simultaneously under different names. Using save-context with the same name multiple times will result in the stored context being overwritten.
Used to switch to a context stored using save-context with the specified name. This means that both the current context node and the root node of the XPath accessible tree will be changed to the stored values. switch-context does not remove the context from the storage and can be used as many times as needed, however using it with a name that does not exist in the storage causes an error.
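A small sketch of how these three instructions combine, reusing the a-router leafref from the cfs-vlan model above (the exact paths are illustrative):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <!-- make the dispatch-map entry of the a-router the current context -->
  <?set-context-node {deref(/a-router)/..}?>
  <!-- remember this context under the name a-side -->
  <?save-context a-side?>
  <!-- ... instructions here may move the current context elsewhere ... -->
  <!-- restore both the saved context node and the saved XPath root -->
  <?switch-context a-side?>
</config-template>
```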
If there are multiple versions of the same NED expected to be loaded in the system, which define different versions of the same namespace, this processing instruction helps to resolve ambiguities in the schema between different versions of the NED. The part of the template following this processing instruction, up to matching elif-ned-id, else or end processing instruction is only applied to devices with the ned-id matching one of the ned-ids specified as a parameter to this processing instruction. If there are no ambiguities to resolve, then this processing instruction is not required. The ned-ids must contain one or more qualified NED ID identities separated by spaces.
The elif-ned-id is optional and used to define a part of the template that applies to devices with another set of ned-ids than previously specified. Multiple elif-ned-id instructions are allowed in a single block of if-ned-id instructions. The set of ned-ids specified as a parameter to elif-ned-id instruction must be non-intersecting with the previously specified ned-ids in this block.
The if-ned-id-match and elif-ned-id-match processing instructions work similarly to if-ned-id and elif-ned-id but they accept a regular expression as an argument instead of a list of ned-ids. The regular expression is matched against all of the ned-ids supported by the package. If the if-ned-id-match processing instruction is nested inside of another if-ned-id-match or if-ned-id processing instruction, then the regular expression will only be matched against the subset of ned-ids matched by the encompassing processing instruction. The if-ned-id-match and elif-ned-id-match processing instructions are only allowed inside a device's mounted configuration subtree rooted at /devices/device/config.
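Sketched below is a minimal if-ned-id block, using the router NED IDs seen in the output earlier in this section (the config payloads are placeholders):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device}</name>
      <config>
        <?if-ned-id router-nc-1.0:router-nc-1.0?>
          <!-- applied only to devices using the 1.0 NED -->
        <?elif-ned-id router-nc-1.1:router-nc-1.1?>
          <!-- applied only to devices using the 1.1 NED -->
        <?else?>
          <!-- devices with any other ned-id -->
        <?end?>
      </config>
    </device>
  </devices>
</config-template>
```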
Define a new macro with the specified name and optional parameters. Macro definitions must come at the top of the template, right after the config-template tag. For a detailed description, see the description of macros above.
Insert and expand the named macro, using the specified values for the parameters. For a detailed description, see the description of macros above.
sort-by() in Manual Pages
min() in Manual Pages
Allows you to assign a new variable or manipulate the existing value of a variable v. If used to create a new variable, the scope of visibility of this variable is limited to the parent tag of the processing instruction or the current processing instruction block. Specifically, if a new variable is defined inside a loop, then it is discarded at the end of each iteration.
Processing instruction block that allows conditional execution based on the boolean result of the expression. For a detailed description, see Conditional Statements.
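For instance, a variable assigned with set and tested in a conditional block could look like this (leaf names are hypothetical):

```xml
<?set DESC='no VLAN'?>
<?if {/vid > 0}?>
  <?set DESC='VLAN interface'?>
<?end?>
<description>{$DESC}</description>
```

Note that DESC set inside the if block survives to the description tag because the variable already existed in the enclosing scope.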
The expression must evaluate to a (possibly empty) XPath node-set. The template engine will then iterate over each node in the node set by changing the XPath current context node to this node and evaluating all children tags within this context. For the detailed description see Loop Statements.
This processing instruction allows you to iterate over the same set of template tags by changing a variable value. The variable visibility scope obeys the same rules as the set processing instruction, except the variable value, is carried over to the next iteration instead of being discarded at the end of each iteration.
Only the condition expression is mandatory; either or both of the initial and next-value assignments can be omitted.
For a detailed description see Loop Statements.
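Both loop forms can be sketched as follows (the endpoint and ce-device names are borrowed from the l3vpn model shown earlier; the unit list is hypothetical):

```xml
<!-- foreach: the current context node becomes each node
     of the node-set in turn -->
<?foreach {/endpoint}?>
  <router>{ce-device}</router>
<?end?>

<!-- for: a variable is initialized, tested, and updated -->
<?for i=0; {$i < 3}; i={$i + 1}?>
  <unit><name>{$i}</name></unit>
<?end?>

<!-- the initial and next-value assignments may be omitted,
     e.g. when the variable is updated inside the body -->
<?for ; {$count > 0}; ?>
  <!-- ... body that decrements $count ... -->
<?end?>
```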
This instruction is analogous to the copy_tree() function available in the MAAPI API. The parameter is an XPath expression that must evaluate to exactly one node in the data tree and indicates the source path to copy from. The target path is defined by the position of the copy-tree instruction in the template within the current context.
Allows you to manipulate the root node of the XPath accessible tree. The expression is evaluated in an XPath context where the accessible tree is the entire datastore, which means that it is possible to select a root node outside the currently accessible tree. The current context node remains unchanged. The expression must evaluate to exactly one node in the data tree.
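A rough sketch combining the two instructions; the src-device and dst-device leaves are hypothetical service inputs:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <!-- widen the XPath accessible tree to the entire datastore -->
  <?set-root-node {/}?>
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/dst-device}</name>
      <!-- copy the source device's config subtree to this position -->
      <?copy-tree {/ncs:devices/device[name=current()/../src-device]/config}?>
    </device>
  </devices>
</config-template>
```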
NSO organizes all managed devices as a list of devices. The path to a specific device is devices device DEVICE-NAME. The CLI sequence below does the following:
Show operational data for all devices: fetches operational data from the network devices like interface statistics, and also operational data that is maintained by NSO like alarm counters.
Move to configuration mode. Show configuration data for all devices: In this example, this is done before the configuration from the real devices has been loaded in the network to NSO. At this point, only the NSO-configured data like IP Address, port, etc. are shown.
Show device operational data and configuration data:
It can be annoying to move between modes to display configuration data and operational data. The CLI has ways around this.
Show config data in operational mode and vice versa:
Look at the device configuration above, no configuration relates to the actual configuration on the devices. To boot-strap NSO and discover the device configuration, it is possible to perform an action to synchronize NSO from the devices, devices sync-from. This reads the configuration over available device interfaces and populates the NSO data store with the corresponding configuration. The device-specific configuration is populated below the device's entry in the configuration tree and can be listed specifically.
Perform the action to synchronize from devices:
Display the device configuration after the synchronization:
NSO provides a network CLI in two different styles (selectable by the user): J-style and C-style. The CLI is automatically rendered using the data models described by the YANG files. There are three distinctly different types of YANG files, the built-in NSO models describing the device manager and the service manager, models imported from the managed devices, and finally service models. Regardless of model type, the NSO CLI seamlessly handles all models as a whole.
This creates an auto-generated CLI, without any extra effort, except the design of our YANG files. The auto-generated CLI supports the following features:
Unified CLI across the complete network, devices, and network services.
Command line history and command line editor.
Tab completion for the content of the configuration database.
Monitoring and inspecting log files.
Inspecting the system configuration and system state.
Copying and comparing different configurations, for example, between two interfaces or two devices.
Configuring common settings across a range of devices.
The CLI contains commands for manipulating the network configuration.
An alias provides a shortcut for a complex command.
Alias expansion is performed when a command line is entered. Aliases are part of the configuration and are manipulated accordingly. This is done by manipulating the nodes in the alias configuration tree.
Actions in the YANG files are mapped into actual commands. In J-style CLI actions are mapped to the request commands.
Even though the auto-generated CLI is fully functional it can be customized and extended in numerous ways:
Built-in commands can be moved, hidden, deleted, reordered, and extended.
Confirmation prompts can be added to built-in commands.
New commands can be implemented using the Java API, ordinary executables, and shell scripts.
New commands can be mounted freely in the existing command hierarchy.
The built-in tab completion mechanism can be overridden using user-defined callbacks.
New command hierarchies can be created.
A command timeout can be added, both a global timeout for all commands and command-specific timeouts.
Actions and parts of the configuration tree can be hidden and can later be made visible when the user enters a password.
How to customize and extend the auto-generated CLI is described in Plug-and-play Scripting.
The CLI is entirely data model-driven. The YANG model(s) defines a hierarchy of configuration elements. The CLI follows this tree. The NSO CLI provides various commands for configuring and monitoring software, hardware, and network connectivity of managed devices.
The CLI supports two modes:
Operational mode: For monitoring the state of the NSO node.
Configure mode: For changing the state of the network.
The prompt indicates which mode the CLI is in. When moving from operational mode to configure mode using the configure command, the prompt is changed from host# to host(config)#. The prompts can be configured using the c-prompt1 and c-prompt2 settings in the ncs.conf file.
For example:
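A sketch of the corresponding ncs.conf fragment, assuming the default prompt codes where \h expands to the hostname and \m to the current configuration mode:

```xml
<cli>
  <c-prompt1>\h# </c-prompt1>
  <c-prompt2>\h(\m)# </c-prompt2>
</cli>
```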
The operational mode is the initial mode after successful login to the CLI. It is primarily used for viewing the system status, controlling the CLI environment, monitoring and troubleshooting network connectivity, and initiating the configure mode.
A list of base commands available in the operational mode is listed below in the Operational Mode Commands section. Additional commands are rendered from the loaded YANG files.
The configure mode can be initiated by entering the configure command in operational mode. All changes to the network configuration are done to a copy of the active configuration. These changes do not take effect until a successful commit or commit confirm command is entered.
A list of base commands available in configure mode is listed below in the Configure Mode Commands section. Additional commands are rendered from the loaded YANG files.
The CLI is started using the ncs_cli program. It can be used as a login program (replacing the shell for a user), started manually once the user has logged in, or used in scripts for performing CLI operations.
In some NSO installations, ordinary users would have the ncs_cli program as a login shell, and the root user would have to log in and then start the CLI using ncs_cli, whereas in others, the ncs_cli can be invoked freely as a normal shell command.
The ncs_cli program supports a range of options, primarily intended for debugging and development purposes (see description below).
The ncs_cli program can also be used for batch processing of CLI commands, either by storing the commands in a file and running ncs_cli on the file, or by having the following line at the top of the file (with the location of the program modified appropriately):
When the CLI is run non-interactively it will terminate at the first error and will only show the output of the commands executed. It will not output the prompt or echo the commands. This is the same behavior as for shell scripts.
To run a script non-interactively, such as a script or through a pipe, and still produce prompts and echo commands, use the --interactive option.
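A minimal batch file might look like this; the installation path in the first line is an assumption and must match where ncs_cli is actually installed:

```
#!/usr/bin/ncs_cli
request devices check-sync
exit
```

Making the file executable and running it directly causes ncs_cli to execute the listed commands and print only their output.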
-h, --help
Display help text.
-H, --host HostName
Gives the name of the current host. The ncs_cli program will use the value of the system call gethostname() by default. The hostname is used in the CLI prompt.
-A, --address Address
CLI address to connect to. The default is 127.0.0.1. This can be controlled by either this flag or the UNIX environment variable NCS_IPC_ADDR. The -A flag takes precedence.
-P, --port PortNumber
CLI port to connect to. The default is the NSO IPC port, 4569. This can be controlled by either this flag or the UNIX environment variable NCS_IPC_PORT. The -P flag takes precedence.
-c, --cwd Directory
The current working directory (CWD) for the user once in the CLI. All file references from the CLI will be relative to the CWD. By default, the value will be the actual CWD where ncs_cli is invoked.
-p, --proto ssh | tcp | console
The protocol the user is using to connect. This value is used in the audit logs. Defaults to ssh if SSH_CONNECTION environment variable is set; console otherwise.
For clispec(5) and confd_lib_maapi(3) refer to Manual Pages.
The CLI comes in two flavors: C-Style (Cisco XR style) and the J-style. It is possible to choose one specifically or switch between them.
Starting the CLI (C-style, Cisco XR style):
Starting the CLI (J-style):
It is possible to interactively switch between these styles while inside the CLI using the builtin switch command:
C-style is mainly used throughout the documentation for examples etc., except when otherwise stated.
If the number of ongoing sessions has reached the configured system limit, no more CLI sessions will be allowed until one of the existing sessions has been terminated.
This makes it impossible to get into the system — a situation that may not be acceptable. The CLI therefore has a mechanism for handling this problem. When the CLI detects that the session limit has been reached, it will check if the new user has the privileges to execute the logout command. If the user does, it will display a list of the current user sessions in NSO and ask the user if one of the sessions should be terminated to make room for the new session.
Once NSO has been synchronized with the devices' configuration, which is done using the devices sync-from command, it is possible to modify the devices. The CLI is used to modify the NSO representation of the device configuration, which is then committed as a transaction to the network.
As an example, to change the speed setting on the interface GigabitEthernet0/1 across several devices:
Note the availability of commit flags.
Any failure on any device will make the whole transaction fail. It is also possible to perform a manual rollback; a rollback is the undoing of a commit.
Rollback files are operational data, and since the CLI is here in configuration mode, the usual way of showing operational data from config mode is used.
The command show configuration rollback changes can be used to view rollback changes in more detail. It will show what will be done when the rollback file is loaded, similar to loading the rollback and using show configuration:
The command show configuration commit changes can be used to see which changes were done in a given commit, i.e. the roll-forward commands performed in that commit:
The command rollback-files apply-rollback-file can be used to perform the rollback:
And now commit the rollback:
When the command rollback-files apply-rollback-file fixed-number 10019 is run the changes recorded in rollback 10019-N (where N is the highest, thus the most recent rollback number) will all be undone. In other words, the configuration will be rolled back to the state it was in before the commit associated with rollback 10019 was performed.
It is also possible to undo individual changes by running the command rollback-files apply-rollback-file selective. E.g., to undo the changes recorded in rollback 10019, but not the changes in 10020-N, run the command rollback-files apply-rollback-file selective fixed-number 10019.
This operation may fail if the commits following rollback 10019 depend on the changes made in rollback 10019.
It is possible to process the output from a command using an output redirect. This is done using the | character (a pipe character):
The precise list of pipe commands depends on the command executed. Some pipe commands, like select and de-select, are only available for the show command, whereas others are universally available.
This redirect target counts the number of lines in the output. For example:
The include target is used to include only lines matching a regular expression:
In the example above, only lines containing aaa are shown. Similarly, lines matching a regular expression can be filtered out. This is done using the exclude target:
It is possible to display the context for a match using the pipe command include -c. Matching lines will be prefixed by <line no>: and context lines with <line no>-. For example:
It is possible to display the context for a match using the pipe command context-match:
It is possible to display the output starting at the first match of a regular expression. This is done using the begin pipe command:
The output can also be saved to a file using the save or append redirect target:
Or to save the configuration, except all passwords:
The regular expressions are a subset of the regular expressions found in egrep and in the AWK programming language. Some common operators are:
.
Matches any character.
^
Matches the beginning of a string.
$
Matches the end of a string.
[abc...]
Character class, which matches any of the characters abc... Character ranges are specified by a pair of characters separated by a -.
[^abc...]
Negated character class, which matches any character except abc... .
r1 | r2
Alternation. It matches either r1 or r2.
For example, to only display uid and gid do the following:
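The CLI's own pipe output is not reproduced here, but the same regular-expression subset can be illustrated with grep -E in a shell. The sample lines below are hypothetical and stand in for CLI output:

```shell
# Hypothetical output lines to filter (not real CLI output).
cat > /tmp/cli-output.txt <<'EOF'
uid 1000
gid 100
home /var/tmp
EOF
# Alternation r1|r2 matches lines containing either pattern,
# mirroring a CLI pipe such as:  ... | include "uid|gid"
grep -E 'uid|gid' /tmp/cli-output.txt
```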
There are several options for displaying the configuration and stats data in NSO. The most basic command consists of displaying a leaf or a subtree of the configuration by giving the path to the element.
To display the configuration of a device do:
This can also be done for a group of devices by substituting the instance name (ce0 in this case) with Regular Expressions.
To display the config of all devices:
It is possible to limit the output even further. View only the HTTP settings on each device:
There is an alternative syntax for this using the select pipe command:
The select pipe command can be used multiple times for adding additional content:
There is also a de-select pipe command that can be used to instruct the CLI to not display certain parts of the config. The above printout could also be achieved by first selecting the ip container, and then de-selecting the source-route leaf:
A use-case for the de-select pipe command is to de-select the config container to only display the device settings without actually displaying their config:
The above statements also work for the save command. To save the devices managed by NSO, but not the contents of their config container:
It is possible to use the select command to select which list instances to display. To display all devices that have the interface GigabitEthernet 0/0/0/4:
This means to display all device instances that have the interface GigabitEthernet 0/0/0/4. Only the subtree defined by the select path will be displayed. It is also possible to display the entire content of the config container for each instance by using an additional select statement:
The match-all pipe command is used for telling the CLI to only display instances that match all select commands. The default behavior is match-any which means to display instances that match any of the given select commands.
The display command is used to format configuration and statistics data. There are several output formats available, and some of these are unique to specific modes, such as configuration or operational mode. The output formats json, keypath, xml, and xpath are available in most modes and CLI styles (J, I, and C). The output formats netconf and maagic are only available if devtools has been set to true in the CLI session settings.
For instance, assuming we have a data model featuring a set of hosts, each containing a set of servers, we can display the configuration data as JSON. This is depicted in the example below.
Still working with the same data model as used in the example above, we might want to see the current configuration in keypath format.
The following example shows how to do that and shows the resulting output:
Range expressions can be used to modify a range of instances at the same time, or to display a specific range of instances.
Basic range expressions are written with a combination of x..y (meaning from x to y), x,y (meaning x and y), and * (meaning any value). For example:
It is possible to use range expressions for all key elements of integer type, both for setting values, executing actions, and displaying status and config.
Range expressions are also supported for key elements of non-integer types as long as they are restricted to the pattern [a-zA-Z-]*[0-9]+/[0-9]+/[0-9]+/.../[0-9]+ and the annotation tailf:cli-allow-range is used on the key leaf. This is the case for the device list.
The following can be done in the CLI to display a subset of the devices (ce0, ce1, ce3):
If the devices have names with slashes, for example, Firewall/1/1, Firewall/1/2, Firewall/1/3, Firewall/2/1, Firewall/2/2, and Firewall/2/3, expressions like this are possible:
In configure mode, it is possible to edit a range of instances in one command:
Or, like this:
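As an aside, the set of device names that a slashed range expression along the lines of Firewall/1..2/1,3 would cover can be enumerated in plain shell for illustration (this is ordinary shell, not NSO CLI syntax, and the expression itself is hypothetical):

```shell
# Enumerate the names covered by a hypothetical range expression
# "Firewall/1..2/1,3": units 1 through 2, ports 1 and 3.
for unit in 1 2; do
  for port in 1 3; do
    echo "Firewall/$unit/$port"
  done
done
```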
Command history is maintained separately for each mode. When entering configure mode from operational mode for the first time, an empty history is used; it is not possible to access the command history of operational mode from configure mode, and vice versa. When exiting back into operational mode, the command history from the preceding operational mode session is restored. Likewise, the old command history from the previous configure mode session is restored when re-entering configure mode.
The default keystrokes for editing the command line and moving around the command history are as follows.
Move the cursor back by one character: Ctrl-b or Left Arrow.
Move the cursor back by one word: Esc-b or Alt-b.
Move the cursor forward one character: Ctrl-f or Right Arrow.
Move the cursor forward one word: Esc-f or Alt-f.
Move the cursor to the beginning of the command line: Ctrl-a or Home.
Move the cursor to the end of the command line: Ctrl-e or End.
Delete the character before the cursor: Ctrl-h, Delete, or Backspace.
Delete the character following the cursor: Ctrl-d.
Delete all characters from the cursor to the end of the line: Ctrl-k.
Delete the whole line: Ctrl-u or Ctrl-x.
Delete the word before the cursor: Ctrl-w, Esc-Backspace, or Alt-Backspace.
Delete the word after the cursor: Esc-d or Alt-d.
Insert the most recently deleted text at the cursor: Ctrl-y.
Scroll backward through the command history: Ctrl-p or Up Arrow.
Scroll forward through the command history: Ctrl-n or Down Arrow.
Search the command history in reverse order: Ctrl-r.
Show a list of previous commands: run the show cli history command.
Capitalize the word at the cursor, i.e. make the first character uppercase and the rest of the word lowercase: Esc-c.
Change the word at the cursor to lowercase: Esc-l.
Change the word at the cursor to uppercase: Esc-u.
Abort a command/Clear line: Ctrl-c.
Quote the next character, i.e. do not treat the next keystroke as an edit command: Ctrl-v or Esc-q.
Redraw the screen: Ctrl-l.
Transpose characters: Ctrl-t.
Enter multi-line mode. Enables entering multi-line values when prompted for a value in the CLI: ESC-m.
Exit configuration mode: Ctrl-z.
It is not necessary to type the full command or option name for the CLI to recognize it. To display possible completions, type the partial command followed immediately by <tab> or <space>.
If the partially typed command uniquely identifies a command, the full command name will appear. Otherwise, a list of possible completions is displayed.
Long lines can be broken into multiple lines using the backslash (\) character at the end of the line. This is primarily useful inside scripts.
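The same continuation convention is used in ordinary shells, which makes for an easy illustration (ordinary shell shown here, but the CLI behaves the same way):

```shell
# The trailing backslash joins the two physical lines into a single
# logical line before the command runs.
echo devices device ce0 \
     config
```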
Completion is disabled inside quotes. To type an argument containing spaces, either escape them with a \ (e.g. file show foo\ bar) or quote the argument with " (e.g. file show "foo bar"). Space completion is disabled when entering a filename.
Command completion also applies to filenames and directories:
All characters following a !, up to the next new line, are ignored. This makes it possible to have comments in a file containing CLI commands, and still be able to paste the file into the command-line interface. For example:
To enter the comment character as an argument, it has to be prefixed with a backslash (\) or used inside quotes (").
The /* ... */ comment style is also supported.
When working with large configurations, it may make sense to associate comments (annotations) and tags with the different parts, and then filter the configuration with respect to those annotations or tags; for example, tagging parts of the configuration that relate to a certain department or customer.
NSO has support for both tags and annotations. There is a specific set of commands available in the CLI for annotating and tagging parts of the configuration. There is also a set of pipe commands for controlling whether the tags and annotations should be displayed and for filtering depending on annotation and tag content.
The commands are:
annotate <statement> <text>
tag add <statement> <tag>
tag clear <statement> <tag>
tag del <statement> <tag>
Example:
To view the placement of tags and annotations in the configuration it is recommended to use the pipe command display curly-braces. The annotations and tags will be displayed as comments where the tags are prefixed by Tags:. For example:
It is possible to hide the tags and annotations when viewing the configuration or to explicitly include them in the listing. This is done using the display annotations/tags and hide annotations/tags pipe commands. To hide all attributes (annotations, tags, and FASTMAP attributes) use the hide attributes pipe command.
Annotations and tags are part of the configuration. When adding, removing, or modifying an annotation or a tag, the configuration needs to be committed similar to any other change to the configuration.
Messages appear when entering and exiting configure mode, when committing a configuration, and when typing a command or value that is not valid:
When committing a configuration, the CLI first validates the configuration, and if there is a problem it will indicate what the problem is.
If an identifier is missing or a value is out of range, a message will indicate where the errors are:
Parts of the CLI behavior can be controlled from the ncs.conf file. See the ncs.conf(5) manual page in Manual Pages for a comprehensive description of all the options.
There are a number of session variables in the CLI. They are only used during the session and are not persistent. Their values are inspected using show cli in operational mode and set using set in operational mode. Their initial values are derived, in order, from the content of the ncs.conf file, the global defaults configured at /aaa:session, and the user-specific settings configured at /aaa:user{<user>}/setting.
The different values control different parts of the CLI behavior:
New commands can be added by placing a script in the scripts/command directory. See Plug-and-play Scripting.
The default behavior is to enforce Unix-style access restrictions. That is, the user's uid, gid, and gids are used to control what the user has read and write access to.
However, it is also possible to jail a CLI user to their home directory (or the directory where ncs_cli is started). This is controlled using the ncs.conf parameter restricted-file-access. If this is set to true, then the user only has access to the home directory.
Help and information texts are specified in several places. In the YANG files, the tailf:info element is used to specify a descriptive text that is shown when the user enters ? in the CLI. The first sentence of the info text is used when showing one-line descriptions in the CLI.
NCS understands multiple quoting schemes on input and de-quotes a value when parsing the command. Still, it uses what it considers a canonical quoting scheme when printing out this value, e.g., when pushing a configuration change to the device. However, different devices may have different quoting schemes, possibly not compatible with the NCS canonical quoting scheme. For example, the following value cannot be printed out by NCS as two backslashes \\ match \ in the quoting scheme used by NCS when encoding values.
The general rules for how NCS represents backslashes are as follows; note that NCS only ever outputs an odd number of backslashes:
\ and \\ are represented as \.
\\\ and \\\\ are represented as \\\.
\\\\\ and \\\\\\ are represented as \\\\\.
A backslash \ is represented as a backslash \ when it is followed by a character that does not need to be escaped, but is represented as double backslashes \\ if the next character could be escaped. With remote passwords, if you are using special characters, be sure to follow the recommended guidelines; see Configure Mode for more information.
To let NCS pass a quoted string through verbatim, one can do as stated below:
Enable the NCS configuration parameter escapeBackslash in the ncs.conf file. This is a global setting on NCS which affects all the NEDs.
Alternatively, a certain NED may be updated on request to be able to transform the value printed by NCS to what the device expects if one only wants to affect a certain device instead of all the connected ones.
If a backslash \ is followed by a numeric triplet, NCS will treat it as an octal number and convert it to one character based on its ASCII code. For example:
\123 is converted to S.
\067 is converted to 7.
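This octal-escape behavior is the same as that of POSIX printf, which can be used to verify the conversions (octal 123 is decimal 83, ASCII 'S'; octal 067 is decimal 55, ASCII '7'):

```shell
# POSIX printf interprets \ddd in the format string as an octal
# character code.
printf '\123\n'   # octal 123 -> 'S'
printf '\067\n'   # octal 067 -> '7'
```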
Develop service packages to run user code.
When setting up an application project, there are several things to think about. A service package needs a service model, NSO configuration files, and mapping code. Similarly, NED packages need YANG files and NED code. We can either copy an existing example and modify that, or we can use the tool ncs-make-package to create an empty skeleton for a package for us. The ncs-make-package tool provides a good starting point for a development project. Depending on the type of package, we use ncs-make-package to set up a working development structure.
As explained in NSO Packages, NSO runs all user Java code and also loads all data models through an NSO package. Thus, a development project is the same as developing a package. Testing and running the package is done by putting the package in the NSO load-path and running NSO.
There are different kinds of packages; NED packages, service packages, etc. Regardless of package type, the structure of the package as well as the deployment of the package into NSO is the same. The script ncs-make-package creates the following for us:
A Makefile to build the source code of the package. The package contains source code and needs to be built.
If it's a NED package, a netsim directory that is used by the ncs-netsim tool to simulate a network of devices.
If it is a service package, skeleton YANG and Java files that can be modified are generated.
In this section, we will develop an MPLS service for a network of provider edge routers (PE) and customer equipment routers (CE). The assumption is that the routers speak NETCONF and that we have proper YANG modules for the two types of routers. The techniques described here work equally well for devices that speak other protocols than NETCONF, such as Cisco CLI or SNMP.
We first want to create a simulation environment where ConfD is used as a NETCONF server to simulate the routers in our network. We plan to create a network that looks like this:
To create the simulation network, the first thing we need to do is create NSO packages for the two router models. The packages are also exactly what NSO needs to manage the routers.
Assume that the YANG files for the PE routers reside in ./pe-yang-files and the YANG files for the CE routers reside in ./ce-yang-files. The ncs-make-package tool is used to create two device packages, one called pe and the other ce.
At this point, we can use the ncs-netsim tool to create a simulation network. ncs-netsim will use the Tail-f ConfD daemon as a NETCONF server to simulate the managed devices, all running on localhost.
The above command creates a network with 8 routers, 5 running the YANG models for a CE router and 3 running a YANG model for the PE routers. ncs-netsim can be used to stop, start, and manipulate this network. For example:
In the previous section, we described how to use ncs-make-package and ncs-netsim to set up a simulation network. Now, we want to use NCS to control and manage precisely the simulated network. We can use the ncs-setup tool to set up a directory suitable for this. ncs-setup has a flag to set up NSO initialization files so that all devices in an ncs-netsim network are added as managed devices to NSO. If we do:
The above command creates db, log, etc., directories and also creates an NSO XML initialization file in ./NCS/ncs-cdb/netsim_devices_init.xml. The init file is important; it is created from the content of the netsim directory and contains the IP address, port, authentication credentials, and NED type for all the devices in the netsim environment. There is a dependency order between ncs-setup and ncs-netsim, since ncs-setup creates the XML init file based on the contents of the netsim environment; therefore, we must run the ncs-netsim create-network command before we execute the ncs-setup command. Once ncs-setup has been run and the init XML file has been generated, it is possible to manually edit that file.
If we start the NSO CLI, we have, for example:
If we take a look at the directory structure of the generated NETCONF NED packages, we have in ./ce
It is a NED package, and it has a directory called netsim at the top. This indicates to the ncs-netsim tool that ncs-netsim can create simulation networks that contain devices running the YANG models from this package. This section describes the netsim directory and how to modify it. ncs-netsim uses ConfD to simulate network elements, and to fully understand how to modify a generated netsim directory, some knowledge of how ConfD operates may be required.
The netsim directory contains three files:
confd.conf.netsim is a configuration file for the ConfD instances. The file will be /bin/sed substituted where the following list of variables will be substituted for the actual value for that ConfD instance:
%IPC_PORT% for /confdConfig/confdIpcAddress/port
%NETCONF_SSH_PORT%
Recall the picture of the network we wish to work with: the routers, PE and CE, each have an IP address and some additional data. So far, we have generated a simulated network with YANG models, but the routers in our simulated network have no data in them. We can log in to one of the routers to verify that:
The ConfD devices in our simulated network all have a Juniper CLI engine, so we can log in to an individual router using the command ncs-netsim cli [devicename].
To achieve this, we need to have some additional XML initializing files for the ConfD instances. It is the responsibility of the install target in the netsim Makefile to ensure that each ConfD instance gets initialized with the proper init data. In the NSO example collection, the example $NCS_DIR/examples.ncs/mpls contains precisely the two above-mentioned PE and CE packages but modified, so that the network elements in the simulated network get initialized properly.
If we run that example in the NSO example collection we see:
A fully simulated router network loaded into NSO, with ConfD simulating the 7 routers.
With the scripting mechanism, an end user can add new functionality to NSO in a plug-and-play-like manner. See Plug-and-play Scripting for the scripting concept in general. It is also possible for a developer of an NSO package to enclose scripts in the package.
Scripts defined in an NSO package work pretty much as system-level scripts configured with the /ncs-config/scripts/dir configuration parameter. The difference is that the location of the scripts is predefined. The scripts directory must be named scripts and must be located in the top directory of the package.
In this complete example examples.ncs/getting-started/developing-with-ncs/11-scripting, there is a README file and a simple post-commit script packages/scripting/scripts/post-commit/show_diff.sh as well as a simple command script packages/scripting/scripts/command/echo.sh.
So far, we have only talked about packages that describe a managed device, i.e., NED packages. There are also callback, application, and service packages. A service package is a package with some YANG code that models an NSO service, together with Java code that implements the service.
We can generate a service package skeleton, using ncs-make-package, as:
Once the package is part of the load path, we can create test service instances, even though they do nothing yet.
The ncs-make-package will generate skeleton files for our service models and for our service logic. The package is fully buildable and runnable even though the service models are empty. Both CLI and Webui can be run. In addition to this, we also have a simulated environment with ConfD devices configured with YANG modules.
Calling ncs-make-package with the arguments above will create a service skeleton that is placed at the root of the generated service model. However, services can be augmented anywhere or can be located in any YANG module. This can be controlled by giving the argument --augment NAME, where NAME is the path to where the service should be augmented, or, in the case of putting the service as a root container in the service YANG, by giving the argument --root-container NAME.
Services created using ncs-make-package will be of type list. However, it is possible to have services that are of type container instead. A container service needs to be specified as a presence container.
The service implementation logic of a service can be expressed using the Java language. For each such service, a Java class is created. This class should implement the create() callback method from the ServiceCallback interface. This method will be called to implement the service-to-device mapping logic for the service instance.
We declare in the component for the package, that we have a callback component. In the package-meta-data.xml for the generated package, we have:
When the package is loaded, the NSO Java VM will load the jar files for the package, and register the defined class as a callback class. When the user creates a service of this type, the create() method will be called.
In the following sections, we are going to show how to write a service application through several examples. The purpose of these examples is to illustrate the concepts described in previous chapters.
Service Model - a model of the service you want to provide.
Service Validation Logic - a set of validation rules incorporated into your model.
Service Logic - a Java class mapping the service model operations onto the device layer.
If we take a look at the Java code in the service generated by ncs-make-package, first we have the create() method, which takes four parameters. The ServiceContext instance is a container for the current service transaction; with it, e.g., the transaction timeout can be controlled. The service parameter is a NavuContainer holding a read/write reference to the path in the instance tree containing the current service instance; from this point, you can access all nodes contained within the created service. The root parameter is a NavuContainer holding a reference to the NSO root; from here, you can access the whole data model of NSO. The opaque parameter contains a java.util.Properties object instance. This object may be used to transfer additional information between consecutive calls to the create callback. It is always null in the first callback method when a service is first created. This Properties object can be updated (or created if null) but should always be returned.
The opaque object is extremely useful for passing information between different invocations of the create() method. The returned Properties object instance is stored persistently. If the create method computes something on its first invocation, it can return that computation to have it passed in as a parameter on the second invocation.
This is crucial to understand: the FASTMAP mapping logic relies on the fact that a modification of an existing service instance can be realized as a full deletion of what the service instance created when it was first created, followed by yet another create, this time with slightly different parameters. The NSO transaction engine will then compute the minimal difference and send it southbound to all involved managed devices. Thus, a well-behaved create() method will, when the service is modified, recreate exactly the same structures it created the first time.
The best way to debug this and to ensure that a modification of a service instance really only sends the minimal NETCONF diff to the southbound managed devices, is to turn on NETCONF trace in the NSO, modify a service instance, and inspect the XML sent to the managed devices. A badly behaving create() method will incur large reconfigurations of the managed devices, possibly leading to traffic interruptions.
It is highly recommended to also implement a selftest() action in conjunction with a service. The purpose of the selftest() action is to trigger a test of the service. The ncs-make-package tool creates a selftest() action that takes no input parameters and has two output parameters.
The selftest() implementation is expected to do some diagnosis of the service. This can possibly include the use of testing equipment or probes.
The NSO Java VM logging functionality is provided using LOG4J. The logging is composed of a configuration file (log4j2.xml) where static settings are made, i.e., all settings available to LOG4J (see the LOG4J documentation for more comprehensive log settings). There are also dynamically configurable log settings under /java-vm/java-logging.
When we start the NSO Java VM in main() the log4j2.xml log file is parsed by the LOG4J framework and it applies the static settings to the NSO Java VM environment. The file is searched for in the Java CLASSPATH.
NSO Java VM starts several internal processes or threads. One of these threads executes a service called NcsLogger which handles the dynamic configurations of the logging framework. When NcsLogger starts, it initially reads all the configurations from /java-vm/java-logging and applies them, thus overwriting settings that were previously parsed by the LOG4J framework.
After it has applied the changes from the configuration it starts to listen to changes that are made under /java-vm/java-logging.
The LOG4J framework has 8 verbosity levels: ALL, DEBUG, ERROR, FATAL, INFO, OFF, TRACE, and WARN. In order of decreasing verbosity, they are related as follows: ALL > TRACE > DEBUG > INFO > WARN > ERROR > FATAL > OFF.
To change a verbosity level one needs to create a logger. A logger is something that controls the logging of certain parts of the NSO Java API.
The loggers in the system are hierarchically structured which means that there is one root logger that always exists. All descendants of the root logger inherit their settings from the root logger if the descendant logger doesn't overwrite its settings explicitly.
The LOG4J loggers are mapped to the package level in the NSO Java API, so the root logger that exists has a direct descendant, the com package, which in turn has a descendant com.tailf.
The com.tailf logger has a direct descendant corresponding to every package in the system, for example com.tailf.cdb, com.tailf.maapi, etc.
A logger could be configured in the static settings, i.e., in the log4j2.properties file, but this would require an explicit restart of the NSO Java VM. Alternatively, a logger can be configured dynamically if an NSO restart is not desired.
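For reference, a static logger definition in the properties file might look like the following sketch; the maapi label in the property keys is an arbitrary identifier chosen for this example, and only the name and level values matter:

```properties
# Raise com.tailf.maapi to INFO; its descendants inherit this level
logger.maapi.name = com.tailf.maapi
logger.maapi.level = INFO
```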
Recall that if a logger is not configured explicitly then it will inherit its settings from its predecessors. To overwrite a logger setting we create a logger in NSO.
To create a logger, say, for example, that one uses the Maapi API to read and write configuration changes in NSO, and wants to see all traces, including INFO level traces. To enable INFO traces for the Maapi classes (located in the package com.tailf.maapi) at runtime, we start a CLI session and create a logger called com.tailf.maapi.
When we commit our changes to CDB the NcsLogger will notice that a change has been made under /java-vm/java-logging, it will then apply the logging settings to the logger com.tailf.maapi that we just created. We explicitly set the INFO level to that logger. All the descendants from com.tailf.maapi will automatically inherit their settings from that logger.
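As a sketch, creating such a logger from the CLI could look like this (the level keyword names come from the java-vm YANG model and may differ between versions; verify with tab completion):

```
admin@ncs(config)# java-vm java-logging logger com.tailf.maapi level level-info
admin@ncs(config)# commit
```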
So where do the traces go? With the default configuration in log4j2.properties (appender.dest1.type=Console), the LOG4J framework forwards all traces to stdout/stderr.
In NSO, all stdout/stderr goes first through the service manager. The service manager has a configuration under /java-vm/stdout-capture that controls where the stdout/stderr will end up.
The default setting captures the output in the file ./logs/ncs-java-vm.log.
It is important to consider that when creating a logger (in this case com.tailf.maapi), the name of the logger has to be an existing package known to the NSO class loader.
One could also create a logger named com.tailf with some desired level. This would set all packages (com.tailf.*) to the same level. A common usage is to set com.tailf to level INFO which would set all traces, including INFO from all packages to level INFO.
If one would like to turn off all available traces in the system (quiet mode), then configure com.tailf or (com) to level OFF.
There are INFO-level messages in all parts of the NSO Java API, ERROR-level messages when an exception occurs, and WARN-level messages in some places in the packages.
There are also protocol traces between the Java API and NSO which could be enabled if we create a logger com.tailf.conf with DEBUG trace level.
When processing in the java-vm fails, the exception error message is reported back to NCS. This can be more or less informative depending on how elaborate the message is in the thrown exception. Also, the exception can be wrapped one or several times with the original exception indicated as the root cause of the wrapped exception.
In debugging and error reporting, these root cause messages can be valuable for understanding what actually happens in the Java code. On the other hand, in normal operations, just a top-level message without too many details is preferred. The exceptions are also always logged in the java-vm log, but if this log is large, it can be troublesome to correlate a certain exception to a specific action in NCS. For this reason, it is possible to configure the level of detail shown by NCS for a java-vm exception. The leaf /ncs:java-vm/exception-error-message/verbosity takes one of three values:
standard: Show the message from the top exception. This is the default.
verbose: Show all messages for the chain of cause exceptions, if any.
trace: Show messages for the chain of cause exceptions with exception class and the trace for the bottom root cause.
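For example, to get the most detailed error reporting while debugging, the verbosity leaf described above can be set from the CLI (a sketch):

```
admin@ncs(config)# java-vm exception-error-message verbosity trace
admin@ncs(config)# commit
```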
Here is an example of how this can be used. In the web-site service example, we try to create a service without the necessary preparations:
NSO will, at first start, take the packages found in the load path and copy these into a directory under the supervision of NSO, located at ./state/packages-in-use. Later starts of NSO will not take any new copies from the package load path, so changes will not take effect by default. The reason for this is that in normal operation, changing package definitions as a side effect of a restart is unwanted behavior. Instead, these types of changes are part of an NSO installation upgrade.
During package development as opposed to operations, it is usually desirable that all changes to package definitions in the package load-path take effect immediately. There are two ways to make this happen. Either start ncs with the --with-reload-packages directive:
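For example:

```
$ ncs --with-reload-packages
```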
Or, set the environment variable NCS_RELOAD_PACKAGES, for example like this:
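A minimal example from a shell:

```
$ export NCS_RELOAD_PACKAGES=true
$ ncs
```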
It is a strong recommendation to use the NCS_RELOAD_PACKAGES environment variable approach since it guarantees that the packages are updated in all situations.
It is also possible to request a running NSO to reload all its packages.
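From operational mode in the CLI:

```
admin@ncs# packages reload
```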
This request can only be performed in operational mode, and the effect is that all packages will be updated and any change in YANG models or code will take effect. If any YANG models are changed, an automatic CDB data upgrade will be executed. If manual (user code) data upgrades are necessary, the package should contain an upgrade component, which will be executed as part of the package reload. See the NSO development documentation for information on how to develop an upgrade component.
If the change in a package does not affect the data model or shared Java code, there is another command:
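A sketch, using a hypothetical package name mypack:

```
admin@ncs# packages package mypack redeploy
```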
This will redeploy the private JARs in the Java VM for the Java package, restart the Python VM for the Python package, and reload the templates associated with the package. However, this command will not be sensitive to changes in the YANG models or shared JARs for the Java package.
By default, NCS will start the Java VM by invoking the command $NCS_DIR/bin/ncs-start-java-vm. That script will invoke:
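Conceptually, the script runs something like the following (a sketch, with the actual classpath elided):

```
$ java -classpath ... com.tailf.ncs.NcsJVMLauncher
```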
The class NcsJVMLauncher contains the main() method. The started Java VM will automatically retrieve and deploy all Java code for the packages defined in the load path in the ncs.conf file. No other specification than the package-meta-data.xml for each package is needed.
In the NSO CLI, there exist several settings and actions for the NSO Java VM, if we do:
We see some of the settings that are used to control how the NSO Java VM runs. In particular, here we're interested in /java-vm/stdout-capture/file.
The NSO daemon will, when it starts, also start the NSO Java VM, and it will capture the stdout output from the NSO Java VM and send it to the file ./logs/ncs-java-vm.log. For more details, see the Java VM settings documentation.
Thus if we tail -f that file, we get all the output from the Java code. That leads us to the first and most simple way of developing Java code. If we now:
Edit our Java code.
Recompile that code in the package, e.g., cd ./packages/myrfs/src; make
Restart the Java code, either through telling NSO to restart the entire NSO Java VM from the NSO CLI (Note, this requires an env variable NCS_RELOAD_PACKAGES=true):
Or instructing NSO to just redeploy the package we're currently working on.
We can then do tail -f logs/ncs-java-vm.log to check for printouts and log messages. Typically there is quite a lot of data in the NSO Java VM log. It can sometimes be hard to find our own printouts and log messages. Therefore it can be convenient to use the command below which will make the relevant exception stack traces visible in the CLI.
It's also possible to dynamically, from the CLI, control the level of logging as well as which Java packages shall log. Say that we're interested in Maapi calls, but don't want the log cluttered with what are really NSO Java library internal calls. We can then do:
Now, considerably less log data will be produced. If we want these settings to always be there, even if we restart NSO from scratch with an empty database (no .cdb file in ./ncs-cdb), we can save these settings as XML and put that XML inside the ncs-cdb directory; that way, ncs will use this data as initialization data on a fresh restart. We do:
The ncs-setup --reset command stops the NSO daemon and resets NSO back to factory defaults. A restart of NSO will reinitialize NSO from all XML files found in the CDB directory.
It's possible to tell NSO to not start the NSO Java VM at all. This is interesting in two different scenarios: first, if we want to run the NSO Java code embedded in a larger application, such as a Java application server (JBoss); second, when debugging a package.
First, we configure NSO to not start the NSO Java VM at all by adding the following snippet to ncs.conf:
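A minimal sketch of such a snippet, using the auto-start leaf of the java-vm settings in ncs.conf:

```xml
<java-vm>
  <auto-start>false</auto-start>
</java-vm>
```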
Now, after a restart or a configuration reload, no Java code is running, if we do:
We will see that the oper-status of the packages is java-uninitialized. We can also do:
This is expected since we've told NSO to not start the NSO Java VM. Now, we can do that manually, at the UNIX shell prompt.
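For example:

```
$ ncs-start-java-vm
```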
So, now we're in a position where we can manually stop the NSO Java VM, recompile the Java code, and restart the NSO Java VM. This development cycle works fine. However, even though we're running the NSO Java VM standalone, we can still redeploy packages from the NSO CLI to reload and restart just our Java code (no need to restart the NSO Java VM).
Since we can run the NSO Java VM standalone in a UNIX shell, we can also run it inside Eclipse. If we stand in an NSO project directory, like the one generated earlier in this section, we can issue the command:
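Assuming the ncs-setup tool from the NSO installation (check ncs-setup --help for the exact option name in your version):

```
$ ncs-setup --eclipse-setup
```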
This will generate two files, .classpath and .project. If we add this directory to Eclipse as a File -> New -> Java Project, uncheck the Use default location and enter the directory where the .classpath and .project have been generated. We're immediately ready to run this code in Eclipse. All we need to do is to choose the main() routine in the NcsJVMLauncher class.
The Eclipse debugger works now as usual, and we can at will, start and stop the Java code. One caveat here that is worth mentioning is that there are a few timeouts between NSO and the Java code that will trigger when we sit in the debugger. While developing with the Eclipse debugger and breakpoints, we typically want to disable all these timeouts.
First, we have three timeouts in ncs.conf that matter. Copy the system ncs.conf and set the following three values to a large value. See the ncs.conf(5) man page for a detailed description of what those values are.
If these timeouts are triggered, NSO will close all sockets to the Java VM and all bets are off.
Edit the file and enter the following XML entry just after the Web UI entry.
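A sketch of such an entry, assuming the three timeouts are the japi session timeouts (pick durations large enough to outlast a debugging session):

```xml
<japi>
  <new-session-timeout>PT1000S</new-session-timeout>
  <query-timeout>PT1000S</query-timeout>
  <connect-timeout>PT1000S</connect-timeout>
</japi>
```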
Now, restart NCS.
We also have a few timeouts that are dynamically reconfigurable from the CLI. We do:
Then, to save these settings so that NCS will have them again on a clean restart (no CDB files):
The Eclipse Java debugger can connect remotely to an NSO Java VM and debug it. This requires that the NSO Java VM has been started with some additional flags. By default, the script $NCS_DIR/bin/ncs-start-java-vm is used to start the NSO Java VM. If we provide the -d flag, we will launch the NSO Java VM with:
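That is, the standard JDWP agent flags are added to the java invocation, for example (a sketch; port 9000 matches the Eclipse configuration mentioned in this section):

```
$ java -agentlib:jdwp=transport=dt_socket,address=9000,server=y,suspend=n ...
```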
This is what is needed to be able to remotely connect to the NSO Java VM, in the ncs.conf file:
Now, if we in Eclipse, add a debug configuration and connect to port 9000 on localhost, we can attach the Eclipse debugger to an already running system and debug it remotely.
ncs-project

An NSO project is a complete running NSO installation. It contains all the needed packages and the config data that is required to run the system.
By using the ncs-project commands, the project can be populated with the necessary packages and kept updated. This can be used for encapsulating NSO demos or even a full-blown turn-key system.
For a developer, the typical workflow looks like this:
Create a new project using the ncs-project create command.
Define what packages to use in the project-meta-data.xml file.
Fetch any remote packages with the ncs-project update command.
Using the ncs-project create command, a new project is created. The file project-meta-data.xml should be updated with relevant information as will be described below. The project will also get a default ncs.conf configuration file that can be edited to better match different scenarios. All files and directories should be put into a version control system, such as Git.
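For example:

```
$ ncs-project create test_project
```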
A directory called test_project is created containing the files and directories of an NSO project as shown below:
The Makefile contains targets for building, starting, stopping, and cleaning the system. It also contains targets for entering the CLI as well as some useful targets for dealing with any Git packages. Study the Makefile to learn more.
Any initial CDB data can be put in the init_data directory. The Makefile will copy any files in this directory to the ncs-cdb before starting NSO.
There is also a test directory created, with a directory structure used for automatic tests. These tests depend on a separate test tool.
To fill this project with anything meaningful, the project-meta-data.xml file needs to be edited.
The project version number is configurable, the version we get from the create command is 1.0. The description should also be changed to a small text explaining what the project is intended for. Our initial content of the project-meta-data.xml may now look like this:
For this example, let's say we have a released package: ncs-4.1.2-cisco-ios-4.1.5.tar.gz, a package located in a remote git repository foo.git, and a local package that we have developed ourselves: mypack. The relevant part of our project-meta-data.xml file would then look like this:
By specifying netsim devices in the project-meta-data.xml file, the necessary commands for creating the netsim configuration will be generated in the setup.mk file that ncs-project update creates. The setup.mk file is included in the top Makefile, and provides some useful make targets for creating and deleting our netsim setup.
When done editing the project-meta-data.xml, run the command ncs-project update. Add the -v switch to see what the command does.
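For example:

```
$ ncs-project update -v
```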
Answer yes when asked to overwrite the setup.mk. After this, a new runtime directory is created with NCS and simulated devices configured. You are now ready to compile your system with: make all.
If you have a lot of packages, all located in the same Git repository, it is convenient to specify the repository just once. This can be done by adding a packages-store section as shown below:
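A sketch of such a section (the repository URL and branch are hypothetical):

```xml
<packages-store>
  <git>
    <repo>ssh://git@example.com/my-nso-packages.git</repo>
    <branch>stable</branch>
  </git>
</packages-store>
```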
This means that if a package does not have a git repository defined, the repository and branch in the packages-store is used.
When the development is done, the project can be bundled together and distributed further. The ncs-project tool comes with a command, export, used for this purpose. The export command creates a tarball of the required files and any extra files as specified in the project-meta-data.xml file.
When using export, a subset of the packages should be configured for exporting. The reason for not exporting all packages in a project is if some of the packages are used solely for testing or similar. When configuring the bundle the packages included in the bundle are leafrefs to the packages defined at the root of the model, see the example below (The NSO Project YANG model). We can also define a specific tag, commit, or branch, even a different location for the packages, different from the one used while developing. For example, we might develop against an experimental branch of a repository, but bundle with a specific release of that same repository.
The bundle also has a name and a list of included files. Unless another name is specified from the command line, the final compressed file will be named using the configured bundle name and project version.
We create the tar-ball by using the export command:
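For example:

```
$ ncs-project export
```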
There are two ways to make use of a bundle:
Together with the ncs-project create --from-bundle=<bundlefile> command.
Extract the included packages using tar for manual installation in an NSO deployment.
In the first scenario, it is possible to create an NSO project, populated with the packages from the bundle, to create a ready-to-run NSO system. The optional init_data part makes it possible to prepare CDB with configuration, before starting the system the very first time. The project-meta-data.xml file will specify all the packages as local to avoid any dangling pointers to non-accessible git repositories.
The second scenario is intended for the case when you want to install the packages manually, or via a custom process, into your running NSO systems.
The switch --snapshot will add a timestamp in the name of the created bundle file to make it clear that it is not a proper version numbered release.
To import our exported project we would do an ncs-project create and point out where the bundle is located.
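A sketch, assuming a bundle file named after the default bundle-name plus project-version convention described above:

```
$ ncs-project create --from-bundle=test_project-1.0.tar.gz
```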
ncs-project has a full set of man pages that describe its usage and syntax. Below is an overview of the commands, which are explained in more detail in the man pages.
project-meta-data.xml File

The project-meta-data.xml file defines the project metadata for an NSO project according to the $NCS_DIR/src/ncs/ncs_config/tailf-ncs-project.yang YANG model. See the tailf-ncs-project.yang module where all options are described in more detail. To get an overview, use the IETF RFC 8340-based YANG tree diagram.
Below is a list of the settings in the tailf-ncs-project.yang that is configured through the metadata file. A detailed description can be found in the YANG model.
name: Unique name of the project.
project-version: The version of the project. This is for administrative purposes only.
packages-store:
<GigabitEthernet tags="nocreate">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
<GigabitEthernet tags="delete">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device tags="nocreate">
<name>{/name}</name>
<config tags="merge">
<!-- ... -->
</config>
</device>
</devices>
</config-template>
admin@ncs(config)# devices device rtr01 config ...
admin@ncs(config-device-rtr01)# commit dry-run outformat xml
result-xml {
local-node {
data <devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>rtr01</name>
<config>
<!-- ... -->
</config>
</device>
</devices>
}
}
admin@ncs(config-device-rtr01)# commit
admin@ncs# show running-config devices device rtr01 config ... | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>rtr01</name>
<config>
<!-- ... -->
</config>
</device>
</devices>
</config>
admin@ncs# show running-config devices device c0 config interface GigabitEthernet
devices device c0
config
interface GigabitEthernet0/0/0/0
ip address 10.1.2.3 255.255.255.0
exit
interface GigabitEthernet0/0/0/1
ip address 10.1.4.3 255.255.255.0
exit
interface GigabitEthernet0/0/0/2
ip address 10.1.9.3 255.255.255.0
exit
!
!
admin@ncs# templatize devices device c0 config interface GigabitEthernet
Found potential templates at:
devices device c0 \ config \ interface GigabitEthernet {$GigabitEthernet-name}
Template path:
devices device c0 \ config \ interface GigabitEthernet {$GigabitEthernet-name}
Variables in template:
{$GigabitEthernet-name} {$address}
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c0</name>
<config>
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>{$GigabitEthernet-name}</name>
<ip>
<address>
<primary>
<address>{$address}</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
</interface>
</config>
</device>
</devices>
</config>
$ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v3
$ make demo
admin@ncs# templatize devices device c*
$ cd $NCS_DIR/packages/neds/cisco-ios-cli-3.8/
$ yanger -f sample-xml-skeleton \
--sample-xml-skeleton-doctype=config \
--sample-xml-skeleton-path='/ip/name-server' \
--sample-xml-skeleton-defaults \
src/yang/tailf-ned-cisco-ios.yang
<?xml version='1.0' encoding='UTF-8'?>
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ip xmlns="urn:ios">
<name-server>
<name-server-list>
<address/>
</name-server-list>
<vrf>
<name/>
<name-server-list>
<address/>
</name-server-list>
</vrf>
</name-server>
</ip>
</config>
<name>rtr01</name>
<name>{$CE}</name>
/endpoint/ce/device
../ce/device
<policy-map xmlns="urn:ios" tags="merge">
<name>{$POLICY_NAME}</name>
<class>
<name>{$CLASS_NAME}</name>
<?if {qos-class/priority = 'realtime'}?>
<priority-realtime>
<percent>{$CLASS_BW}</percent>
</priority-realtime>
<?elif {qos-class/priority = 'critical'}?>
<priority-critical>
<percent>{$CLASS_BW}</percent>
</priority-critical>
<?else?>
<bandwidth>
<percent>{$CLASS_BW}</percent>
</bandwidth>
<?end?>
<set>
<ip>
<dscp>{$CLASS_DSCP}</dscp>
</ip>
</set>
</class>
</policy-map>
<ip xmlns="urn:ios">
<route>
<?foreach {/tunnel}?>
<ip-route-forwarding-list>
<prefix>{network}</prefix>
<mask>{netmask}</mask>
<forwarding-address>{tunnel-endpoint}</forwarding-address>
</ip-route-forwarding-list>
<?end?>
</route>
</ip>
<interface xmlns="urn:ios">
<?for i=0; {$i < 4}; i={$i + 1}?>
<FastEthernet>
<name>0/{$i}</name>
<shutdown/>
</FastEthernet>
<?end?>
</interface>
<config tags="merge">
<interface xmlns="urn:ios">
...
<GigabitEthernet tags="replace">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
<GigabitEthernet tags="create">
<name>{link/interface-number}</name>
<description tags="merge">Link to PE</description>
...
<rule insert="first">
<name>{$FIRSTRULE}</name>
</rule>
<rule insert="last">
<name>{$LASTRULE}</name>
</rule>
<rule insert="after" value={$FIRSTRULE}>
<name>{$SECONDRULE}</name>
</rule>
<rule insert="before" value={$LASTRULE}>
<name>{$SECONDTOLASTRULE}</name>
</rule>
<rule>
<name>deny-all</name>
<ip>0.0.0.0</ip>
<mask>0.0.0.0</mask>
<action>deny</action>
</rule>
<rule>
<name>service-2</name>
<ip>192.168.0.0</ip>
<mask>255.255.255.0</mask>
<action>permit</action>
</rule>
<rule>
<name>service-1</name>
<ip>10.0.0.0</ip>
<mask>255.0.0.0</mask>
<action>permit</action>
</rule>
<rule>
<ip>0.0.0.0</ip>
<mask>0.0.0.0</mask>
<action>deny</action>
</rule>
<rule insert="first" guard="deny-all">
<name>{$NAME}</name>
<ip>{$IP}</ip>
<mask>{$MASK}</mask>
<action>permit</action>
</rule>
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <?macro GbEth name='{/name}' ip mask='255.255.255.0'?>
  <GigabitEthernet>
    <name>$name</name>
    <ip>
      <address>
        <primary>
          <address>$ip</address>
          <mask>$mask</mask>
        </primary>
      </address>
    </ip>
  </GigabitEthernet>
  <?endmacro?>

  <?macro GbEthDesc name='{/name}' ip mask='255.255.255.0' desc?>
  <?expand GbEth name='$name' ip='$ip' mask='$mask'?>
  <GigabitEthernet>
    <name>$name</name>
    <description>$desc</description>
  </GigabitEthernet>
  <?endmacro?>

  <devices xmlns="http://tail-f.com/ns/ncs">
    <device tags="nocreate">
      <name>{/device}</name>
      <config tags="merge">
        <interface xmlns="urn:ios">
          <?expand GbEthDesc name='0/0/0/0' ip='10.250.1.1'
                             desc='Link to core'?>
        </interface>
      </config>
    </device>
  </devices>
</config-template>
}
list interface {
key "name";
leaf name {
type string;
}
leaf address {
type inet:ip-address;
}
} // ...
container links {
list link {
key "intf-name";
leaf intf-name {
type string;
}
leaf intf-addr {
type inet:ip-address;
}
}
}
<interface>
<name>{/links/link[0]/intf-name}</name>
<address>{intf-addr}</address>
</interface>
<interface>
<name>{/links/link/intf-name}</name>
<address>{intf-addr}</address>
</interface>
<interface>
<name>{string(/links-list/intf-name)}</name>
</interface>
<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
<name>mypackage</name>
<!-- ... -->
<!-- Exact NED id match, requires namespace -->
<supported-ned-id xmlns:id="http://tail-f.com/ns/ned-id/cisco-ios-cli-3.0">
id:cisco-ios-cli-3.0
</supported-ned-id>
<!-- Regex-based NED id match -->
<supported-ned-id-match>router-nc-1</supported-ned-id-match>
</ncs-package>
list interface {
key name;
leaf name {
type string;
}
leaf address {
type tailf:ipv4-address-and-prefix-length;
description
"IP address with prefix in the following format, e.g.: 10.2.3.4/24";
}
leaf mask {
config false;
type inet:ipv4-address;
description
"Auxiliary data populated by service code, represents network mask
corresponding to the prefix in the address field, e.g.: 255.255.255.0";
}
}

def cb_create(self, tctx, root, service, proplist):
    interface_list = service.interface
    for intf in interface_list:
        prefix = intf.address.split('/')[1]
        # Pass (address, prefix) as a tuple: IPv4Network's second
        # positional argument is 'strict', not the prefix length.
        intf.mask = ipaddress.IPv4Network((0, int(prefix))).netmask
    # Template variables don't need to contain mask
    # as it is passed via the (operational) database
    template = ncs.template.Template(service)
    template.apply('iface-template')
<interface>
<name>{/interface/name}</name>
<ip-address>{substring-before(address, '/')}</ip-address>
<ip-mask>{mask}</ip-mask>
</interface>
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="some-service">
<!-- ... -->
</config-template>
<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="some-service"
cbtype="post-modification">
<?if {$OPERATION = 'create'}?>
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<!-- ... -->
</config>
</device>
</devices>
<?elif {$OPERATION = 'update'}?>
<!-- ... -->
<?else?>
<!-- $OPERATION = 'delete' -->
<!-- ... -->
<?end?>
</config-template>
admin@ncs(config)# commit dry-run | debug template
admin@ncs(config)# commit dry-run | debug xpath
admin@ncs(config)# commit dry-run | debug template l3vpn
admin@ncs(config)# commit dry-run | debug template | debug xpath
admin@ncs# show running-config devices device c0 config ios:interface | display xpath
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/0']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/1']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='1/2']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/1']
/devices/device[name='c0']/config/ios:interface/FastEthernet[name='2/2']
$ ncs_cmd -c "x /devices/device[name='c0']/config/ios:interface/FastEthernet/name"
/devices/device{c0}/config/interface/FastEthernet{1/0}/name [1/0]
/devices/device{c0}/config/interface/FastEthernet{1/1}/name [1/1]
/devices/device{c0}/config/interface/FastEthernet{1/2}/name [1/2]
/devices/device{c0}/config/interface/FastEthernet{2/1}/name [2/1]
/devices/device{c0}/config/interface/FastEthernet{2/2}/name [2/2]
admin@ncs# config
admin@ncs(config)# load merge example.cfg
admin@ncs(config)# commit dry-run | debug template
 1 <config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<?foreach {/target-device}?>
5 <device>
<name>{.}</name>
<config>
<ip xmlns="urn:ios">
<?if {/dns-server-ip}?>
10 <!-- If dns-server-ip is set, use that. -->
<name-server>{/dns-server-ip}</name-server>
<?else?>
<!-- Otherwise, use the default one. -->
<name-server>192.0.2.1</name-server>
15 <?end?>
</ip>
</config>
</device>
<?end?>
20 </devices>
</config-template>
Processing instruction 'foreach': evaluating the node-set \
(from file "dns-template.xml", line 4)
Evaluating "/target-device" (from file "dns-template.xml", line 4)
Context node: /dns[name='instance1']
Result:
For /dns[name='instance1']/target-device[.='c1'], it evaluates to []
For /dns[name='instance1']/target-device[.='c2'], it evaluates to []
admin@ncs(config)# show full-configuration dns instance1 target-device | display xpath
/dns[name='instance1']/target-device [ c1 c2 ]
Processing instruction 'foreach': next iteration: \
context /dns[name='instance1']/target-device[.='c1'] \
(from file "dns-template.xml", line 4)
Evaluating "." (from file "dns-template.xml", line 6)
Context node: /dns[name='instance1']/target-device[.='c1']
Result:
For /dns[name='instance1']/target-device[.='c1'], it evaluates to "c1"
Operation 'merge' on existing node: /devices/device[name='c1'] \
(from file "dns-template.xml", line 6)
Processing instruction 'if': evaluating the condition \
(from file "dns-template.xml", line 9)
Evaluating conditional expression "boolean(/dns-server-ip)" \
(from file "dns-template.xml", line 9)
Context node: /dns[name='instance1']/target-device[.='c1']
Result: true - continuing
Processing instruction 'if': recursing (from file "dns-template.xml", line 9)
Evaluating "/dns-server-ip" (from file "dns-template.xml", line 11)
Context node: /dns[name='instance1']/target-device[.='c1']
Result:
For /dns[name='instance1'], it evaluates to "192.0.2.110"
Operation 'merge' on non-existing node: \
/devices/device[name='c1']/config/ios:ip/name-server[.='192.0.2.110'] \
(from file "dns-template.xml", line 11)Processing instruction 'else': skipping (from file "dns-template.xml", line 12)
Processing instruction 'foreach': next iteration: \
context /dns[name='instance1']/target-device[.='c2'] \
(from file "dns-template.xml", line 4)Evaluating "." (from file "dns-template.xml", line 6)
Context node: /dns[name='instance1']/target-device[.='c2']
Result:
For /dns[name='instance1']/target-device[.='c2'], it evaluates to "c2"
Operation 'merge' on existing node: /devices/device[name='c2'] \
(from file "dns-template.xml", line 6)
Processing instruction 'if': evaluating the condition \
(from file "dns-template.xml", line 9)
Evaluating conditional expression "boolean(/dns-server-ip)" \
(from file "dns-template.xml", line 9)
Context node: /dns[name='instance1']/target-device[.='c2']
Result: true - continuing
Processing instruction 'if': recursing (from file "dns-template.xml", line 9)
Evaluating "/dns-server-ip" (from file "dns-template.xml", line 11)
Context node: /dns[name='instance1']/target-device[.='c2']
Result:
For /dns[name='instance1'], it evaluates to "192.0.2.110"
Operation 'merge' on non-existing node: \
/devices/device[name='c2']/config/ios:ip/name-server[.='192.0.2.110'] \
(from file "dns-template.xml", line 11)
Processing instruction 'else': skipping (from file "dns-template.xml", line 12)cli {
local-node {
data devices {
device c1 {
config {
ip {
- name-server 192.0.2.1;
+ name-server 192.0.2.1 192.0.2.110;
}
}
}
device c2 {
config {
ip {
+ name-server 192.0.2.110;
}
}
}
}
+dns instance1 {
+ target-device [ c1 c2 ];
+ dns-server-ip 192.0.2.110;
+}
}
}
<?set v = value?>
<?if {expression}?>
...
<?elif {expression}?>
...
<?else?>
...
<?end?>
<?foreach {expression}?>
...
<?end?>
<?for v = start_value; {progress condition}; v = next_value?>
...
<?end?>
<?for ; {condition}; ?>
<?copy-tree {source}?>
<?set-root-node {expression}?>
<?macro GbEth name='{/name}' ip mask='255.255.255.0'?>
$> ncs_cli -C -u admin
$> ncs_cli -J -u admin
show running-config | tab
show running-config | include aaa | tabadmin@ncs(config)# aaa authentication users user John
Value for 'uid' (<int>): 1006
Value for 'gid' (<int>): 1006
Value for 'password' (<hash digest string>): ******
Value for 'ssh_keydir' (<string>): /var/ncs/homes/john/.ssh
Value for 'homedir' (<string>): /var/ncs/homes/john
autowizard false
...
autowizard true
$> ncs_cli -C -u admin
admin@ncs# show devices device
devices device ce0
...
alarm-summary indeterminates 0
alarm-summary criticals 0
alarm-summary majors 0
alarm-summary minors 0
alarm-summary warnings 0
devices device ce1
...
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# show full-configuration devices device
devices device ce0
address 127.0.0.1
port 10022
ssh host-key ssh-dss
...
!
devices device ce1
...
!
...
admin@ncs# show running-config devices device
admin@ncs(config)# do show running-config devices device
admin@ncs(config)# devices sync-from
sync-result {
device ce0
result true
}
sync-result {
device ce1
result true
}
...
admin@ncs(config)# show full-configuration devices device ce0 config
devices device ce0
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/10
exit
ios:interface GigabitEthernet0/11
exit
ios:interface GigabitEthernet0/12
exit
ios:interface GigabitEthernet0/13
exit
...
!
!
...
admin@ncs# configure
Entering configuration mode terminal
admin@ncs(config)#
#!/bin/ncs_cli
ncs_cli --help
Usage: ncs_cli [options] [file]
Options:
--help, -h display this help
--host, -H <host> current host name (used in prompt)
--address, -A <addr> cli address to connect to
--port, -P <port> cli port to connect to
< ... output omitted ... >
admin@ncs# switch cli
admin@ncs(config)# devices device ce0..1 config ios:interface GigabitEthernet0/1 speed auto
admin@ncs(config-if)# top
admin@ncs(config)# show configuration
devices device ce0
config
ios:interface GigabitEthernet0/1
speed auto
exit
!
!
devices device ce1
config
ios:interface GigabitEthernet0/1
speed auto
exit
!
!
admin@ncs(config)# commit ?
Possible completions:
and-quit Exit configuration mode
check Validate configuration
comment Add a commit comment
commit-queue Commit through commit queue
label Add a commit label
no-confirm No confirm
no-networking Send nothing to the devices
no-out-of-sync-check Commit even if out of sync
no-overwrite Do not overwrite modified data on the device
no-revision-drop Fail if device has too old data model
save-running Save running to file
---
dry-run Show the diff but do not perform commit
[<cr>
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# show configuration rollback changes 10019
devices device ce0
config
ios:interface GigabitEthernet0/1
no speed auto
exit
!
!
devices device ce1
config
ios:interface GigabitEthernet0/1
no speed auto
exit
!
!
admin@ncs(config)# show configuration commit changes 10019
!
! Created by: admin
! Date: 2015-02-03 12:29:08
! Client: cli
!
devices device ce0
config
ios:interface GigabitEthernet0/1
speed auto
exit
!
!
devices device ce1
config
ios:interface GigabitEthernet0/1
speed auto
exit
!
!
admin@ncs(config)# rollback-files apply-rollback-file fixed-number 10019
admin@ncs(config)# show configuration
devices device ce0
config
ios:interface GigabitEthernet0/1
no speed auto
exit
!
!
devices device ce1
config
ios:interface GigabitEthernet0/1
no speed auto
exit
!
!
admin@ncs(config)# commit
Commit complete.
admin@ncs# show running-config | ?
Possible completions:
annotation Show only statements whose annotation matches a pattern
append Append output text to a file
begin Begin with the line that matches
best-effort Display data even if data provider is unavailable or
continue loading from file in presence of failures
context-match Context match
count Count the number of lines in the output
csv Show table output in CSV format
de-select De-select columns
details Display show/commit details
display Display options
exclude Exclude lines that match
extended Display referring entries
hide Hide display options
include Include lines that match
linnum Enumerate lines in the output
match-all All selected filters must match
match-any At least one filter must match
more Paginate output
nomore Suppress pagination
save Save output text to a file
select Select additional columns
sort-by Select sorting indices
tab Enforce table output
tags Show only statements whose tags matches a pattern
until End with the line that matches
admin@ncs# show running-config | count
Count: 1783 lines
admin@ncs# show running-config aaa | count
Count: 28 lines
admin@ncs# show running-config aaa | include aaa
aaa authentication users user admin
aaa authentication users user oper
aaa authentication users user private
aaa authentication users user public
admin@ncs# show running-config aaa authentication | exclude password
aaa authentication users user admin
uid 1000
gid 1000
ssh_keydir /var/ncs/homes/admin/.ssh
homedir /var/ncs/homes/admin
!
aaa authentication users user oper
uid 1000
gid 1000
ssh_keydir /var/ncs/homes/oper/.ssh
homedir /var/ncs/homes/oper
!
aaa authentication users user private
uid 1000
gid 1000
ssh_keydir /var/ncs/homes/private/.ssh
homedir /var/ncs/homes/private
!
aaa authentication users user public
uid 1000
gid 1000
ssh_keydir /var/ncs/homes/public/.ssh
homedir /var/ncs/homes/public
!
admin@ncs# show running-config aaa authentication | include -c 3 homes/admin
2- uid 1000
3- gid 1000
4- password $1$brH6BYLy$iWQA2T1I3PMonDTJOd0Y/1
5: ssh_keydir /var/ncs/homes/admin/.ssh
6: homedir /var/ncs/homes/admin
7-!
8-aaa authentication users user oper
9- uid 1000
admin@ncs# show running-config aaa authentication | context-match homes/admin
aaa authentication users user admin
ssh_keydir /var/ncs/homes/admin/.ssh
aaa authentication users user admin
homedir /var/ncs/homes/admin
admin@ncs# show running-config aaa authentication users | begin public
aaa authentication users user public
uid 1000
gid 1000
password $1$DzGnyJGx$BjxoqYEj0QKxwVX5fbfDx/
ssh_keydir /var/ncs/homes/public/.ssh
homedir /var/ncs/homes/public
!
admin@ncs# show running-config aaa | save /tmp/saved
admin@ncs# show running-config aaa | exclude password | save /tmp/saved
admin@ncs# show running-config aaa | include "(uid)|(gid)"
uid 1000
gid 1000
uid 1000
gid 1000
uid 1000
gid 1000
uid 1000
gid 1000
admin@ncs# show running-config devices device ce0 config
devices device ce0
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/10
exit
...
!
!
admin@ncs# show running-config devices device * config
devices device ce0
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/10
exit
...
!
!
devices device ce1
config
...
!
!
...
admin@ncs# show running-config devices device * config ios:ip http
devices device ce0
config
no ios:ip http secure-server
!
!
devices device ce1
config
no ios:ip http secure-server
!
!
...
admin@ncs# show running-config devices device * | \
select config ios:ip http
devices device ce0
config
no ios:ip http secure-server
!
!
devices device ce1
config
no ios:ip http secure-server
!
!
...
admin@ncs# show running-config devices device * | \
select config ios:ip http | \
select config ios:ip domain-lookup
devices device ce0
config
no ios:ip domain-lookup
no ios:ip http secure-server
!
!
devices device ce1
config
no ios:ip domain-lookup
no ios:ip http secure-server
!
!
...
admin@ncs# show running-config devices device * | \
select config ios:ip | \
de-select config ios:ip source-route
devices device ce0
config
no ios:ip domain-lookup
no ios:ip http secure-server
!
!
devices device ce1
config
no ios:ip domain-lookup
no ios:ip http secure-server
!
!
...
admin@ncs# show running-config devices device * | de-select config
devices device ce0
address 127.0.0.1
port 10022
ssh host-key ssh-dss
...
!
authgroup default
device-type cli ned-id cisco-ios
state admin-state unlocked
!
devices device ce1
...
!
...
admin@ncs# show running-config devices device * | \
de-select config | save /tmp/devices
admin@ncs# show running-config devices device * | \
select config cisco-ios-xr:interface GigabitEthernet 0/0/0/4
devices device p0
config
cisco-ios-xr:interface GigabitEthernet 0/0/0/4
shutdown
exit
!
!
devices device p1
config
cisco-ios-xr:interface GigabitEthernet 0/0/0/4
shutdown
exit
!
!
...
admin@ncs# show running-config devices device * | \
select config cisco-ios-xr:interface GigabitEthernet 0/0/0/4 | \
select config | match-all
devices device p0
config
cisco-ios-xr:hostname PE1
cisco-ios-xr:interface MgmtEth 0/0/CPU0/0
exit
...
cisco-ios-xr:interface GigabitEthernet 0/0/0/4
shutdown
exit
!
!
devices device p1
config
...
cisco-ios-xr:interface GigabitEthernet 0/0/0/4
shutdown
exit
!
!
...
admin@ncs# show running-config hosts | display json
{
"data": {
"pipetargets_model:hosts": {
"host": [
{
"name": "host1",
"enabled": true,
"numberOfServers": 2,
"servers": {
"server": [
{
"name": "serv1",
"ip": "192.168.0.1",
"port": 5001
},
{
"name": "serv2",
"ip": "192.168.0.1",
"port": 5000
}
]
}
},
{
"name": "host2",
"enabled": false,
"numberOfServers": 0
...
admin@ncs# show running-config hosts | display keypath
/hosts/host{host1} enabled
/hosts/host{host1}/numberOfServers 2
/hosts/host{host1}/servers/server{serv1}/ip 192.168.0.1
/hosts/host{host1}/servers/server{serv1}/port 5001
/hosts/host{host1}/servers/server{serv2}/ip 192.168.0.1
/hosts/host{host1}/servers/server{serv2}/port 5000
/hosts/host{host2} disabled
/hosts/host{host2}/numberOfServers 0
1..4,8,10..18
admin@ncs# show running-config devices device ce0..1,3
admin@ncs# show running-config devices device Firewall/1-2/*
admin@ncs# show running-config devices device Firewall/1-2/1,3
admin@ncs(config)# devices device ce0..2 config ios:ethernet cfm ieee
admin@ncs(config)# devices device ce0..2 config
admin@ncs(config-config)# ios:ethernet cfm ieee
admin@ncs(config-config)# show config
devices device ce0
config
ios:ethernet cfm ieee
!
!
devices device ce1
config
ios:ethernet cfm ieee
!
!
devices device ce2
config
ios:ethernet cfm ieee
!
!
admin@ncs# <space>
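The numeric range notation used in these commands (for example ce0..2 or 1..4,8,10..18) expands to a set of individual instances. A minimal Python sketch of the expansion, assuming simple comma-separated ranges (the parse_ranges helper is hypothetical, not part of NSO):

```python
def parse_ranges(expr):
    """Expand a range expression such as '1..4,8,10..18' into a list of ints."""
    result = []
    for part in expr.split(","):
        if ".." in part:
            lo, hi = part.split("..")
            result.extend(range(int(lo), int(hi) + 1))
        else:
            result.append(int(part))
    return result

# 'ce0..2' addresses devices ce0, ce1 and ce2
print(["ce%d" % i for i in parse_ranges("0..2")])  # prints ['ce0', 'ce1', 'ce2']
```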
Possible completions:
alarms Alarm management
autowizard Automatically query for mandatory elements
cd Change working directory
clear Clear parameter
cluster Cluster configuration
compare Compare running configuration to another
configuration or a file
complete-on-space Enable/disable completion on space
compliance Compliance reporting
config Manipulate software configuration information
describe Display transparent command information
devices The managed devices and device communication settings
display-level Configure show command display level
exit Exit the management session
file Perform file operations
help Provide help information
...
admin@ncs# dev<space>ices <space>
Possible completions:
check-sync Check if the NCS config is in sync with the device
check-yang-modules Check if NCS and the devices have compatible YANG
modules
clear-trace Clear all trace files
commit-queue List of queued commits
...
admin@ncs# devices check-s<space>ync
! Command file created by Joe Smith
! First show the configuration before we change it
show running-config
! Enter configuration mode and configure an ethernet setting on the ce0 device
config
devices device ce0 config ios:ethernet cfm global
commit
top
exit
exit
! Done
admin@ncs(config)# annotate aaa authentication users user admin \
"Only allow the XX department access to this user."
admin@ncs(config)# tag add aaa authentication users user oper oper_tag
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# do show running-config aaa authentication users user | \
tags oper_tag | display curly-braces
/* Tags: oper_tag */
user oper {
uid 1000;
gid 1000;
password $1$9qV138GJ$.olmolTfRbFGQhWJMZ9kA0;
ssh_keydir /var/ncs/homes/oper/.ssh;
homedir /var/ncs/homes/oper;
}
admin@ncs(config)# do show running-config aaa authentication users user | \
annotation XX | display curly-braces
/* Only allow the XX department access to this user. */
user admin {
uid 1000;
gid 1000;
password $1$EcQwYvnP$Rvq3MPTMSz29UaVOHA/511;
ssh_keydir /var/ncs/homes/admin/.ssh;
homedir /var/ncs/homes/admin;
}
admin@ncs# show c
-----------------^
syntax error:
Possible alternatives starting with c:
cli - Display cli settings
configuration - Commit configuration changes
admin@ncs# show configuration
------------------------------^
syntax error: expecting
commit - Commit configuration changes
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# nacm rule-list any-group rule allowrule
admin@ncs(config-rule-allowrule)# commit
Aborted: 'nacm rule-list any-group rule allowrule action' is not configured
admin@ncs# show cli
autowizard false
complete-on-space true
display-level 99999999
history 100
idle-timeout 1800
ignore-leading-space false
output-file terminal
paginate true
prompt1 \h\M#
prompt2 \h(\m)#
screen-length 71
screen-width 80
service prompt config true
show-defaults false
terminal xterm-256color
...
"foo\\/bar\\?baz"
-i, --ip IpAddress | IpAddress/Port
The IP (or IP address and port) which NSO reports that the user is connecting from. This value is used in the audit logs. Defaults to the information in the SSH_CONNECTION environment variable if set, 127.0.0.1 otherwise.
-v, --verbose
Produce additional output about the execution of the command, in particular during the initial handshake phase.
-n, --interactive
Force the CLI to echo prompts and commands. Useful when ncs_cli auto-detects it is not running in a terminal, e.g. when executing as a script, reading input from a file, or through a pipe.
-N, --noninteractive
Force the CLI to only show the output of the commands executed. Do not output the prompt or echo the commands, much like a shell does for a shell script.
-s, --stop-on-error
Force the CLI to terminate at the first error and use a non-zero exit code.
-E, --escape-char C
A special character that forcefully terminates the CLI when repeated three times in a row. Defaults to control underscore (Ctrl-_).
-J, -C
This flag sets the mode of the CLI. -J is Juniper style CLI, -C is Cisco XR style CLI.
-u, --user User
The username of the connecting user. Used for access control and group assignment in NSO (if the group mapping is kept in NSO). The default is to use the login name of the user.
-g, --groups GroupList
A comma-separated list of groups the connecting user is a member of. Used for access control by the AAA system in NSO to authorize data and command access. Defaults to the UNIX groups that the user belongs to, i.e. the same as the groups shell command returns.
-U, --uid Uid
The numeric user ID the user shall have. Used for executing OS commands on behalf of the user, when checking file access permissions, and when creating files. Defaults to the effective user ID (euid) in use for running the command. Note that NSO needs to run as root for this to work properly.
-G, --gid Gid
The numeric group ID the user shall have. Used for executing OS commands on behalf of the user, when checking file access permissions, and when creating files. Defaults to the effective group ID (egid) in use for running the command. Note that NSO needs to run as root for this to work properly.
-D, --gids GidList
A comma-separated list of supplementary numeric group IDs the user shall have. Used for executing OS commands on behalf of the user and when checking file access permissions. Defaults to the supplementary UNIX group IDs in use for running the command. Note that NSO needs to run as root for this to work properly.
-a, --noaaa
Completely disables all AAA checks for this CLI. This can be used as a disaster recovery mechanism if the AAA rules in NSO have somehow become corrupted.
-O, --opaque Opaque
Pass an opaque string to NSO. The string is not interpreted by NSO, only made available to application code. See built-in variables in clispec(5) and maapi_get_user_session_opaque() in confd_lib_maapi(3). The string can be given either via this flag or via the UNIX environment variable NCS_CLI_OPAQUE. The -O flag takes precedence.
r1r2
Concatenation. It matches r1 and then r2.
r+
Matches one or more rs.
r*
Matches zero or more rs.
r?
Matches zero or one rs.
(r)
Grouping. It matches r.
%NETCONF_SSH_PORT% - for /confdConfig/netconf/transport/ssh/port
%NETCONF_TCP_PORT% - for /confdConfig/netconf/transport/tcp/port
%CLI_SSH_PORT% - for /confdConfig/cli/ssh/port
%SNMP_PORT% - for /confdConfig/snmpAgent/port
%NAME% - for the name of the ConfD instance.
%COUNTER% - for the number of the ConfD instance
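As a sketch (not taken from any specific shipped package), a confd.conf.netsim fragment might use these variables as follows, so that each simulated instance gets unique ports:

```xml
<!-- hypothetical confd.conf.netsim fragment; ncs-netsim substitutes
     the %...% variables per simulated instance -->
<netconf>
  <transport>
    <ssh>
      <port>%NETCONF_SSH_PORT%</port>
    </ssh>
  </transport>
</netconf>
<cli>
  <ssh>
    <port>%CLI_SSH_PORT%</port>
  </ssh>
</cli>
```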
The Makefile should compile the YANG files so that ConfD can run them. The Makefile should also have an install target that installs all files required for ConfD to run one instance of a simulated network element. This is typically all fxs files.
An optional start.sh file where additional programs can be started. A good example of a package where the netsim component contains some additional C programs is the webserver package in the NSO website example $NCS_DIR/web-server-farm.
Run the application.
Possibly export the project for somebody else to run.
directory: Paths for package dependencies.
git
repo: Default git package repositories.
branch, tag, or commit ID.
netsim: List netsim devices used by the project to generate a proper Makefile running the ncs-project setup script.
device
prefix
num-devices
bundle: Information to collect files and packages to pack them in a tarball bundle.
name: tarball filename.
includes: Files to include.
package: Packages to include (leafref to the package list below).
name: Name of the package.
local, url, or git: Where to get the package. The Git option needs a branch, tag, or commit ID.
package: Packages used by the project.
name: Name of the package.
local, url, or git: Where to get the package. The Git option needs a branch, tag, or commit ID.

The else processing instruction should be used with care in this context, as the set of the ned-ids it handles depends on the set of ned-ids loaded in the system, which can be hard to predict at the time of developing the template. To mitigate this problem it is recommended that the package containing this template defines a set of supported-ned-ids as described in Namespaces and Multi-NED Support.
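As an example, a template guarded per NED identity might look like the following sketch (the ned-id values cisco-ios-cli-6.44 and router-nc-1.0 are hypothetical placeholders):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>{/device}</name>
      <config>
        <?if-ned-id cisco-ios-cli-6.44:cisco-ios-cli-6.44?>
          <!-- CLI-NED-specific configuration here -->
        <?elif-ned-id router-nc-1.0:router-nc-1.0?>
          <!-- NETCONF-NED-specific configuration here -->
        <?else?>
          <!-- fallback; risky unless supported-ned-ids is declared -->
        <?end?>
      </config>
    </device>
  </devices>
</config-template>
```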
admin@ncs(config)# devices authgroups group default umap
admin remote-name admin remote-password "admin\"admin"
$ ncs-make-package --netconf-ned ./pe-yang-files pe
$ ncs-make-package --netconf-ned ./ce-yang-files ce
$ (cd pe/src; make)
$ (cd ce/src; make)
$ ncs-netsim create-network ./ce 5 ce
$ ncs-netsim add-to-network ./pe 3 pe
$ ncs-netsim start
DEVICE ce0 OK STARTED
DEVICE ce1 OK STARTED
DEVICE ce2 OK STARTED
DEVICE ce3 OK STARTED
DEVICE ce4 OK STARTED
DEVICE pe0 OK STARTED
DEVICE pe1 OK STARTED
DEVICE pe2 OK STARTED
$ ncs-setup --netsim-dir ./netsim --dest NCS
$ cd NCS
$ cat README.ncs
.......
$ ncs
$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on zoe
admin@zoe> show configuration devices device ce0
address 127.0.0.1;
port 12022;
authgroup default;
device-type {
netconf;
}
state {
admin-state unlocked;
}
|----package-meta-data.xml
|----private-jar
|----shared-jar
|----netsim
|----|----start.sh
|----|----confd.conf.netsim
|----|----Makefile
|----src
|----|----ncsc-out
|----|----Makefile
|----|----yang
|----|----|----interfaces.yang
|----|----java
|----|----|----build.xml
|----|----|----src
|----|----|----|----com
|----|----|----|----|----example
|----|----|----|----|----|----ce
|----|----|----|----|----|----|----namespaces
|----doc
|----load-dir
$ ncs-netsim cli pe0
admin connected from 127.0.0.1 using console on zoe
admin@zoe> show configuration interface
No entries found.
[ok][2012-08-21 16:52:19]
admin@zoe> exit
$ cd $NCS_DIR/examples.ncs/mpls/mpls-devices
$ make all
....
$ ncs-netsim start
.....
$ ncs
$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on zoe
admin@zoe> show status packages package pe
package-version 1.0;
description "Generated netconf package";
ncs-min-version 2.0;
component pe {
ned {
netconf;
device {
vendor "Example Inc.";
}
}
}
oper-status {
up;
}
[ok][2012-08-22 14:45:30]
admin@zoe> request devices sync-from
sync-result {
device ce0
result true
}
sync-result {
device ce1
result true
}
sync-result {
.......
admin@zoe> show configuration devices device pe0 config if:interface
interface eth2 {
ip 10.0.12.9;
mask 255.255.255.252;
}
interface eth3 {
ip 10.0.17.13;
mask 255.255.255.252;
}
interface lo {
ip 10.10.10.1;
mask 255.255.0.0;
}
$ ncs-make-package --service-skeleton java myrfs
$ cd myrfs/src; make
admin@zoe> show status packages package myrfs
package-version 1.0;
description "Skeleton for a resource facing service - RFS";
ncs-min-version 2.0;
component RFSSkeleton {
callback {
java-class-name [ com.example.myrfs.myrfs ];
}
}
oper-status {
up;
}
[ok][2012-08-22 15:30:13]
admin@zoe> configure
Entering configuration mode private
[ok][2012-08-22 15:32:46]
[edit]
admin@zoe% set services myrfs s1 dummy 3.4.5.6
[ok][2012-08-22 15:32:56]
<component>
<name>RFSSkeleton</name>
<callback>
<java-class-name>com.example.myrfs.myrfs</java-class-name>
</callback>
</component> @ServiceCallback(servicePoint="myrfsspnt",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode root,
Properties opaque)
throws DpCallbackException {
String servicePath = null;
try {
servicePath = service.getKeyPath();
//Now get the single leaf we have in the service instance
// NavuLeaf sServerLeaf = service.leaf("dummy");
//..and its value (which is a ipv4-address )
// ConfIPv4 ip = (ConfIPv4)sServerLeaf.value();
//Get the list of all managed devices.
NavuList managedDevices = root.container("devices").list("device");
// iterate through all manage devices
for(NavuContainer deviceContainer : managedDevices.elements()){
// here we have the opportunity to do something with the
// ConfIPv4 ip value from the service instance,
// assume the device model has a path /xyz/ip, we could
// deviceContainer.container("config").
// .container("xyz").leaf(ip).set(ip);
//
// remember to use NAVU sharedCreate() instead of
// NAVU create() when creating structures that may be
// shared between multiple service instances
}
} catch (NavuException e) {
throw new DpCallbackException("Cannot create service " +
servicePath, e);
}
return opaque;
}
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint myrfsselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
/**
* Init method for selftest action
*/
@ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.INIT)
public void init(DpActionTrans trans) throws DpCallbackException {
}
/**
* Selftest action implementation for service
*/
@ActionCallback(callPoint="myrfsselftest", callType=ActionCBType.ACTION)
public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
try {
// Refer to the service yang model prefix
String nsPrefix = "myrfs";
// Get the service instance key
String str = ((ConfKey)kp[0]).toString();
return new ConfXMLParam[] {
new ConfXMLParamValue(nsPrefix, "success", new ConfBool(true)),
new ConfXMLParamValue(nsPrefix, "message", new ConfBuf(str))};
} catch (Exception e) {
throw new DpCallbackException("selftest failed", e);
}
}
typedef log-level-type {
type enumeration {
enum level-all {
value 1;
}
enum level-debug {
value 2;
}
enum level-error {
value 3;
}
enum level-fatal {
value 4;
}
enum level-info {
value 5;
}
enum level-off {
value 6;
}
enum level-trace {
value 7;
}
enum level-warn {
value 8;
}
}
description
"Levels of logging for Java packages in log4j.";
}
....
container java-vm {
....
container java-logging {
tailf:info "Configure Java Logging";
list logger {
tailf:info "List of loggers";
key "logger-name";
description
"Each entry in this list holds one representation of a logger with
a specific level defined by log-level-type. The logger-name
is the name of a Java package. logger-name can thus be for
example com.tailf.maapi, or com.tailf etc.";
leaf logger-name {
tailf:info "The name of the Java package";
type string;
mandatory true;
description
"The name of the Java package for which this logger
entry applies.";
}
leaf level {
tailf:info "Log-level for this logger";
type log-level-type;
mandatory true;
description
"Corresponding log-level for a specific logger.";
}
}
}
ncs@admin% set java-vm java-logging logger com.tailf.maapi level level-info
[ok][2010-11-05 15:11:47]
ncs@admin% commit
Commit complete.
container stdout-capture {
tailf:info "Capture stdout and stderr";
description
"Capture stdout and stderr from the Java VM.
Only applicable if auto-start is 'true'.";
leaf enabled {
tailf:info "Enable stdout and stderr capture";
type boolean;
default true;
}
leaf file {
tailf:info "Write Java VM output to file";
type string;
default "./ncs-java-vm.log";
description
"Write Java VM output to filename.";
}
leaf stdout {
tailf:info "Write output to stdout";
type empty;
description
"If present write output to stdout, useful together
with the --foreground flag to ncs.";
}
}
admin@ncs% set services web-site s1 ip 1.2.3.4 port 1111 url x.se
[ok][2013-03-25 10:46:46]
[edit]
admin@ncs% commit
Aborted: Service create failed
[error][2013-03-25 10:46:48]
This is a very generic error message which does not describe what really
happened in the Java code. Here the java-vm log has to be analyzed to find
the problem. However, with this CLI session still open, we can set the
error reporting level to trace from another CLI:
$ ncs_cli -u admin
admin@ncs> configure
admin@ncs% set java-vm exception-error-message verbosity trace
admin@ncs% commit
If we now issue the commit again in the original CLI session, we get the
following error message, which pinpoints the problem in the code:
admin@ncs% commit
Aborted: [com.tailf.dp.DpCallbackException] Service create failed
Trace : [java.lang.NullPointerException]
com.tailf.conf.ConfKey.hashCode(ConfKey.java:145)
java.util.HashMap.getEntry(HashMap.java:361)
java.util.HashMap.containsKey(HashMap.java:352)
com.tailf.navu.NavuList.refreshElem(NavuList.java:1007)
com.tailf.navu.NavuList.elem(NavuList.java:831)
com.example.websiteservice.websiteservice.WebSiteServiceRFS.crea...
com.tailf.nsmux.NcsRfsDispatcher.applyStandardChange(NcsRfsDispa...
com.tailf.nsmux.NcsRfsDispatcher.dispatch(NcsRfsDispatcher.java:...
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessor...
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethod...
java.lang.reflect.Method.invoke(Method.java:616)
com.tailf.dp.annotations.DataCallbackProxy.writeAll(DataCallback...
com.tailf.dp.DpTrans.protoCallback(DpTrans.java:1357)
com.tailf.dp.DpTrans.read(DpTrans.java:571)
com.tailf.dp.DpTrans.run(DpTrans.java:369)
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExec...
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExe...
java.lang.Thread.run(Thread.java:679)
com.tailf.dp.DpThread.run(DpThread.java:44)
[error][2013-03-25 10:47:09]
$ ncs --with-reload-packages
$ export NCS_RELOAD_PACKAGES=true
admin@iron> request packages reload
admin@iron> request packages package mypack redeploy
$ java com.tailf.ncs.NcsJVMLauncher
$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on iron.local
admin@iron> show configuration java-vm | details
stdout-capture {
enabled;
file ./logs/ncs-java-vm.log;
}
connect-time 30;
initialization-time 20;
synchronization-timeout-action log-stop;
java-thread-pool {
pool-config {
cfg-core-pool-size 5;
cfg-keep-alive-time 60;
cfg-maximum-pool-size 256;
}
}
jmx {
jndi-address 127.0.0.1;
jndi-port 9902;
jmx-address 127.0.0.1;
jmx-port 9901;
}
[ok][2012-07-12 10:45:59]
admin@iron% request java-vm restart
result Started
[ok][2012-07-12 10:57:08]
admin@iron% request packages package stats redeploy
result true
[ok][2012-07-12 10:59:01]
admin@iron% set java-vm exception-error-message verbosity trace
admin@iron% set java-vm java-logging logger com.tailf.ncs level level-error
[ok][2012-07-12 11:10:50]
admin@iron% set java-vm java-logging logger com.tailf.conf level level-error
[ok][2012-07-12 11:11:15]
admin@iron% commit
Commit complete.
$ ncs_load -F p -p /ncs:java-vm/java-logging > ./ncs-cdb/loglevels.xml
$ ncs-setup --reset
$ ncs
<java-vm>
<auto-start>false</auto-start>
</java-vm>
admin@iron> show status packages
admin@iron> show status java-vm
start-status auto-start-not-enabled;
status not-connected;
[ok][2012-07-12 11:27:28]
$ ncs-start-java-vm
.....
.. all stdout from NCS Java VM
admin@iron% request packages package stats redeploy
result true
[ok][2012-07-12 10:59:01]
$ ncs-setup --eclipse-setup
/ncs-config/japi/new-session-timeout
/ncs-config/japi/query-timeout
/ncs-config/japi/connect-timeout
$ cp $NCS_DIR/etc/ncs/ncs.conf .
<japi>
<new-session-timeout>PT1000S</new-session-timeout>
<query-timeout>PT1000S</query-timeout>
<connect-timeout>PT1000S</connect-timeout>
</japi>
$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on iron.local
admin@iron> configure
Entering configuration mode private
[ok][2012-07-12 12:54:13]
admin@iron% set devices global-settings connect-timeout 1000
[ok][2012-07-12 12:54:31]
[edit]
admin@iron% set devices global-settings read-timeout 1000
[ok][2012-07-12 12:54:39]
[edit]
admin@iron% set devices global-settings write-timeout 1000
[ok][2012-07-12 12:54:44]
[edit]
admin@iron% commit
Commit complete.
$ ncs_load -F p -p /ncs:devices/global-settings > ./ncs-cdb/global-settings.xml
"-Xdebug -Xrunjdwp:transport=dt_socket,address=9000,server=y,suspend=n"
<java-vm>
<start-command>ncs-start-java-vm -d</start-command>
</java-vm>
$ ncs-project create test_project
Creating directory: /home/developer/dev/test_project
Using NCS 5.7 found in /home/developer/ncs_dir
wrote project to /home/developer/dev/test_project
test_project/
|-- init_data
|-- logs
|-- Makefile
|-- ncs-cdb
|-- ncs.conf
|-- packages
|-- project-meta-data.xml
|-- README.ncs
|-- scripts
|-- |-- command
|-- |-- post-commit
|-- setup.mk
|-- state
|-- test
|-- |-- internal
|-- |-- |-- lux
|-- |-- |-- basic
|-- |-- |-- |-- Makefile
|-- |-- |-- |-- run.lux
|-- |-- |-- Makefile
|-- |-- Makefile
|-- Makefile
|-- pkgtest.env
<project-meta-data xmlns="http://tail-f.com/ns/ncs-project">
<name>test_project</name>
<project-version>1.0</project-version>
<description>Skeleton for a NCS project</description>
<!-- More things to be added here -->
</project-meta-data>
<!-- we will add a package-store section here -->
<!-- we will add a netsim section here -->
<package>
<name>cisco-ios</name>
<url>file:///tmp/ncs-4.1.2-cisco-ios-4.1.5.tar.gz</url>
</package>
<package>
<name>foo</name>
<git>
<repo>ssh://[email protected]/foo.git</repo>
<branch>stable</branch>
</git>
</package>
<package>
<name>mypack</name>
<local/>
</package>
<netsim>
<device>
<name>cisco-ios</name>
<prefix>ce</prefix>
<num-devices>2</num-devices>
</device>
</netsim>
$ ncs-project update -v
ncs-project: installing packages...
ncs-project: found local installation of "mypack"
ncs-project: unpacked tar file: /tmp/ncs-4.1.2-cisco-ios-4.1.5.tar.gz
ncs-project: git clone "ssh://[email protected]/foo.git" "/home/developer/dev/test_project/packages/foo"
ncs-project: git checkout -q "stable"
ncs-project: installing packages...ok
ncs-project: resolving package dependencies...
ncs-project: resolving package dependencies...ok
ncs-project: determining build order...
ncs-project: determining build order...ok
ncs-project: determining ncs-min-version...
ncs-project: determining ncs-min-version...ok
The file 'setup.mk' will be overwritten, Continue (y/n)?
<packages-store>
<git>
<repo>ssh://[email protected]</repo>
<branch>stable</branch>
</git>
</packages-store>
<!-- then it is enough to specify the package like this: -->
<package>
<name>foo</name>
<git/>
</package>
$ ncs-project export
$ ncs-project create --from-bundle=test_project-1.0.tar.gz
$ ncs-project --help
Usage: ncs-project <command>
COMMANDS
create Create a new ncs-project
update Update the project with any changes in the
project-meta-data.xml
git For each git package repo: execute an arbitrary git
command.
export Export a project, including init-data and configuration.
help Display the man page for <command>
OPTIONS
-h, --help Show this help text.
-n, --ncs-min-version Display the NCS version(s) needed
to run this project
--ncs-min-version-non-strict As -n, but include the non-matching
NCS version(s)
See manpage for ncs-project(1) for more info.
$ yanger -f tree tailf-ncs-project.yang
module: tailf-ncs-project
+--rw project-meta-data
+--rw name string
+--rw project-version? version
+--rw description? string
+--rw packages-store
| +--rw directory* [name]
| | +--rw name string
| +--rw git* [repo]
| +--rw repo string
| +--rw (git-type)?
| +--:(branch)
| | +--rw branch? string
| +--:(tag)
| | +--rw tag? string
| +--:(commit)
| +--rw commit? string
+--rw netsim
| +--rw device* [name]
| +--rw name -> /project-meta-data/package/name
| +--rw prefix string
| +--rw num-devices int32
+--rw bundle!
| +--rw name? string
| +--rw includes
| | +--rw file* [path]
| | +--rw path string
| +--rw package* [name]
| +--rw name -> ../../../package/name
| +--rw (package-location)?
| +--:(local)
| | +--rw local? empty
| +--:(url)
| | +--rw url? string
| +--:(git)
| +--rw git
| +--rw repo? string
| +--rw (git-type)?
| +--:(branch)
| | +--rw branch? string
| +--:(tag)
| | +--rw tag? string
| +--:(commit)
| +--rw commit? string
+--rw package* [name]
+--rw name string
+--rw (package-location)?
+--:(local)
| +--rw local? empty
+--:(url)
| +--rw url? string
+--:(git)
+--rw git
+--rw repo? string
+--rw (git-type)?
+--:(branch)
| +--rw branch? string
+--:(tag)
| +--rw tag? string
+--:(commit)
+--rw commit? string
<project-meta-data xmlns="http://tail-f.com/ns/ncs-project">
<name>l3vpn-demo</name>
<project-version>1.0</project-version>
<description>l3vpn demo</description>
<bundle>
<!-- filename default -->
<name>example_bundle</name>
<package>
<name>my-package-1</name>
<local/>
</package>
<!-- The same package as used by the project, but with a specific URL -->
<package>
<name>my-package-2</name>
<url>http://localhost:9999/my-local.tar.gz</url>
</package>
<package>
<name>my-package-3</name>
<git>
<repo>ssh://[email protected]/pkg/resource-manager.git</repo>
<tag>1.2</tag>
</git>
</package>
</bundle>
<package>
<name>my-package-1</name>
<local/>
</package>
<package>
<name>my-package-2</name>
<local/>
</package>
<package>
<name>my-package-3</name>
<git>
<repo>ssh://[email protected]/pkg/resource-manager.git</repo>
<tag>1.2</tag>
</git>
</package>
</project-meta-data>

<?set-context-node {expression}?>

<?save-context name?>

<?switch-context name?>

<?if-ned-id ned-ids?>
...
<?elif-ned-id ned-ids?>
...
<?else?>
...
<?end?>

<?if-ned-id-match regex?>
...
<?elif-ned-id-match regex?>
...
<?else?>
...
<?end?>

<?macro name params...?>
...
<?endmacro?>

<?expand name params...?>

Learn basic operational scenarios and common CLI commands.
This section helps you to get started with NSO, learn basic operational scenarios, and get acquainted with the most common CLI commands.
Make sure that you have installed NSO and that you have sourced the ncsrc file in $NCS_DIR. This sets up the paths and environment variables to run NSO. As this must be done every time before running NSO, it is recommended to add it to your profile.
We will use the NSO network simulator to simulate three Cisco IOS routers. NSO will talk Cisco CLI to those devices. You will use the NSO CLI and Web UI to perform the tasks. Sometimes you will use the native Cisco device CLI to inspect configuration or do out-of-band changes.
Note that both the NSO software (NCS) and the simulated network devices run on your local machine.
To start the simulator:
Go to examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios. First of all, we will generate a network simulator with three Cisco devices. They will be called c0, c1, and c2.\
{% hint style="info" %} Most of this section follows the procedure in the README file, so it is useful to have it opened as well. {% endhint %}
Perform the following command:\
This creates three simulated devices, all running Cisco IOS, named c0, c1, and c2.
The previous step started the simulated Cisco devices. It is now time to start NSO.
The first action is to prepare directories needed for NSO to run and populate NSO with information on the simulated devices. This is all done with the ncs-setup command. Make sure that you are in the examples.ncs/getting-started/using-ncs/1-simulated-cisco-ios directory. (Again ignore the details for the time being).\
Note the . at the end of the command referring to the current directory. What the command does is create directories needed for NSO in the current directory and populate NSO with devices that are running in netsim. We call this the "run-time" directory.
Start NSO.\
Let us analyze the above CLI command. First of all, when you start the NSO CLI it starts in operational mode, so to show configuration data, you have to explicitly run show running-config.
NSO manages a list of devices; each device is reached by the path devices device "name". You can use standard tab completion in the CLI to learn this.
The address and port fields tell NSO where to connect to the device. For now, they all live on localhost with different ports. The device-type structure tells NSO it is a CLI device and that the specific CLI is supported by the Network Element Driver (NED) cisco-ios. A more detailed explanation of how to configure the device-type structure and how to choose NEDs is given later in this guide.
So now NSO can try to connect to the devices:
NSO does not need to keep the connections active continuously; instead, it establishes a connection when needed, and connections are pooled to conserve resources. At this time, NSO can read the configurations from the devices and populate the configuration database, CDB.
The following command will synchronize the configurations of the devices with the CDB and respond with true if successful:
The NSO data store, CDB, stores the configuration for every device at the path devices device "name" config; everything after this path is the configuration on the device. NSO keeps this synchronized. The synchronization is managed by the following principles:
At initialization, NSO can discover the configuration as shown above.
The modus operandi when using NSO to perform configuration changes is that the network engineer uses NSO (CLI, Web UI, REST, ...) to modify the representation in the NSO CDB. The changes are committed to the network as a transaction that includes the actual devices. Only if all changes succeed on the actual devices is the transaction committed to the NSO data store. The transaction also covers the devices, so if any device participating in the transaction fails, NSO rolls back the configuration changes on all modified devices. This works even for devices that do not natively support rollback, such as the Cisco IOS CLI.
NSO can detect out-of-band changes and reconcile them by either updating the CDB or modifying the configuration on the devices to reflect the currently stored configuration.
NSO only needs to be synchronized with the devices in the event of a change being made outside of NSO. Changes made using NSO are reflected in both the CDB and the devices. The following sequence of actions is therefore unnecessary:
Perform configuration change via NSO.
Perform sync-from action.
The above incorrect (or at least unnecessary) sequence stems from the assumption that the NSO CLI talks directly to the devices. This is not the case: the northbound interfaces in NSO modify the configuration in the NSO data store, and NSO calculates a minimum difference between the current and the new configuration, handing only the changes to the NEDs, which run the commands on the devices. All of this happens as one single change-set.
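The minimum-diff idea can be sketched outside NSO as follows. This is a conceptual illustration, not the NSO implementation or API; the function and data names are hypothetical, and a flat dict stands in for the real configuration tree.

```python
# Conceptual sketch: given the configuration stored for a device (CDB)
# and the desired configuration after an edit, compute only the changes
# that actually need to be sent to the device.
def config_diff(current, desired):
    """Return (to_set, to_delete) between two flat config dicts."""
    to_set = {k: v for k, v in desired.items() if current.get(k) != v}
    to_delete = [k for k in current if k not in desired]
    return to_set, to_delete

current = {"hostname": "c0", "ip domain-lookup": "no"}
desired = {"hostname": "c0", "ip domain-lookup": "no",
           "ip source-route": "yes"}
to_set, to_delete = config_diff(current, desired)
# Only the new "ip source-route" leaf needs to be pushed southbound.
```

Because the CDB is kept in sync, the diff, not the full configuration, is what reaches the NEDs.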
View the configuration of the c0 device using the command:
Or, show a particular piece of configuration from several devices:
Or, show a particular piece of configuration from all devices:
The CLI can pipe commands, try TAB after | to see various pipe targets:
The above command shows the router config of all devices as XML and then saves it to a file router.xml.
To change the configuration, enter configure mode.\
Change or add some configuration across the devices, for example:\
It is important to understand how NSO applies configuration changes to the network. At this point, the changes are local to NSO; no configuration has been sent to the devices yet. Since the NSO Configuration Database (CDB) is in sync with the network, NSO can calculate the minimum diff needed to apply the changes to the network.
The command below compares the ongoing changes with the running database:
It is possible to dry-run the changes to see the native Cisco CLI output (in this case almost the same as above):
The changes can be committed to the devices and the NSO CDB simultaneously with a single commit. In the commit command below, we pipe to details to understand the actions being taken.
Changes are committed to the devices and the NSO database as one transaction. If any of the device configurations fail, all changes are rolled back: the devices are left in the state they were in before the commit, and the NSO CDB is not updated.
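The all-or-nothing behavior can be sketched as follows. This is illustrative only; `commit` and `push` are hypothetical stand-ins for what NSO does internally, with changes represented as `{key: (old, new)}` per device.

```python
def commit(changes, push):
    """Apply per-device change dicts; on any failure, restore the old
    values on every device already modified, then re-raise."""
    applied = []
    try:
        for device, change in changes.items():
            push(device, {k: new for k, (old, new) in change.items()})
            applied.append(device)
    except Exception:
        for device in applied:        # roll back devices changed so far
            change = changes[device]
            push(device, {k: old for k, (old, new) in change.items()})
        raise

state = {"c0": {"bgp": "64512"}, "c1": {"bgp": "64512"}}

def push(device, values):             # c1 simulates a failing device
    if device == "c1":
        raise RuntimeError("device timeout")
    state[device].update(values)

changes = {"c0": {"bgp": ("64512", "64513")},
           "c1": {"bgp": ("64512", "64513")}}
try:
    commit(changes, push)
except RuntimeError:
    pass
# c0 was modified first, then rolled back when c1 failed.
```

The key point mirrored here is that a partial success never survives: the devices end up either all changed or all unchanged.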
There are numerous options to the commit command which will affect the behavior of the atomic transactions:
As seen by the details output, NSO stores a roll-back file for every commit so that the whole transaction can be rolled back manually. The following is an example of a rollback file:
(Viewing files is an operational command; prefixing a command with do in configuration mode executes it in operational mode.) To perform a manual rollback, first load the rollback file:
apply-rollback-file by default restores to that saved configuration; adding selective as a parameter rolls back only the delta in that specific rollback file. Show the differences:
Commit the rollback:
A trace log can be created to see what is going on between NSO and the device CLI. Use the following command to enable the trace:
Note that the trace settings only take effect for new connections, so it is important to disconnect the current connections. Make a change to, for example, c0:
Note the use of the command commit dry-run outformat native. This displays the device commands that would be generated over the native interface, without committing anything to the CDB or the devices. In addition, appending the reverse flag displays the device commands that would return the network to the current running state, if the commit were executed successfully.
Exit from the NSO CLI and return to the Unix Shell. Inspect the CLI trace:
As seen above, ranges can be used to send configuration commands to several devices. Device groups can be created to allow for group actions that do not require naming conventions. A group can reference any number of devices. A device can be part of any number of groups, and groups can be hierarchical.
The command sequence below creates a group of core devices and a group with all devices. Note that you can use tab completion when adding the device names to the group. Also, note that it requires configuration mode. (If you are still in the Unix shell from the steps above, run $ ncs_cli -C -u admin.)
Note the do show command, which shows the operational data for the groups. Device groups have a member attribute that lists all member devices, flattening any group members.
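The flattening of hierarchical groups into the member attribute can be sketched like this (a toy model, not NSO code; the dict layout is an assumption for illustration):

```python
def members(groups, name):
    """Flatten a device group into its set of member devices,
    recursing through subgroups (groups can be hierarchical)."""
    group = groups[name]
    devices = set(group.get("device-name", []))
    for sub in group.get("device-group", []):
        devices |= members(groups, sub)
    return devices

# Mirrors the example: "all" contains c2 directly plus the "core" group.
groups = {
    "core": {"device-name": ["c0", "c1"]},
    "all":  {"device-name": ["c2"], "device-group": ["core"]},
}
```

With this layout, `members(groups, "all")` yields all three devices, matching the MEMBER column shown in the output below.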
Device groups can contain different devices as well as devices from different vendors. Configuration changes will be committed to each device in its native language without needing to be adjusted in NSO.
You can, for example, at this point use the group to check whether all core devices are in sync:
Assume that we would like to manage permit lists across devices. This can be achieved by defining templates and applying them to device groups. The following CLI sequence defines a tiny template, called community-list:
This can now be applied to a device group:
What if the device group core contained devices from different vendors? Since the configuration is written in IOS, the above template would not work on Juniper devices. Templates can be used on different device types (read: NEDs) by using a prefix for the device model. The template would then look like:
The above indicates how NSO manages different models for different device types. When NSO connects to the devices, the NED checks the device type and revision and returns that to NSO. This can be inspected (note, in operational mode):
So here we see that c0 uses the tailf-ned-cisco-ios module, which tells NSO which data model to use for the device. Every NED package comes with a YANG data model for the device (except for third-party YANG NEDs, for which the YANG device model must be downloaded and fixed before it can be used). This renders the NSO data store (CDB) schema, the NSO CLI, the Web UI, and the southbound commands.
The model introduces namespace prefixes for every configuration item. This also resolves issues around different vendors using the same configuration command for different configuration elements. Note that every item is prefixed with ios:
Another important question is how to control whether the template merges or replaces the list. This is managed via tags. The default behavior of templates is to merge the configuration. Tags can be inserted at any point in the template. Tag values are merge, replace, delete, create, and nocreate.
Assume that c0 has the following configuration:
If we apply the template the default result would be:
We could change the template in the following way to get a result where the permit list would be replaced rather than merged. When working with tags in templates, it is often helpful to view the template as a tree rather than a command view. The CLI has a display option for showing a curly-braces tree view that corresponds to the data-model structure rather than the command set. This makes it easier to see where to add tags.
Different tags can be added across the template tree. If we now apply the template to the device c0, which already has community lists, the following happens:
Any existing values in the list are replaced in this case. The following tags are available:
merge (default): the template changes will be merged with the existing configuration.
replace: the template configuration will be replaced by the new configuration.
create: the template will create those nodes that do not exist. If a node already exists, this will result in an error.
Note that a template can have different tags along the tree nodes.
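The difference between the tags can be sketched for a flat list of entries. This is a deliberate simplification: real templates apply tags at any node of the configuration tree, and the function below is hypothetical, not the NSO template engine.

```python
def apply(existing, template, tag="merge"):
    """Toy model of template tag semantics over flat config dicts."""
    if tag == "replace":
        return dict(template)            # existing entries are removed
    if tag == "create":
        clash = set(existing) & set(template)
        if clash:
            raise ValueError(f"already exists: {clash}")
    if tag == "nocreate":
        # Only modify entries that already exist; never create new ones.
        template = {k: v for k, v in template.items() if k in existing}
    result = dict(existing)
    result.update(template)              # merge (the default)
    return result

device = {"1": "permit", "2": "deny", "standard s": "permit"}
tmpl = {"standard test1": "permit 64000:40"}
```

With merge, test1 is added alongside the existing lists; with replace, only test1 survives, which is what the CLI example below demonstrates.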
A problem with the above template is that every value is hard-coded. What if you wanted a template where the community-list name and permit-list value are variables passed to the template when applied? Any part of a template can be a variable (or, in fact, an XPath expression). We can modify the template to use variables in the following way:
The template now requires two parameters when applied (tab completion will prompt for the variable):
Note that the replace tag was still part of the template; it would delete any existing community lists, which is probably not the desired outcome in the general case.
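The {$VAR} substitution itself is straightforward; here is a minimal sketch (not the NSO template engine, which also accepts full XPath expressions rather than plain string variables):

```python
import re

def expand(line, variables):
    """Substitute {$NAME} placeholders with supplied values, as when
    applying the template with the LIST-NAME and AS parameters."""
    return re.sub(r"\{\$([\w-]+)\}",
                  lambda m: variables[m.group(1)], line)

line = "ip community-list standard {$LIST-NAME} permit {$AS}"
result = expand(line, {"LIST-NAME": "test2", "AS": "60000:30"})
```

Applying the template with LIST-NAME test2 and AS 60000:30 then produces the concrete IOS line.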
The template mechanism described so far is "fire-and-forget". The templates do not have any memory of what happened to the network or which devices they touched. A user can modify the templates without anything happening to the network until an explicit apply-template action is performed. (Templates are, of course, applied as a transaction, like all configuration changes.) NSO also supports service templates, which are more advanced in many ways; more information on this will be presented later in this guide.
Also, note that device templates have some additional restrictions on the values that can be supplied when applying the template. In particular, a value must either be a number or a single-quoted string. It is currently not possible to specify a value that contains a single quote (').
To make sure that configuration is applied according to site or corporate rules, you can use policies. Policies are validated at every commit. They can be of type error, which means the change cannot go through, or warning, which means you have to confirm a configuration change that triggers the warning.
A policy is composed of:
Policy name.
Iterator: loop over a path in the model, for example, all devices, all services of a specific type.
Expression: a boolean expression that must be true for every node returned from the iterator, for example, SNMP must be turned on.
Warning or error: a message displayed to the user. If it is of the type warning, the user can still commit the change, if of type error the change cannot be made.
An example is shown below:
Now, if we try to delete a class-map a, we will get a policy violation:
The {name} variable refers to the node set from the iterator. This node-set will be the list of devices in NSO and the devices have an attribute called 'name'.
To understand the syntax of the expressions, a pipe target in the CLI can be used:
To debug policies, look at the end of logs/xpath.trace. This file shows all validated XPath expressions and any errors.
Validation scripts can also be defined in Python; see more about that in .
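The iterator-plus-expression pattern can be sketched in plain Python (illustrative only; this is not the NSO validation-script API, and the dict layout is an assumption):

```python
def validate(policy, devices):
    """For every node returned by the iterator, the expression must
    hold; otherwise, collect the warning message for that node."""
    messages = []
    for node in policy["foreach"](devices):
        if not policy["expr"](node):
            messages.append(policy["message"].format(name=node["name"]))
    return messages

devices = [
    {"name": "c0", "class-maps": ["a", "m"]},
    {"name": "c2", "class-maps": ["m"]},
]
policy = {
    "foreach": lambda devs: devs,                  # /devices/device
    "expr":    lambda d: "a" in d["class-maps"],   # class-map[name='a']
    "message": "Device {name} must have a class-map a",
}
warnings = validate(policy, devices)
```

This mirrors the class-map rule above: the iterator selects every device, the expression checks for class-map a, and the message is produced per failing node.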
In reality, network engineers will still modify configurations using other tools, like an out-of-band CLI or other management interfaces. It is important to understand how NSO manages this. The NSO network simulator supports a CLI towards the devices. For example, we can use the IOS CLI on, say, c0 and delete a permit list.
From the UNIX shell, start a CLI session towards c0.
Start the NSO CLI again:
NSO detects if its configuration copy in CDB differs from the configuration in the device. Various strategies are used depending on device support: transaction IDs, time stamps, and configuration hash-sums. For example, an NSO user can request a check-sync operation:
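The hash-sum strategy can be sketched as follows (a conceptual illustration; the mechanism NSO actually uses for check-sync depends on what the device and NED support):

```python
import hashlib

def config_digest(text):
    """One check-sync strategy: a hash-sum over the device config,
    compared against the digest recorded at the last sync."""
    return hashlib.sha256(text.encode()).hexdigest()

cdb_copy = "ip community-list standard test1 permit\n"
# An out-of-band change added a line on the device:
on_device = cdb_copy + "ip community-list standard test2 permit 60000:30\n"
in_sync = config_digest(cdb_copy) == config_digest(on_device)
```

A digest mismatch tells NSO the device is out of sync without transferring or diffing the full configuration.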
NSO can also compare the configurations with the CDB and show the difference:
At this point, we can choose if we want to use the configuration stored in the CDB as the valid configuration or the configuration on the device:
In the above example, we chose to overwrite the device configuration from NSO.
NSO will also detect out-of-sync when committing changes. In the following scenario, a local c0 CLI user adds an interface. Later the NSO user tries to add an interface:
At this point, we have two diffs:
The device and NSO CDB (devices device compare-config).
The ongoing transaction and CDB (show configuration).
To resolve this, you can choose to synchronize the configuration between the devices and the CDB before committing. There is also an option to override the out-of-sync check:
Or:
As noted before, all changes are applied as complete transactions across all of the devices: either all configuration changes complete successfully, or none are applied at all. Consider a simple case where one of the devices is not responding. To the transaction manager, an error response from a device and a non-responding device are both errors, and the transaction automatically rolls back to the state before the commit command was issued.
Stop c0:
Go back to the NSO CLI and perform a configuration change over c0 and c1:
NSO sends commands to all devices in parallel, not sequentially. If any of the devices fail to accept the changes or report an error, NSO issues a rollback to the other devices. Note that this also works for non-transactional devices like IOS CLI and SNMP, and even for non-symmetrical cases where the rollback command sequence is not just the reverse of the original commands. NSO does this by treating the rollback as it would any other configuration change: from the current and previous configurations, it generates the commands needed to roll back the changes.
The configuration diff is still in the private CLI session; it can be restored, modified (if the error was due to something in the configuration), or, in some cases, the device itself can be fixed before retrying.
NSO is not a best-effort configuration management system. The error reporting coupled with the ability to completely rollback failed changes to the devices, ensures that the configurations stored in the CDB and the configurations on the devices are always consistent and that no failed or orphan configurations are left on the devices.
First of all, if the above was not a multi-device transaction, meaning that the change should be applied independently device per device, then it is just a matter of performing a separate commit per device.
Second, NSO has the commit flags commit-queue async and commit-queue sync. The commit queue should primarily be used for throughput reasons when making configuration changes in large networks. Atomic transactions come with a cost: the critical section of the database is locked while the transaction is committed to the network. So, in cases where northbound systems of NSO generate many simultaneous large configuration changes, these might get queued. The commit queue sends the device commands after the lock has been released, so the database lock is held for a much shorter time. If any device fails, an alarm is raised.
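The commit-queue idea, decoupling the database lock from the device push and raising alarms instead of rolling back, can be sketched like this (a toy model, not the NSO implementation; class and method names are hypothetical):

```python
from collections import deque

class CommitQueue:
    """Under the database lock, only a queue entry is created; pushing
    to devices happens afterwards. A failed push raises an alarm
    instead of rolling back the already-committed transaction."""
    def __init__(self):
        self.queue = deque()
        self.alarms = []
    def enqueue(self, device, commands):  # fast: done while lock is held
        self.queue.append((device, commands))
    def run(self, push):                  # slow: done after lock release
        while self.queue:
            device, commands = self.queue.popleft()
            try:
                push(device, commands)
            except Exception as exc:
                self.alarms.append((device, str(exc)))

cq = CommitQueue()
cq.enqueue("c0", ["router bgp 64512"])
cq.enqueue("c1", ["router bgp 64512"])

def push(device, commands):
    if device == "c1":
        raise RuntimeError("connection refused")

cq.run(push)
# c1 fails: an alarm is recorded, but c0's push is unaffected.
```

The trade-off is visible in the sketch: throughput improves because the lock only covers enqueueing, but a device failure no longer triggers a transactional rollback.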
Go to the UNIX shell, start the device, and monitor the commit queue:
Devices can also be pre-provisioned; this means that the configuration can be prepared in NSO and pushed to the device when it becomes available. To illustrate this, we start by adding a new device to NSO that is not available in the network simulator:
Above, we added a new device to NSO with the address localhost and port 10030. This device does not exist in the network simulator. We can tell NSO not to send any commands southbound by setting the admin-state to southbound-locked (actually the default). This means that all configuration changes will succeed, and the result will be stored in CDB. At any point in time, when the device becomes available in the network, the state can be changed and the complete configuration pushed to the new device. The CLI sequence below also illustrates a powerful copy configuration command that can copy any configuration from one device to another. The from and to paths are separated by the keyword to.
As shown above, check-sync operations will tell the user that the device is southbound locked. When the device is available in the network, the device can be synchronized with the current configuration in the CDB using the sync-to action.
Different users or management tools can of course run parallel sessions to NSO. All ongoing sessions have a logical copy of CDB. An important case to understand is what happens when multiple users attempt to modify the same device configuration at the same time with conflicting changes. First, let's look at the CLI sequence below: user admin to the left, user joe to the right.
There is no conflict in the above sequence, community is a list so both joe and admin can add items to the list. Note that user joe gets information about the user admin committing.
On the other hand, if two users modify an ordered-by user list in such a way that one user rearranges the list, along with other non-conflicting modifications, and one user deletes the entire list, the following happens:
In this case, joe commits a change to access-list after admin and a conflict message is displayed. Since the conflict is non-resolvable, the transaction has to be reverted. To reapply the changes made by joe to logging in a new transaction, the following commands are entered:
In this case, joe tries to reapply the changes made in the previous transaction and since access-list 10 has been removed, the move command will fail when applied by the reapply-commands command. Since the mode is best-effort, the next command will be processed. The changes to logging will succeed and joe then commits the transaction.
Run the CLI toward one of the simulated devices.\
This shows that the device has some initial configurations.
Start the NSO CLI as user admin with a Cisco XR-style CLI.\
NSO also supports a J-style CLI, which is started by adding a -J modifier to the command like this.
Throughout this user guide, we will show the commands in Cisco XR style.
At this point, NSO only knows the address, port, and authentication information of the devices. This management information was loaded to NSO by the setup utility. It also tells NSO how to communicate with the devices by using NETCONF, SNMP, Cisco IOS CLI, etc. However, at this point, the actual configuration of the individual devices is unknown.\
nocreate: the merge will only affect configuration items that already exist in the template. It will never create the configuration with this tag, or any associated commands inside it. It will only modify existing configuration structures.
delete: delete anything from this point.


$ ncs-netsim cli-i c1
admin connected from 127.0.0.1 using console *
c1> enable
c1# show running-config
class-map m
match mpls experimental topmost 1
match packet length max 255
match packet length min 2
match qos-group 1
!
...
c1# exit

admin@ncs# show running-config devices device
devices device c0
address 127.0.0.1
port 10022
...
authgroup default
device-type cli ned-id cisco-ios
state admin-state unlocked
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
!
! ...

$ ncs-netsim create-network $NCS_DIR/packages/neds/cisco-ios 3 c

$ ncs-setup --netsim-dir ./netsim --dest .

$ ncs

admin@ncs# devices connect
connect-result {
device c0
result true
info (admin) Connected to c0 - 127.0.0.1:10022
}
connect-result {
device c1
result true
info (admin) Connected to c1 - 127.0.0.1:10023
}
connect-result {
device c2
result true
info (admin) Connected to c2 - 127.0.0.1:10024
}
...

admin@ncs# devices sync-from
sync-result {
device c0
result true
}
...

admin@ncs# show running-config devices device c0 config
devices device c0
config
no ios:service pad
ios:ip vrf my-forward
bgp next-hop Loopback 1
!
...

admin@ncs# show running-config devices device c0..2 config ios:router
devices device c0
config
ios:router bgp 64512
aggregate-address 10.10.10.1 255.255.255.251
neighbor 1.2.3.4 remote-as 1
neighbor 1.2.3.4 ebgp-multihop 3
neighbor 2.3.4.5 remote-as 1
neighbor 2.3.4.5 activate
neighbor 2.3.4.5 capability orf prefix-list both
neighbor 2.3.4.5 weight 300
!
!
!
devices device c1
config
ios:router bgp 64512
...

admin@ncs# show running-config devices device config ios:router

admin@ncs# show running-config devices device config ios:router \
| display xml | save router.xml

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)#

admin@ncs(config)# devices device c0..2 config ios:router bgp 64512
neighbor 10.10.10.0 remote-as 64502
admin@ncs(config-router)#

admin@ncs(config-router)# top
admin@ncs(config)# show configuration
devices device c0
config
ios:router bgp 64512
neighbor 10.10.10.0 remote-as 64502
...

admin@ncs(config)# commit dry-run outformat native
native {
device {
name c0
data router bgp 64512
neighbor 10.10.10.0 remote-as 64502
!
...

admin@ncs% commit | details

admin@ncs(config)# commit TAB
Possible completions:
and-quit Exit configuration mode
check Validate configuration
comment Add a commit comment
commit-queue Commit through commit queue
label Add a commit label
no-confirm No confirm
no-networking Send nothing to the devices
no-out-of-sync-check Commit even if out of sync
no-overwrite Do not overwrite modified data on the device
no-revision-drop Fail if device has too old data model
save-running Save running to file
---
dry-run Show the diff but do not perform commit

admin@ncs(config)# do file show logs/rollback1000
Possible completions:
rollback10001 rollback10002 rollback10003 \
rollback10004 rollback10005
admin@ncs(config)# do file show logs/rollback10005
# Created by: admin
# Date: 2014-09-03 14:35:10
# Via: cli
# Type: delta
# Label:
# Comment:
# No: 10005
ncs:devices {
ncs:device c0 {
ncs:config {
ios:router {
ios:bgp 64512 {
delete:
ios:neighbor 10.10.10.0;
}
}
}
}

admin@ncs(config)# rollback-files apply-rollback-file fixed-number 10005

admin@ncs(config)# show configuration
devices device c0
config
ios:router bgp 64512
no neighbor 10.10.10.0 remote-as 64502
!
!
!
devices device c1
config
ios:router bgp 64512
no neighbor 10.10.10.0 remote-as 64502
!
!
!
devices device c2
config
ios:router bgp 64512
no neighbor 10.10.10.0 remote-as 64502
!
!
!admin@ncs(config)# commit
Commit complete.

admin@ncs(config)# devices global-settings trace raw trace-dir logs
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# devices disconnect

admin@ncs(config)# devices device c0 config ios:interface FastEthernet
1/2 ip address 192.168.1.1 255.255.255.0
admin@ncs(config-if)# commit dry-run outformat native
admin@ncs(config-if)# commit

$ less logs/ned-cisco-ios-c0.trace

admin@ncs(config)# devices device-group core device-name [ c0 c1 ]
admin@ncs(config-device-group-core)# commit
admin@ncs(config)# devices device-group all device-name c2 device-group core
admin@ncs(config-device-group-all)# commit
admin@ncs(config)# show full-configuration devices device-group
devices device-group all
device-name [ c2 ]
device-group [ core ]
!
devices device-group core
device-name [ c0 c1 ]
!
admin@ncs(config)# do show devices device-group
NAME MEMBER INDETERMINATES CRITICALS MAJORS MINORS WARNINGS
-------------------------------------------------------------------------
all [ c0 c1 c2 ] 0 0 0 0 0
core [ c0 c1 ] 0 0 0 0 0

admin@ncs# devices device-group core check-sync
sync-result {
device c0
result in-sync
}
sync-result {
device c1
result in-sync
}

admin@ncs(config)# devices template community-list
ned-id cisco-ios-cli-3.0
config ios:ip
community-list standard test1
permit permit-list 64000:40
admin@ncs(config-permit-list-64000:40)# commit
Commit complete.
admin@ncs(config-permit-list-64000:40)# top
admin@ncs(config)# show full-configuration devices template
devices template community-list
config
ios:ip community-list standard test1
permit permit-list 64000:40
!
!
!
!
[ok][2013-08-09 11:27:28]

admin@ncs(config)# devices device-group core apply-template \
template-name community-list
admin@ncs(config)# show configuration
devices device c0
config
ios:ip community-list standard test1 permit 64000:40
!
!
devices device c1
config
ios:ip community-list standard test1 permit 64000:40
!
!
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c0
data ip community-list standard test1 permit 64000:40
}
device {
name c1
data ip community-list standard test1 permit 64000:40
}
}
admin@ncs(config)# commit
Commit complete.

template community-list {
config {
junos:configuration {
...
}
ios:ip {
...
}

admin@ncs# show devices device module
NAME NAME REVISION FEATURES DEVIATIONS
-------------------------------------------------------------------
c0 tailf-ned-cisco-ios 2014-02-12 - -
tailf-ned-cisco-ios-stats 2014-02-12 - -
c1 tailf-ned-cisco-ios 2014-02-12 - -
tailf-ned-cisco-ios-stats 2014-02-12 - -
c2 tailf-ned-cisco-ios 2014-02-12 - -
tailf-ned-cisco-ios-stats 2014-02-12 - -

admin@ncs# show running-config devices device c0 config ios:ip community-list
devices device c0
config
ios:ip community-list 1 permit
ios:ip community-list 2 deny
ios:ip community-list standard s permit
ios:ip community-list standard test1 permit 64000:40
!
!

admin@ncs# show running-config devices device c0 config ios:ip community-list
devices device c0
config
ios:ip community-list 1 permit
ios:ip community-list 2 deny
ios:ip community-list standard s permit

admin@ncs# show running-config devices device c0 config ios:ip community-list
devices device c0
config
ios:ip community-list 1 permit
ios:ip community-list 2 deny
ios:ip community-list standard s permit
ios:ip community-list standard test1 permit 64000:40
!
!

admin@ncs(config)# show full-configuration devices template
devices template community-list
config
ios:ip community-list standard test1
permit permit-list 64000:40
!
!
!
!
admin@ncs(config)# show full-configuration devices \
template | display curly-braces
template community-list {
config {
ios:ip {
community-list {
standard test1 {
permit {
permit-list 64000:40;
}
}
}
}
}
}
admin@ncs(config)# tag add devices template community-list
ned-id cisco-ios-cli-3.0
config ip community-list replace
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# show full-configuration devices
template | display curly-braces
template community-list {
config {
ios:ip {
/* Tags: replace */
community-list {
standard test1 {
permit {
permit-list 64000:40;
}
}
}
}
}
}

admin@ncs(config)# show full-configuration devices device c0 \
config ios:ip community-list
devices device c0
config
ios:ip community-list 1 permit
ios:ip community-list 2 deny
ios:ip community-list standard s permit
ios:ip community-list standard test1 permit 64000:40
!
!
admin@ncs(config)# devices device c0 apply-template \
template-name community-list
admin@ncs(config)# show configuration
devices device c0
config
no ios:ip community-list 1 permit
no ios:ip community-list 2 deny
no ios:ip community-list standard s permit
!
!

admin@ncs(config)# no devices template community-list config ios:ip \
community-list standard test1
admin@ncs(config)# devices template community-list config ios:ip \
community-list standard \
{$LIST-NAME} permit permit-list {$AS}
admin@ncs(config-permit-list-{$AS})# commit
Commit complete.
admin@ncs(config-permit-list-{$AS})# top
admin@ncs(config)# show full-configuration devices template
devices template community-list
config
ios:ip community-list standard {$LIST-NAME}
permit permit-list {$AS}
!
!
!
!

admin@ncs(config)# devices device-group all apply-template
template-name community-list variable { name LIST-NAME value 'test2' }
variable { name AS value '60000:30' }
admin@ncs(config)# commit

admin@ncs(config)# policy rule class-map
Possible completions:
error-message Error message to print on expression failure
expr XPath 1.0 expression that returns a boolean
foreach XPath 1.0 expression that returns a node set
warning-message Warning message to print on expression failure
admin@ncs(config)# policy rule class-map foreach /devices/device \
expr config/ios:class-map[name='a'] \
warning-message "Device {name} must have a class-map a"
admin@ncs(config-rule-class-map)# top
admin@ncs(config)# commit
Commit complete.
admin@ncs(config)# show full-configuration policy
policy rule class-map
foreach /devices/device
expr config/ios:class-map[ios:name='a']
warning-message "Device {name} must have a class-map a"
!
admin@ncs(config)# no devices device c2 config ios:class-map match-all a
admin@ncs(config)# validate
Validation completed with warnings:
Device c2 must have a class-map a
admin@ncs(config)# commit
The following warnings were generated:
Device c2 must have a class-map a
Proceed? [yes,no] yes
Commit complete.
admin@ncs(config)# validate
Validation completed with warnings:
Device c2 must have a class-map a
admin@ncs(config)# show full-configuration devices device c2 config \
ios:class-map | display xpath
/ncs:devices/ncs:device[ncs:name='c2']/ncs:config/ \
ios:class-map[ios:name='cmap1']/ios:prematch match-all
...4-Sep-2014::11:05:30.103 Evaluating XPath for policy: class-map:
/devices/device
get_next(/ncs:devices/device) = {c0}
XPath policy match: /ncs:devices/device{c0}
get_next(/ncs:devices/device{c0}) = {c1}
XPath policy match: /ncs:devices/device{c1}
get_next(/ncs:devices/device{c1}) = {c2}
XPath policy match: /ncs:devices/device{c2}
get_next(/ncs:devices/device{c2}) = false
exists("/ncs:devices/device{c2}/config/class-map{a}") = true
exists("/ncs:devices/device{c1}/config/class-map{a}") = true
exists("/ncs:devices/device{c0}/config/class-map{a}") = true
$ ncs-netsim cli-i c0
c0> enable
c0# configure
Enter configuration commands, one per line. End with CNTL/Z.
c0(config)# show full-configuration ip community-list
ip community-list standard test1 permit
ip community-list standard test2 permit 60000:30
c0(config)# no ip community-list standard test2
c0(config)#
c0# exit
$ ncs_cli -C -u admin
admin@ncs# devices check-sync
sync-result {
device c0
result out-of-sync
info got: e54d27fe58fda990797d8061aa4d5325 expected: 36308bf08207e994a8a83af710effbf0
}
sync-result {
device c1
result in-sync
}
sync-result {
device c2
result in-sync
}
admin@ncs# devices device-group core check-sync
sync-result {
device c0
result out-of-sync
info got: e54d27fe58fda990797d8061aa4d5325 expected: 36308bf08207e994a8a83af710effbf0
}
sync-result {
device c1
result in-sync
}
admin@ncs# devices device c0 compare-config
diff
devices {
device c0 {
config {
ios:ip {
community-list {
+ standard test1 {
+ permit {
+ }
+ }
- standard test2 {
- permit {
- permit-list 60000:30;
- }
- }
}
}
}
}
}
admin@ncs# devices sync-
Possible completions:
sync-from Synchronize the config by pulling from the devices
sync-to Synchronize the config by pushing to the devices
admin@ncs# devices sync-to
$ ncs-netsim cli-i c0
c0> enable
c0# configure
Enter configuration commands, one per line. End with CNTL/Z.
c0(config)# interface FastEthernet 1/0 ip address 192.168.1.1 255.255.255.0
c0(config-if)#
c0# exit
$ ncs_cli -C -u admin
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# devices device c0 config ios:interface \
FastEthernet1/1 ip address 192.168.1.1 255.255.255.0
admin@ncs(config-if)# commit
Aborted: Network Element Driver: device c0: out of sync
admin@ncs(config)# devices device c0 compare-config
diff
devices {
device c0 {
config {
ios:interface {
FastEthernet 1/0 {
ip {
address {
primary {
+ mask 255.255.255.0;
+ address 192.168.1.1;
}
}
}
}
}
}
}
}
admin@ncs(config)# show configuration
devices device c0
config
ios:interface FastEthernet1/1
ip address 192.168.1.1 255.255.255.0
exit
!
!
admin@ncs(config)# commit no-out-of-sync-check
admin@ncs(config)# devices global-settings out-of-sync-commit-behaviour
Possible completions:
accept reject
$ ncs-netsim stop c0
DEVICE c0 STOPPED
admin@ncs(config)# devices device c0 config ios:ip community-list \
standard test3 permit 50000:30
admin@ncs(config-config)# devices device c1 config ios:ip \
community-list standard test3 permit 50000:30
admin@ncs(config-config)# top
admin@ncs(config)# show configuration
devices device c0
config
ios:ip community-list standard test3 permit 50000:30
!
!
devices device c1
config
ios:ip community-list standard test3 permit 50000:30
!
!
admin@ncs(config)# commit
Aborted: Failed to connect to device c0: connection refused: Connection refused
admin@ncs(config)# *** ALARM connection-failure: Failed to connect to
device c0: connection refused: Connection refused
admin@ncs(config)# commit commit-queue async
commit-queue-id 2236633674
Commit complete.
admin@ncs(config)# do show devices commit-queue | notab
devices commit-queue queue-item 2236633674
age 11
status executing
devices [ c0 c1 c2 ]
transient c0
reason "Failed to connect to device c0: connection refused"
is-atomic true
$ ncs-netsim start c0
DEVICE c0 OK STARTED
$ ncs_cli -C -u admin
admin@ncs# show devices commit-queue
devices commit-queue queue-item 2236633674
age 11
status executing
devices [ c0 c1 c2 ]
transient c0
reason "Failed to connect to device c0: connection refused"
is-atomic true
admin@ncs# show devices commit-queue
devices commit-queue queue-item 2236633674
age 11
status executing
devices [ c0 c1 c2 ]
is-atomic true
admin@ncs# show devices commit-queue
% No entries found.
admin@ncs(config)# devices device c3 address 127.0.0.1 port 10030 \
authgroup default device-type cli
ned-id cisco-ios
admin@ncs(config-device-c3)# state admin-state southbound-locked
admin@ncs(config-device-c3)# commit
admin@ncs(config)# copy cfg merge devices device c0 config \
ios:ip community-list to \
devices device c3 config ios:ip community-list
admin@ncs(config)# show configuration
devices device c3
config
ios:ip community-list standard test2 permit 60000:30
ios:ip community-list standard test3 permit 50000:30
!
!
admin@ncs(config)# commit
admin@ncs(config)# devices check-sync
...
sync-result {
device c3
result locked
}
admin@ncs(config)# devices device c0 config ios:snmp-server community fozbar
joe@ncs(config)# devices device c0 config ios:snmp-server community fezbar
admin@ncs(config-config)# commit
System message at 2014-09-04 13:15:19...
Commit performed by admin via console using cli.
joe@ncs(config-config)# commit
joe@ncs(config)# show full-configuration devices device c0 config ios:snmp-server
devices device c0
config
ios:snmp-server community fezbar
ios:snmp-server community fozbar
!
!
admin@ncs(config)# no devices device c0 config access-list 10
joe@ncs(config)# move devices device c0 config access-list 10 permit 168.215.202.0 0.0.0.255 first
joe@ncs(config)# devices device c0 config logging history informational
joe@ncs(config)# devices device c0 config logging source-interface Vlan512
joe@ncs(config)# devices device c0 config logging 10.1.22.122
joe@ncs(config)# devices device c0 config logging 66.162.108.21
joe@ncs(config)# devices device c0 config logging 50.58.29.21
admin@ncs(config)# commit
System message at 2022-09-01 14:17:59...
Commit performed by admin via console using cli.
joe@ncs(config-config)# commit
Aborted: Transaction 542 conflicts with transaction 562 started by user admin: 'devices device c0 config access-list 10' read-op on-descendant write-op delete in work phase(s)
--------------------------------------------------------------------------
This transaction is in a non-resolvable state.
To attempt to reapply the configuration changes made in the CLI,
in a new transaction, revert the current transaction by running
the command 'revert' followed by the command 'reapply-commands'.
--------------------------------------------------------------------------
joe@ncs(config)# revert no-confirm
joe@ncs(config)# reapply-commands best-effort
move devices device c0 config access-list 10 permit 168.215.202.0 0.0.0.255 first
Error: on line 1: move devices device c0 config access-list 10 permit 168.215.202.0 0.0.0.255 first
devices device c0 config
logging history informational
logging facility local0
logging source-interface Vlan512
logging 10.1.22.122
logging 66.162.108.21
logging 50.58.29.21
joe@ncs(config-config)# show config
logging facility local0
logging history informational
logging 10.1.22.122
logging 50.58.29.21
logging 66.162.108.21
logging source-interface Vlan512
joe@ncs(config-config)# commit
Commit complete.
$ ncs-netsim start
DEVICE c0 OK STARTED
DEVICE c1 OK STARTED
DEVICE c2 OK STARTED
$ ncs_cli -C -u admin
$ ncs_cli -J -u admin
Deep dive into service implementation.
Before you Proceed
This section discusses the implementation details of services in NSO. The reader should already be familiar with the concepts described in the introductory sections and Implementing Services.
For an introduction to services, see Develop a Simple Service instead.
Each service type in NSO extends a part of the data model (a list or a container) with the ncs:servicepoint statement and the ncs:service-data grouping. This is what defines an NSO service.
The service point instructs NSO to involve the service machinery (Service Manager) for management of that part of the data tree and the ncs:service-data grouping contains definitions common to all services in NSO. Defined in tailf-ncs-services.yang, ncs:service-data includes parts that are required for the proper operation of FASTMAP and the Service Manager. Every service must therefore use this grouping as part of its data model.
In addition, ncs:service-data provides a common service interface to the users, consisting of:
While not part of ncs:service-data as such, you may consider the service-commit-queue-event notification part of the core service interface. The notification provides information about the state of the service when the service uses the commit queue. As an example, an event-driven application uses this notification to find out when a service instance has been deployed to the devices. See the showcase_rc.py script in examples.ncs/development-guide/concurrency-model/perf-stack/ for sample Python code, leveraging the notification. See tailf-ncs-services.yang for the full definition of the notification.
NSO Service Manager is responsible for providing the functionality of the common service interface, requiring no additional user code. This interface is the same for classic and nano services, whereas nano services further extend the model.
NSO calls into Service Manager when accessing actions and operational data under the common service interface, or when the service instance configuration data (the data under the service point) changes. NSO being a transactional system, configuration data changes happen in a transaction.
When applied, a transaction goes through multiple stages, as shown by the progress trace (e.g. using commit | details in the CLI). The detailed output breaks up the transaction into four distinct phases:
validation
write-start
prepare
commit
These phases deal with how the network-wide transactions work:
The validation phase prepares and validates the new configuration (including NSO copy of device configurations), then the CDB processes the changes and prepares them for local storage in the write-start phase.
The prepare stage sends out the changes to the network through the Device Manager and the HA system. The changes are staged (e.g. in the candidate data store) and validated if the device supports it, otherwise, the changes are activated immediately.
If all systems accept the new configuration, the transaction enters the commit phase, marking the new NSO configuration as active and activating or committing the staged configuration on remote devices. Otherwise, it enters the abort phase, discarding the changes and asking NEDs to revert any activated changes on devices that do not support transactions (e.g. those without a candidate data store).
There are also two types of locks involved with the transaction that are of interest to the service developer; the service write lock and the transaction lock. The latter is a global lock, required to serialize transactions, while the former is a per-service-type lock for serializing services that cannot be run in parallel. See for more details and their impact on performance.
The first phase, historically called validation, does more than just validate data and is the phase a service deals with the most. The other three support the NSO service framework, but a service developer rarely interacts with them directly.
We can further break down the first phase into the following stages:
rollback creation
pre-transform validation
transforms
full data validation
When the transaction starts applying, NSO captures the initial intent and creates a rollback file, which allows one to reverse or roll back the intent. For example, the rollback file might contain the information that you changed a service instance parameter but it would not contain the service-produced device changes.
Then the first, partial validation takes place. It ensures the service input parameters are valid according to the service YANG model, so the service code can safely use provided parameter values.
Next, NSO runs transaction hooks and performs the necessary transforms, which alter the data before it is saved, for example encrypting passwords. This is also where the Service Manager invokes FASTMAP and service mapping callbacks, recording the resulting changes. NSO takes service write locks in this stage, too.
After transforms, there are no more changes to the configuration data, and the full validation starts, including YANG model constraints over the complete configuration, custom validation through validation points, and configuration policies (see in Operation and Usage).
Throughout the phase, the transaction engine makes checkpoints, so it can restart the transaction faster in case of concurrency conflicts. The check for conflicts happens at the end of this first phase when NSO also takes the global transaction lock. Concurrency is further discussed in .
The main callback associated with a service point is the create callback, designed to produce the required (new) configuration, while FASTMAP takes care of the other operations, such as update and delete.
NSO implements two additional, optional callbacks for scenarios where create is insufficient. These are pre- and post-modification callbacks that NSO invokes before (pre) or after (post) create. These callbacks work outside of the scope tracked by FASTMAP. That is, changes done in pre- and post-modification do not automatically get removed during the update or delete of the service instance.
For example, you can use the pre-modification callback to check the service prerequisites (pre-check) or make changes that you want persisted even after the service is removed, such as enabling some global device feature. The latter may be required when NSO is not the only system managing the device and removing the feature configuration would break non-NSO managed services.
Similarly, you might use post-modification to reset the configuration to some default after the service is removed. Say the service configures an interface on a router for customer VPN. However, when the service is deprovisioned (removed), you don't want to simply erase the interface configuration. Instead, you want to put it in shutdown and configure it for a special, unused VLAN. The post-modification callback allows you to achieve this goal.
The main difference from create callback is that pre- and post-modification are called on update and delete, as well as service create. Since the service data node may no longer exist in case of delete, the API for these callbacks does not supply the service object. Instead, the callback receives the operation and key path to the service instance. See the following API signatures for details.
The Python callbacks use the following function arguments:
tctx: A TransCtxRef object containing transaction data, such as user session and transaction handle information.
op: Integer representing operation: create (ncs.dp.NCS_SERVICE_CREATE), update (ncs.dp.NCS_SERVICE_UPDATE), or delete (ncs.dp.NCS_SERVICE_DELETE) of the service instance.
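To illustrate how the op argument drives the callback logic, here is a minimal stand-alone sketch of a post-modification dispatcher. The constant names mirror ncs.dp, but the integer values and the returned strings are illustrative assumptions; real NSO code would import the real constants and write configuration instead of returning messages.

```python
# Stand-in constants mirroring ncs.dp (values here are illustrative assumptions)
NCS_SERVICE_CREATE = 0
NCS_SERVICE_UPDATE = 1
NCS_SERVICE_DELETE = 2

def cb_post_modification(op, kp):
    """Sketch of a post-modification callback: it receives the operation and
    the key path to the service instance, not the service object itself,
    since on delete the service data node may no longer exist."""
    if op == NCS_SERVICE_DELETE:
        # e.g. shut down the interface and park it on an unused VLAN
        return f"reset config for deleted service at {kp}"
    elif op == NCS_SERVICE_CREATE:
        return f"post-create checks for {kp}"
    else:
        return f"post-update checks for {kp}"
```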
The Java callbacks use the following function arguments:
context: A ServiceContext object for accessing root and service instance NavuNode in the current transaction.
operation: ServiceOperationType enum representing operation: CREATE, UPDATE, DELETE of the service instance.
See examples.ncs/development-guide/services/post-modification-py and examples.ncs/development-guide/services/post-modification-java examples for a sample implementation of the post-modification callback.
Additionally, you may implement these callbacks with templates. Refer to for details.
FASTMAP greatly simplifies service code, so it usually only needs to deal with the initial mapping. NSO achieves this by first discarding all the configuration performed during the create callback of the previous run. In other words, the service create code always starts anew, with a blank slate.
If you need to keep some private service data across runs of the create callback, or pass data between callbacks, such as pre- and post-modification, you can use opaque properties.
The opaque object is available in the service callbacks as an argument, typically named proplist (Python) or opaque (Java). It contains a set of named properties with their corresponding values.
If you wish to use the opaque properties, it is crucial that your code returns the properties object from the create call, otherwise, the service machinery will not save the new version.
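In the Python API, the proplist is a list of (name, value) string pairs. The helper functions below are a minimal sketch (the helper names and the ALLOCATED-ID property are illustrative, not part of the NSO API) of how create-callback code typically reads and updates the opaque before returning it:

```python
def get_prop(proplist, name, default=None):
    """Return the value of a named property from an opaque proplist."""
    for key, value in proplist:
        if key == name:
            return value
    return default

def set_prop(proplist, name, value):
    """Return a new proplist with the property set, replacing any old value."""
    out = [(k, v) for k, v in proplist if k != name]
    out.append((name, value))
    return out

# Typical use inside cb_create: read, update, and then *return* the proplist;
# otherwise the service machinery will not save the new version.
props = set_prop([], "ALLOCATED-ID", "42")
```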
One difference from the pre- and post-modification callbacks, which also persist data outside of FASTMAP: NSO deletes the opaque data when the service instance is deleted, whereas the pre- and post-modification changes remain.
The examples.ncs/development-guide/services/post-modification-py and examples.ncs/development-guide/services/post-modification-java examples showcase the use of opaque properties.
NSO by default enables concurrent scheduling and execution of services to maximize throughput. However, concurrent execution can be problematic for non-thread-safe services or services that are known to always conflict with themselves or other services, such as when they read and write the same shared data. See for details.
To prevent NSO from scheduling a service instance together with an instance of another service, declare a static conflict in the service model, using the ncs:conflicts-with extension. The following example shows a service with two declared static conflicts, one with itself and one with another service, named other-service.
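A sketch of such a declaration, assuming ncs:conflicts-with is given as a substatement of ncs:servicepoint (the list and leaf names are illustrative):

```yang
list example-service {
  key name;
  leaf name { type string; }

  uses ncs:service-data;
  ncs:servicepoint example-service {
    // Serialize with other instances of this service and of other-service
    ncs:conflicts-with example-service;
    ncs:conflicts-with other-service;
  }
}
```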
This means each service instance will wait for any earlier-started instances of example-service or other-service to finish before proceeding.
FASTMAP knows that a particular piece of configuration belongs to a service instance, allowing NSO to revert the change as needed. But what happens when several service instances share a resource that may or may not exist before the first service instance is created? If the service implementation naively checks for existence and creates the resource when it is missing, then the resource will be tracked with the first service instance only. If, later on, this first instance is removed, then the shared resource is also removed, affecting all other instances.
A well-known solution to this kind of problem is reference counting. NSO uses reference counting by default with XML templates and the Python Maagic API, while with the Java Maapi and Navu APIs you must use the sharedCreate(), sharedSet(), and sharedSetValues() functions.
When enabled, the reference counter allows the FASTMAP algorithm to keep track of usage and delete data only when the last service instance referring to it is removed.
Furthermore, containers and list items created using the sharedCreate() and sharedSetValues() functions also get an additional attribute called backpointer. (But this functionality is currently not available for individual leafs.)
backpointer points back to the service instance that created the entity in the first place. This makes it possible to look at part of the configuration, say under /devices tree, and answer the question: which parts of the device configuration were created by which service?
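Conceptually, the bookkeeping amounts to a reference count plus a set of backpointers per shared node. The toy model below (not the NSO implementation) illustrates why a shared node survives until the last referring service instance is removed:

```python
class SharedNode:
    """Toy model of a FASTMAP-managed shared configuration node."""
    def __init__(self):
        self.refcount = 0
        self.backpointer = set()  # service instances that share this node

class Tree:
    def __init__(self):
        self.nodes = {}

    def shared_create(self, path, service):
        """Create-or-reference a node, like sharedCreate()."""
        node = self.nodes.setdefault(path, SharedNode())
        node.refcount += 1
        node.backpointer.add(service)
        return node

    def service_delete(self, path, service):
        """Drop one service's reference; remove the node when unreferenced."""
        node = self.nodes[path]
        node.refcount -= 1
        node.backpointer.discard(service)
        if node.refcount == 0:  # last referrer gone: delete the data
            del self.nodes[path]

tree = Tree()
path = "/devices/device{c1}/config/interface{0/1}"
tree.shared_create(path, "iface-a")
tree.shared_create(path, "iface-b")
tree.service_delete(path, "iface-a")
# The node is still present: iface-b still refers to it.
```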
To see reference counting in action, start the examples.ncs/implement-a-service/iface-v3 example with make demo and configure a service instance.
Then configure another service instance with the same parameters and use the display service-meta-data pipe to show the reference counts and backpointers:
Notice how commit dry-run produces no new device configuration but the system still tracks the changes. If you wish, remove the first instance and verify the GigabitEthernet 0/1 configuration is still there, but is gone when you also remove the second one.
But what happens if the two services produce different configurations for the same node? Say, one sets the IP address to 10.1.2.3 and the other to 10.1.2.4. Conceptually, these two services are incompatible, and instantiating both at the same time produces a broken configuration (instantiating the second service instance breaks the configuration for the first). What is worse is that the current configuration depends on the order the services were deployed or re-deployed. For example, re-deploying the first service will change the configuration from 10.1.2.4 back to 10.1.2.3 and vice versa. Such inconsistencies break the declarative configuration model and really should be avoided.
In practice, however, NSO does not prevent services from producing such configuration. But note that we strongly recommend against it and that there are associated limitations, such as service un-deploy not reverting configuration to that produced by the other instance (but when all services are removed, the original configuration is still restored).
The commit | debug service pipe command warns about any such conflict that it finds but may miss conflicts on individual leafs. The best practice is to use integration tests in the service development life cycle to ensure there are no conflicts, especially when multiple teams develop their own set of services that are to be deployed on the same NSO instance.
Much like a service in NSO can provision device configurations, it can also provision other, non-device data, including other services. We call the approach of services provisioning other services 'service stacking', and the services involved, 'stacked' services.
Service stacking concepts usually come into play for bigger, more complex services. There are a number of reasons why you might prefer stacked services to a single monolithic one:
Smaller, more manageable services with simpler logic.
Separation of concerns and responsibility.
Clearer ownership across teams for (parts of) overall service.
Smaller services reusable as components across the solution.
Stacked services are also the basis for LSA, which takes this concept even further. See for details.
The standard naming convention with stacked services distinguishes between a Resource-Facing Service (RFS), that directly configures one or more devices, and a Customer-Facing Service (CFS), that is the top-level service, configuring only other services, not devices. There can be more than two layers of services in the stack, too.
While NSO does not prevent a single service from configuring devices as well as services, in the majority of cases this results in a less clean design and is best avoided.
Overall, creating stacked services is very similar to the non-stacked approach. First, you can design the RFS services as usual. Actually, you might take existing services and reuse those. These then become your lower-level services, since they are lower in the stack.
Then you create a higher-level service, say a CFS, that configures another service, or a few, instead of a device. You can even use a template-only service to do that, such as:
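A template-only CFS might look like the following sketch, where the template creates an instance of the lower-level iface service rather than device configuration (the namespace and leaf names are illustrative assumptions, not the exact iface model):

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0">
  <!-- Configure the lower-level iface service instead of a device;
       leaf names below are illustrative -->
  <iface xmlns="http://example.com/iface">
    <name>instance1</name>
    <interface>0/1</interface>
    <ip-address>10.1.2.3</ip-address>
  </iface>
</config-template>
```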
The preceding example references an existing iface service, such as the one in the examples.ncs/implement-a-service/iface-v3 example. The output shows hard-coded values but you can change those as you would for any other service.
In practice, you might find it beneficial to modularize your data model and potentially reuse parts in both, the lower- and higher-level service. This avoids duplication while still allowing you to directly expose some of the lower-level service functionality through the higher-level model.
The most important principle to keep in mind is that the data created by any service is owned by that service, regardless of how the mapping is done (through code or templates). If the user deletes a service instance, FASTMAP will automatically delete whatever the service created, including any other services. Likewise, if the operator directly manipulates service data that is created by another service, the higher-level service becomes out of sync. The check-sync service action checks this for services as well as devices.
In stacked service design, the lower-level service data is under the control of the higher-level service and must not be directly manipulated. Only the higher-level service may manipulate that data. However, two higher-level services may manipulate the same structures, since NSO performs reference counting (see ).
Designing services in NSO offers a great deal of flexibility with multiple approaches available to suit different needs. But what’s the best way to go about it? At its core, a service abstracts a network service or functionality, bridging user-friendly inputs with network configurations. This definition leaves the implementation open-ended, providing countless possibilities for designing and building services. However, there are certain techniques and best practices that can help enhance performance and simplify ongoing maintenance, making your services more efficient and easier to manage.
Regardless of the type of service chosen—whether Java, Python, or plain template services—there are certain design patterns that can be followed to improve their long-term effectiveness. Rather than diving into API-level specifics, we’ll focus on higher-level design principles, with an emphasis on leveraging the stacked service approach for maximum efficiency and scalability.
When designing a service, the first step is to identify the functionality of the network service and the corresponding device configurations it encompasses. The service should then be designed to generate those configurations. These configurations can either be static—hard-coded into the service if they remain consistent across all instances—or dynamic, represented as variables that adapt based on the service’s input parameters.
The flexibility in service design is virtually limitless, as both Java and Python can be used to define services, allowing for the generation of static or dynamic configurations based on minimal input. Ultimately, the goal is to have the service efficiently represent as much of the required device configuration as possible, while minimizing the number of input parameters.
When striving to achieve the goal of producing comprehensive device configurations, it's common to end up with a service that generates an extensive set of configurations. At first glance, this might seem ideal; however, it can introduce significant performance challenges.
As the volume of a service's device configurations increases, its performance often declines. Both creating and modifying the service take longer, regardless of whether the change involves a single line of configuration or the entire set. In fact, the execution time of the service remains consistent for all modifications and increases proportionally with the size of the configurations it generates.
The underlying reason for this behavior is tied to FASTMAP. Without delving too deeply into its mechanics, FASTMAP essentially runs the service logic anew with every deploy or re-deploy (modification), regenerating all the device configurations from scratch. This process not only re-executes user-defined logic—whether in Java, Python, or templates—but also tasks NSO with generating the reverse diffset for the service. As the size of the reverse diffset grows, so does the computational load, leading to slower performance.
From this, it's clear that writing efficient service logic is crucial. Optimizing the time complexity of operations within the service callbacks will naturally improve performance, just as with any other software. However, there's a less obvious yet equally important factor to consider: minimizing the service diffset. A smaller diffset results in better performance overall.
At first glance, this might seem to contradict the initial goal of representing as much configuration as possible with minimal input parameters. This apparent conflict is where the concept of stacked services comes into play, offering a way to balance these priorities effectively.
We want a service to generate as much configuration as possible, but it doesn’t need to handle everything on its own. While a single service becomes slower as it takes on more, distributing the workload across multiple services introduces a new dimension of optimization.
For example, consider a simple service that configures interface descriptions. While not a real network service, it serves as a useful illustration of the impact of heavy operations and large diffsets. Let's explore how this approach can help optimize performance.
Each service instance will take, as input, a list of devices to configure and the number of interfaces to be configured for each device.
The callback will then iterate through each provided device, creating interfaces and assigning descriptions in a loop.
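Stripped of the NSO APIs, the create logic amounts to a nested loop. The stand-alone sketch below returns a dict for illustration (the real callback writes the same entries through the Maagic API, and the interface names and description text are illustrative):

```python
def build_config(devices, num_interfaces):
    """Sketch of the monolithic create callback: one service instance
    produces interface descriptions for every device it is given."""
    config = {}
    for device in devices:
        for i in range(num_interfaces):
            # One entry per (device, interface) pair ends up in the diffset
            config[(device, f"GigabitEthernet0/{i}")] = f"Managed by NSO, if {i}"
    return config

# 3 devices x 10 interfaces -> 30 entries in a single service diffset
cfg = build_config(["c0", "c1", "c2"], 10)
```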
When evaluating the service's performance, there are two key aspects to consider: the callback execution time and the time NSO takes to calculate the diffset. To analyze these, we can use NSO’s progress trace to gather statistics. Let’s start with an example involving three devices and 10 interfaces:
The two key events we need to focus on are the create event for the service, which provides the execution time of the create callback, and the "saving reverse diff-set and applying changes" event, which shows how long NSO took to calculate the reverse diff-set.
Let’s capture the same data for 100 and 1000 interfaces to compare the results.
We can observe that the time scales proportionally with the workload in the create callback as well as the size of the diffset. To demonstrate that the time remains consistent regardless of the size of the modification, we add one more interface to the 1000 interfaces already configured.
From the progress trace, we can see that adding one interface took about the same amount of time as adding 1000 interfaces.
FASTMAP offers significant benefits to our solution, but this performance trade-off is an unavoidable cost. As a result, our service will remain consistently slow for all modifications as long as it handles large-scale device configurations. To address this, our focus must shift to reducing the size of the device configuration.
The solution lies in distributing the configurations across multiple services while assigning the main service the role of managing these individual services. By analyzing the current service's functionality, we can easily identify how to break it down—by device. Instead of having a single service provisioning multiple devices, we will transition to a setup where one main service provisions multiple sub-services, with each sub-service responsible for provisioning a single device. The resulting structure will look as follows.
We'll begin by renaming our python-service to upper-python-service. This distinction is purely for clarity and to differentiate the two service types. In practice, the naming itself is not critical, as long as it aligns with the desired naming conventions for the northbound API, which represents the customer-facing service. The upper-python-service will still function as the main service that users interact with to configure interfaces on multiple devices, just as in the previous example.
The upper-python-service, however, will not provision any devices directly. Instead, it will delegate that responsibility to another layer of services by creating and managing those subordinate services.
The lower-python-service will be created by the upper-python-service and will ultimately handle provisioning the device. This service is designed to take only a single device as input, which corresponds to the device it will provision. The behavior and interaction between the two services can be observed in the Python callbacks that define their logic.
The upper service creates a lower service for each device, and each lower service is responsible for provisioning its assigned device and populating its interfaces. This approach distributes the workload, reducing the load on individual services. The upper service loops over the total number of devices and generates a diffset consisting of the input parameters for each lower service. Each lower service then loops over the interfaces for its specific device and creates a diffset covering all interfaces for that device.
All of this happens within a single NSO transaction, ensuring that, from the user’s perspective, the behavior remains identical to the previous design.
At this point, you might wonder: if this still occurs in a single transaction and the total number of loops and combined diffset size remain unchanged, how does this improve performance? That’s a valid observation. When creating a large dataset all at once, this approach doesn’t provide a performance gain—in fact, the addition of an extra service layer might introduce a minimal and negligible amount of overhead.
However, the real benefit becomes apparent in update scenarios, as we’ll illustrate below.
We begin by creating the service to configure 1000 interfaces for each device.
The execution time of the upper-python-service turned out to be relatively low, as expected. This is because it only involves a loop with three iterations, where data is passed from the input of the upper-python-service to each corresponding lower-python-service.
Similarly, calculating the diffset is also efficient. The reverse diffset for the upper-python-service only includes the configuration for the lower-python-services, which consists of just a few lines. This minimal complexity keeps both execution time and diffset calculation fast and lightweight.
In the same transaction, we also observe the execution of the three lower-python-services.
Each service callback took approximately 8 seconds to execute, and calculating the diffset took around 2.5 seconds per service. This results in a total callback execution time of about 24 seconds and a total diffset calculation time of around 8 seconds, which is less than the time required in the previous service design.
So, what’s the advantage of stacking services like this? The real benefit becomes evident during updates. Let’s add an interface to device CE-1, just as we did with the previous design, to illustrate this.
Observing the progress trace generated for this scenario would give a clearer understanding. From the trace, we see that the upper-python-service was invoked and executed just as quickly as it did during the initial deployment. The same applies to the callback execution and diffset calculation time for the lower-python-service handling CE-1.
But what about CE-2 and PE-1? Interestingly, there are no traces of these services in the log. That’s because they were never executed. The modification was passed only to the relevant lower-python-service for CE-1, while the other two services remained untouched.
And that is the power of stacked services.
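A back-of-the-envelope model makes the saving concrete. The sketch below is plain Python, not the NSO API; it simply counts how many create-callback iterations each design performs when a single interface is added to CE-1.

```python
# Toy cost model for FASTMAP re-deploys; not the NSO API.

def monolithic_redeploy(interfaces_per_device):
    """A single service recomputes every interface on every device."""
    return sum(interfaces_per_device.values())

def stacked_redeploy(interfaces_per_device, changed_device):
    """The upper service writes one entry per device; only the lower
    service for the changed device recomputes its interfaces."""
    upper_work = len(interfaces_per_device)
    lower_work = interfaces_per_device[changed_device]
    return upper_work + lower_work

# 1000 interfaces on each device, plus the one just added to CE-1.
devices = {"CE-1": 1001, "CE-2": 1000, "PE-1": 1000}

print(monolithic_redeploy(devices))        # 3001 iterations
print(stacked_redeploy(devices, "CE-1"))   # 1004 iterations
```

The untouched lower services contribute nothing on update, which is exactly what the progress trace shows.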
Does this mean the more we stack, the better? Should every single line of configuration be split into its own service? The answer is no. In most real-world cases, the primary performance bottleneck is the diffset calculation rather than the callback execution time. Service callbacks typically aren't computationally intensive, nor should they be.
Stacked services are generally used to address issues with diffset calculation, and this strategy is only effective if we can reduce the diffset size of the "hottest" service. However, increasing the number of services managed by the upper service also increases the total configuration it must generate on each re-deploy. This trade-off needs careful consideration to strike the right balance.
When restructuring a service into a stacked service model, the first target should always be devices. If a service configures multiple devices, it’s a good practice to split it up by adding another layer of services, ensuring that no more than one device is provisioned by any service at the lowest layer. This approach reduces the service's complexity, making it easier to maintain.
Focusing on a single device per service also provides significant advantages in various scenarios, such as restoring consistency when a device goes out of sync, handling NED migrations, hardware upgrades, or even migrating a device between NSO instances.
The lower service we created uses the device name as its key. The primary reason for this is to ensure a clear separation of service instances based on the devices they are deployed on. One key benefit of this approach is the ability to easily identify all services deployed on a specific device by simply filtering for that device. For example, after adding a few more services, you could list all services associated with a particular device using a show command similar to the following.
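For example, assuming the C-style CLI and the service names used here, such a command might look like this (a hypothetical session, shown only to illustrate the idea):

```
admin@ncs# show running-config lower-python-service CE-1
```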
While the complete distribution of the service looks like this:
This approach provides an excellent way to maintain an overview of services deployed on each device. However, introducing new service types presents a challenge: you wouldn’t be able to see all service types with a single show command. For instance, show lower-python-service ... will only display instances of the lower-python-service. But what happens when the device also has L2VPNs, L3VPNs, or other service types, as it would in a real network?
To address this, we can nest the services within another list. By organizing all services under a common structure, we enable the ability to view and manage multiple service types for a device in a unified manner, providing a comprehensive overview with a single command.
To illustrate this approach, we need to introduce another service type. Moving beyond the dummy example, let’s use a more realistic scenario: the example. We'll refactor this service to adopt the stacked service approach while maintaining the existing customer-facing interface.
After the refactor, the service will shift from provisioning multiple devices directly through a single instance to creating a separate service instance for each combination of device, VPN, and endpoint; these are what we call resource-facing services. These resource-facing services will be structured so that all device-specific services are grouped under a node for each device.
This is accomplished by introducing a list of devices, modeled within a separate package. We’ll create this new package and call it resource-facing-services, with the following model definition:
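A minimal sketch of such a module could look as follows (the namespace and prefix are illustrative assumptions):

```
module resource-facing-services {
  namespace "http://example.com/resource-facing-services";
  prefix rfs;

  container rfs {
    // One entry per device; intentionally empty otherwise.
    // Each RFS type augments this list from its own package.
    list device {
      key name;
      leaf name {
        type string;
      }
    }
  }
}
```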
This model allows us to organize services by device, providing a unified structure for managing and querying all services deployed on each device.
Each element in this list will represent a device and all the services deployed on it. The model itself is empty, which is intentional, as each resource-facing service (RFS) will be added to this list through augmentation from its respective package. The YANG model for the RFS version of our L3VPN service is designed specifically to integrate seamlessly into this structure.
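As a sketch of the augmentation mechanism, an RFS package might hook its service list into the device list like this (the module prefix, list name, and servicepoint name are illustrative assumptions):

```
augment "/rfs:rfs/rfs:device" {
  list l3vpn {
    key vpn-name;
    leaf vpn-name {
      type string;
    }
    uses ncs:service-data;
    ncs:servicepoint "l3vpn-rfs";
  }
}
```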
We deploy an L3VPN to our network with two CE endpoints by creating the following l3vpn customer-facing service.
After deploying our service, we can quickly gain an overview of the services deployed on a device without needing to analyze or reverse-engineer its configurations. For example, we can see that the device PE-1 is acting as a PE for two different endpoints within a VPN.
CE-1 serves as a CE for that VPN.
And CE-2 serves as another CE for that VPN.
This section lists some specific advice for implementing services, as well as any known limitations you might run into.
You may also obtain some useful information by using the debug service commit pipe command, such as commit dry-run | debug service. The command displays the net effect of the service create code, as well as warnings about potentially problematic use of overlapping shared data.
Service callbacks must be deterministic: NSO invokes service callbacks in a number of situations, such as for dry-run, check sync, and actual provisioning. If a service does not create the same configuration from the same inputs, NSO sees it as being out of sync, resulting in a lot of configuration churn and making it incompatible with many NSO features. If you need to introduce some randomness or rely on some other nondeterministic source of data, make sure to cache the values across callback invocations, such as by using opaque properties (see ) or persistent operational data (see ) populated in a pre-modification callback.
Never overwrite service inputs: Service input parameters capture client intent and a service should never change its own configuration. Such behavior not only muddles the intent but is also temporary when done in the create callback, as the changes are reverted on the next invocation.
If you need to keep some additional data that cannot be easily computed each time, consider using opaque properties (see ) or persistent operational data (see ) populated in a pre-modification callback.
A very common situation, when NSO is deployed in an existing network, is that the network already has services implemented. These services may have been deployed manually or through an older provisioning system. To take full advantage of the new system, you should consider importing the existing services into NSO. The goal is to use NSO to manage existing service instances, along with adding new ones in the future.
The process of identifying services and importing them into NSO is called Service Discovery and can be broken down into the following high-level parts:
Implementing the service to match existing device configuration.
Enumerating service instances and their parameters.
Amending the service metadata references with reconciliation.
Ultimately, the problem that service discovery addresses is one of referencing or linking configuration to services. Since the network already contains target configuration, a new service instance in NSO produces no changes in the network. This means the new service in NSO by default does not own the network configuration. One side effect is that removing a service will not remove the corresponding device configuration, which is likely to interfere with service modification as well.
Some of the steps in the process can be automated, while others are mostly manual. The amount of work differs a lot depending on how structured and consistent the original deployment is.
A prerequisite (or possibly the product in an iterative approach) is an NSO service that supports all the different variants of the configuration for the service that are used in the network. This usually means there will be a few additional parameters in the service model that allow selecting the variant of device configuration produced, as well as some parameters covering other non-standard configuration (if such configuration is present).
In the simplest case, there is only one variant and that is the one that the service needs to produce. Let's take the examples.ncs/implement-a-service/iface-v2-py example and consider what happens when a device already has an existing interface configuration.
Configuring a new service instance does not produce any new device configuration (notice that device c1 has no changes).
However, when committed, NSO records the changes, just like in the case of overlapping configuration (see ). The main difference is that there is only a single backpointer, to the newly configured service, but the refcount is 2. The other item that contributes to the refcount is the original device configuration, which is why the configuration is not deleted when the service instance is.
A prerequisite for service discovery to work is that it is possible to construct a list of the already existing services. Such a list may exist in an inventory system, an external database, or perhaps just an Excel spreadsheet.
You can import the list of services in a number of ways. If you are reading it in from a spreadsheet, a Python script using NSO API directly () and a module to read Excel files is likely a good choice.
Or, you might generate an XML data file to import using the ncs_load command; use display xml filter to help you create a template:
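For instance, a session like the following could produce a starting point for the XML file (illustrative; instance1 is the service instance from the example above):

```
admin@ncs# show running-config iface instance1 | display xml
```

After editing the XML to cover all instances, it can be loaded with the ncs_load command, for example ncs_load -l -m services.xml to merge it into the running configuration.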
Regardless of the way you implement the data import, you can run into two kinds of problems.
On one hand, the service list data may be incomplete. Suppose that the earliest service instances deployed did not take the network mask as a parameter. Moreover, for some specific reasons, a number of interfaces had to deviate from the default of 28 and that information was never populated back in the inventory for old services after the netmask parameter was added.
Now the only place where that information is still kept may be the actual device configuration. Fortunately, you can access it through NSO, which may allow you to extract the missing data automatically, for example:
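For instance, if the device stores a dotted-quad netmask, a small helper can recover the prefix length from it. This is a self-contained sketch using only the Python standard library; in practice, the value would first be read from the device configuration through the NSO API.

```python
import ipaddress

def prefix_len(netmask: str) -> int:
    """Convert a dotted-quad netmask to a prefix length."""
    return ipaddress.IPv4Network(f"0.0.0.0/{netmask}").prefixlen

# 255.255.255.240 corresponds to the default /28 mentioned earlier,
# while deviating interfaces yield a different length.
print(prefix_len("255.255.255.240"))  # 28
print(prefix_len("255.255.255.0"))    # 24
```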
On the other hand, some parameters may be NSO specific, such as those controlling which variant of configuration to produce. Again, you might be able to use a script to find this information, or it could turn out that the configuration is too complex to make such a script feasible.
In general, this can be the most tricky part of the service discovery process, making it very hard to automate. It all comes down to how good the existing data is. Keep in mind that this exercise is typically also a cleanup exercise, and every network will be different.
The last step is updating the metadata, telling NSO that a given service controls (owns) the device configuration that was already present when the NSO service was configured. This is called reconciliation and you achieve it using a special re-deploy reconcile action for the service.
Let's examine the effects of this action on the following data:
Having run the action, NSO has updated the refcount to remove the reference to the original device configuration:
What is more, the reconcile algorithm works even if multiple service instances share configuration. What if you had two instances of the iface service, instead of one?
Before reconciliation, the device configuration would show a refcount of three.
Invoking re-deploy reconcile on either one or both of the instances makes the services sole owners of the configuration.
This means the device configuration is removed only when you remove both service instances.
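The bookkeeping described above can be modelled in a few lines of plain Python. This is only an illustration of the refcount mechanics, not how NSO implements them internally.

```python
class ConfigItem:
    """Toy model of FASTMAP reference counting for one config item."""

    def __init__(self):
        self.refcount = 0
        self.backpointers = set()
        self.original = False  # config pre-existed the services

    def discovered(self):
        # Configuration already present on the device counts as one reference.
        self.original = True
        self.refcount += 1

    def service_create(self, name):
        self.backpointers.add(name)
        self.refcount += 1

    def service_delete(self, name):
        self.backpointers.discard(name)
        self.refcount -= 1

    def reconcile(self):
        # Drop the reference held by the original device configuration.
        if self.original:
            self.original = False
            self.refcount -= 1

item = ConfigItem()
item.discovered()                 # pre-existing device config
item.service_create("instance1")
item.service_create("instance2")
assert item.refcount == 3         # as observed before reconciliation

item.reconcile()                  # services become sole owners
item.service_delete("instance1")
assert item.refcount > 0          # config stays while instance2 remains
item.service_delete("instance2")
assert item.refcount == 0         # removed only after both are gone
```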
The reconcile operation only removes the references to the original configuration (without the service backpointer), so you can execute it as many times as you wish. Just note that it is part of a service re-deploy, with all the implications that brings, such as potentially deploying new configuration to devices when you change the service template.
As an alternative to the re-deploy reconcile, you can initially add the service configuration with a commit reconcile variant, performing reconciliation right away.
It is hard to design a service in one go when you wish to cover existing configurations that are exceedingly complex or have a lot of variance. In such cases, many prefer an iterative approach, where you tackle the problem piece-by-piece.
Suppose there are two variants of the service configured in the network; iface-v2-py and the newer iface-v3, which produces a slightly different configuration. This is a typical scenario when a different (non-NSO) automation system is used and the service gradually evolves over time. Or, when a Method of Procedure (MOP) is updated if manual provisioning is used.
We will tackle this scenario to show how you might perform service discovery in an iterative fashion. We shall start with the iface-v2-py as the first iteration of the iface service, which represents what configuration the service should produce to the best of our current knowledge.
There are configurations for two service instances in the network already: for interfaces 0/1 and 0/2 on the c1 device. So, configure the two corresponding iface instances.
You can also use the commit no-deploy variant to add service parameters when a normal commit would produce device changes, which you do not want.
Then use the re-deploy reconcile { discard-non-service-config } dry-run command to observe the difference between the service-produced configuration and the one present in the network.
For instance1, the config is the same, so you can safely reconcile it already.
But interface 0/2 (instance2), which you suspect was initially provisioned with the newer version of the service, produces the following:
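The dry-run output would resemble the following fragment (illustrative formatting; a leading minus marks configuration the operation would remove):

```
devices {
    device c1 {
        config {
            interface {
                GigabitEthernet 0/2 {
  -                 ip dhcp snooping trust;
                }
            }
        }
    }
}
```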
The output tells you that the service is missing the ip dhcp snooping trust part of the interface configuration. Since the service does not generate this part of the configuration yet, running re-deploy reconcile { discard-non-service-config } (without dry-run) would remove the DHCP trust setting. This is not what we want.
One option, and this is the default reconcile mode, would be to use keep-non-service-config instead of discard-non-service-config. But that would result in the service taking ownership of only part of the interface configuration (the IP address).
Instead, the right approach is to add the missing part to the service template. There is, however, a little problem. Adding the DHCP snooping trust configuration unconditionally to the template can interfere with the other service instance, instance1.
In some cases, upgrading the old configuration to the new variant is viable, but in most situations, you likely want to avoid all device configuration changes. For the latter case, you need to add another parameter to the service model that selects the configuration variant. You must update the template too, producing the second iteration of the service.
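Such a selector can be a simple leaf in the service model. A sketch (the node name and enum values are illustrative assumptions):

```
leaf variant {
  type enumeration {
    enum v2;    // original configuration
    enum v3;    // newer configuration, adds "ip dhcp snooping trust"
  }
  default v2;
}
```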
With the updated configuration, you can now safely reconcile the instance2 service instance:
Nevertheless, keep in mind that the discard-non-service-config reconcile operation only considers parts of the device configuration under nodes that are created with the service mapping. Even if all data there is covered in the mapping, there could still be other parts that belong to the service but reside in an entirely different section of the device configuration (say, DNS configuration under ip name-server, which is outside the interface GigabitEthernet part) or even on a different device. The discard-non-service-config option cannot find that kind of configuration on its own; you must add it manually.
You can find the complete iface service as part of the examples.ncs/development-guide/services/discovery example.
Since there were only two service instances to reconcile, the process is now complete. In practice, you are likely to encounter multiple variants and many more service instances, requiring you to make additional iterations. But you can follow the iterative process shown here.
In some cases, a service may need to rely on the actual device configurations to compute the changeset. It is often a requirement to pull the current device configurations from the network before executing such a service. Doing a full sync-from on a number of devices is an expensive task, especially if it needs to be performed often. A more efficient alternative in this case is partial-sync-from.
When a multitude of service instances touch a device that is not entirely orchestrated using NSO (that is, one relying on the partial-sync-from feature described above), replacing that device requires re-deploying all of those services, which can be expensive depending on the number of service instances. Partial-sync-to enables the replacement of devices in a more efficient fashion.
The partial-sync-from and partial-sync-to actions allow you to specify certain portions of the device's configuration to be pulled from, or pushed to, the network, rather than the full configuration. These operations are more efficient on NETCONF devices and NEDs that support the partial-show feature; NEDs that do not support the partial-show feature fall back to pulling or pushing the whole configuration.
Even though partial-sync-from and partial-sync-to allow pulling or pushing only a part of the device's configuration, the actions are not allowed to break the consistency of the configuration in CDB or on the device, as defined by the YANG model. Hence, extra consideration needs to be given to dependencies inside the device model. If some configuration item A depends on configuration item B in the device's configuration, pulling only A may fail due to the unsatisfied dependency on B. In this case, both A and B need to be pulled, even if the service is only interested in the value of A.
It is important to note that partial-sync-from and partial-sync-to clear the transaction ID of the device in NSO unless the whole configuration has been selected (e.g. /ncs:devices/ncs:device[ncs:name='ex0']/ncs:config). This ensures NSO does not miss any changes to other parts of the device configuration but it does make the device out of sync.
Pulling the configuration from the network needs to be initiated outside the service code. At the same time, the list of configuration subtrees required by a certain service should be maintained by the service developer. Hence, it is good practice for such a service to implement a wrapper action that invokes the generic /devices/partial-sync-from action with the correct list of paths. The user or application that manages the service would then only need to invoke the wrapper action, without needing to know which parts of the configuration the service is interested in.
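For illustration, invoking the generic action directly from the CLI might look as follows (the device name and path come from the router example; treat the exact syntax as indicative):

```
admin@ncs# devices partial-sync-from path [ /devices/device[name="ex0"]/config/r:sys/interfaces/interface[name="eth0"] ]
```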
The snippet in the example below (Example of running partial-sync-from action via Java API) gives an example of running the partial-sync-from action via Java, using a router device from examples.ncs/getting-started/developing-with-ncs/0-router-network.
Learn the working aspects of YANG data modeling language in NSO.
YANG is a data modeling language used to model configuration and state data manipulated by a NETCONF agent. The YANG modeling language is defined in RFC 6020 (version 1) and RFC 7950 (version 1.1). YANG as a language is not described in its entirety here; rather, we refer to the IETF RFC texts themselves.
In NSO, YANG is not only used for NETCONF data. On the contrary, YANG is used to describe the data model as a whole and used by all northbound interfaces.
NSO uses YANG for Service Models as well as for specifying device interfaces. Where do these models come from? When it comes to services, the YANG service model is specified as part of the service design activity. NSO ships several examples of service models that can be used as a starting point. For devices, it depends on the underlying device interface how the YANG model is derived. For native NETCONF/YANG devices the YANG model is of course given by the device. For SNMP devices, the NSO tool-chain generates the corresponding YANG modules, (SNMP NED). For CLI devices, the package for the device contains the YANG data model. This is shipped in text and can be modified to cater for upgrades. Customers can also write their own YANG data models to render the CLI integration (CLI NED). The situation for other interfaces is similar to CLI, a YANG model that corresponds to the device interface data model is written and bundled in the NED package.
The Python create callback receives the following arguments:

kp: A HKeypathRef object with a key path of the affected service instance, such as /svc:my-service{instance1}.
root: A Maagic node for the root of the data model.
service: A Maagic node for the service instance.
proplist: Opaque service properties, see Persistent Opaque Data.

The Java create callback receives the following arguments:

path: The key path of the affected service instance, such as /svc:my-service{instance1}.
ncsRoot: A NavuNode for the root of the ncs data model.
service: A NavuNode for the service instance.
opaque: Opaque service properties, see Persistent Opaque Data.
Avoid overlapping configuration between service instances that causes conflicts, for example by using one service instance per device (see examples in Designing for Maximal Transaction Throughput).
No service ordering in a transaction: NSO is a transactional system and as such does not have the concept of order inside a single transaction. That means NSO does not guarantee any specific order in which the service mapping code executes if the same transaction touches multiple service instances. Likewise, your code should not make any assumptions about running before or after other service code.
Return value of create callback: The create callback is not the exclusive user of the opaque object; the object can be chained in several different callbacks, such as pre- and post-modification. Therefore, returning None/null from create callback is not a good practice. Instead, always return the opaque object even if the create callback does not use it.
Avoid delete in service create: Unlike creation, deleting configuration does not support reference counting, as there is no data left to reference count. This means the deleted elements are tied to the service instance that deleted them.
Additionally, FASTMAP must store the entire deleted tree and restore it on every service change or re-deploy, only to be deleted again. Depending on the amount of deleted data, this is potentially an expensive operation.
So, a general rule of thumb is to never use delete in service create code. If an explicit delete is used, debug service may display the following warning:
However, the service may also delete data implicitly, through when and choice statements in the YANG data model. If a when statement evaluates to false, the configuration tree below that node is deleted. Likewise, if a case is set in a choice statement, the previously set case is deleted. This has the same limitations as an explicit delete.
To avoid these issues, create a separate service that only handles deletion, and use it in the main service through the stacked service design (see ). This approach allows you to reference count the deletion operation and contains the effect of restoring deleted data through a small, rarely-changing helper service. See examples.ncs/development-guide/services/shared-delete for an example.
Alternatively, you might consider pre- and post-modification callbacks for some specific cases.
Prefer shared*() functions: Non-shared create and set operations in the Java and Python low-level API do not add reference counts or backpointer information to changed elements. In case there is overlap with another service, unwanted removal can occur. See Reference Counting Overlapping Configuration for details.
In general, you should prefer sharedCreate(), sharedSet(), and sharedSetValues(). If non-shared variants are used in a shared context, service debug displays a warning, such as:
Likewise, do not use MAAPI load_config variants from the service code. Use the sharedSetValues() function to load XML data from a file or a string.
Reordering ordered-by-user lists: If the service code rearranges an ordered-by-user list with items that were created by another service, that other service becomes out of sync. In some cases, you might be able to avoid out-of-sync scenarios by leveraging special XML template syntax (see Operations on ordered lists and leaf-lists) or using service stacking with a helper service.
In general, however, you should reconsider your design and try to avoid such scenarios.
Automatic upgrade of keys for existing services is unsupported: Service backpointers, described in Reference Counting Overlapping Configuration, rely on the keys that the service model defines to identify individual service instances. If you update the model by adding, removing, or changing the type of leafs used in the service list key, while there are deployed service instances, the backpointers will not be automatically updated. Therefore, it is best to not change the service list key.
A workaround, if the service key absolutely must change, is to first perform a no-networking undeploy of the affected service instances, then upgrade the model, and finally no-networking re-deploy the previously un-deployed services.
Avoid conflicting intents: Consider that a service is executed as part of a transaction. If, in the same transaction, the service gets conflicting intents, for example, it gets modified and deleted, the transaction is aborted. You must decide which intent has higher priority and design your services to avoid such situations.



NSO also relies on the revision statement in YANG modules for revision management of different versions of the same type of managed device, but running different software versions.
A YANG module can be directly transformed into a final schema (.fxs) file that can be loaded into NSO. Currently, all features of the YANG 1.0 language are supported where anyxml statement data is treated as a string. Most features of the YANG 1.1 language are supported. For a list of exceptions, please refer to the YANG 1.1 section of the ncsc man page.
The data models including the .fxs file along with any code are bundled into packages that can be loaded to NSO. This is true for service applications as well as for NEDs and other packages. The corresponding YANG can be found in the src/yang directory in the package.
This section is a brief introduction to YANG. The exact details of all language constructs are fully described in RFC 6020 and RFC 7950.
The NSO programmer must know YANG well since all APIs use various paths that are derived from the YANG data model.
A module contains three types of statements: module-header statements, revision statements, and definition statements. The module header statements describe the module and give information about the module itself, the revision statements give information about the history of the module, and the definition statements are the body of the module where the data model is defined.
A module may be divided into submodules, based on the needs of the module owner. The external view remains that of a single module, regardless of the presence or size of its submodules.
The include statement allows a module or submodule to reference material in submodules, and the import statement allows references to material defined in other modules.
YANG defines four types of nodes for data modeling. In each of the following subsections, the example shows the YANG syntax as well as a corresponding NETCONF XML representation.
A leaf node contains simple data like an integer or a string. It has exactly one value of a particular type and no child nodes.
With XML value representation for example:
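For example, a host-name leaf (adapted from RFC 6020):

```
leaf host-name {
  type string;
  description "Hostname for this system";
}
```

and its XML encoding:

```
<host-name>my.example.com</host-name>
```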
An interesting variant of leaf nodes is typeless leafs.
With XML value representation for example:
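For example, a presence-style flag declared with type empty:

```
leaf enabled {
  type empty;
}
```

encoded in XML as:

```
<enabled/>
```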
A leaf-list is a sequence of leaf nodes with exactly one value of a particular type per leaf.
With XML value representation for example:
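For example, a list of search domains (adapted from RFC 6020):

```
leaf-list domain-search {
  type string;
  description "List of domain names to search";
}
```

encoded in XML as:

```
<domain-search>high.example.com</domain-search>
<domain-search>low.example.com</domain-search>
```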
A container node is used to group related nodes in a subtree. It has only child nodes and no value and may contain any number of child nodes of any type (including leafs, lists, containers, and leaf-lists).
With XML value representation for example:
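For example (adapted from RFC 6020):

```
container system {
  container login {
    leaf message {
      type string;
      description "Message given at start of login session";
    }
  }
}
```

encoded in XML as:

```
<system>
  <login>
    <message>Good morning</message>
  </login>
</system>
```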
A list defines a sequence of list entries. Each entry is like a structure or a record instance and is uniquely identified by the values of its key leafs. A list can define multiple keys and may contain any number of child nodes of any type (including leafs, lists, containers, etc.).
With XML value representation for example:
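For example, a user list keyed by name (adapted from RFC 6020):

```
list user {
  key "name";
  leaf name {
    type string;
  }
  leaf full-name {
    type string;
  }
  leaf class {
    type string;
  }
}
```

encoded in XML as:

```
<user>
  <name>glocks</name>
  <full-name>Goldie Locks</full-name>
  <class>intruder</class>
</user>
<user>
  <name>snowey</name>
  <full-name>Snow White</full-name>
  <class>free-loader</class>
</user>
```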
These statements are combined to define the module:
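Putting the pieces together, a small module (adapted from RFC 6020) might read:

```
module acme-system {
  namespace "http://acme.example.com/system";
  prefix "acme";

  organization "ACME Inc.";
  description "The module for entities implementing the ACME system.";

  container system {
    leaf host-name {
      type string;
    }
    leaf-list domain-search {
      type string;
    }
    list user {
      key "name";
      leaf name {
        type string;
      }
      leaf full-name {
        type string;
      }
      leaf class {
        type string;
      }
    }
  }
}
```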
YANG can model state data, as well as configuration data, based on the config statement. When a node is tagged with config false, its sub-hierarchy is flagged as state data, to be reported using NETCONF's get operation, not the get-config operation. Parent containers, lists, and key leafs are reported also, giving the context for the state data.
In this example, two leafs are defined for each interface: a configured speed and an observed speed. The observed speed is not configuration data; it can be returned by the NETCONF get operation, but not by get-config, and it cannot be manipulated using edit-config.
YANG has a set of built-in types, similar to those of many programming languages, but with some differences due to special requirements of the management domain. The table below lists the YANG built-in types:

| Type | Value representation | Description |
| --- | --- | --- |
| binary | Text | Any binary data |
| bits | Text/Number | A set of bits or flags |
| boolean | Text | true or false |
| decimal64 | Number | 64-bit fixed point real number |
| empty | Empty | A leaf that does not have any value |
| enumeration | Text/Number | Enumerated strings |
| identityref | Text | A reference to an abstract identity |
| instance-identifier | Text | References a data tree node |
| int8 | Number | 8-bit signed integer |
| int16 | Number | 16-bit signed integer |
| int32 | Number | 32-bit signed integer |
| int64 | Number | 64-bit signed integer |
| leafref | Text/Number | A reference to a leaf instance |
| string | Text | Human-readable string |
| uint8 | Number | 8-bit unsigned integer |
| uint16 | Number | 16-bit unsigned integer |
| uint32 | Number | 32-bit unsigned integer |
| uint64 | Number | 64-bit unsigned integer |
YANG can define derived types from base types using the typedef statement. A base type can be either a built-in type or a derived type, allowing a hierarchy of derived types. A derived type can be used as the argument for the type statement.
With XML value representation for example:
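A sketch in the style of the RFC 6020 examples (typedef and leaf names illustrative):

```yang
typedef percent {
  type uint8 {
    range "0 .. 100";
  }
  description "Percentage";
}

leaf completed {
  type percent;
}
```

```xml
<completed>20</completed>
```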
User-defined typedefs are useful when we want to name and reuse a type several times. It is also possible to restrict leafs inline in the data model as in:
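For example, the same restriction can be expressed directly on the leaf, without a named typedef (names illustrative):

```yang
leaf completed {
  type uint8 {
    range "0 .. 100";
  }
}
```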
Groups of nodes can be assembled into the equivalent of complex types using the grouping statement. grouping defines a set of nodes that are instantiated with the uses statement:
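A sketch in the style of the RFC 6020 examples, assuming ietf-inet-types is imported with the prefix inet:

```yang
grouping target {
  leaf address {
    type inet:ip-address;
    description "Target IP address";
  }
  leaf port {
    type inet:port-number;
    description "Target port number";
  }
}

container peer {
  container destination {
    uses target;
  }
}
```

The uses statement instantiates the address and port leafs directly under the destination container.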
With XML value representation for example:
The grouping can be refined as it is used, allowing certain statements to be overridden. In this example, the description is refined:
YANG allows the data model to segregate incompatible nodes into distinct choices using the choice and case statements. The choice statement contains a set of case statements that define sets of schema nodes that cannot appear together. Each case may contain multiple nodes, but each node may appear in only one case under a choice.
When the nodes from one case are created, all nodes from all other cases are implicitly deleted. The device handles the enforcement of the constraint, preventing incompatibilities from existing in the configuration.
The choice and case nodes appear only in the schema tree, not in the data tree or XML encoding. The additional levels of hierarchy are not needed beyond the conceptual schema.
With XML value representation for example:
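A sketch in the style of the RFC 6020 examples (names illustrative). Note that neither the choice nor the case names appear in the XML encoding:

```yang
container food {
  choice snack {
    case sports-arena {
      leaf pretzel {
        type empty;
      }
      leaf beer {
        type empty;
      }
    }
    case late-night {
      leaf chocolate {
        type enumeration {
          enum dark;
          enum milk;
          enum first-available;
        }
      }
    }
  }
}
```

```xml
<food>
  <pretzel/>
  <beer/>
</food>
```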
YANG allows a module to insert additional nodes into data models, including both the current module (and its submodules) or an external module. This is useful e.g. for vendors to add vendor-specific parameters to standard data models in an interoperable way.
The augment statement defines the location in the data model hierarchy where new nodes are inserted, and the when statement defines the conditions when the new nodes are valid.
This example defines a uid node that is valid only when the user's class is not wheel.
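A sketch of such an augmentation, in the style of the RFC 6020 examples (target path assumes a /system/login/user list as in the earlier list example):

```yang
augment /system/login/user {
  when "class != 'wheel'";
  leaf uid {
    type uint16 {
      range "1000 .. 30000";
    }
  }
}
```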
If a module augments another model, the XML representation of the data will reflect the prefix of the augmenting model. For example, if the above augmentation were in a module with the prefix other, the XML would look like:
YANG allows the definition of NETCONF RPCs. The method names, input parameters, and output parameters are modeled using YANG data definition statements.
YANG allows the definition of notifications suitable for NETCONF. YANG data definition statements are used to model the content of the notification.
Assume we have a small trivial YANG file test.yang:
There is an Emacs mode suitable for YANG file editing in the system distribution. It is called yang-mode.el.
We can use the ncsc compiler to compile the YANG module.
The above command creates an output file test.fxs that is a compiled schema that can be loaded into the system. The ncsc compiler with all its flags is fully described in ncsc(1) in Manual Pages.
There exist several standards-based auxiliary YANG modules defining various useful data types. These modules, as well as their accompanying .fxs files can be found in the ${NCS_DIR}/src/confd/yang directory in the distribution.
The modules are:
ietf-yang-types: Defining some basic data types such as counters, dates, and times.
ietf-inet-types: Defining several useful types related to IP addresses.
Whenever we wish to use any of those predefined modules, we must not only import the module into our YANG module but also load the corresponding .fxs file for the imported module into the system.
So, if we extend our test module so that it looks like:
Normally when importing other YANG modules we must indicate through the --yangpath flag to ncsc where to search for the imported module. In the special case of the standard modules, this is not required.
We compile the above as:
We see that the generated .fxs file has a dependency on the standard urn:ietf:params:xml:ns:yang:inet-types namespace. Thus if we try to start NSO we must also ensure that the fxs file for that namespace is loaded.
Failing to do so gives:
The remedy is to modify ncs.conf so that it contains the proper load path; alternatively, we can provide the directory containing the .fxs file on the command line. The directory ${NCS_DIR}/etc/ncs contains pre-compiled versions of the standard YANG modules.
ncs.conf is the configuration file for NSO itself. It is described in the ncs.conf(5) in Manual Pages.
The YANG language has built-in declarative constructs for common integrity constraints. These constructs are conveniently specified as must statements.
A must statement is an XPath expression that must evaluate to true or a non-empty node-set.
An example is:
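A sketch in the style of the RFC 6020 examples: the must expression constrains ifMTU whenever ifType is ethernet.

```yang
container interface {
  leaf ifType {
    type enumeration {
      enum ethernet;
      enum atm;
    }
  }
  leaf ifMTU {
    type uint32;
  }
  must 'ifType != "ethernet" or ifMTU = 1500' {
    error-message "An Ethernet MTU must be 1500";
  }
}
```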
XPath is a very powerful tool here. It is often possible to express the most realistic validation constraints using XPath expressions. Note that for performance reasons, it is recommended to use the tailf:dependency statement in the must statement. The compiler gives a warning if a must statement lacks a tailf:dependency statement, and it cannot derive the dependency from the expression. The options --fail-on-warnings or -E TAILF_MUST_NEED_DEPENDENCY can be given to force this warning to be treated as an error. See tailf:dependency in tailf_yang_extensions(5) in Manual Pages for details.
Another useful built-in constraint checker is the unique statement.
With the YANG code:
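A sketch of such a list, assuming ietf-inet-types is imported with the prefix inet:

```yang
list server {
  key "name";
  unique "ip port";
  leaf name {
    type string;
  }
  leaf ip {
    type inet:ip-address;
  }
  leaf port {
    type inet:port-number;
  }
}
```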
We specify that the combination of IP and port must be unique. Thus the configuration is not valid:
The usage of leafrefs (see the YANG specification) ensures that we do not end up with configurations containing dangling pointers. Leafrefs are also preferable because the CLI and Web UI can render a better interface for them.
If other constraints are necessary, validation callback functions can be programmed in Java, Python, or Erlang. See tailf:validate in tailf_yang_extensions(5) in Manual Pages for details.
The when statement is used to make its parent statement conditional. If the XPath expression given as its argument evaluates to false, the parent node cannot be configured. Furthermore, if the parent node exists, and some other node is changed so that the XPath expression becomes false, the parent node is automatically deleted. For example:
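A minimal sketch of such a condition, using leafs named a and b as in the discussion:

```yang
leaf a {
  type boolean;
}
leaf b {
  type string;
  when "../a = 'true'";
}
```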
This data model snippet says that b can only exist if a is true. If a is true, and b has a value, and a is set to false, b will automatically be deleted.
Since the XPath expression in theory can refer to any node in the data tree, it has to be re-evaluated when any node in the tree is modified. But this would have a disastrous performance impact, so to avoid this, NSO keeps track of dependencies for each when expression. In some simple cases, the confdc can figure out these dependencies by itself. In the example above, NSO will detect that b is dependent on a, and evaluate b's XPath expression only if a is modified. If confdc cannot detect the dependencies by itself, it requires a tailf:dependency statement in the when statement. See tailf:dependency in tailf_yang_extensions(5) in Manual Pages for details.
Tail-f has an extensive set of extensions to the YANG language that integrates YANG models in NSO. For example, when we have config false; data, we may wish to invoke user C code to deliver the statistics data in runtime. To do this we annotate the YANG model with a Tail-f extension called tailf:callpoint.
Alternatively, we may wish to invoke user code to validate the configuration, this is also controlled through an extension called tailf:validate.
All these extensions are handled as normal YANG extensions (YANG is designed to be extended). The Tail-f proprietary extensions are defined in the file ${NCS_DIR}/src/ncs/yang/tailf-common.yang.
Continuing with our previous example, by adding a callpoint and a validation point, we get:
The above module contains a callpoint and a validation point. The exact syntax for all Tail-f extensions is defined in the tailf-common.yang file.
Note the import statement where we import tailf-common.
When YANG specifications are used to generate Java classes for ConfM, these extensions are ignored; they only make sense on the device side. They are worth mentioning, though, since EMS developers will certainly get the YANG specifications from the device developers, and thus the YANG specifications may contain extensions.
The man page tailf_yang_extensions(5) in Manual Pages describes all the Tail-f YANG extensions.
Sometimes it is convenient to specify all Tail-f extension statements in-line in the original YANG module. But in some cases, e.g. when implementing a standard YANG module, it is better to keep the Tail-f extension statements in a separate annotation file. When the YANG module is compiled to an fxs file, the compiler is given the original YANG module and any number of annotation files.
A YANG annotation file is a normal YANG module that imports the module to annotate. Then the tailf:annotate statement is used to annotate nodes in the original module. For example, the module test above can be annotated like this:
To compile the module with annotations, use the -a parameter to confdc:
Certain parts of a YANG model are used by northbound agents, e.g. CLI and Web UI, to provide the end-user with custom help texts and error messages.
A YANG statement can be annotated with a description statement which is used to describe the definition for a reader of the module. This text is often too long and too detailed to be useful as help text in a CLI. For this reason, NSO by default does not use the text in the description for this purpose. Instead, a tail-f-specific statement, tailf:info is used. It is recommended that the standard description statement contains a detailed description suitable for a module reader (e.g. NETCONF client or server implementor), and tailf:info contains a CLI help text.
As an alternative, NSO can be instructed to use the text in the description statement also for CLI help text. See the option --use-description in ncsc(1) in Manual Pages.
For example, the CLI uses the help text to prompt for a value of this particular type. The CLI shows this information during tab/command completion, or if the end-user explicitly asks for help using the ?-character. The behavior depends on the mode the CLI is running in.
The Web UI uses this information likewise to help the end-user.
The mtu definition below has been annotated to enrich the end-user experience:
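A hedged sketch of such an annotation (range and help text illustrative; assumes tailf-common is imported with the prefix tailf):

```yang
leaf mtu {
  type uint16 {
    range "1 .. 1500";
  }
  tailf:info "Maximum transmission unit";
}
```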
Alternatively, we could have provided the help text in a typedef statement as in:
If there is an explicit help text attached to a leaf, it overrides the help text attached to the type.
A statement can have an optional error-message statement. The northbound agents, for example the CLI, use it to inform the end-user that a provided value is not of the correct type. If no custom error-message statement is available, NSO generates a built-in error message, e.g. 1505 is too large.
All northbound agents use the extra information provided by an error-message statement.
The typedef statement below has been annotated to enrich the end-user experience when it comes to error information:
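A hypothetical sketch of such an annotation (typedef name, range, and message are illustrative):

```yang
typedef port-type {
  type uint16 {
    range "1025 .. 65535" {
      error-message "The port number must be greater than 1024";
    }
  }
}
```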
Say, for example, that we want to model the interface list on a Linux-based device. Running the ip link list command reveals the type of information we have to model:
And, this is how we want to represent the above in XML:
An interface or a link has data associated with it. It also has a name, an obvious choice to use as the key - the data item that uniquely identifies an individual interface.
The structure of a YANG model is always a header, followed by type definitions, followed by the actual structure of the data. A YANG model for the interface list starts with a header:
A number of datatype definitions may follow the YANG module header. Looking at the output from /sbin/ip we see that each interface has a number of boolean flags associated with it, e.g. UP, and NOARP.
One way to model a sequence of boolean flags is as a sequence of statements:
A better way is to model this as:
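For instance, a sketch using the bits built-in type; UP and NOARP are taken from the ip link output discussed above, and further flags would be added the same way:

```yang
leaf flags {
  type bits {
    bit UP;
    bit NOARP;
  }
}
```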
We could choose to group these leafs together into a grouping. This makes sense if we wish to use the same set of boolean flags in more than one place. We could thus create a named grouping such as:
The output from /sbin/ip also contains Ethernet MAC addresses. These are best represented by the mac-address type defined in the ietf-yang-types.yang file. The mac-address type is defined as:
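In essence, the definition amounts to the following (simplified; the full definition in ietf-yang-types.yang carries additional metadata):

```yang
typedef mac-address {
  type string {
    pattern '[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}';
  }
}
```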
This defines a restriction on the string type, restricting values of the defined type mac-address to be strings adhering to the regular expression [0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5} Thus strings such as a6:17:b9:86:2c:04 will be accepted.
Queue disciplines are associated with each device. They are typically used for bandwidth management. Another string restriction we could do is to define an enumeration of the different queue disciplines that can be attached to an interface.
We could write this as:
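A sketch of such an enumeration (the typedef name is illustrative; pfifo_fast, tbf, and sfq are a few of the Linux queue disciplines):

```yang
typedef queue-discipline-type {
  type enumeration {
    enum pfifo_fast;
    enum tbf;
    enum sfq;
  }
}
```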
There are a large number of queue disciplines and we only list a few here. The example serves to show that by using enumerations we can restrict the values of the data set in a way that ensures that the data entered always is valid from a syntactical point of view.
Now that we have a number of usable datatypes, we continue with the actual data structure describing a list of interface entries:
The key attribute on the leaf named "name" is important. It indicates that the leaf is the instance key for the list entry named link. All the link leafs are guaranteed to have unique values for their name leafs due to the key declaration.
If one leaf alone does not uniquely identify an object, we can define multiple keys. At least one leaf must be an instance key - we cannot have lists without a key.
List entries are ordered and indexed according to the value of the key(s).
A very common situation when modeling a device configuration is that we wish to model a relationship between two objects. This is achieved by means of the leafref statement. A leafref points to a child leaf of a list entry, identified either by a key or by a unique statement.
The leafref statement can be used to express three flavors of relationships: extensions, specializations, and associations. Below we exemplify this by extending the link example from above.
Firstly, assume we want to put/store the queue disciplines from the previous section in a separate container - not embedded inside the links container.
We then specify a separate container, containing all the queue disciplines which each refers to a specific link entry. This is written as:
The linkName statement is both an instance key of the queueDiscipline list, and at the same time refers to a specific link entry. This way we can extend the amount of configuration data associated with a specific link entry.
Secondly, assume we want to express a restriction or specialization on Ethernet link entries, e.g. it should be possible to restrict interface characteristics such as 10Mbps and half duplex.
We then specify a separate container, containing all the specializations which each refers to a specific link:
The linkName leaf is both an instance key to the linkLimitation list, and at the same time refers to a specific link leaf. This way we can restrict or specialize a specific link.
Thirdly, assume we want to express that one of the link entries should be the default link. In that case, we enforce an association between a non-dynamic defaultLink and a certain link entry:
Key leafs are always unique. Sometimes we may wish to impose further restrictions on objects. For example, we can ensure that all link entries have a unique MAC address. This is achieved through the use of the unique statement:
In this example, we have two unique statements. These two groups ensure that each server has a unique index number as well as a unique IP and port pair.
A leaf can have a static or dynamic default value. Static default values are defined with the default statement in the data model. For example:
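A minimal sketch of a static default (the value 1500 matching the mtu discussion above):

```yang
leaf mtu {
  type uint16;
  default 1500;
}
```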
and:
A dynamic default value means that the default value for the leaf is the value of some other leaf in the data model. This can be used to make the default values configurable by the user. Dynamic default values are defined using the tailf:default-ref statement. For example, suppose we want to make the MTU default value configurable:
Now suppose we have the following data:
In the example above, link eth0 has the mtu 1500, and link eth1 has the mtu 1000. Since eth1 does not have an mtu value set, it defaults to the value of ../../mtu, which is 1000 in this case.
With the default value mechanism an old configuration can be used even after having added new settings.
Another example where default values are used is when a new instance is created. If all leafs within the instance have default values, these need not be specified in, for example, a NETCONF create operation.
Here is the final interface YANG model with all constructs described above:
If the above YANG file is saved on disk, as links.yang, we can compile and link it using the confdc compiler:
We now have a ready-to-use schema file named links.fxs on disk. To run this example, we need to copy the compiled links.fxs to a directory where NSO can find it.
A leafref is used to model relationships in the data model, as described in Modeling Relationships. In the simplest case, the leafref is a single leaf that references a single key in a list:
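A minimal sketch of this simplest case (names illustrative):

```yang
list host {
  key "name";
  leaf name {
    type string;
  }
}

leaf default-host {
  type leafref {
    path "../host/name";
  }
}
```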
But sometimes a list has more than one key, or we need to refer to a list entry within another list. Consider this example:
If we want to refer to a specific server on a host, we must provide three values; the host name, the server IP, and the server port. Using leafrefs, we can accomplish this by using three connected leafs:
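A sketch under these assumptions: a host list keyed by name, a server list within it keyed by ip and port (ietf-inet-types imported with the prefix inet), and three connected leafrefs.

```yang
list host {
  key "name";
  leaf name {
    type string;
  }
  list server {
    key "ip port";
    leaf ip {
      type inet:ip-address;
    }
    leaf port {
      type inet:port-number;
    }
  }
}

leaf server-host {
  type leafref {
    path "/host/name";
  }
}
leaf server-ip {
  type leafref {
    path "/host[name=current()/../server-host]/server/ip";
  }
}
leaf server-port {
  type leafref {
    path "/host[name=current()/../server-host]"
       + "/server[ip=current()/../server-ip]/port";
  }
}
```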
The path specification for server-ip means the IP address of the server under the host with the same name as specified in server-host.
The path specification for server-port means the port number of the server with the same IP as specified in server-ip, under the host with the same name as specified in server-host.
This syntax quickly gets awkward and error-prone. NSO supports a shorthand syntax, by introducing an XPath function deref() (see XPATH FUNCTIONS in Manual Pages). Technically, this function follows a leafref value and returns all nodes that the leafref refers to (typically just one). The example above can be written like this:
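A sketch of the deref() form for the server-ip and server-port leafs (same host/server model as above, assumed):

```yang
leaf server-ip {
  type leafref {
    path "deref(../server-host)/../server/ip";
  }
}
leaf server-port {
  type leafref {
    path "deref(../server-ip)/../port";
  }
}
```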
Note that using the deref function is syntactic sugar for the basic syntax. The translation between the two formats is trivial. Also note that deref() is an extension to YANG, and third-party tools might not understand this syntax. To make sure that only plain YANG constructs are used in a module, the parameter --strict-yang can be given to confdc -c.
There are several reasons for supporting multiple configuration namespaces. Multiple namespaces can be used to group common datatypes and hierarchies to be used by other YANG models. Separate namespaces can be used to describe the configuration of unrelated sub-systems, i.e. to achieve strict configuration data model boundaries between these sub-systems.
As an example, datatypes.yang is a YANG module that defines a reusable data type.
We compile and link datatypes.yang into a final schema file representing the http://example.com/ns/dt namespace:
To reuse our user defined countersType, we must import the datatypes module.
When compiling this new module that refers to another module, we must indicate to confdc where to search for the imported module:
confdc also searches for referred modules in the colon (:) separated path defined by the environment variable YANG_MODPATH and . (dot) is implicitly included.
We have three different entities that define our configuration data.
The module name. A system typically consists of several modules. In the future, we also expect to see standard modules in a manner similar to how we have standard SNMP modules.
It is highly recommended to have the vendor name embedded in the module name, similar to how vendors have their names in proprietary MIBs today.
The XML namespace. A module defines a namespace. This is an important part of the module header. For example, we have:
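For instance, a module header declaring a namespace might look like (module name, namespace URI, and prefix illustrative, following the link example):

```yang
module links {
  namespace "http://example.com/ns/link";
  prefix link;
}
```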
The namespace string must uniquely define the namespace. It is very important that once we have settled on a namespace we never change it. The namespace string should remain the same between revisions of a product. Do not embed revision information in the namespace string since that breaks manager-side NETCONF scripts.
The revision statement as in:
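For instance (the description text is illustrative; the date matches the capabilities example below in this section):

```yang
revision 2007-06-09 {
  description "Initial revision.";
}
```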
The revision is exposed to a NETCONF manager in the capabilities sent from the agent to the NETCONF manager in the initial hello message. The fine details of revision management are being worked on in the IETF NETMOD working group and are not finalized at the time of this writing.
What is clear though, is that a manager should base its version decisions on the information in the revision string.
A capabilities reply from a NETCONF agent to the manager may look as:
where the revision information for the http://example.com/ns/link namespace is encoded as ?revision=2007-06-09 using standard URI notation.
When we change the data model for a namespace, it is recommended to change the revision statement and never make any changes to the data model that are backward incompatible. This means that all leafs that are added must be either optional or have a default value. That way it is ensured that the old NETCONF client code will continue to function on the new data model. Section 10 of RFC 6020 and section 11 of RFC 7950 define exactly what changes can be made to a data model to not break old NETCONF clients.
Internally and in the programming APIs, NSO uses integer values to represent YANG node names and the namespace URI. This conserves space and allows for more efficient comparisons (including switch statements) in the user application code. By default, confdc automatically computes a hash value for the namespace URI and for each string that is used as a node name.
Conflicts can occur in the mapping between strings and integer values - i.e. the initial assignment of integers to strings is unable to provide a unique, bi-directional mapping. Such conflicts are extremely rare (but possible) when the default hashing mechanism is used.
The conflicts are detected either by confdc or by the NSO daemon when it loads the .fxs files.
Any reported conflicts pertain to XML tags or the namespace URI. There are two different cases:
Two different strings mapped to the same integer. This is the classical hash conflict - extremely rare due to the high quality of the hash function used. The resolution is to manually assign a unique value to one of the conflicting strings. The value should be greater than 2^31+2 but less than 2^32-1. This way it will be out of the range of the automatic hash values, which are between 0 and 2^31-1. The best way to choose a value is by using a random number generator, as in 2147483649 + rand:uniform(2147483645). The tailf:id-value should be placed as a substatement to the statement where the conflict occurs, or in the module statement in case of namespace URI conflict.
One string mapped to two different integers. This is even more rare than the previous case - it can only happen if a hash conflict was detected and avoided through the use of tailf:id-value on one of the strings, and that string also occurs somewhere else. The resolution is to add the same tailf:id-value to the second occurrence of the string.
When converting a string to an enumeration value, the order of types in the union is important when the types overlap. The first matching type will be used, so we recommend having the narrower (or more specific) types first.
Consider the example below:
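A sketch of such an unfortunate ordering (leaf name illustrative): string comes first and therefore shadows both the int32 and the enumeration.

```yang
leaf example {
  type union {
    type string;
    type int32;
    type enumeration {
      enum unbounded;
    }
  }
}
```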
Converting the string 42 to a typed value using the YANG model above, will always result in a string value even though it is the string representation of an int32. Trying to convert the string unbounded will also result in a string value instead of the enumeration because the enumeration is placed after the string.
Instead, consider the example below where the string (being a wider type) is placed last:
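The same union with the wide string type moved last (leaf name illustrative):

```yang
leaf example {
  type union {
    type int32;
    type enumeration {
      enum unbounded;
    }
    type string;
  }
}
```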
Converting the string 42 to the corresponding union value will result in an int32, and converting the string unbounded will result in the enumeration value, as expected. The relative order of int32 and the enumeration does not matter, as they do not overlap.
Using the C and Python APIs to convert a string to a given value is further limited by the lack of restriction matching on the types. Consider the following example:
Converting the string 42 will result in a string value, even though the pattern requires the string to begin with a character in the "a" to "z" range. This value will be considered invalid by NSO if used in any calls handled by NSO.
To avoid issues when working with unions, place wider types at the end: for example, put string last, and int8 before int16, etc.
When using user-defined types together with NSO the compiled schema does not contain the original type as specified in the YANG file. This imposes some limitations on the running system.
High-level APIs are unable to infer the correct type of a value as this information is left out when the schema is compiled. It is possible to work around this issue by specifying the type explicitly whenever setting values of a user-defined type.
The normal representation of a type empty leaf in XML is <leaf-name/>. However, there is an exception when a leaf is a union of type empty and for example type string. Consider the example below:
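A minimal sketch of such a union (leaf name illustrative):

```yang
leaf example {
  type union {
    type empty;
    type string;
  }
}
```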
In this case, both <example>example</example> and <example/> will represent empty being set.
Manage user authentication, authorization, and audit using NSO's AAA mechanism.
Users log into NSO through the CLI, NETCONF, RESTCONF, SNMP, or the Web UI. In all cases, users need to be authenticated: a user must present credentials, such as a password or a public key, to gain access. As an alternative, for RESTCONF, users can be authenticated via token validation.
Once a user is authenticated, all operations performed by that user need to be authorized. That is, certain users may be allowed to perform certain tasks, whereas others are not. This is called authorization. We differentiate between the authorization of commands and the authorization of data access.
Explore service development in detail.
To demonstrate the simplicity a pure model-to-model service mapping affords, let us consider the most basic approach to providing the mapping: the service XML template. The XML template is an XML-encoded file that tells NSO what configuration to generate when someone requests a new service instance.
The first thing you need is the relevant device configuration (or configurations if multiple devices are involved). Suppose you must configure 192.0.2.1 as a DNS server on the target device. Using the NSO CLI, you first enter the device configuration, then add the DNS server. For a Cisco IOS-based device:
Note here that the configuration is not yet committed, and you can use the commit dry-run outformat xml command.
*** WARNING ***: delete in service create code is unsafe if data is
shared by other services
*** WARNING ***: set in service create code is unsafe if data is
shared by other services

@Service.pre_modification
def cb_pre_modification(self, tctx, op, kp, root, proplist): ...
@Service.create
def cb_create(self, tctx, root, service, proplist): ...
@Service.post_modification
def cb_post_modification(self, tctx, op, kp, root, proplist): ...

@ServiceCallback(servicePoint = "...",
callType = ServiceCBType.PRE_MODIFICATION)
public Properties preModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException;
@ServiceCallback(servicePoint="...",
callType=ServiceCBType.CREATE)
public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws DpCallbackException;
@ServiceCallback(servicePoint = "...",
callType = ServiceCBType.POST_MODIFICATION)
public Properties postModification(ServiceContext context,
ServiceOperationType operation,
ConfPath path,
Properties opaque)
throws DpCallbackException;

@Service.create
def cb_create(self, tctx, root, service, proplist):
    intf = None
    # proplist is of type list[tuple[str, str]]
    for pname, pvalue in proplist:
        if pname == 'INTERFACE':
            intf = pvalue
    if intf is None:
        intf = '...'
        proplist.append(('INTERFACE', intf))
    return proplist

public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws DpCallbackException {
// In Java API, opaque is null when service instance is first created.
if (opaque == null) {
opaque = new Properties();
}
String intf = opaque.getProperty("INTERFACE");
if (intf == null) {
intf = "...";
opaque.setProperty("INTERFACE", intf);
}
return opaque;
}

list example-service {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint example-service {
ncs:conflicts-with example-service;
ncs:conflicts-with other-service;
}
}

admin@ncs(config)# iface instance1 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28
admin@ncs(config)# commit

admin@ncs(config)# iface instance2 device c1 interface 0/1 ip-address 10.1.2.3 cidr-netmask 28
admin@ncs(config)# commit dry-run
cli {
local-node {
data +iface instance2 {
+ device c1;
+ interface 0/1;
+ ip-address 10.1.2.3;
+ cidr-netmask 28;
+}
}
}
admin@ncs(config)# commit and-quit
admin@ncs# show running-config devices device c1 config interface\
GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 2
! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
interface GigabitEthernet0/1
! Refcount: 2
ip address 10.1.2.3 255.255.255.240
! Refcount: 2
! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
ip dhcp snooping trust
exit
!
!

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="top-level-service">
<iface xmlns="http://com/example/iface">
<name>instance1</name>
<device>c1</device>
<interface>0/1</interface>
<ip-address>10.1.2.3</ip-address>
<cidr-netmask>28</cidr-netmask>
</iface>
</config-template>

list python-service {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint python-service-servicepoint;
list device {
key name;
leaf name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf number-of-interfaces {
type uint32;
}
}
}

@Service.create
def cb_create(self, tctx, root, service, proplist):
self.log.info('Service create(service=', service._path, ')')
for d in service.device:
for i in range(d.number_of_interfaces):
root.ncs__devices.device[d.name].config.ios__interface.GigabitEthernet.create(i).description = 'Managed by NSO'

admin@ncs(config)# python-service test
admin@ncs(config-python-service-test)# device CE-1 number-of-interfaces 10
admin@ncs(config-device-CE-1)# exit
admin@ncs(config-python-service-test)# device CE-2 number-of-interfaces 10
admin@ncs(config-device-CE-2)# exit
admin@ncs(config-python-service-test)# device PE-1 number-of-interfaces 10
admin@ncs(config-device-PE-1)# 2-Jan-2025::09:48:18.110 trace-id=8a94e614b426430ffcd34e0639b5cf40 span-id=c4a9037077c54402 parent-span-id=ff9ca4dccad15b30 usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (0.222 s)
2-Jan-2025::09:48:18.198 trace-id=8a94e614b426430ffcd34e0639b5cf40 span-id=2cdb960fde6f386e parent-span-id=ff9ca4dccad15b30 usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (0.088 s)

2-Jan-2025::09:49:00.909 trace-id=87b153d7edd0120f4810cd13fa207abd span-id=37188aea51359bd4 parent-span-id=f55947230241d550 usid=59 tid=214 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (2.316 s)
2-Jan-2025::09:49:02.299 trace-id=87b153d7edd0120f4810cd13fa207abd span-id=6a9962e63805673e parent-span-id=f55947230241d550 usid=59 tid=214 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (1.389 s)

2-Jan-2025::09:50:19.314 trace-id=4b144bc1f493a1c6f1f09df45be7a567 span-id=7e7a805a711ae483 parent-span-id=867f790fef787fca usid=59 tid=293 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (28.082 s)
2-Jan-2025::09:50:34.261 trace-id=4b144bc1f493a1c6f1f09df45be7a567 span-id=28a617b1279e8c56 parent-span-id=867f790fef787fca usid=59 tid=293 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (14.946 s)

admin@ncs(config)# commit dry-run
cli {
local-node {
data devices {
device CE-1 {
config {
interface {
+ GigabitEthernet 1000 {
+ description "Managed by NSO";
+ }
}
}
}
}
python-service test {
device CE-1 {
- number-of-interfaces 1000;
+ number-of-interfaces 1001;
}
}
}
}

2-Jan-2025::09:57:40.581 trace-id=ab51722b3be82a83bc59d7b40bfdedd3 span-id=e9039240e794e819 parent-span-id=df585fdf73c00df3 usid=75 tid=425 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] create: ok (24.900 s)
2-Jan-2025::09:58:44.309 trace-id=ab51722b3be82a83bc59d7b40bfdedd3 span-id=1e841bcb07685884 parent-span-id=df585fdf73c00df3 usid=75 tid=425 datastore=running context=cli subsystem=service-manager service=/python-service[name='test'] saving reverse diff-set and applying changes: ok (15.727 s)

list upper-python-service {
key name;
leaf name {
type string;
}
uses ncs:service-data;
ncs:servicepoint upper-python-service-servicepoint;
list device {
key name;
leaf name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf number-of-interfaces {
type uint32;
}
}
}

list lower-python-service {
key "device name";
leaf name {
type string;
}
leaf device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
uses ncs:service-data;
ncs:servicepoint lower-python-service-servicepoint;
leaf number-of-interfaces {
type uint32;
}
}

class UpperServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')
        for d in service.device:
            root.stacked_python_service__lower_python_service.create(d.name, service.name).number_of_interfaces = d.number_of_interfaces

class LowerServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')
        for i in range(service.number_of_interfaces):
            root.ncs__devices.device[service.device].config.ios__interface.GigabitEthernet.create(i).description = 'Managed by NSO'

admin@ncs(config)# upper-python-service test device CE-1 number-of-interfaces 1000
admin@ncs(config-device-CE-1)# top
admin@ncs(config)# upper-python-service test device CE-2 number-of-interfaces 1000
admin@ncs(config-device-CE-2)# top
admin@ncs(config)# upper-python-service test device PE-1 number-of-interfaces 1000
admin@ncs(config-device-PE-1)# commit

2-Jan-2025::10:14:27.682 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=58c41383d602d7e4 parent-span-id=49f214d3c1e906fb usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/upper-python-service[name='test'] create: ok (0.012 s)
2-Jan-2025::10:14:27.706 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=3dcdb68f79b38f78 parent-span-id=49f214d3c1e906fb usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/upper-python-service[name='test'] saving reverse diff-set and applying changes: ok (0.023 s)

2-Jan-2025::10:14:35.205 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=1aa5131f96e2b4fe parent-span-id=9da61057b7e18fae usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-1'] create: ok (7.492 s)
2-Jan-2025::10:14:37.743 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=3dce5f82d6f5558f parent-span-id=9da61057b7e18fae usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-1'] saving reverse diff-set and applying changes: ok (2.538 s)
...
2-Jan-2025::10:14:46.126 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=78201c416ffa5ca5 parent-span-id=056757c9dd26bb8e usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-2'] create: ok (8.381 s)
2-Jan-2025::10:14:48.455 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=5b4fd53af68d3233 parent-span-id=056757c9dd26bb8e usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='CE-2'] saving reverse diff-set and applying changes: ok (2.328 s)
...
2-Jan-2025::10:14:56.294 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=374cecf183a5065a parent-span-id=e513c0823e29256c usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='PE-1'] create: ok (7.837 s)
2-Jan-2025::10:14:58.645 trace-id=2dc929ca780db076154a16d0edc50d05 span-id=b0d42c480167757d parent-span-id=e513c0823e29256c usid=59 tid=132 datastore=running context=cli subsystem=service-manager service=/lower-python-service[name='test'][device='PE-1'] saving reverse diff-set and applying changes: ok (2.351 s)

admin@ncs(config)# upper-python-service test device CE-1 number-of-interfaces 1001
admin@ncs(config-device-CE-1)# commit dry-run
cli {
local-node {
data upper-python-service test {
device CE-1 {
- number-of-interfaces 1000;
+ number-of-interfaces 1001;
}
}
lower-python-service test CE-1 {
- number-of-interfaces 1000;
+ number-of-interfaces 1001;
}
devices {
device CE-1 {
config {
interface {
+ GigabitEthernet 1000 {
+ description "Managed by NSO";
+ }
}
}
}
}
}
}

admin@ncs(config)# show full-configuration lower-python-service CE-1
lower-python-service CE-1 another-instance
number-of-interfaces 1
!
lower-python-service CE-1 test
number-of-interfaces 1001
!
lower-python-service CE-1 yet-another-instance
number-of-interfaces 1
!

admin@ncs(config)# show full-configuration lower-python-service
lower-python-service CE-1 another-instance
number-of-interfaces 1
!
lower-python-service CE-1 test
number-of-interfaces 1001
!
lower-python-service CE-1 yet-another-instance
number-of-interfaces 1
!
lower-python-service CE-2 test
number-of-interfaces 1000
!
lower-python-service PE-1 test
number-of-interfaces 1000
!

container resource-facing-services {
list device {
description "All services on a device";
key name;
leaf name {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
}
}

augment "/rfs:resource-facing-services/rfs:device" {
list l3vpn-rfs {
key "name endpoint-id";
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
leaf endpoint-id {
tailf:info "Endpoint identifier";
type string;
}
uses ncs:service-data;
ncs:servicepoint l3vpn-rfs-servicepoint;
leaf role {
type enumeration {
enum "ce";
enum "pe";
}
}
container remote {
leaf device {
type leafref {
path "/rfs:resource-facing-services/rfs:device/rfs:name";
}
}
leaf ip-address {
type inet:ipv4-address;
}
}
leaf as-number {
description "AS used within all VRF of the VPN";
tailf:info "MPLS VPN AS number.";
mandatory true;
type uint32;
}
container local {
when "../role = 'ce'";
uses endpoint-grouping;
}
container link {
uses endpoint-grouping;
}
}
}

admin@ncs(config)# show full-configuration vpn
vpn l3vpn volvo
endpoint c1
as-number 65001
ce device CE-1
ce local interface-name GigabitEthernet
ce local interface-number 0/9
ce local ip-address 192.168.0.1
ce link interface-name GigabitEthernet
ce link interface-number 0/2
ce link ip-address 10.1.1.1
pe device PE-1
pe link interface-name GigabitEthernet
pe link interface-number 0/0/0/1
pe link ip-address 10.1.1.2
!
endpoint c2
as-number 65001
ce device CE-2
ce local interface-name GigabitEthernet
ce local interface-number 0/3
ce local ip-address 192.168.1.1
ce link interface-name GigabitEthernet
ce link interface-number 0/1
ce link ip-address 10.2.1.1
pe device PE-1
pe link interface-name GigabitEthernet
pe link interface-number 0/0/0/2
pe link ip-address 10.2.1.2
!
!

admin@ncs(config)# show full-configuration resource-facing-services device PE-1
resource-facing-services device PE-1
l3vpn-rfs volvo c1
role pe
as-number 65001
link interface-name GigabitEthernet
link interface-number 0/0/0/1
link ip-address 10.1.1.2
link remote ip-address 10.1.1.1
!
l3vpn-rfs volvo c2
role pe
as-number 65001
link interface-name GigabitEthernet
link interface-number 0/0/0/2
link ip-address 10.2.1.2
link remote ip-address 10.2.1.1
!
!

admin@ncs(config)# show full-configuration resource-facing-services device CE-1
resource-facing-services device CE-1
l3vpn-rfs volvo c1
role ce
as-number 65001
local interface-name GigabitEthernet
local interface-number 0/9
local ip-address 192.168.0.1
link interface-name GigabitEthernet
link interface-number 0/2
link ip-address 10.1.1.1
link remote ip-address 10.1.1.2
!
!

admin@ncs(config)# show full-configuration resource-facing-services device CE-2
resource-facing-services device CE-2
l3vpn-rfs volvo c2
role ce
as-number 65001
local interface-name GigabitEthernet
local interface-number 0/3
local ip-address 192.168.1.1
link interface-name GigabitEthernet
link interface-number 0/1
link ip-address 10.2.1.1
link remote ip-address 10.2.1.2
!
!

admin@ncs# show running-config devices device c1 config\
interface GigabitEthernet 0/1
devices device c1
config
interface GigabitEthernet0/1
ip address 10.1.2.3 255.255.255.240
exit
!
!

admin@ncs(config)# commit dry-run
cli {
local-node {
data +iface instance1 {
+ device c1;
+ interface 0/1;
+ ip-address 10.1.2.3;
+ cidr-netmask 28;
+}
}
}

admin@ncs# show running-config devices device c1 config interface\
GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 2
! Backpointer: [ /iface:iface[iface:name='instance1'] ]
interface GigabitEthernet0/1
! Refcount: 2
! Originalvalue: 10.1.2.3
ip address 10.1.2.3 255.255.255.240
exit
!
!

import ncs
from openpyxl import load_workbook
def main():
    wb = load_workbook('services.xlsx')
    sheet = wb[wb.sheetnames[0]]
    with ncs.maapi.single_write_trans('admin', 'python') as t:
        root = ncs.maagic.get_root(t)
        for sr in sheet.rows:
            # Suppose columns in spreadsheet are:
            # instance (A), device (B), interface (C), IP (D), mask (E)
            name = sr[0].value
            service = root.iface.create(name)
            service.device = sr[1].value
            service.interface = sr[2].value
            service.ip_address = sr[3].value
            service.cidr_netmask = sr[4].value
        t.apply()

main()

admin@ncs# show running-config iface | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<iface xmlns="http://com/example/iface">
<name>instance1</name>
<device>c1</device>
<interface>0/1</interface>
<ip-address>10.1.2.3</ip-address>
<cidr-netmask>28</cidr-netmask>
</iface>
</config>

devconfig = root.devices.device[service.device].config
intf = devconfig.interface.GigabitEthernet[service.interface]
netmask = intf.ip.address.primary.mask
cidr = IPv4Network(f'0.0.0.0/{netmask}').prefixlen

admin@ncs# show running-config devices device c1 config\
interface GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 2
! Backpointer: [ /iface:iface[iface:name='instance1'] ]
interface GigabitEthernet0/1
! Refcount: 2
! Originalvalue: 10.1.2.3
ip address 10.1.2.3 255.255.255.240
exit
!
!

admin@ncs# iface instance1 re-deploy reconcile
admin@ncs# show running-config devices device c1 config\
interface GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 1
! Backpointer: [ /iface:iface[iface:name='instance1'] ]
interface GigabitEthernet0/1
! Refcount: 1
ip address 10.1.2.3 255.255.255.240
exit
!
!

admin@ncs# show running-config devices device c1 config\
interface GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 3
! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
interface GigabitEthernet0/1
! Refcount: 3
! Originalvalue: 10.1.2.3
ip address 10.1.2.3 255.255.255.240
exit
!
!

admin@ncs# show running-config devices device c1 config\
interface GigabitEthernet 0/1 | display service-meta-data
devices device c1
config
! Refcount: 2
! Backpointer: [ /iface:iface[iface:name='instance1'] /iface:iface[iface:name='instance2'] ]
interface GigabitEthernet0/1
! Refcount: 2
ip address 10.1.2.3 255.255.255.240
exit
!
!

admin@ncs(config)# no iface instance1
admin@ncs(config)# commit dry-run outformat native
native {
}
admin@ncs(config)# no iface instance2
admin@ncs(config)# commit dry-run outformat native
native {
device {
name c1
data no interface GigabitEthernet0/1
}
}

admin@ncs(config)# commit dry-run
cli {
local-node {
data +iface instance1 {
+ device c1;
+ interface 0/1;
+ ip-address 10.1.2.3;
+ cidr-netmask 28;
+}
+iface instance2 {
+ device c1;
+ interface 0/2;
+ ip-address 10.2.2.3;
+ cidr-netmask 28;
+}
}
}
admin@ncs(config)# commit

admin@ncs# iface instance1 re-deploy reconcile\
{ discard-non-service-config } dry-run
cli {
}

admin@ncs# iface instance1 re-deploy reconcile

admin@ncs# iface instance2 re-deploy reconcile\
{ discard-non-service-config } dry-run
cli {
local-node {
data devices {
device c1 {
config {
interface {
GigabitEthernet 0/2 {
ip {
dhcp {
snooping {
- trust;
}
}
}
}
}
}
}
}
}
}

iface instance2
device c1
interface 0/2
ip-address 10.2.2.3
cidr-netmask 28
variant v3
!

admin@ncs# iface instance2 re-deploy reconcile\
{ discard-non-service-config } dry-run
cli {
}
admin@ncs# iface instance2 re-deploy reconcile

ConfXMLParam[] params = new ConfXMLParam[] {
new ConfXMLParamValue("ncs", "path", new ConfList(new ConfValue[] {
new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex0']/"
+ "ncs:config/r:sys/r:interfaces/r:interface[r:name='eth0']"),
new ConfBuf("/ncs:devices/ncs:device[ncs:name='ex1']/"
+ "ncs:config/r:sys/r:dns/r:server")
})),
new ConfXMLParamLeaf("ncs", "suppress-positive-result")};
ConfXMLParam[] result =
maapi.requestAction(params, "/ncs:devices/ncs:partial-sync-from");

module acme-system {
namespace "http://acme.example.com/system";
.....

leaf host-name {
type string;
description "Hostname for this system";
}

<host-name>my.example.com</host-name>

leaf enabled {
type empty;
description "Enable the interface";
}

<enabled/>

leaf-list domain-search {
type string;
description "List of domain names to search";
}

<domain-search>high.example.com</domain-search>
<domain-search>low.example.com</domain-search>
<domain-search>everywhere.example.com</domain-search>

container system {
container login {
leaf message {
type string;
description
"Message given at start of login session";
}
}
}

<system>
<login>
<message>Good morning, Dave</message>
</login>
</system>

list user {
key "name";
leaf name {
type string;
}
leaf full-name {
type string;
}
leaf class {
type string;
}
}

<user>
<name>glocks</name>
<full-name>Goldie Locks</full-name>
<class>intruder</class>
</user>
<user>
<name>snowey</name>
<full-name>Snow White</full-name>
<class>free-loader</class>
</user>
<user>
<name>rzull</name>
<full-name>Repun Zell</full-name>
<class>tower</class>
</user>

// Contents of "acme-system.yang"
module acme-system {
namespace "http://acme.example.com/system";
prefix "acme";
organization "ACME Inc.";
contact "[email protected]";
description
"The module for entities implementing the ACME system.";
revision 2007-06-09 {
description "Initial revision.";
}
container system {
leaf host-name {
type string;
description "Hostname for this system";
}
leaf-list domain-search {
type string;
description "List of domain names to search";
}
container login {
leaf message {
type string;
description
"Message given at start of login session";
}
list user {
key "name";
leaf name {
type string;
}
leaf full-name {
type string;
}
leaf class {
type string;
}
}
}
}
}

list interface {
key "name";
config true;
leaf name {
type string;
}
leaf speed {
type enumeration {
enum 10m;
enum 100m;
enum auto;
}
}
leaf observed-speed {
type uint32;
config false;
}
}

typedef percent {
type uint16 {
range "0 .. 100";
}
description "Percentage";
}
leaf completed {
type percent;
}

<completed>20</completed>

leaf completed {
type uint16 {
range "0 .. 100";
}
description "Percentage";
}

grouping target {
leaf address {
type inet:ip-address;
description "Target IP address";
}
leaf port {
type inet:port-number;
description "Target port number";
}
}
container peer {
container destination {
uses target;
}
}

<peer>
<destination>
<address>192.0.2.1</address>
<port>830</port>
</destination>
</peer>

container connection {
container source {
uses target {
refine "address" {
description "Source IP address";
}
refine "port" {
description "Source port number";
}
}
}
container destination {
uses target {
refine "address" {
description "Destination IP address";
}
refine "port" {
description "Destination port number";
}
}
}
}

container food {
choice snack {
mandatory true;
case sports-arena {
leaf pretzel {
type empty;
}
leaf beer {
type empty;
}
}
case late-night {
leaf chocolate {
type enumeration {
enum dark;
enum milk;
enum first-available;
}
}
}
}
}

<food>
<chocolate>first-available</chocolate>
</food>

augment /system/login/user {
when "class != 'wheel'";
leaf uid {
type uint16 {
range "1000 .. 30000";
}
}
}

<user>
<name>alicew</name>
<full-name>Alice N. Wonderland</full-name>
<class>drop-out</class>
<other:uid>1024</other:uid>
</user>

rpc activate-software-image {
input {
leaf image-name {
type string;
}
}
output {
leaf status {
type string;
}
}
}

<rpc message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<activate-software-image xmlns="http://acme.example.com/system">
<image-name>acmefw-2.3</image-name>
</activate-software-image>
</rpc>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<status xmlns="http://acme.example.com/system">
The image acmefw-2.3 is being installed.
</status>
</rpc-reply>

notification link-failure {
description "A link failure has been detected";
leaf if-name {
type leafref {
path "/interfaces/interface/name";
}
}
leaf if-admin-status {
type ifAdminStatus;
}
}

<notification xmlns="urn:ietf:params:netconf:capability:notification:1.0">
<eventTime>2007-09-01T10:00:00Z</eventTime>
<link-failure xmlns="http://acme.example.com/system">
<if-name>so-1/2/3.0</if-name>
<if-admin-status>up</if-admin-status>
</link-failure>
</notification>

module test {
namespace "http://tail-f.com/test";
prefix "t";
container top {
leaf a {
type int32;
}
leaf b {
type string;
}
}
}

$ ncsc -c test.yang

module test {
namespace "http://tail-f.com/test";
prefix "t";
import ietf-inet-types {
prefix inet;
}
container top {
leaf a {
type int32;
}
leaf b {
type string;
}
leaf ip {
type inet:ipv4-address;
}
}
}

$ ncsc -c test.yang
$ ncsc --get-info test.fxs
fxs file
Ncsc version: "3.0_2"
uri: http://tail-f.com/test
id: http://tail-f.com/test
prefix: "t"
flags: 6
type: cs
mountpoint: undefined
exported agents: all
dependencies: ['http://www.w3.org/2001/XMLSchema',
'urn:ietf:params:xml:ns:yang:inet-types']
source: ["test.yang"]

$ ncs -c ncs.conf --foreground --verbose
The namespace urn:ietf:params:xml:ns:yang:inet-types (referenced by http://tail-f.com/test) could not be found in the loadPath.
Daemon died status=21

$ ncs -c ncs.conf --addloadpath ${NCS_DIR}/etc/ncs --foreground --verbose

container interface {
leaf ifType {
type enumeration {
enum ethernet;
enum atm;
}
}
leaf ifMTU {
type uint32;
}
must "ifType != 'ethernet' or "
+ "(ifType = 'ethernet' and ifMTU = 1500)" {
error-message "An ethernet MTU must be 1500";
}
must "ifType != 'atm' or "
+ "(ifType = 'atm' and ifMTU <= 17966 and ifMTU >= 64)" {
error-message "An atm MTU must be 64 .. 17966";
}
}

list server {
key "name";
unique "ip port";
leaf name {
type string;
}
leaf ip {
type inet:ip-address;
}
leaf port {
type inet:port-number;
}
}

<server>
<name>smtp</name>
<ip>192.0.2.1</ip>
<port>25</port>
</server>
<server>
<name>http</name>
<ip>192.0.2.1</ip>
<port>25</port>
</server>

leaf a {
type boolean;
}
leaf b {
type string;
when "../a = 'true'";
}module test {
namespace "http://tail-f.com/test";
prefix "t";
import ietf-inet-types {
prefix inet;
}
import tailf-common {
prefix tailf;
}
container top {
leaf a {
type int32;
config false;
tailf:callpoint mycp;
}
leaf b {
tailf:validate myvalcp {
tailf:dependency "../a";
}
type string;
}
leaf ip {
type inet:ipv4-address;
}
}
}

module test {
namespace "http://tail-f.com/test";
prefix "t";
import ietf-inet-types {
prefix inet;
}
container top {
leaf a {
type int32;
config false;
}
leaf b {
type string;
}
leaf ip {
type inet:ipv4-address;
}
}
}

module test-ann {
namespace "http://tail-f.com/test-ann";
prefix "ta";
import test {
prefix t;
}
import tailf-common {
prefix tailf;
}
tailf:annotate "/t:top/t:a" {
tailf:callpoint mycp;
}
tailf:annotate "/t:top" {
tailf:annotate "t:b" { // recursive annotation
tailf:validate myvalcp {
tailf:dependency "../t:a";
}
}
}
}

$ confdc -c -a test-ann.yang test.yang

leaf mtu {
type uint16 {
range "1 .. 1500";
}
description
"MTU is the largest frame size that can be transmitted
over the network. For example, an Ethernet MTU is 1,500
bytes. Messages longer than the MTU must be divided
into smaller frames.";
tailf:info
"largest frame size";
}

typedef mtuType {
type uint16 {
range "1 .. 1500";
}
description
"MTU is the largest frame size that can be transmitted over the
network. For example, an Ethernet MTU is 1,500
bytes. Messages longer than the MTU must be
divided into smaller frames.";
tailf:info
"largest frame size";
}
leaf mtu {
type mtuType;
}

typedef mtuType {
type uint32 {
range "1..1500" {
error-message
"The MTU must be a positive number not "
+ "larger than 1500";
}
}
}

$ /sbin/ip link list
1: eth0: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast qlen 1000
link/ether 00:12:3f:7d:b0:32 brd ff:ff:ff:ff:ff:ff
2: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
3: dummy0: <BROADCAST,NOARP> mtu 1500 qdisc noop
link/ether a6:17:b9:86:2c:04 brd ff:ff:ff:ff:ff:ff

<?xml version="1.0"?>
<config xmlns="http://example.com/ns/link">
<links>
<link>
<name>eth0</name>
<flags>
<UP/>
<BROADCAST/>
<MULTICAST/>
</flags>
<addr>00:12:3f:7d:b0:32</addr>
<brd>ff:ff:ff:ff:ff:ff</brd>
<mtu>1500</mtu>
</link>
<link>
<name>lo</name>
<flags>
<UP/>
<LOOPBACK/>
</flags>
<addr>00:00:00:00:00:00</addr>
<brd>00:00:00:00:00:00</brd>
<mtu>16436</mtu>
</link>
</links>
</config>

module links {
namespace "http://example.com/ns/links";
prefix link;
revision 2007-06-09 {
description "Initial revision.";
}
...

leaf UP {
type boolean;
default false;
}
leaf NOARP {
type boolean;
default false;
}

leaf UP {
type empty;
}
leaf NOARP {
type empty;
}

grouping LinkFlags {
leaf UP {
type empty;
}
leaf NOARP {
type empty;
}
leaf BROADCAST {
type empty;
}
leaf MULTICAST {
type empty;
}
leaf LOOPBACK {
type empty;
}
leaf NOTRAILERS {
type empty;
}
}

typedef mac-address {
type string {
pattern '[0-9a-fA-F]{2}(:[0-9a-fA-F]{2}){5}';
}
description
"The mac-address type represents an IEEE 802 MAC address.
This type is in the value set and its semantics equivalent to
the MacAddress textual convention of the SMIv2.";
reference
"IEEE 802: IEEE Standard for Local and Metropolitan Area
Networks: Overview and Architecture
RFC 2579: Textual Conventions for SMIv2";
}

typedef QueueDisciplineType {
type enumeration {
enum pfifo_fast;
enum noqueue;
enum noop;
enum htb;
}
}

container links {
list link {
key name;
unique addr;
max-elements 1024;
leaf name {
type string;
}
container flags {
uses LinkFlags;
}
leaf addr {
type yang:mac-address;
mandatory true;
}
leaf brd {
type yang:mac-address;
mandatory true;
}
leaf qdisc {
type QueueDisciplineType;
mandatory true;
}
leaf qlen {
type uint32;
mandatory true;
}
leaf mtu {
type uint32;
mandatory true;
}
}
}

container queueDisciplines {
list queueDiscipline {
key linkName;
max-elements 1024;
leaf linkName {
type leafref {
path "/config/links/link/name";
}
}
leaf type {
type QueueDisciplineType;
mandatory true;
}
leaf length {
type uint32;
}
}
}

container linkLimitations {
list LinkLimitation {
key linkName;
max-elements 1024;
leaf linkName {
type leafref {
path "/config/links/link/name";
}
}
container limitations {
leaf only10Mbs { type boolean;}
leaf onlyHalfDuplex { type boolean;}
}
}
}

leaf defaultLink {
type leafref {
path "/config/links/link/name";
}
}

container servers {
list server {
key name;
unique "ip port";
unique "index";
max-elements 64;
leaf name {
type string;
}
leaf index {
type uint32;
mandatory true;
}
leaf ip {
type inet:ip-address;
mandatory true;
}
leaf port {
type inet:port-number;
mandatory true;
}
}
}

leaf mtu {
type int32;
default 1500;
}

leaf UP {
type boolean;
default true;
}

container links {
leaf mtu {
type uint32;
}
list link {
key name;
leaf name {
type string;
}
leaf mtu {
type uint32;
tailf:default-ref '../../mtu';
}
}
}

<links>
<mtu>1000</mtu>
<link>
<name>eth0</name>
<mtu>1500</mtu>
</link>
<link>
<name>eth1</name>
</link>
</links>

module links {
namespace "http://example.com/ns/link";
prefix link;
import ietf-yang-types {
prefix yang;
}
grouping LinkFlagsType {
leaf UP {
type empty;
}
leaf NOARP {
type empty;
}
leaf BROADCAST {
type empty;
}
leaf MULTICAST {
type empty;
}
leaf LOOPBACK {
type empty;
}
leaf NOTRAILERS {
type empty;
}
}
typedef QueueDisciplineType {
type enumeration {
enum pfifo_fast;
enum noqueue;
enum noop;
enum htb;
}
}
container config {
container links {
list link {
key name;
unique addr;
max-elements 1024;
leaf name {
type string;
}
container flags {
uses LinkFlagsType;
}
leaf addr {
type yang:mac-address;
mandatory true;
}
leaf brd {
type yang:mac-address;
mandatory true;
}
leaf mtu {
type uint32;
default 1500;
}
}
}
container queueDisciplines {
list queueDiscipline {
key linkName;
max-elements 1024;
leaf linkName {
type leafref {
path "/config/links/link/name";
}
}
leaf type {
type QueueDisciplineType;
mandatory true;
}
leaf length {
type uint32;
}
}
}
container linkLimitations {
list linkLimitation {
key linkName;
leaf linkName {
type leafref {
path "/config/links/link/name";
}
}
container limitations {
leaf only10Mbps {
type boolean;
default false;
}
leaf onlyHalfDuplex {
type boolean;
default false;
}
}
}
}
container defaultLink {
leaf linkName {
type leafref {
path "/config/links/link/name";
}
}
}
}
}

$ confdc -c links.yang

list host {
key "name";
leaf name {
type string;
}
...
}
leaf host-ref {
type leafref {
path "../host/name";
}
}

list host {
key "name";
leaf name {
type string;
}
list server {
key "ip port";
leaf ip {
type inet:ip-address;
}
leaf port {
type inet:port-number;
}
...
}
}

leaf server-host {
type leafref {
path "/host/name";
}
}
leaf server-ip {
type leafref {
path "/host[name=current()/../server-host]/server/ip";
}
}
leaf server-port {
type leafref {
path "/host[name=current()/../server-host]"
+ "/server[ip=current()/../server-ip]/port";
}
}

leaf server-host {
type leafref {
path "/host/name";
}
}
leaf server-ip {
type leafref {
path "deref(../server-host)/../server/ip";
}
}
leaf server-port {
type leafref {
path "deref(../server-ip)/../port";
}
}

module datatypes {
namespace "http://example.com/ns/dt";
prefix dt;
grouping countersType {
leaf recvBytes {
type uint64;
mandatory true;
}
leaf sentBytes {
type uint64;
mandatory true;
}
}
}

$ confdc -c datatypes.yang

module test {
namespace "http://tail-f.com/test";
prefix "t";
import datatypes {
prefix dt;
}
container stats {
uses dt:countersType;
}
}

$ confdc -c test.yang --yangpath /path/to/dt

leaf example {
type union {
type string; // NOTE: widest type first
type int32;
type enumeration {
enum "unbounded";
}
}
}

leaf example {
type union {
type enumeration {
enum "unbounded";
}
type int32;
type string; // NOTE: widest type last
}
}

leaf example {
type union {
type string {
pattern "[a-z]+[0-9]+";
}
type int32;
}
}

leaf example {
type union {
type empty;
type string;
}
}
Type                 Value         Description
empty                Empty         A leaf that does not have any value
enumeration          Text/Number   Enumerated strings with associated numeric values
identityref          Text          A reference to an abstract identity
instance-identifier  Text          References a data tree node
int8                 Number        8-bit signed integer
int16                Number        16-bit signed integer
int32                Number        32-bit signed integer
int64                Number        64-bit signed integer
leafref              Text/Number   A reference to a leaf instance
string               Text          Human-readable string
uint8                Number        8-bit unsigned integer
uint16               Number        16-bit unsigned integer
uint32               Number        32-bit unsigned integer
uint64               Number        64-bit unsigned integer
union                Text/Number   Choice of member types

module acme-system {
namespace "http://acme.example.com/system";
prefix "acme";
revision 2007-06-09;
.....

<?xml version="1.0" encoding="UTF-8"?>
<hello xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<capabilities>
<capability>urn:ietf:params:netconf:base:1.0</capability>
<capability>urn:ietf:params:netconf:capability:writable-running:1.0</capability>
<capability>urn:ietf:params:netconf:capability:candidate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:confirmed-commit:1.0</capability>
<capability>urn:ietf:params:netconf:capability:xpath:1.0</capability>
<capability>urn:ietf:params:netconf:capability:validate:1.0</capability>
<capability>urn:ietf:params:netconf:capability:rollback-on-error:1.0</capability>
<capability>http://example.com/ns/link?revision=2007-06-09</capability>
....

The NSO daemon manages device configuration, including AAA information. NSO both manages the AAA information and uses it: the AAA information describes which users may log in, what passwords they have, and what they are allowed to do. This is solved in NSO by requiring a data model to be both loaded and populated with data. NSO uses the YANG module tailf-aaa.yang for authentication, while ietf-netconf-acm.yang (NETCONF Access Control Model (NACM), RFC 8341), as augmented by tailf-acm.yang, is used for group assignment and authorization.
....The NSO daemon manages device configuration including AAA information. NSO manages AAA information as well as uses it. The AAA information describes which users may log in, what passwords they have, and what they are allowed to do. This is solved in NSO by requiring a data model to be both loaded and populated with data. NSO uses the YANG module tailf-aaa.yang for authentication, while ietf-netconf-acm.yang (NETCONF Access Control Model (NACM), RFC 8341) as augmented by tailf-acm.yang is used for group assignment and authorization.
The NACM data model is targeted specifically towards access control for NETCONF operations and thus lacks some functionality that is needed in NSO, in particular, support for the authorization of CLI commands and the possibility to specify the context (NETCONF, CLI, etc.) that a given authorization rule should apply to. This functionality is modeled by augmentation of the NACM model, as defined in the tailf-acm.yang YANG module.
The ietf-netconf-acm.yang and tailf-acm.yang modules can be found in the $NCS_DIR/src/ncs/yang directory in the release, while tailf-aaa.yang can be found in the $NCS_DIR/src/ncs/aaa directory.
NACM options related to services are modeled by augmentation of the NACM model, as defined in the tailf-ncs-acm.yang YANG module. tailf-ncs-acm.yang can also be found in the $NCS_DIR/src/ncs/yang directory in the release.
The complete AAA data model defines a set of users, a set of groups, and a set of rules. The data model must be populated with data that is subsequently used by NSO itself when it authenticates users and authorizes user data access. These YANG modules work exactly like all other fxs files loaded into the system, with the exception that NSO itself uses them. The data belongs to the application, but NSO itself is the user of the data.
Since NSO requires a data model for the AAA information for its operation, it will report an error and fail to start if these data models cannot be found.
NSO itself is configured through a configuration file, ncs.conf. In that file, the following items relate to authentication and authorization:
/ncs-config/aaa/ssh-server-key-dir: If SSH termination is enabled for NETCONF or the CLI, the NSO built-in SSH server needs to have server keys. These keys are generated by the NSO install script and by default end up in $NCS_DIR/etc/ncs/ssh.
It is also possible to use OpenSSH to terminate NETCONF or the CLI. If OpenSSH is used to terminate SSH traffic, this setting has no effect.
/ncs-config/aaa/ssh-pubkey-authentication: If SSH termination is enabled for NETCONF or the CLI, this item controls how the NSO SSH daemon locates the user keys for public key authentication. See Public Key Login for details.
/ncs-config/aaa/local-authentication/enabled: The term 'local user' refers to a user stored under /aaa/authentication/users. The alternative is a user unknown to NSO, typically authenticated by PAM. By default, NSO first checks local users before trying PAM or external authentication.
Local authentication is practical in test environments. It is also useful when we want to have one set of users that are allowed to log in to the host with normal shell access and another set of users that are only allowed to access the system using the normal encrypted, fully authenticated, northbound interfaces of NSO.
If we always authenticate users through PAM, it may make sense to set this configurable to false. If we disable local authentication, it implicitly means that we must use either PAM authentication or external authentication. It also means that we can leave the entire data trees under /aaa/authentication/users and, in the case of external authentication, also /nacm/groups (for NACM) or /aaa/authentication/groups (for legacy tailf-aaa) empty.
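As an illustration, an ncs.conf fragment that disables local authentication could look like the following sketch (the nesting under /ncs-config is abbreviated here; consult ncs.conf(5) for the authoritative schema):

```xml
<aaa>
  <local-authentication>
    <enabled>false</enabled>
  </local-authentication>
</aaa>
```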
/ncs-config/aaa/pam: NSO can authenticate users using PAM (Pluggable Authentication Modules). PAM is an integral part of most Unix-like systems.
PAM is a complicated, albeit powerful, subsystem. It may be easier to have all users stored locally on the host. However, if we want to store users in a central location, PAM can be used to access the remote information. PAM can be configured to perform most login scenarios, including RADIUS and LDAP. One major drawback of PAM authentication is that there is no easy way to extract the group information from PAM: PAM authenticates users; it does not also assign a user to a set of groups. PAM authentication is thoroughly described later in this chapter.
/ncs-config/aaa/default-group: If this configuration parameter is defined and if the group of a user cannot be determined, a logged-in user ends up in the given default group.
/ncs-config/aaa/external-authentication: NSO can authenticate users using an external executable. This is further described in External Authentication. As an alternative, you may consider using package authentication.
/ncs-config/aaa/external-validation: NSO can authenticate users by validation of tokens using an external executable. This is further described later in this chapter. Where external authentication uses a username and password to authenticate a user, external validation uses a token. The validation script should use the token to authenticate a user and can, optionally, also return a new token to be returned with the result of the request. It is currently only supported for RESTCONF.
/ncs-config/aaa/external-challenge: NSO has support for multi-factor authentication by sending challenges to a user. Challenges may be sent from any of the external authentication mechanisms but are currently only supported by JSON-RPC and CLI over SSH. This is further described in External Multi-Factor Authentication.
/ncs-config/aaa/package-authentication: NSO can authenticate users using package authentication. It extends the concept of external authentication by allowing multiple packages to be used for authentication instead of a single executable. This is further described in Package Authentication.
/ncs-config/aaa/single-sign-on: With this setting enabled, NSO invokes Package Authentication on all requests to HTTP endpoints with the /sso prefix. This way, Package Authentication packages that require custom endpoints can expose them under the /sso base route.
For example, a SAMLv2 Single Sign-On (SSO) package needs to process requests to an AssertionConsumerService endpoint, such as /sso/saml/acs, and therefore requires enabling this setting.
This is a valid authentication method for the Web UI and JSON-RPC interfaces and requires Package Authentication to be enabled as well.
/ncs-config/aaa/single-sign-on/enable-automatic-redirect: If only one Single Sign-On package is configured (a package with single-sign-on-url set in package-meta-data.xml) and also this setting is enabled, NSO automatically redirects all unauthenticated access attempts to the configured single-sign-on-url.
Depending on the northbound management protocol, when a user session is created in NSO, it may or may not be authenticated. If the session is not yet authenticated, NSO's AAA subsystem is used to perform authentication and authorization, as described below. If the session already has been authenticated, NSO's AAA assigns groups to the user as described in Group Membership, and performs authorization, as described in Authorization.
The authentication part of the data model can be found in tailf-aaa.yang:
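An abridged sketch of that part of the model, reconstructed here from the paths used in this chapter (leaf types and many nodes are simplified or omitted; tailf-aaa.yang in the release is the authoritative source):

```yang
container aaa {
  container authentication {
    container users {
      list user {
        key name;
        leaf name       { type string; }
        leaf uid        { type int32;  }   // default UNIX uid
        leaf gid        { type int32;  }   // default UNIX gid
        leaf password   { type string; }   // stored hashed
        leaf ssh_keydir { type string; }   // dir holding authorized_keys
        leaf homedir    { type string; }
      }
    }
    container groups {
      list group {
        key name;
        leaf name  { type string; }
        leaf users { type string; }        // group members
      }
    }
  }
}
```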
AAA authentication is used in the following cases:
When the built-in SSH server is used for NETCONF and CLI sessions.
For Web UI sessions and REST access.
When the method Maapi.Authenticate() is used.
NSO's AAA authentication is not used in the following cases:
When NETCONF uses an external SSH daemon, such as OpenSSH.
In this case, the NETCONF session is initiated using the program netconf-subsys, as described in NETCONF Transport Protocols in Northbound APIs.
When NETCONF uses TCP, as described in NETCONF Transport Protocols in Northbound APIs, e.g. through the command netconf-console.
When accessing the CLI by invoking the ncs_cli, e.g. through an external SSH daemon, such as OpenSSH, or a telnet daemon.
An important special case here is when a user has shell access to the host and runs ncs_cli from the shell. This command, as well as direct access to the IPC socket, allows for authentication bypass. It is crucial to consider this case for your deployment: if non-trusted users have shell access to the host, IPC access must be restricted.
When SNMP is used. SNMP has its own authentication mechanisms; see Northbound APIs.
When the method Maapi.startUserSession() is used without a preceding call of Maapi.authenticate().
When a user logs in over NETCONF or the CLI using the built-in SSH server, with a public key login, the procedure is as follows.
The user presents a username in accordance with the SSH protocol. The SSH server consults the settings for /ncs-config/aaa/ssh-pubkey-authentication and /ncs-config/aaa/local-authentication/enabled.
If ssh-pubkey-authentication is set to local, and the SSH keys in /aaa/authentication/users/user{$USER}/ssh_keydir match the keys presented by the user, authentication succeeds.
Otherwise, if ssh-pubkey-authentication is set to system, local-authentication is enabled, and the SSH keys in /aaa/authentication/users/user{$USER}/ssh_keydir match the keys presented by the user, authentication succeeds.
Otherwise, if ssh-pubkey-authentication is set to system and the user /aaa/authentication/users/user{$USER} does not exist, but the user does exist in the OS password database, the keys in the user's $HOME/.ssh directory are checked. If these keys match the keys presented by the user, authentication succeeds.
Otherwise, authentication fails.
In all cases the keys are expected to be stored in a file called authorized_keys (or authorized_keys2 if authorized_keys does not exist), and in the native OpenSSH format (i.e. as generated by the OpenSSH ssh-keygen command). If authentication succeeds, the user's group membership is established as described in Group Membership.
This is exactly the same procedure as is used by the OpenSSH server, with the exception that the built-in SSH server may also locate the directory containing the public keys for a specific user by consulting the /aaa/authentication/users tree.
We need to provide a directory where SSH keys are kept for a specific user and give the absolute path to this directory for the /aaa/authentication/users/user/ssh_keydir leaf. If a public key login is not desired at all for a user, the value of the ssh_keydir leaf should be set to "", i.e. the empty string. Similarly, if the directory does not contain any SSH keys, public key logins for that user will be disabled.
The built-in SSH daemon supports DSA, RSA, and ED25519 keys. To generate and enable RSA keys of size 4096 bits for, say, user "bob", the following steps are required.
On the client machine, as user "bob", generate a private/public key pair as:
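A standard OpenSSH invocation for this would be along the following lines. In real use you would let ssh-keygen write to ~/.ssh/id_rsa and choose a passphrase interactively; the temporary directory and the -f/-N flags below only keep the sketch self-contained and non-interactive:

```shell
# Generate a 4096-bit RSA key pair with OpenSSH's ssh-keygen.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -b 4096 -f "$keydir/id_rsa" -N ''
# The pair consists of the private key id_rsa and the public key id_rsa.pub.
```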
Now we need to copy the public key to the target machine where the NETCONF or CLI SSH server runs.
Assume we have the following user entry:
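A plausible entry, sketched here with placeholder uid/gid values and the hash elided, could look like:

```xml
<user>
  <name>bob</name>
  <uid>1000</uid>
  <gid>1000</gid>
  <password>$6$...</password>  <!-- hashed password, elided -->
  <ssh_keydir>/var/system/users/bob/.ssh</ssh_keydir>
  <homedir>/var/system/users/bob</homedir>
</user>
```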
We need to copy the newly generated file id_rsa.pub, which is the public key, to a file on the target machine called /var/system/users/bob/.ssh/authorized_keys.
Password login is triggered in the following cases:
When a user logs in over NETCONF or the CLI using the built-in SSH server, with a password. The user presents a username and a password in accordance with the SSH protocol.
When a user logs in using the Web UI. The Web UI asks for a username and password.
When the method Maapi.authenticate() is used.
In this case, NSO will by default try local authentication, PAM, external authentication, and package authentication in that order, as described below. It is possible to change the order in which these are tried, by modifying the ncs.conf parameter /ncs-config/aaa/auth-order. See ncs.conf(5) in Manual Pages for details.
If /aaa/authentication/users/user{$USER} exists and the presented password matches the encrypted password in /aaa/authentication/users/user{$USER}/password, the user is authenticated.
If the password does not match or if the user does not exist in /aaa/authentication/users, PAM login is attempted, if enabled. See PAM for details.
If all of the above fails and external authentication is enabled, the configured executable is invoked. See External Authentication for details.
If authentication succeeds, the user's group membership is established as described in Group Membership.
On operating systems supporting PAM, NSO also supports PAM authentication. Using PAM for authentication with NSO can be very convenient, since it allows the same set of users and groups that have access to the UNIX/Linux host itself to also have access to NSO.
If we use PAM, we do not have to have any users or any groups configured in the NSO aaa namespace at all.
To configure PAM we typically need to do the following:
Remove all users and groups from the AAA initialization XML file.
Enable PAM in ncs.conf by adding the following to the AAA section in ncs.conf. The service name specifies the PAM service: typically a file in the directory /etc/pam.d, but it may alternatively be an entry in a file /etc/pam.conf, depending on OS and version. Thus, it is possible to have a different login procedure for NSO than for the host itself.
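A sketch of such an ncs.conf fragment, assuming the PAM service is called common-auth (the service name is an assumption; use whatever entry exists under /etc/pam.d on your system):

```xml
<pam>
  <enabled>true</enabled>
  <service>common-auth</service>
</pam>
```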
If PAM is enabled and we want to use PAM for login, the system may have to run as root. This depends on how PAM is configured locally. However, the default system authentication will typically require root, since the PAM libraries then read /etc/shadow. If we don't want to run NSO as root, the solution here is to change the owner of a helper program called $NCS_DIR/lib/ncs/lib/core/pam/priv/epam and also set the setuid bit.
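The two commands, run as root, would be along the following lines (the epam path is taken from the text above; they are shown as comments so the sketch is runnable without root privileges). The scratch-file demonstration that follows only shows the effect of u+s on the mode bits:

```shell
# On the NSO host, as root:
#   chown root $NCS_DIR/lib/ncs/lib/core/pam/priv/epam
#   chmod u+s  $NCS_DIR/lib/ncs/lib/core/pam/priv/epam

# Demonstration of what u+s does to the permission bits:
f=$(mktemp)
chmod 755 "$f"
chmod u+s "$f"
mode=$(ls -l "$f" | cut -c1-10)   # the 's' marks the setuid bit
echo "$mode"
# -rwsr-xr-x
```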
As an example, say that we have a user test in /etc/passwd, and furthermore:
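For instance, /etc/group could contain the following lines (gids invented for the example):

```
admin:x:1001:test
operator:x:1002:test
```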
Thus, the test user is part of the admin and the operator groups and logging in to NSO as the test user through CLI SSH, Web UI, or NETCONF, renders the following in the audit log.
Thus, the test user was found and authenticated from /etc/passwd, and the crucial group assignment of the test user was done from /etc/group.
If we wish to also be able to manipulate the users, their passwords, etc. on the device, we can write a private YANG model for that data, store the data in CDB, and set up a normal CDB subscriber for it. When our private user data is manipulated, the CDB subscriber picks up the changes and updates the contents of the relevant /etc files.
A common situation is when we wish to have all authentication data stored remotely, not locally, for example on a remote RADIUS or LDAP server. This remote authentication server typically not only stores the users and their passwords but also the group information.
If we wish to have not only the users but also the group information stored on a remote server, the best option for NSO authentication is to use external authentication.
If this feature is configured, NSO will invoke the executable configured in /ncs-config/aaa/external-authentication/executable in ncs.conf, and pass the username and the clear-text password on stdin using the string notation: "[user;password;]\n".
For example, if the user bob attempts to log in over SSH using the password 'secret', and external authentication is enabled, NSO will invoke the configured executable and write "[bob;secret;]\n" on the stdin stream for the executable. The task of the executable is then to authenticate the user and also establish the username-to-groups mapping.
For example, the executable could be a RADIUS client which utilizes some proprietary vendor attributes to retrieve the groups of the user from the RADIUS server. If authentication is successful, the program should write accept followed by a space-separated list of groups that the user is a member of, and additional information as described below. Again, assuming that bob's password indeed was 'secret', and that bob is a member of the admin and the lamers groups, the program should write accept admin lamers $uid $gid $supplementary_gids $HOME on its standard output and then exit.
Thus, the format of the output from an externalauth program when authentication is successful should be:
"accept $groups $uid $gid $supplementary_gids $HOME\n"
Where:
$groups is a space-separated list of the group names the user is a member of.
$uid is the UNIX integer user ID that NSO should use as a default when executing commands for this user.
$gid is the UNIX integer group ID that NSO should use as a default when executing commands for this user.
$supplementary_gids is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
$HOME is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
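A minimal externalauth executable along these lines can be sketched in shell. Everything here (the single hard-coded user, the group names, the uid/gid values, and HOME) is a made-up placeholder; a real script would query e.g. a RADIUS server, and would also need to handle ';' characters occurring inside passwords:

```shell
#!/bin/sh
# NSO writes "[user;password;]\n" on stdin; the reply goes to stdout.
authenticate() {
    creds=${1#\[}          # drop the leading "["
    creds=${creds%;\]}     # drop the trailing ";]"
    user=${creds%%;*}      # text before the first ";"
    pass=${creds#*;}       # text after the first ";"
    if [ "$user" = "bob" ] && [ "$pass" = "secret" ]; then
        #     $groups       $uid $gid $supplementary_gids $HOME
        echo "accept admin lamers 1000 1000 100 /home/bob"
    else
        echo "reject Bad password"
    fi
}
# In the installed script, the last step would read stdin:
#   IFS= read -r line && authenticate "$line"
```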
It is further possible for the program to return a token on successful authentication, by using "accept_token" instead of "accept":
"accept_token $groups $uid $gid $supplementary_gids $HOME $token\n"
Where:
$token is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
It is also possible for the program to return additional information on successful authentication, by using "accept_info" instead of "accept":
"accept_info $groups $uid $gid $supplementary_gids $HOME $info\n"
Where:
$info is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD_EXT_LOGIN).
Yet another possibility is for the program to return a warning that the user's password is about to expire, by using "accept_warning" instead of "accept":
"accept_warning $groups $uid $gid $supplementary_gids $HOME $warning\n"
Where:
$warning is an appropriate warning message. The message will be processed by NSO according to the setting of /ncs-config/aaa/expiration-warning in ncs.conf.
There is also support for token variations of "accept_info" and "accept_warning" namely "accept_token_info" and "accept_token_warning". Both "accept_token_info" and "accept_token_warning" expect the external program to output exactly the same as described above with the addition of a token after $HOME:
"accept_token_info $groups $uid $gid $supplementary_gids $HOME $token $info\n"
"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $token $warning\n"
If authentication failed, the program should write "reject" or "abort", possibly followed by a reason for the rejection, and a trailing newline. For example, "reject Bad password\n" or just "abort\n". The difference between "reject" and "abort" is that with "reject", NSO will try subsequent mechanisms configured for /ncs-config/aaa/auth-order in ncs.conf (if any), while with "abort", the authentication fails immediately. Thus "abort" can prevent subsequent mechanisms from being tried, but when external authentication is the last mechanism (as in the default order), it has the same effect as "reject".
Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external authentication may also choose to issue a challenge:
"challenge $challenge-id $challenge-prompt\n"
For more information on multi-factor authentication, see External Multi-Factor Authentication.
When external authentication is used, the group list returned by the external program is prepended by any possible group information stored locally under the /aaa tree. Hence when we use external authentication it is indeed possible to have the entire /aaa/authentication tree empty. The group assignment performed by the external program will still be valid and the relevant groups will be used by NSO when the authorization rules are checked.
When username and password authentication is not feasible, authentication by token validation is possible. Currently, only RESTCONF supports this mode of authentication. It shares all properties of external authentication, but instead of a username and password, it takes a token as input. The output is also almost the same; the only difference is that the program is also expected to output a username.
If this feature is configured, NSO will invoke the executable configured in /ncs-config/aaa/external-validation/executable in ncs.conf, and pass the token on stdin using the string notation: "[token;]\n".
For example, if the user bob attempts to log in over RESTCONF using the token topsecret, and external validation is enabled, NSO will invoke the configured executable and write "[topsecret;]\n" on the stdin stream for the executable.
The task of the executable is then to validate the token, thereby authenticating the user and also establishing the username and username-to-groups mapping.
For example, the executable could be a FUSION client that utilizes some proprietary vendor attributes to retrieve the username and groups of the user from the FUSION server. If token validation is successful, the program should write accept followed by a space-separated list of groups that the user is a member of, and additional information as described below. Again, assuming that bob's token indeed was topsecret, and that bob is a member of the admin and the lamers groups, the program should write accept admin lamers $uid $gid $supplementary_gids $HOME $USER on its standard output and then exit.
Thus the format of the output from an externalvalidation program when token validation authentication is successful should be:
"accept $groups $uid $gid $supplementary_gids $HOME $USER\n"
Where:
$groups is a space-separated list of the group names the user is a member of.
$uid is the UNIX integer user ID NSO should use as a default when executing commands for this user.
$gid is the UNIX integer group ID NSO should use as a default when executing commands for this user.
$supplementary_gids is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
$HOME is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
$USER is the user derived from mapping the token.
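The corresponding externalvalidation executable can be sketched the same way. The token table and all identity values are invented placeholders; note that on success the username is appended after $HOME:

```shell
#!/bin/sh
# NSO writes "[token;]\n" on stdin; the reply goes to stdout.
validate() {
    tok=${1#\[}        # drop the leading "["
    tok=${tok%;\]}     # drop the trailing ";]"
    if [ "$tok" = "topsecret" ]; then
        #     $groups       $uid $gid $supplementary_gids $HOME $USER
        echo "accept admin lamers 1000 1000 100 /home/bob bob"
    else
        echo "reject Bad token"
    fi
}
# In the installed script:  IFS= read -r line && validate "$line"
```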
It is further possible for the program to return a new token on successful token validation authentication, by using "accept_token" instead of "accept":
"accept_token $groups $uid $gid $supplementary_gids $HOME $USER $token\n"
Where:
$token is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
It is also possible for the program to return additional information on successful token validation authentication, by using "accept_info" instead of "accept":
"accept_info $groups $uid $gid $supplementary_gids $HOME $USER $info\n"
Where:
$info is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD_EXT_LOGIN).
Yet another possibility is for the program to return a warning that the user's password is about to expire, by using "accept_warning" instead of "accept":
"accept_warning $groups $uid $gid $supplementary_gids $HOME $USER $warning\n"
Where:
$warning is an appropriate warning message. The message will be processed by NSO according to the setting of /ncs-config/aaa/expiration-warning in ncs.conf.
There is also support for token variations of "accept_info" and "accept_warning" namely "accept_token_info" and "accept_token_warning". Both "accept_token_info" and "accept_token_warning" expect the external program to output exactly the same as described above with the addition of a token after $USER:
"accept_token_info $groups $uid $gid $supplementary_gids $HOME $USER $token $info\n"
"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $USER $token $warning\n"
If token validation authentication fails, the program should write "reject" or "abort", possibly followed by a reason for the rejection and a trailing newline. For example "reject Bad password\n" or just "abort\n". The difference between "reject" and "abort" is that with "reject", NSO will try subsequent mechanisms configured for /ncs-config/aaa/validation-order in ncs.conf (if any), while with "abort", the token validation authentication fails immediately. Thus "abort" can prevent subsequent mechanisms from being tried. Currently, the only available token validation authentication mechanism is the external one.
Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external validation may also choose to issue a challenge:
"challenge $challenge-id $challenge-prompt\n"
For more information on multi-factor authentication, see External Multi-Factor Authentication.
When username, password, or token authentication is not enough, a challenge may be sent to the user from any of the external authentication mechanisms. A challenge consists of a challenge ID and a base64-encoded challenge prompt, and the user is expected to send a response to the challenge. Currently, only JSON-RPC and CLI over SSH support multi-factor authentication. Responses to challenges of multi-factor authentication have the same output as the token authentication mechanism.
If this feature is configured, NSO will invoke the executable configured in /ncs-config/aaa/external-challenge/executable in ncs.conf, and pass the challenge ID and response on stdin using the string notation: "[challenge-id;response;]\n".
For example, suppose a user bob has received a challenge from external authentication, external validation, or external challenge and then attempts to log in over JSON-RPC with a response to that challenge, using challenge ID "22efa" and response "ae457b". If the external challenge mechanism is enabled, NSO will invoke the configured executable and write "[22efa;ae457b;]\n" on the stdin stream for the executable.
The task of the executable is then to validate the challenge ID and response combination, thereby authenticating the user and also establishing the username and username-to-groups mapping.
For example, the executable could be a RADIUS client which utilizes some proprietary vendor attributes to retrieve the username and groups of the user from the RADIUS server. If validation of the challenge ID and response is successful, the program should write "accept " followed by a space-separated list of groups the user is a member of, and additional information as described below. Again, assuming that bob's challenge ID and response combination indeed was "22efa", "ae457b", and that bob is a member of the admin and the lamers groups, the program should write "accept admin lamers $uid $gid $supplementary_gids $HOME $USER\n" on its standard output and then exit.
Thus the format of the output from an externalchallenge program when challenge-based authentication is successful should be:
"accept $groups $uid $gid $supplementary_gids $HOME $USER\n"
Where:
$groups is a space-separated list of the group names the user is a member of.
$uid is the UNIX integer user ID NSO should use as a default when executing commands for this user.
$gid is the UNIX integer group ID NSO should use as a default when executing commands for this user.
$supplementary_gids is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
$HOME is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
$USER is the user derived from mapping the challenge ID and response.
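An externalchallenge executable follows the same pattern, now keyed on the challenge ID and response pair; all values below are illustrative placeholders:

```shell
#!/bin/sh
# NSO writes "[challenge-id;response;]\n" on stdin.
check_challenge() {
    pair=${1#\[}; pair=${pair%;\]}
    id=${pair%%;*}         # challenge ID
    resp=${pair#*;}        # response
    if [ "$id" = "22efa" ] && [ "$resp" = "ae457b" ]; then
        #     $groups       $uid $gid $supplementary_gids $HOME $USER
        echo "accept admin lamers 1000 1000 100 /home/bob bob"
    else
        echo "reject Bad challenge response"
    fi
}
# In the installed script:  IFS= read -r line && check_challenge "$line"
```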
It is further possible for the program to return a token on successful authentication, by using "accept_token" instead of "accept":
"accept_token $groups $uid $gid $supplementary_gids $HOME $USER $token\n"
Where:
$token is an arbitrary string. NSO will then, for some northbound interfaces, include this token in responses.
It is also possible for the program to return additional information on successful authentication, by using "accept_info" instead of "accept":
"accept_info $groups $uid $gid $supplementary_gids $HOME $USER $info\n"
Where:
$info is some arbitrary text. NSO will then just append this text to the generated audit log message (CONFD_EXT_LOGIN).
Yet another possibility is for the program to return a warning that the user's password is about to expire, by using "accept_warning" instead of "accept":
"accept_warning $groups $uid $gid $supplementary_gids $HOME $USER $warning\n"
Where:
$warning is an appropriate warning message. The message will be processed by NSO according to the setting of /ncs-config/aaa/expiration-warning in ncs.conf.
There is also support for token variations of "accept_info" and "accept_warning", namely "accept_token_info" and "accept_token_warning". Both "accept_token_info" and "accept_token_warning" expect the external program to output exactly the same as described above, with the addition of a token after $USER:
"accept_token_info $groups $uid $gid $supplementary_gids $HOME $USER $token $info\n"
"accept_token_warning $groups $uid $gid $supplementary_gids $HOME $USER $token $warning\n"
If authentication fails, the program should write "reject" or "abort", possibly followed by a reason for the rejection and a trailing newline. For example "reject Bad challenge response\n" or just "abort\n". The difference between "reject" and "abort" is that with "reject", NSO will try subsequent mechanisms configured for /ncs-config/aaa/challenge-order in ncs.conf (if any), while with "abort", the challenge-response authentication fails immediately. Thus "abort" can prevent subsequent mechanisms from being tried. Currently, the only available challenge-response authentication mechanism is the external one.
Supported by some northbound APIs, such as JSON-RPC and CLI over SSH, the external challenge may also choose to issue a new challenge:
"challenge $challenge-id $challenge-prompt\n"
The Package Authentication functionality allows packages to handle the NSO authentication in a customized fashion. Authentication data can, for example, be stored remotely, and a script in the package is used to communicate with the remote system.
Compared to external authentication, the Package Authentication mechanism allows specifying multiple packages to be invoked in the order they appear in the configuration. NSO provides implementations for LDAP, SAMLv2, and TACACS+ protocols with packages available in $NCS_DIR/packages/auth/. Additionally, you can implement your own authentication packages as detailed below.
Authentication packages are NSO packages whose required content is an executable file scripts/authenticate. This executable basically follows the same API and limitations as the external auth script, but with a different input format and some additional functionality. Other than these requirements, it is possible to customize the package arbitrarily.
Package authentication is enabled by setting the ncs.conf option /ncs-config/aaa/package-authentication/enabled to true and adding the package by name to the /ncs-config/aaa/package-authentication/packages list. The order of the configured packages is the order in which the packages will be used when attempting to authenticate a user. See ncs.conf(5) in Manual Pages for details.
If this feature is configured in ncs.conf, NSO will, for each configured package, invoke scripts/authenticate and pass the username, password, original HTTP request (i.e., the user-supplied next query parameter), HTTP request, HTTP headers, HTTP body, client source IP, client source port, northbound API context, and protocol on stdin using the string notation: "[user;password;orig_request;request;headers;body;src-ip;src-port;ctx;proto;]\n".
For example, if an unauthenticated user attempts to start a single sign-on process over northbound HTTP-based APIs with the cisco-nso-saml2-auth package, package authentication is enabled and configured with packages, and single sign-on is also enabled, NSO will, for each configured package, invoke the executable scripts/authenticate and write "[;;;R0VUIC9zc28vc2FtbC9sb2dpbi8gSFRUUC8xLjE=;;;127.0.0.1;59226;webui;https;]\n" on the stdin stream for the executable.
For clarity, here are the base64-decoded contents sent to stdin: "[;;;GET /sso/saml/login/ HTTP/1.1;;;127.0.0.1;54321;webui;https;]\n".
The task of the package is then to authenticate the user and also establish the username-to-groups mapping.
For example, the package could support the SAMLv2 authentication protocol, which communicates with an Identity Provider (IdP) for authentication. If authentication is successful, the program should write either "accept" or "accept_username", depending on whether the authentication is started with a username or whether an external entity handles the entire authentication and supplies the username for a successful authentication. (SAMLv2 uses accept_username, since the IdP handles the entire authentication.) "accept_username" is followed by the username, then by a space-separated list of groups the user is a member of, and additional information as described below. If authentication is successful and the authenticated user bob is a member of the groups admin and wheel, the program should write "accept_username bob admin wheel 1000 1000 100 /home/bob\n" on its standard output and then exit.
Thus the format of the output from a packageauth program when authentication is successful should be either the same as from externalauth (see External Authentication) or the following:
"accept_username $USER $groups $uid $gid $supplementary_gids $HOME\n"
Where:
$USER is the user derived during the execution of the "packageauth" program.
$groups is a space-separated list of the group names the user is a member of.
$uid is the UNIX integer user ID NSO should use as a default when executing commands for this user.
$gid is the UNIX integer group ID NSO should use as a default when executing commands for this user.
$supplementary_gids is a (possibly empty) space-separated list of additional UNIX group IDs the user is also a member of.
$HOME is the directory that should be used as HOME for this user when NSO executes commands on behalf of this user.
In addition to the externalauth API, the authentication packages can also return the following responses:
unknown 'reason' - (reason being plain-text) if they can't handle authentication for the supplied input.
redirect 'url' - (url being base64 encoded) for an HTTP redirect.
content 'content-type' 'content' - (content-type being plain-text mime-type and content being base64 encoded) to relay supplied content.
accept_username_redirect url $USER $groups $uid $gid $supplementary_gids $HOME - which combines the accept_username and redirect.
It is also possible for the program to return additional information on successful authentication, by using "accept_info" instead of "accept":
"accept_info $groups $uid $gid $supplementary_gids $HOME $info\n"
Where:
$info is some arbitrary text. NSO will then just append this text to the generated audit log message (NCS_PACKAGE_AUTH_SUCCESS).
Yet another possibility is for the program to return a warning that the user's password is about to expire, by using "accept_warning" instead of "accept":
"accept_warning $groups $uid $gid $supplementary_gids $HOME $warning\n"
Where:
$warning is an appropriate warning message. The message will be processed by NSO according to the setting of /ncs-config/aaa/expiration-warning in ncs.conf.
If authentication fails, the program should write "reject" or "abort", possibly followed by a reason for the rejection and a trailing newline. For example "reject 'Bad password'\n" or just "abort\n". The difference between "reject" and "abort" is that with "reject", NSO will try subsequent mechanisms configured for /ncs-config/aaa/auth-order, and packages configured for /ncs-config/aaa/package-authentication/packages in ncs.conf (if any), while with "abort", the authentication fails immediately. Thus "abort" can prevent subsequent mechanisms from being tried, but when external authentication is the last mechanism (as in the default order), it has the same effect as "reject".
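The output format above can be illustrated with a minimal packageauth-style program. This is a sketch of the protocol only: the check_credentials helper and all user details are hypothetical, and a real package would integrate with an IdP or another authentication backend instead.

```python
import sys

def check_credentials(user, password):
    # Hypothetical backend lookup; a real package would query an IdP
    # or another authentication backend here.
    known = {"bob": ("secret", ["admin", "wheel"], 1000, 1000, [100], "/home/bob")}
    if user in known and known[user][0] == password:
        return known[user]
    return None

def authenticate(user, password):
    result = check_credentials(user, password)
    if result is None:
        # "reject" lets NSO try subsequent mechanisms; "abort" would
        # fail the authentication immediately instead.
        return "reject 'Bad password'\n"
    _, groups, uid, gid, supp_gids, home = result
    # accept_username $USER $groups $uid $gid $supplementary_gids $HOME
    fields = [user] + groups + [str(uid), str(gid)] + [str(g) for g in supp_gids] + [home]
    return "accept_username " + " ".join(fields) + "\n"

if __name__ == "__main__":
    sys.stdout.write(authenticate("bob", "secret"))
```

For the bob example above, this prints "accept_username bob admin wheel 1000 1000 100 /home/bob" followed by a newline.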
When package authentication is used, the group list returned by the package executable is prepended to any group information stored locally under the /aaa tree. Hence, when package authentication is used, it is indeed possible to have the entire /aaa/authentication tree empty. The group assignment performed by the external program will still be valid, and the relevant groups will be used by NSO when the authorization rules are checked.
Package authentication will invoke the scripts/authenticate script when a user tries to authenticate using the CLI. In this case, only the username, password, client source IP, client source port, northbound API context, and protocol will be passed to the script.
When /ncs-config/aaa/package-authentication/package-challenge/enabled is set to true, packages will also be used to try to resolve challenges sent to the server; this is only supported for the CLI over SSH. The scripts/challenge script will be invoked with the challenge ID, response, client source IP, client source port, northbound API context, and protocol passed on stdin using the string notation: "[challengeid;response;src-ip;src-port;ctx;proto;]\n". The output should follow that of the authenticate script.
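A challenge-handling script first has to parse this stdin notation. A minimal, hypothetical parser for the bracketed, semicolon-separated string could look like the following; it is a sketch and does not handle escaping of ';' inside fields.

```python
def parse_challenge_input(line):
    """Parse "[challengeid;response;src-ip;src-port;ctx;proto;]\\n" into a dict.

    The field order follows the notation described above.
    """
    line = line.strip()
    if not (line.startswith("[") and line.endswith("]")):
        raise ValueError("malformed challenge input")
    # The trailing ';' produces an empty last element after split(); drop it.
    fields = line[1:-1].split(";")[:-1]
    keys = ["challengeid", "response", "src-ip", "src-port", "ctx", "proto"]
    return dict(zip(keys, fields))
```

For example, parsing "[42;myresponse;10.0.0.1;4711;cli;ssh;]\n" yields a dict mapping challengeid to "42", src-ip to "10.0.0.1", and proto to "ssh".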
NSO communicates with clients (client libraries, ncs_cli, and similar) using the NSO IPC socket. The protocol used allows the client to provide user and group information to use for authorization in NSO, effectively delegating authentication to the client.
By default, only local connections to the IPC socket are allowed. If all local clients are considered trusted, the socket can provide unauthenticated access with the client-supplied user name. This is what the --user option of ncs_cli does. For example, running ncs_cli --user admin connects to NSO as the user admin. The same is possible for the group. This unauthenticated access is currently the default.
The main condition here is that all clients connecting to the socket are trusted to use the correct user and group information. That is often not the case; for example, untrusted users may have shell access to the host, allowing them to run ncs_cli or otherwise initiate local connections to the IPC socket. In such cases, access to the socket must be restricted.
In general, authenticating access to the IPC socket is a security best practice and should always be used. NSO implements it as an access check, where every IPC client must prove that it has access to a pre-shared key. See Restricting Access to the IPC Port on how to enable it.
Once a user is authenticated, group membership must be established. A single user can be a member of several groups. Group membership is used by the authorization rules to decide which operations a certain user is allowed to perform. Thus the NSO AAA authorization model is entirely group-based. This is also sometimes referred to as role-based authorization.
All groups are stored under /nacm/groups, and each group contains a number of usernames. The ietf-netconf-acm.yang model defines a group entry:
The tailf-acm.yang model augments this with a gid leaf:
A valid group entry could thus look like:
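A sketch of such an entry, with users bob and joe in the admin group (the gid value is arbitrary, and the gid leaf uses the tailf-acm namespace):

```xml
<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <groups>
    <group>
      <name>admin</name>
      <user-name>bob</user-name>
      <user-name>joe</user-name>
      <gid xmlns="http://tail-f.com/yang/acm">9000</gid>
    </group>
  </groups>
</nacm>
```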
The above XML data would then mean that users bob and joe are members of the admin group. The users need not necessarily exist as actual users under /aaa/authentication/users in order to belong to a group. If for example PAM authentication is used, it does not make sense to have all users listed under /aaa/authentication/users.
By default, the user is assigned to groups by using any groups provided by the northbound transport (e.g. via the ncs_cli or netconf-subsys programs), by consulting data under /nacm/groups, by consulting the /etc/group file, and by using any additional groups supplied by the authentication method. If /nacm/enable-external-groups is set to "false", only the data under /nacm/groups is consulted.
The resulting group assignment is the union of these methods, if it is non-empty. Otherwise, the default group is used, if configured ( /ncs-config/aaa/default-group in ncs.conf).
A user entry has a UNIX uid and UNIX gid assigned to it. Groups may have optional group IDs. When a user is logged in and NSO tries to execute commands on behalf of that user, the uid/gid for the command execution is taken from the user entry. Furthermore, UNIX supplementary group IDs are assigned according to the gids of the groups where the user is a member.
Once a user is authenticated and group membership is established, when the user starts to perform various actions, each action must be authorized. Normally the authorization is done based on rules configured in the AAA data model as described in this section.
The authorization procedure first checks the value of /nacm/enable-nacm. This leaf has a default of true, but if it is set to false, all access is permitted. Otherwise, the next step is to traverse the rule-list list:
If the group leaf-list in a rule-list entry matches any of the user's groups, the cmdrule list entries are examined for command authorization, while the rule entries are examined for RPC, notification, and data authorization.
The tailf-acm.yang module augments the rule-list entry in ietf-netconf-acm.yang with a cmdrule list:
Each rule has seven leafs. The first is the name list key; the following three are matching leafs. When NSO tries to run a command, it matches the command against these leafs, and if all of context, command, and access-operations match, the fifth leaf, the action, is applied.
name: name is the name of the rule. The rules are checked in order, with the ordering given by the YANG ordered-by user semantics, i.e. independent of the key values.
context: context is one of the strings cli, webui, or * for a command rule. This means that we can differentiate authorization rules depending on which access method is used. Thus, if command access is attempted through the CLI, the context will be the string cli, whereas for operations via the Web UI, the context will be the string webui.
command: This is the actual command getting executed. If the rule applies to one or several CLI commands, the string is a space-separated list of CLI command tokens, for example request system reboot. If the command applies to Web UI operations, it is a space-separated string similar to a CLI string. A string that consists of just * matches any command.
In general, we do not recommend using command rules to protect the configuration. Use rules for data access as described in the next section to control access to different parts of the data. Command rules should be used only for CLI commands and Web UI operations that cannot be expressed as data rules.
The individual tokens can be POSIX extended regular expressions. Each regular expression is implicitly anchored, i.e. an ^ is prepended and a $ is appended to the regular expression.
access-operations: access-operations is used to match the operation that NSO tries to perform. It must be one or both of the "read" and "exec" values from the access-operations-type bits type definition in ietf-netconf-acm.yang, or "*" to match any operation.
action: If all of the previous fields match, the rule as a whole matches and the value of action will be taken. I.e. if a match is found, a decision is made whether to permit or deny the request in its entirety. If action is permit, the request is permitted, if action is deny, the request is denied and an entry is written to the developer log.
log-if-permit: If this leaf is present, an entry is written to the developer log for a matching request also when action is permit. This is very useful when debugging command rules.
comment: An optional textual description of the rule.
For the rule processing to be written to the devel log, the /ncs-config/logs/developer-log-level entry in ncs.conf must be set to trace.
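Putting the leafs together, a command rule permitting the oper group all show commands in the CLI, with logging of permitted requests, could be sketched like this (the rule-list and rule names are illustrative):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>oper-cmd-rules</name>
  <group>oper</group>
  <cmdrule xmlns="http://tail-f.com/yang/acm">
    <name>allow-show</name>
    <context>cli</context>
    <command>show .*</command>
    <access-operations>read exec</access-operations>
    <action>permit</action>
    <log-if-permit/>
  </cmdrule>
</rule-list>
```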
If no matching rule is found in any of the cmdrule lists in any rule-list entry that matches the user's groups, this augmentation from tailf-acm.yang is relevant:
If read access is requested, the value of /nacm/cmd-read-default determines whether access is permitted or denied.
If exec access is requested, the value of /nacm/cmd-exec-default determines whether access is permitted or denied.
If access is permitted due to one of these default leafs, /nacm/log-if-default-permit has the same effect as the log-if-permit leaf for the cmdrule lists.
The rules in the rule list are used to control access to rpc operations, notifications, and data nodes defined in YANG models. Access to invocation of actions (tailf:action) is controlled with the same method as access to data nodes, with a request for exec access. ietf-netconf-acm.yang defines a rule entry as:
tailf-acm augments this with two additional leafs:
Similar to the command access check, whenever a user through some agent tries to access an RPC, a notification, a data item, or an action, access is checked. For a rule to match, three or four leafs must match and when a match is found, the corresponding action is taken.
We have the following leafs in the rule list entry.
name: The name of the rule. The rules are checked in order, with the ordering given by the YANG ordered-by user semantics, i.e. independent of the key values.
module-name: The module-name string is the name of the YANG module where the node being accessed is defined. The special value * (i.e. the default) matches all modules.
rpc-name / notification-name / path: This is a choice between three possible leafs that are used for matching, in addition to the module-name:
rpc-name: The name of an RPC operation, or * to match any RPC.
notification-name: the name of a notification, or * to match any notification.
path: A restricted XPath expression leading down into the populated XML tree. A rule with a path specified matches if it is equal to or shorter than the checked path. Several types of paths are allowed.
Tagpaths that do not contain any keys. For example /ncs/live-device/live-status.
Instantiated key: as in /devices/device[name="x1"]/config/interface, which matches the interface configuration for managed device "x1". It is possible to have partially instantiated paths, only containing some keys instantiated - i.e. combinations of tagpaths and keypaths. Assuming a deeper tree, the path
context: context is one of the strings cli, netconf, webui, snmp, or * for a data rule. Furthermore, when we initiate user sessions from MAAPI, we can choose any string we want. Similarly to command rules, we can differentiate access depending on which agent is used to gain access.
access-operations: access-operations is used to match the operation that NSO tries to perform. It must be one or more of the "create", "read", "update", "delete" and "exec" values from the access-operations-type bits type definition in ietf-netconf-acm.yang, or "*" to match any operation.
action: This leaf has the same characteristics as the action leaf for command access.
log-if-permit: This leaf has the same characteristics as the log-if-permit leaf for command access.
comment: An optional textual description of the rule.
If no matching rule is found in any of the rule lists in any rule-list entry that matches the user's groups, the data model node for which access is requested is examined for the presence of the NACM extensions:
If the nacm:default-deny-all extension is specified for the data model node, the access is denied.
If the nacm:default-deny-write extension is specified for the data model node, and create, update, or delete access is requested, the access is denied.
If examination of the NACM extensions did not result in access being denied, the value (permit or deny) of the relevant default leaf is examined:
If read access is requested, the value of /nacm/read-default determines whether access is permitted or denied.
If create, update, or delete access is requested, the value of /nacm/write-default determines whether access is permitted or denied.
If exec access is requested, the value of /nacm/exec-default determines whether access is permitted or denied.
If access is permitted due to one of these default leafs, this augmentation from tailf-acm.yang is relevant:
I.e. it has the same effect as the log-if-permit leaf for the rule lists, but for the case where the value of one of the default leafs permits access.
When NSO executes a command, the command rules in the authorization database are searched. The rules are tried in order, as described above. When a rule matches the operation (command) that NSO is attempting, the action of the matching rule is applied, whether permit or deny.
When actual data access is attempted, the data rules are searched. E.g. when a user attempts to execute delete aaa in the CLI, the user needs delete access to the entire tree /aaa.
Another example is if a CLI user writes show configuration aaa TAB it suffices to have read access to at least one item below /aaa for the CLI to perform the TAB completion. If no rule matches or an explicit deny rule is found, the CLI will not TAB complete.
Yet another example is if a user tries to execute delete aaa authentication users, we need to perform a check on the paths /aaa and /aaa/authentication before attempting to delete the sub-tree. Say that we have a rule for path /aaa/authentication/users which is a permit rule and we have a subsequent rule for path /aaa which is a deny rule. With this rule set the user should indeed be allowed to delete the entire /aaa/authentication/users tree but not the /aaa tree nor the /aaa/authentication tree.
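Expressed as NACM rules, that rule set could be sketched as follows (rule-list and rule names are illustrative; the paths are given without namespace prefixes for brevity):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>oper-aaa-rules</name>
  <group>oper</group>
  <rule>
    <name>allow-users</name>
    <path>/aaa/authentication/users</path>
    <access-operations>*</access-operations>
    <action>permit</action>
  </rule>
  <rule>
    <name>deny-aaa</name>
    <path>/aaa</path>
    <access-operations>*</access-operations>
    <action>deny</action>
  </rule>
</rule-list>
```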
We have two variations on how the rules are processed. The easy case is when we actually try to read or write an item in the configuration database. The execution goes like this:
The second case is when we execute TAB completion in the CLI. This is more complicated. The execution goes like this:
The idea is that as we traverse (through TAB) down the XML tree, as long as there is at least one rule that can possibly match later, once we have more data, we must continue. For example, assume we have:
"/system/config/foo" --> permit
"/system/config" --> deny
If we in the CLI stand at "/system/config" and hit TAB we want the CLI to show foo as a completion, but none of the other nodes that exist under /system/config. Whereas if we try to execute delete /system/config the request must be rejected.
By default, NACM rules apply to the entire tailf:action or YANG 1.1 action statement, but not to its input statement child leafs. To override this behavior and enable NACM rules on input leafs, set /ncs-config/aaa/action-input-rules/enabled to true. When enabled, all input leafs given to an action will be validated against the NACM rules. If broad deny rules are used, you might need to add permit rules for the affected action input leafs to allow actions to be invoked with parameters.
By design NACM rules are ignored for changes done by services - FASTMAP, Reactive FASTMAP, or Nano services. The reasoning behind this is that a service package can be seen as a controlled way to provide limited access to devices for a user group that is not allowed to apply arbitrary changes on the devices.
However, there are NSO installations where this behavior is not desired, and NSO administrators want to enforce NACM rules even on changes done by services. For this purpose, the leaf called /nacm/enforce-nacm-on-services is provided. By default, it is set to false.
Note however that currently, even with this leaf set to true, there are limitations. Namely, the post-actions for nano-services are run in a user session without any access checks. Besides that, NACM rules are not enforced on the read operations performed in the service callbacks.
It might be desirable to deny everything for a user group and only allow access to a specific service. This pattern could be used to allow an operator to provision the service, but deny everything else. While this pattern works for a normal FASTMAP service, there are some caveats for stacked services, Reactive FASTMAP, and Nano services. For these kinds of services, in addition to the service itself, access should be provided to the user group for the following paths:
In case of stacked services, the user group needs read and write access to the leaf private/re-deploy-counter under the bottom service. Otherwise, the user will not be able to redeploy the service.
In the case of Reactive FASTMAP or Nano services, the user group needs read and write access to the following:
/zombies
/side-effect-queue
/kickers
In deployments with many devices, it can become cumbersome to handle data authorization per device. To help with this there is a rule type that works on device group membership (for more on device groups, see Device Groups). To do this, devices are added to different device groups, and the rule type device-group-rule is used.
The IETF NACM rule type is augmented with a new rule type named device-group-rule which contains a leafref to the device groups. See the following example.
In the example below, we configure two device groups based on different regions and add devices to them.
In the example below, we configure an operator for the us_east region:
In the example below, we configure the device group rules and refer to the device group and the us_east group.
In summary, device group authorization gives a more compact configuration for deployments where devices can be grouped and authorization can be done on a device group basis.
It is recommended to restrict modifications of the device-group subtree to a limited set of users.
Assume that we have two groups, admin and oper. We want admin to be able to see and edit the XML tree rooted at /aaa, but we do not want users who are members of the oper group to even see the /aaa tree. We would have the following rule list and rule entries. Note, here we use the XML data from tailf-aaa.yang to exemplify. The examples apply to all data, for all data models loaded into the system.
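A sketch of such a rule set (rule-list and rule names are illustrative; the paths are given without namespace prefixes for brevity):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>admin-rules</name>
  <group>admin</group>
  <rule>
    <name>aaa</name>
    <path>/aaa</path>
    <access-operations>*</access-operations>
    <action>permit</action>
  </rule>
</rule-list>
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>oper-rules</name>
  <group>oper</group>
  <rule>
    <name>aaa</name>
    <path>/aaa</path>
    <access-operations>*</access-operations>
    <action>deny</action>
  </rule>
</rule-list>
```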
If we do not want the members of oper to be able to execute the NETCONF operation edit-config, we define the following rule list and rule entries:
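A sketch of such a rule (names are illustrative; the context leaf is the tailf-acm augmentation):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>oper-netconf</name>
  <group>oper</group>
  <rule>
    <name>deny-edit-config</name>
    <rpc-name>edit-config</rpc-name>
    <context xmlns="http://tail-f.com/yang/acm">netconf</context>
    <access-operations>exec</access-operations>
    <action>deny</action>
  </rule>
</rule-list>
```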
To spell it out, the above rule defines four elements to match: if NSO tries to perform a NETCONF operation, the operation is edit-config, the user running the command is a member of the oper group, and the requested access is exec (execute), we have a match. If so, the action is deny.
The path leaf can be used to specify explicit paths into the XML tree using XPath syntax. For example the following:
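A sketch of such a rule (the rule-list and rule names are illustrative; the context leaf is the tailf-acm augmentation):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>admin-rules</name>
  <group>admin</group>
  <rule>
    <name>bob-password</name>
    <path>/aaa/authentication/users/user[name='bob']/password</path>
    <context xmlns="http://tail-f.com/yang/acm">cli</context>
    <access-operations>*</access-operations>
    <action>permit</action>
  </rule>
</rule-list>
```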
Explicitly allows the admin group to change the password for precisely the bob user when the user is using the CLI. Had path been /aaa/authentication/users/user/password the rule would apply to all password elements for all users. Since the path leaf completely identifies the nodes that the rule applies to, we do not need to give tailf-aaa for the module-name leaf.
NSO applies variable substitution, whereby the username of the logged-in user can be used in a path. Thus:
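A sketch of such a rule, to be placed in a rule-list matching the admin group (the rule name is illustrative):

```xml
<rule>
  <name>own-password</name>
  <path>/aaa/authentication/users/user[name='$USER']/password</path>
  <access-operations>*</access-operations>
  <action>permit</action>
</rule>
```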
The above rule allows all users that are part of the admin group to change their own passwords only.
A member of oper is able to execute NETCONF operation action if that member has exec access on NETCONF RPC action operation, read access on all instances in the hierarchy of data nodes that identifies the specific action in the data store, and exec access on the specific action. For example, an action is defined as below.
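A sketch of such an action definition; only the action name double is taken from the text, while the enclosing container, actionpoint name, and parameter leafs are illustrative:

```yang
container server {
  action double {
    tailf:actionpoint double-ap;
    input {
      leaf number { type uint32; }
    }
    output {
      leaf result { type uint32; }
    }
  }
}
```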
To be able to execute double action through NETCONF RPC, the members of oper need the following rule list and rule entries.
Or, a simpler rule set as the following.
Finally, if we wish members of the oper group to never be able to execute the request system reboot command, also available as a reboot NETCONF rpc, we have:
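A sketch covering both the NETCONF rpc and the CLI command (names are illustrative; the cmdrule list and the context leaf are tailf-acm augmentations):

```xml
<rule-list xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
  <name>oper-no-reboot</name>
  <group>oper</group>
  <rule>
    <name>no-reboot-rpc</name>
    <rpc-name>reboot</rpc-name>
    <access-operations>exec</access-operations>
    <action>deny</action>
  </rule>
  <cmdrule xmlns="http://tail-f.com/yang/acm">
    <name>no-reboot-cmd</name>
    <context>cli</context>
    <command>request system reboot</command>
    <access-operations>exec</access-operations>
    <action>deny</action>
  </cmdrule>
</rule-list>
```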
In this section, we list some tips to make it easier to troubleshoot NACM rules.
Use log-if-permit and log-if-default-permit together with the developer log level set to trace.
Use the log-if-permit leaf from the tailf-acm.yang module augmentation for rules with action permit. When those rules trigger a permit action, a trace entry is added to the developer log. To see trace entries, make sure /ncs-config/logs/developer-log-level is set to trace.
If you have a default rule with action permit you can use the log-if-default-permit leaf instead.
NACM rules are read at the start of the session and are used throughout the session.
When a user session is created, it gathers the authorization rules that are relevant for that user's group(s). The rules are used throughout the user session's lifetime. When the AAA rules are updated, active sessions are not affected. For example, if an administrator updates the NACM rules in one session, the update will not apply to any other currently active session; it applies only to sessions created after the update.
Explicitly state NACM groups when starting the CLI. For example ncs_cli -u oper -g oper.
It is the user's group membership that determines what rules apply. Starting the CLI using the ncs_cli command without explicitly setting the groups, defaults to the actual UNIX groups the user is a member of. On Darwin, one of the default groups is usually admin, which can lead to the wrong group being used.
Be careful with namespaces in rulepaths.
Unless a rulepath is made explicit by specifying a namespace, it will apply to that specific path in all namespaces. Below we show parts of an example from RFC 8341, where the path element has an xmlns attribute and the path is namespaced. Had these not been namespaced, the rules would not behave as expected.
In the example above (Excerpt from RFC 8341 Appendix A.4), the path is namespaced.
NSO's AAA subsystem will cache the AAA information in order to speed up the authorization process. This cache must be updated whenever there is a change to the AAA information. The mechanism for this update depends on how the AAA information is stored, as described in the following two sections.
To start NSO, the data models for AAA must be loaded. If no actual data is loaded for these models, the defaults allow all read and exec access, while write access is denied. Access may still be further restricted by the NACM extensions, though - e.g. the /nacm container has nacm:default-deny-all, meaning that not even read access is allowed if no data is loaded.
The NSO installation ships with an XML initialization file containing AAA configuration. The file is called aaa_init.xml and is, by default, copied to the CDB directory by the NSO install scripts.
The local installation variant, targeting development only, defines two users, admin and oper with passwords set to admin and oper respectively for authentication. The two users belong to user groups with NACM rules restricting their authorization level. The system installation aaa_init.xml variant, targeting production deployment, defines NACM rules only as users are, by default, authenticated using PAM. The NACM rules target two user groups, ncsadmin and ncsoper. Users belonging to the ncsoper group are limited to read-only access.
Normally, the AAA data will be stored as configuration in CDB. This allows changes to be made through NSO's transaction-based configuration management, and in this case the AAA cache will be updated automatically when changes are made to the AAA data. If changing the AAA data via NSO's configuration management is not possible or desirable, it is alternatively possible to use the CDB operational data store for AAA data. In this case, the AAA cache can be updated either explicitly, e.g. by using the maapi_aaa_reload() function (see confd_lib_maapi(3) in the Manual Pages), or by triggering a subscription notification by using the subscription lock when updating the CDB operational data store (see Using CDB in Development).
Some applications may not want to expose the AAA data to end users in the CLI or the Web UI. Two reasonable approaches exist here, and both rely on the tailf:export statement. If a module has tailf:export none, it will be invisible to all agents. We can then either use a transform, whereby we define another AAA model and write a transform program that maps our AAA data to the data that must exist in tailf-aaa.yang and ietf-netconf-acm.yang. This way we can choose to export and expose an entirely different AAA model.
Yet another very easy way out, is to define a set of static AAA rules whereby a set of fixed users and fixed groups have fixed access to our configuration data. Possibly the only field we wish to manipulate is the password field.
The interesting portion is the part between <devices> and </devices> tags.
Another way to get the XML output is to list the existing device configuration in NSO by piping it through the display xml filter:
If there is a lot of data, it is easy to save the output to a file using the save pipe in the CLI, instead of copying and pasting it by hand:
The last command saves the configuration for a device in the dns-template.xml file using XML format. To use it in a service, you need a service package.
You create an empty, skeleton service with the ncs-make-package command, such as:
The command generates the minimal files necessary for a service package, here named dns. One of the files is dns/templates/dns-template.xml, which is where the configuration in the XML format goes.
If you look closely, there is one significant difference from the show running-config output: the template uses the config-template XML root tag, instead of config. This tag also has the servicepoint attribute. Other than that, you can use the XML formatted configuration from the CLI as-is.
Bringing the two XML documents together gives the final dns/templates/dns-template.xml XML template:
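One way the combined template might look, assuming the example router NED's sys/dns data model and an illustrative server address:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 servicepoint="dns">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <device>
      <name>c1</name>
      <config>
        <sys xmlns="http://example.com/router">
          <dns>
            <server>
              <address>192.0.2.1</address>
            </server>
          </dns>
        </sys>
      </config>
    </device>
  </devices>
</config-template>
```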
The service is now ready to use in NSO. Start the examples.ncs/implement-a-service/dns-v1 example to set up a live NSO system with such a service and inspect how it works. Try configuring two different instances of the dns service.
The problem with this service is that it always does the same thing because it always generates exactly the same configuration. It would be much better if the service could configure different devices. The updated version, v1.1, uses a slightly modified template:
The changed part is <name>{/name}</name>, which now uses the {/name} code instead of a hard-coded c1 value. The curly braces indicate that NSO should evaluate the enclosed expression and use the resulting value in its place. The /name expression is an XPath expression, referencing the service YANG model. In the model, name is the name you give each service instance. In this case, the instance name doubles for identifying the target device.
In the output, the instance name used was c2 and that is why the service performs DNS configuration for the c2 device.
The template actually allows a decent amount of programmability through XPath and special XML processing instructions. For example:
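A sketch of what such a conditional template fragment might look like; the data model and addresses are illustrative:

```xml
<device>
  <name>{/name}</name>
  <config>
    <sys xmlns="http://example.com/router">
      <dns>
        <?if {starts-with(/name, 'c')}?>
          <server><address>192.0.2.1</address></server>
        <?else?>
          <server><address>192.0.2.2</address></server>
        <?end?>
      </dns>
    </sys>
  </config>
</device>
```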
In the preceding printout, the XPath starts-with() function is used to check if the device name starts with a specific prefix. If it does, one set of configuration items is used; otherwise, a different one. For additional available instructions and the complete set of template features, see Templates.
However, most provisioning tasks require some kind of input to be useful. Fortunately, you can define any number of input parameters in the service model that you can then reference from the template; either to use directly in the configuration or as something to base provisioning decisions on.
The YANG service model specifies the input parameters a service in NSO takes. For a specific service model think of the parameters that a northbound system sends to NSO or the parameters that a network engineer needs to enter in the NSO CLI.
Even a service as simple as the DNS configuration service usually needs some parameters, such as the target device. The service model gives each parameter a name and defines validation rules, ensuring the client-provided values fit what the service expects.
Suppose you want to add a parameter for the target device to the simple DNS configuration service. You need to construct an appropriate service model, adding a YANG leaf to capture this input.
The service model is located in the src/yang/servicename.yang file in the package. It typically resembles the following structure:
The list named after the package (servicename in the example) is the interesting part.
The uses ncs:service-data and ncs:servicepoint statements differentiate this list from any standard YANG list and make it a service. Each list item in NSO represents a service instance of this type.
The uses ncs:service-data part allows the system to store internal state and provide common service actions, such as re-deploy and get-modifications for each service instance.
The ncs:servicepoint identifies which part of the system is responsible for the service mapping. For a template-only service, it is the XML template that uses the same service point value in the config-template element.
The name leaf serves as the key of the list and is primarily used to distinguish service instances from each other.
The remaining statements describe the functionality and input parameters that are specific to this service. This is where you add the new leaf for the target device parameter of the DNS service:
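A first, unrestricted sketch of that leaf, to be placed inside the service list:

```yang
leaf target-device {
  type string;
}
```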
Use the examples.ncs/implement-a-service/dns-v2 example to explore how this model works and try to discover what deficiencies it may have.
In its current form, the model allows you to specify any value for target-device, including none at all! Obviously, this is not good as it breaks the provisioning of the service. But even more importantly, not validating the input may allow someone to use the service in the way you have not intended and perhaps bring down the network.
You can guard against invalid input with the help of additional YANG statements. For example:
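One possible form, combining a pattern and a (redundant) length restriction with a mandatory statement:

```yang
leaf target-device {
  type string {
    pattern "c[0-2]";
    length "2";
  }
  mandatory true;
}
```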
Now this parameter is mandatory for every service instance and must be one of the string literals: c0, c1, or c2. This format is defined by the regular expression in the pattern statement. In this particular case, the length restriction is redundant but demonstrates how you can combine multiple restrictions. You can even add multiple pattern statements to handle more complex cases.
What if you wanted to make the DNS server address configurable too? You can add another leaf to the service model:
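A sketch of such a leaf; the name dns-server is illustrative, and the module is assumed to import ietf-inet-types with the prefix inet:

```yang
leaf dns-server {
  type inet:ipv4-address {
    pattern '192\.0\.2\.[0-9]+';
  }
}
```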
There are three notable things about this leaf:
There is no mandatory statement, meaning the value for this leaf is optional. The XML template will be designed to provide some default value if none is given.
The type of the leaf is inet:ipv4-address, which restricts the value for this leaf to an IP address.
The inet:ipv4-address type is further restricted using a regular expression to only allow IP addresses from the 192.0.2.0/24 range.
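A standalone sketch of this two-layer restriction (a well-formed address first, then the 192.0.2.0/24 range), using only the standard-library ipaddress module; this is an illustration, not NSO code:

```python
import ipaddress

# The allowed range from the example model.
ALLOWED = ipaddress.ip_network("192.0.2.0/24")

def is_valid_dns_server(value):
    """True if value is a well-formed IPv4 address inside 192.0.2.0/24."""
    try:
        addr = ipaddress.IPv4Address(value)
    except ValueError:
        return False      # not an IPv4 address at all (type check fails)
    return addr in ALLOWED  # range check (pattern restriction fails)

assert is_valid_dns_server("192.0.2.1")
assert not is_valid_dns_server("198.51.100.1")  # valid IP, wrong range
assert not is_valid_dns_server("not-an-ip")
```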
YANG is very powerful and allows you to model all kinds of values and restrictions on the data. In addition to the ones defined in the YANG language (RFC 7950, section 9), predefined types describing common networking concepts, such as those from the inet namespace (RFC 6991), are available to you out of the box. With so many types and restrictions at your disposal, validating the inputs becomes much easier.
The one missing piece for the service is the XML template. You can take the Example Static DNS Configuration Template as a base and tweak it to reference the defined inputs.
Using the code {XYZ} or {/XYZ} in the template instructs NSO to look for the value in the service instance data, in the node with the name XYZ. So, you can refer to the target-device input parameter as defined in YANG with the {/target-device} code in the XML template.
The final, improved version of the DNS service template, which takes the new model into account, is:
The following figure captures the relationship between the YANG model and the XML template that ultimately produces the desired device configuration.
The complete service is available in the examples.ncs/implement-a-service/dns-v2.1 example. Feel free to investigate on your own how it differs from the initial, no-validation service.
When the service is simple, constructing the YANG model and creating the service mapping (the XML template) is straightforward. Since the two components are mostly independent, you can start your service design with either one.
If you write the YANG model first, you can load it as a service package into NSO (without having any mapping defined) and iterate on it. This way, you can try the model, which is the interface to the service, with network engineers or northbound systems before investing the time to create the mapping. This model-first approach is also sometimes called top-down.
The alternative is to create the mapping first. Especially for developers new to NSO, the template-first, or bottom-up, approach is often easier to implement. With this approach, you templatize the configuration and extract the required service parameters from the template.
Experienced NSO developers naturally combine the two approaches, without much thought. However, if you have trouble modeling your service at first, consider following the template-first approach demonstrated here.
For the following example, suppose you want the service to configure IP addressing on an ethernet interface. You know what configuration is required to do this manually for a particular ethernet interface. For a Cisco IOS-based device, you would use commands such as:
To transform this configuration into a reusable service, complete the following steps:
Create an XML template with hard-coded values.
Replace each value specific to this instance with a parameter reference.
Add each parameter to the YANG model.
Add parameter validation.
Consolidate and clean up the YANG model as necessary.
Start by generating the configuration in the XML format, making use of the display xml filter. Note that the XML output will not necessarily be a one-to-one mapping of the CLI commands: the XML reflects the device YANG model, which can be more complex, because the CLI commands can hide some of this complexity.
The transformation to a template also requires you to change the root tag, which produces the resulting XML template:
However, this template has all the values hard-coded and only configures one specific interface on one specific device.
Now you must replace all the dynamic parts that vary from service instance to service instance with references to the relevant parameters. In this case, it is data specific to each device: which interface and which IP address to use.
Suppose you pick the following names for the variable parameters:
device: The network device to configure.
interface: The network interface on the selected device.
ip-address: The IP address to use on the selected interface.
Generally, you can make up any name for a parameter, but it is best to follow the same rules that apply to naming variables in programming languages, such as making the name descriptive but not excessively verbose. It is customary to use a hyphen (minus sign) to concatenate words and use all-lowercase (“kebab-case”), which is the convention used in the YANG language standards.
The corresponding template then becomes:
Having completed the template, you can add all the parameters, three in this case, to the service model.
The partially completed model is now:
Missing are the data type and other validation statements. At this point, you could fill out the model with generic type string statements, akin to the name leaf. This is a useful technique to test out the service in early development. But here you can complete the model directly, as it contains only three parameters.
You can use a leafref type leaf to refer to a device by its name in NSO. This type uses dynamic lookup at the specified path to enumerate the available values. For the device leaf, it lists every value for a device name that NSO knows about. If there are two devices managed by NSO, named rtr-sjc-01 and rtr-sto-01, either “rtr-sjc-01” or “rtr-sto-01” is a valid value for such a leaf. This is a common way to refer to devices in NSO services.
In a similar fashion, restrict the valid values of the other two parameters.
You would typically create the service package skeleton with the ncs-make-package command and update the model in the .yang file. The model in the skeleton might have some additional example leafs that you do not need and should remove to finalize the model. That gives you the final, full-service model:
The examples.ncs/implement-a-service/iface-v1 example contains the complete YANG module with this service model in the packages/iface-v1/src/yang/iface.yang file, as well as the corresponding service template in packages/iface-v1/templates/iface-template.xml.
The YANG model and the mapping (the XML template) are the two main components required to implement a service in NSO. The hidden part of the system that makes such an approach feasible is called FASTMAP.
FASTMAP covers the complete service life cycle: creating, changing, and deleting the service. It requires a minimal amount of code for mapping from a service model to a device model.
FASTMAP is based on generating changes from an initial create operation. When the service instance is created, the reverse of the resulting device configuration is stored together with the service instance. If an NSO user later changes the service instance, NSO first applies (in an isolated transaction) the reverse diff of the service, effectively undoing the previous create operation. Then it runs the logic to create the service again and finally performs a diff against the current configuration. Only the result of the diff is then sent to the affected devices.
It is therefore very important that the service create code produces the same device changes for a given set of input parameters every time it is executed. See Persistent Opaque Data for techniques to achieve this.
If the service instance is deleted, NSO applies the reverse diff of the service, effectively removing all configuration changes the service did on the devices.
Assume we have a service model that defines a service with attributes X, Y, and Z. The mapping logic calculates that attributes A, B, and C must be set on the devices. When the service is instantiated, the previous values of the corresponding device attributes A, B, and C are stored with the service instance in the CDB. This allows NSO to bring the network back to the state before the service was instantiated.
Now let us see what happens if one service attribute is changed. Perhaps the service attribute Z is changed. NSO will execute the mapping as if the service was created from scratch. The resulting device configurations are then compared with the actual configuration and the minimal diff is sent to the devices. Note that this is managed automatically; there is no code to handle the specific "change Z" operation.
When a user deletes a service instance, NSO retrieves the stored device configuration from the moment before the service was created and reverts to it.
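The create/change/delete cycle described above can be sketched with a toy, dictionary-based model. This is purely conceptual, to show how storing a reverse diff lets one create function drive the whole life cycle; it is not how NSO is implemented:

```python
def diff(current, desired):
    """Keys whose value must change to go from current to desired."""
    keys = set(current) | set(desired)
    return {k: desired.get(k) for k in keys if current.get(k) != desired.get(k)}

class FastmapSketch:
    """Toy model of the FASTMAP life cycle over a flat key/value 'device'.

    None stands for 'key absent'. Illustration only, not NSO internals.
    """

    def __init__(self, device):
        self.device = device   # current device configuration
        self.reverse = {}      # reverse diff stored with the service

    def _write(self, changes):
        for k, v in changes.items():
            if v is None:
                self.device.pop(k, None)
            else:
                self.device[k] = v

    def create(self, mapping):
        # Store the pre-service value of every key the service touches.
        self.reverse = {k: self.device.get(k) for k in mapping}
        self._write(mapping)

    def change(self, mapping):
        # Undo the old create on a copy, run the create logic again,
        # then send only the resulting diff to the device.
        undone = dict(self.device)
        for k, v in self.reverse.items():
            undone[k] = v
        desired = dict(undone)
        desired.update(mapping)
        self._write(diff(self.device, desired))
        self.reverse = {k: undone.get(k) for k in mapping}

    def delete(self):
        # Applying the reverse diff restores the pre-service state.
        self._write(self.reverse)
        self.reverse = {}

fm = FastmapSketch({"A": "pre-a"})
fm.create({"A": "a1", "B": "b1"})
assert fm.device == {"A": "a1", "B": "b1"}
fm.change({"A": "a2", "B": "b1"})   # only A is actually re-written
assert fm.device == {"A": "a2", "B": "b1"}
fm.delete()                         # back to the pre-service state
assert fm.device == {"A": "pre-a"}
```

Note how change() never needs service-specific "modify" logic: it reuses the create mapping and lets the diff do the work, which mirrors why FASTMAP only requires a create callback.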
For a complex service, you may realize that the input parameters for a service are not sufficient to render the device configuration. Perhaps the northbound system only provides a subset of the required parameters. For example, the other system wants NSO to pick an IP address and does not pass it as an input parameter. Then, additional logic or API calls may be necessary but XML templates provide no such functionality on their own.
The solution is to augment XML templates with custom code. Or, more accurately, create custom provisioning code that leverages XML templates. Alternatively, you can also implement the mapping logic completely in the code and not use templates at all. The latter, forgoing the templates altogether, is less common, since templates have a number of beneficial properties.
Templates separate the way parameters are applied, which depends on the type of target device, from calculating the parameter values. For example, you would use the same code to find the IP address to apply on a device, but the actual configuration might differ whether it is a Cisco IOS (XE) device, an IOS XR, or another vendor entirely.
Moreover, if you use templates, NSO can automatically validate that the templates are compatible with the NEDs in use, which allows you to sidestep whole classes of bugs.
NSO offers multiple programming languages to implement the code. The --service-skeleton option of the ncs-make-package command influences the selection of the programming language and whether the generated code should contain sample calls for applying an XML template.
Suppose you want to extend the template-based ethernet interface addressing service to also allow specifying the netmask. You would like to do this in the more modern, CIDR-based single number format, such as 192.168.5.1/24 (the /24 after the address). However, the generated device configuration takes the netmask in the dot-decimal format, such as 255.255.255.0, so the service needs to perform some translation. And that requires custom service code.
Such a service will ultimately contain three parts: the service YANG model, the translation code, and the XML template. The model and the template serve the same purpose as before, while custom code provides fine-grained control over how templates are applied and the data available to them.
Since the service is based on the previous interface addressing service, you can save yourself a lot of work by starting with the existing YANG model and XML template.
The service YANG model needs an additional cidr-netmask leaf to hold the user-provided netmask value:
This leaf stores a small number (of uint8 type), with values between 0 and 32. It also specifies a default of 24, which is used when the client does not supply a value for this parameter.
The previous XML template also requires only minor tweaks. A small but important change is the removal of the servicepoint attribute on the top element. Since it is gone, NSO does not apply the template directly for each service instance. Instead, your custom code registers itself on this servicepoint and is responsible for applying the template.
The reason is that the code will supply the value for the additional variable, here called NETMASK. This is the other change that is necessary in the template: referencing the NETMASK variable for the netmask value:
Unlike references to other parameters, NETMASK does not represent a data path but a variable. It must start with a dollar character ($) to distinguish it from a path. As shown here, variables are often written in all-uppercase, making it easier to quickly tell whether something is a variable or a data path.
Variables get their values from different sources but the most common one is the service code. You implement the service code using a programming language, such as Java or Python.
The following two procedures create an equivalent service that acts identically from a user's perspective. They only differ in the language used; they use the same logic and the same concepts. Still, the final code differs quite a bit due to the nature of each programming language. Generally, you should pick one language and stick with it. If you are unsure which one to pick, you may find Python slightly easier to understand because it is less verbose.
The usual way to start working on a new service is to first create a service skeleton with the ncs-make-package command. To use Python code for service logic and XML templates for applying configuration, select the python-and-template option. For example:
To use the prepared YANG model and XML template, save them into the iface/src/yang/iface.yang and iface/templates/iface-template.xml files. This is exactly the same as for the template-only service.
What is different is the presence of the python/ directory in the package file structure. It contains one or more Python packages (not to be confused with NSO packages) that provide the service code.
The function of interest is the cb_create() function, located in the main.py file that the package skeleton created. Its purpose is the same as that of the XML template in the template-only service: generate configuration based on the service instance parameters. This code is also called 'the create code'.
The create code usually performs the following tasks:
Read service instance parameters.
Prepare configuration variables.
Apply one or more XML templates.
Reading instance parameters is straightforward with the help of the service function parameter, using the Maagic API. For example:
Note that the hyphen in cidr-netmask is replaced with the underscore in service.cidr_netmask as documented in Python API Overview.
The way configuration variables are prepared depends on the type of the service. For the interface addressing service with netmask, the netmask must be converted into dot-decimal format:
The code makes use of the built-in Python ipaddress module for conversion.
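As a standalone sketch of just the conversion step (the function name and default handling are illustrative; the example's actual create code may differ):

```python
import ipaddress

def cidr_to_netmask(prefix_len=24):
    """Convert a CIDR prefix length (0-32) to dot-decimal notation.

    The default of 24 mirrors the YANG default for cidr-netmask.
    """
    if not 0 <= prefix_len <= 32:
        raise ValueError("prefix length must be between 0 and 32")
    # Build a network with the given prefix; its netmask attribute
    # holds the dot-decimal form the device configuration expects.
    network = ipaddress.IPv4Network((0, prefix_len))
    return str(network.netmask)

assert cidr_to_netmask() == "255.255.255.0"
assert cidr_to_netmask(16) == "255.255.0.0"
assert cidr_to_netmask(32) == "255.255.255.255"
```

In the service, the returned string would be the value supplied for the NETMASK template variable.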
Finally, the create code applies a template, with only minimal changes to the skeleton-generated sample: the names and values passed to the vars.add() function, which are specific to this service.
If required, your service code can call vars.add() multiple times, to add as many variables as the template expects.
The first argument to the template.apply() call is the file name of the XML template, without the .xml suffix. It allows you to apply multiple, different templates for a single service instance. Separating the configuration into multiple templates based on functionality, called feature templates, is a great practice with bigger, complex configurations.
The complete create code for the service is:
You can test it out in the examples.ncs/implement-a-service/iface-v2-py example.
The usual way to start working on a new service is to first create a service skeleton with the ncs-make-package command. To use Java code for service logic and XML templates for applying the configuration, select the java-and-template option. For example:
To use the prepared YANG model and XML template, save them into the iface/src/yang/iface.yang and iface/templates/iface-template.xml files. This is exactly the same as for the template-only service.
What is different is the presence of the src/java directory in the package file structure. It contains a Java package (not to be confused with NSO packages) that provides the service code and build instructions for the ant tool to compile the Java code.
The function of interest is the create() function, located in the ifaceRFS.java file that the package skeleton created. Its purpose is the same as that of the XML template in the template-only service: generate configuration based on the service instance parameters. This code is also called 'the create code'.
The create code usually performs the following tasks:
Read service instance parameters.
Prepare configuration variables.
Apply one or more XML templates.
Reading instance parameters is done with the help of the service function parameter, using the NAVU API. For example:
The way configuration variables are prepared depends on the type of the service. For the interface addressing service with netmask, the netmask must be converted into dot-decimal format:
The create code applies a template, with only minimal changes to the skeleton-generated sample: the names and values passed to the myVars.putQuoted() function differ since they are specific to this service.
If required, your service code can call myVars.putQuoted() multiple times, to add as many variables as the template expects.
The second argument to the Template constructor is the file name of the XML template, without the .xml suffix. It allows you to instantiate and apply multiple, different templates for a single service instance. Separating the configuration into multiple templates based on functionality, called feature templates, is a great practice with bigger, complex configurations.
Finally, you must also return the opaque object and handle various exceptions for the function. If exceptions are propagated out of the create code, you should transform them into NSO-specific ones first, so the UI can present the user with a meaningful error message.
The complete create code for the service is then:
You can test it out in the examples.ncs/implement-a-service/iface-v2-java example.
A service instance may require configuration on more than just a single device. In fact, it is quite common for a service to configure multiple devices.
There are a few ways in which you can achieve this for your services:
In code: Using an API, such as Python Maagic or Java NAVU, navigate the data model to individual device configurations under each devices device DEVNAME config and set the required values.
In code with templates: Apply the template multiple times with different values, such as the device name.
With templates only: Use foreach or automatic (implicit) loops.
The generally recommended approach is to use either code with templates or templates with foreach loops. They are explicit and also work well when you configure devices of different types. Using only code extends less well to the latter case, as it requires additional logic and checks for each device type.
Automatic, implicit loops in templates are harder to understand since the syntax looks like the one for normal leafs. A common example is a device definition as a leaf-list in the service YANG model, such as:
Because it is a leaf-list, the following template applies to all the selected devices, using an implicit loop:
It performs the same as the following one, which loops through the devices explicitly:
Being explicit, the latter is usually much easier to understand and maintain for most developers. The examples.ncs/implement-a-service/dns-v3 example demonstrates this syntax in the XML template.
Applying the same template works fine as long as you have a uniform network with similar devices. What if two different devices can provide the same service but require different configuration? Should you create two different services in NSO? No. Services allow you to abstract and hide the device specifics through a device-independent service model, while still allowing customization of device configuration per device type.
One way to do this is to apply a different XML template from the service code, depending on the device type. However, the same is also possible through XML templates alone.
When NSO applies configuration elements in the template, it checks the XML namespaces that are used. If the target device does not support a particular namespace, NSO simply skips that part of the template. Consequently, you can put configuration for different device types in the same XML template and only the relevant parts will be applied.
Consider the following example:
Due to the xmlns="urn:ios" attribute, the first part of the template (the interface GigabitEthernet) will only apply to Cisco IOS-based devices, while the second part (the sys interfaces interface) will only apply to the netsim-based router-nc-type devices, as defined by the xmlns attribute on the sys element.
In case you need to further limit what configuration applies where and namespace-based filtering is too broad, you can also use the if-ned-id XML processing instruction. Each NED package in NSO defines a unique NED-ID, which distinguishes between different device types (and possibly firmware versions). Based on the configured ned-id of the device, you can apply different parts of the XML template. For example:
The preceding template applies configuration for the interface only if the selected device uses the cisco-ios-cli-3.0 NED-ID. You can find the full code as part of the examples.ncs/implement-a-service/iface-v3 example.
In the previous sections, we have looked at service mapping when the input parameters are enough to generate the corresponding device configurations. In many situations, this is not the case. The service mapping logic may need to reach out to other data in order to generate the device configuration. This is common in the following scenarios:
Policies: Often a set of policies is defined that is shared between service instances. The policies, such as QoS, have data models of their own (not service models) and the mapping code reads data from those.
Topology information: the service mapping might need to know how devices are connected, such as which network switches lie between two routers.
Resources such as VLAN IDs or IP addresses, which might not be given as input parameters. They may be modeled separately in NSO or fetched from an external system.
It is important to design the service model considering the above requirements: what is input and what is available from other sources. In the latter case, in terms of implementation, an important distinction is made between accessing the existing data and allocating new resources. You must take special care with resource allocation, such as VLAN or IP address assignment, as discussed later on. For now, let us focus on using pre-existing shared data.
One example of such use is to define QoS policies "on the side." Only a reference to an existing QoS policy is supplied as input. This is a much better approach than giving all QoS parameters to every service instance. But note that, if you modify the QoS definitions the services are referring to, this will not immediately change the existing deployed service instances. In order to have the service implement the changed policies, you need to perform a re-deploy of the service.
A simpler example is a modified DNS configuration service that allows selecting from a predefined set of DNS servers, instead of supplying the DNS server directly as a service parameter. The main benefit in this case is that clients have no need to be aware of the actual DNS servers (and their IPs). In addition, this approach simplifies the management for the network operator, as all the servers are kept in a single place.
What is required to implement such a service? There are two parts. The first is the model and data that defines the available DNS server options, which are shared (used) across all the DNS service instances. The second is a modification to the service inputs and mapping logic to use this data.
For the first part, you must create a data model. If the shared data is specific to one service type, such as the DNS configuration, you can define it alongside the service instance model, in the service package. But sometimes this data may be shared between multiple types of service. Then it makes more sense to create a separate package for the shared data models.
In this case, define a new top-level container in the service's YANG file as:
Note that the container is defined outside the service list because this data is not specific to individual service instances:
The dns-options container includes a list of dns-option items. Each item defines a set of DNS servers (leaf-list) and a name for this set.
Once the shared data model is compiled and loaded into NSO, you can define the available DNS server sets:
You must also update the service instance model to allow clients to pick one of these DNS servers:
Different ways exist to model the service input for dns-servers. The first option that comes to mind might be using a string type and a pattern to limit the inputs to one of lon, sto, or sjc. Another option would be to use a YANG enum type. But both of these have the drawback that you need to change the YANG model if you add or remove available dns-option items.
Using a leafref allows NSO to validate inputs for this leaf by comparing them to the values returned by the path XPath expression. So, whenever you update the /dns-options/dns-option items, the change is automatically reflected in the valid dns-server values.
At the same time, you must also update the mapping to take advantage of this service input parameter. The service XML template is very similar to the previous one. The main difference is the way in which the DNS addresses are read from the CDB, using the special deref() XPath function:
The deref() function “jumps” to the item selected by the leafref. Here, leafref's path points to /dns-options/dns-option/name, so this is where deref(/dns-servers) ends: at the name leaf of the selected dns-option item.
The following code, which performs the same thing but in a more verbose way, further illustrates how the DNS server value is obtained:
The complete service is available in the examples.ncs/implement-a-service/dns-v3 example.
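As a plain-Python analogy (not NSO code), the leafref validation and the deref() jump behave like looking up a name in a shared table; the server addresses below are made-up illustration values:

```python
# Shared data: the /dns-options/dns-option list, keyed by name,
# with each entry holding its dns-servers leaf-list.
# The addresses are illustrative values only.
dns_options = {
    "lon": ["192.0.2.10", "192.0.2.11"],
    "sto": ["192.0.2.20"],
    "sjc": ["192.0.2.30", "192.0.2.31"],
}

def resolve_dns_servers(dns_servers_ref):
    """Mimic what deref(/dns-servers) does for the leafref:
    validate that the reference names an existing dns-option item,
    then 'jump' to that item and read its server addresses."""
    if dns_servers_ref not in dns_options:      # leafref validation
        raise ValueError(f"no such dns-option: {dns_servers_ref}")
    return dns_options[dns_servers_ref]         # deref() jump + read

assert resolve_dns_servers("lon") == ["192.0.2.10", "192.0.2.11"]
```

Adding a new dns-option entry immediately makes it a valid input, with no change to the lookup logic, which is the key advantage of the leafref approach over a fixed enum.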
NSO provides some service actions out of the box, such as re-deploy or check-sync. You can also add others. A typical use case is to implement some kind of a self-test action that tries to verify the service is operational. The latter could use ping or similar network commands, as well as verify device operational data, such as routing table entries.
This action supplements the built-in check-sync or deep-check-sync action, which checks for the required device configuration.
For example, a DNS configuration service might perform a domain lookup to verify the Domain Name System is working correctly. Likewise, an interface configuration service could ping an IP address or check the interface status.
The action consists of the YANG model for action inputs and outputs, as well as the action code that is executed when a client invokes the action.
Typically, such actions are defined per service instance, so you model them under the service list:
The action needs no special inputs; because it is defined on the service instance, it can find the relevant interface to query. The output has a single leaf, called status, which uses an enumeration type for explicitly defining all the possible values it can take (up, down, or unknown).
Note that using the action statement requires you to also use the yang-version 1.1 statement in the YANG module header (see Actions).
NSO Python API contains a special-purpose base class, ncs.dp.Action, for implementing actions. In the main.py file, add a new class that inherits from it, and implements an action callback:
The callback receives a number of arguments, one of them being kp. It contains a keypath value that identifies the data model path on which the action was invoked, in this case the service instance.
The keypath value uniquely identifies each node in the data model and is similar to an XPath path, but encoded a bit differently. You can use it with the ncs.maagic.cd() function to navigate to the target node.
The newly defined service variable allows you to access all of the service data, such as device and interface parameters. This allows you to navigate to the configured device and verify the status of the interface. The method likely depends on the device type and is not shown in this example.
The action class implementation then resembles the following:
Finally, do not forget to register this class on the action point in the Main application.
You can test the action in the examples.ncs/implement-a-service/iface-v4-py example.
Using the Java programming language, all callbacks, including service and action callback code, are defined using annotations on a callback class. The class NSO looks for is specified in the package-meta-data.xml file. This class should contain an @ActionCallback() annotated method that ties it back to the action point in the YANG model:
The callback receives a number of arguments, one of them being kp. It contains a keypath value that identifies the data model path on which the action was invoked, in this case the service instance.
The keypath value uniquely identifies each node in the data model and is similar to an XPath path, but encoded a bit differently. You can use it with the com.tailf.navu.KeyPath2NavuNode class to navigate to the target node.
The newly defined service variable allows you to access all of the service data, such as device and interface parameters. This allows you to navigate to the configured device and verify the status of the interface. The method likely depends on the device type and is not shown in this example.
The complete implementation requires you to supply your own Maapi read transaction and resembles the following:
You can test the action in the examples.ncs/implement-a-service/iface-v4-java example.
In addition to device configuration, services may also provide operational status or statistics. This is operational data, modeled with config false statements in YANG, and cannot be directly set by clients. Instead, clients can only read this data, for example to check service health.
What kind of data a service exposes depends heavily on what the service does. Perhaps the interface configuration service needs to provide information on whether a network interface was enabled and operational at the time of the last check (because such a check could be expensive).
Taking the iface service as a base, consider how you can extend the instance model with another operational leaf to hold the interface status data as of the last check.
The new leaf last-test-result is designed to store the same data as the test-enabled action returns. Importantly, it also contains a config false substatement, making it operational data.
When faced with duplication of type definitions, as seen in the preceding code, the best practice is to consolidate the definition in a single place and avoid potential discrepancies in the future. You can use a typedef statement to define a custom YANG data type.
Once defined, you can use the new type as you would any other YANG type. For example:
Users can then view operational data with the help of the show command. The data is also available through other NB interfaces, such as NETCONF and RESTCONF.
But where does the operational data come from? The service application code provides this data. In this example, the last-test-result leaf captures the result of the enabled check, which is implemented as a custom action. So, here it is the action code that sets the leaf's value.
This approach works well when operational data is updated based on some event, such as a received notification or a user action, and NSO is used to cache its value.
For cases, where this is insufficient, NSO also allows producing operational data on demand, each time a client requests it, through the Data Provider API. See DP API for this alternative approach.
Unlike configuration data, which always requires a transaction, you can write operational data to NSO with or without a transaction. Using a transaction allows you to easily compose multiple writes into a single atomic operation but has some small performance penalty due to transaction overhead.
If you avoid transactions and write data directly, you must use the low-level CDB API, which requires manual connection management and does not support the Maagic API for data model navigation.
The alternative, transaction-based approach uses high-level MAAPI and Maagic objects:
When used as part of the action, the action code might be as follows:
Note that you have to start a new transaction in the action code, even though trans is already supplied, since trans is read-only and cannot be used for writes.
Another thing to keep in mind with operational data is that NSO by default does not persist it to storage, only keeps it in RAM. One way for the data to survive NSO restarts is to use the tailf:persistent statement, such as:
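As a sketch, persisting the operational leaf to the CDB operational datastore might look like the following (assuming the tailf-common module is imported with the prefix tailf, and reusing the last-test-result leaf from this example):

```yang
leaf last-test-result {
  type boolean;
  config false;
  // store the value in CDB-oper on disk so it survives NSO restarts
  tailf:cdb-oper {
    tailf:persistent true;
  }
}
```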
You can also register a function with the service application class to populate the data on package load, if you are not using tailf:persistent.
The examples.ncs/implement-a-service/iface-v5-py example implements such code.
Unlike configuration data, which always requires a transaction, you can write operational data to NSO with or without a transaction. Using a transaction allows you to easily compose multiple writes into a single atomic operation but has some small performance penalty due to transaction overhead.
If you avoid transactions and write data directly, you must use the low-level CDB API, which does not support NAVU for data model navigation.
The alternative, transaction-based approach uses high-level MAAPI and NAVU objects:
Note the use of the context.startOperationalTrans() function to start a new transaction against the operational data store. In other respects, the code is the same as for writing configuration data.
Another thing to keep in mind with operational data is that NSO by default does not persist it to storage, only keeps it in RAM. One way for the data to survive NSO restarts is to model the data with the tailf:persistent statement, such as:
You can also register a custom com.tailf.ncs.ApplicationComponent class with the service application to populate the data on package load, if you are not using tailf:persistent. Please refer to The Application Component Type for details.
The examples.ncs/implement-a-service/iface-v5-java example implements such code.
A FASTMAP service cannot perform explicit function calls with side effects. The only action a service is allowed to take is to modify the configuration of the current transaction. For example, a service may not invoke an action to generate authentication key files or start a virtual machine. All such actions must occur before the service is created, with their results provided as input parameters. This restriction exists because the FASTMAP code may be executed as part of a commit dry-run, or the commit may fail, in which case the side effects would have to be undone.
Nano services use a technique called reactive FASTMAP (RFM) and provide a framework to safely execute actions with side effects by implementing the service as several smaller (nano) steps or stages. Reactive FASTMAP can also be implemented directly using the CDB subscribers, but nano services offer a more streamlined and robust approach for staged provisioning.
The services discussed previously in this section were modeled so that all required parameters are given with the service instance, and the mapping logic code could do its work immediately. Sometimes this is not possible. Two examples that require staged provisioning, where a nano service step executing an action is the best-practice solution:
Allocating a resource from an external system, such as an IP address, or generating an authentication key file using an external command. It is impossible to do this allocation from within the normal FASTMAP create() code since there is no way to deallocate the resource on commit, abort, or failure, or when deleting the service. Furthermore, the create() code runs within the transaction lock, so the time spent in a service's create() code should be as short as possible.
The service requires the start of one or more virtual machines (VMs) or virtual network functions (VNFs). The VMs do not yet exist, and the create() code needs to trigger something that starts the VMs and then, later, when the VMs are operational, configure them.
The basic concepts of nano services are covered in detail by Nano Services for Staged Provisioning. The example in examples.ncs/development-guide/nano-services/netsim-sshkey implements SSH public key authentication setup using a nano service. The nano service uses the following steps in a plan that produces the generated, distributed, and configured states:
Generates the NSO SSH client authentication key files using the OpenSSH ssh-keygen utility from a nano service side-effect action implemented in Python.
Distributes the public key to the netsim (ConfD) network elements to be stored as an authorized key using a Python service create() callback.
Configures NSO to use the public key for authentication with the netsim network elements using a Python service create() callback and service template.
Tests the connection using the public key through a nano service side-effect executed by the NSO built-in connect action.
Upon deletion of the service instance, NSO restores the configuration. The only delete step in the plan is the generated state side-effect action that deletes the key files. The example is described in more detail in Developing and Deploying a Nano Service.
The basic-vrouter, netsim-vrouter, and mpls-vpn-vrouter examples in the examples.ncs/development-guide/nano-services directory start, configure, and stop virtual devices. In addition, the mpls-vpn-vrouter example manages Layer3 VPNs in a service provider MPLS network consisting of physical and virtual devices. Using a Network Function Virtualization (NFV) setup, the L3VPN nano service instructs a VM manager nano service to start a virtual device in a multi-step process consisting of the following:
When the L3VPN nano service pe-create state step creates or deletes a /vm-manager/start service configuration instance, the VM manager nano service instructs a VNF-M, called ESC, to start or stop the virtual device.
Waits for ESC to start or stop the virtual device by monitoring and handling events, here NETCONF notifications.
Mounts the device in the NSO device tree.
Fetches the SSH keys and performs a sync-from on the newly created device.
For more information, see the l3vpn-plan pe-created state in the l3vpn.yang YANG model and the vm-plan in vm-manager.yang in the mpls-vpn-vrouter example. The vm-manager plan states with a nano-callback have their callbacks implemented by the escstart class in escstart.java. Nano services are documented in Nano Services for Staged Provisioning.
Service troubleshooting is an inevitable part of any NSO development process and eventually of operational tasks as well. By their nature, NSO services are composed primarily of user-defined code, models, and templates. This leaves plenty of opportunity for unintended mistakes: errors in mapping code, incorrect indentation, invalid configuration templates, and much more. Services also rely on southbound communication with devices of many different versions and vendors, which presents yet another domain that can cause issues in your NSO services.
This is why it is important to have a systematic approach when debugging and troubleshooting your services:
Understand the problem - First, you need to make sure that you fully understand the issue you are trying to troubleshoot. Why is this issue happening? When did it first occur? Does it happen only on specific deployments or devices? What is the error message like? Is it consistent and can it be replicated? What do the logs say?
Identify the root cause - When you understand the issue, its triggers, conditions, and any additional insights that NSO allows you to inspect, you can start breaking down the problem to identify its root cause.
Form and implement the solution - Once the root cause (or several of them) is found, you can focus on producing a suitable solution. This might be a simple NSO operation, modification of service package codebase, a change in southbound connectivity of managed devices, and any other action or combination required to achieve a working service.
You can use these general steps to give you a high-level idea of how to approach troubleshooting your NSO services:
Ensure that your NSO instance is installed and running properly. You can verify the overall status with ncs --status shell command. To find out more about installation problems and potential runtime issues, check Troubleshooting in Administration.
If you encounter a blank CLI when you connect to NSO, also make sure that your user is added to the correct NACM group (for example, ncsadmin) and that the rules for this group allow the user to view and edit your service through the CLI. You can find out more about groups and authorization rules in AAA Infrastructure in Administration.
Verify that you are using the latest version of your packages. This means copying the latest packages into the load path, recompiling the package YANG models and code with the make command, and reloading the packages. The packages must reload successfully before you proceed with troubleshooting. You can read more about loading packages in Loading Packages. If nothing else, successfully reloading packages will at least ensure that you can create service instances through NSO.
Compiling packages uses the ncsc compiler internally, which means that this part of the process reveals any syntax errors that might exist in YANG models or Java code. You do not need to rely solely on ncsc for compile-level errors, though: specialized tools such as pyang or yanger can check YANG, and one of the many IDEs and syntax validation tools can check Java.
Additionally, reloading packages can also supply you with some valuable information. For example, it can tell you that the package requires a higher version of NSO which is specified in the package-meta-data.xml file, or about any Python-related syntax errors.
Last but not least, package reloading also provides some information on the validity of your XML configuration templates based on the NED namespace you are using for a specific part of the configuration, or just general syntactic errors in your template.
Examine what the template and XPath expressions evaluate to. If some service instance parameters are missing or are mapped incorrectly, there might be an error in the service template parameter mapping or in the XPath expressions. Use the CLI pipe command debug template to show all the XPath expression results from your service configuration templates, or debug xpath to output all XPath expression results for the current transaction (including expressions evaluated as part of the YANG model).
In addition, you can use the xpath eval command in CLI configuration mode to test and evaluate arbitrary XPath expressions. The same can be done with ncs_cmd from the command shell. To see all the XPath expression evaluations in your system, you can also enable and inspect the xpath.trace log. If you are using multiple versions of the same NED, make sure that you are using the correct processing instructions when applying different bits of configuration to different versions of devices.
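As an illustration, evaluating an expression in configuration mode might look like this (the device name and path are examples only):

```
admin@ncs(config)# xpath eval /devices/device[name='ce0']/address
```

The command prints the nodes or values the expression selects, which makes it easy to iterate on an expression before placing it in a template.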
Validate that your custom service code is performing as intended. Depending on your programming language of choice, there are different options to do that. If you are using Java, you can configure logging for the internal Java VM through Log4j, and you can use a debugger, such as the one in the Eclipse IDE, to step through the service code line by line. The same is true for Python: NSO uses the standard Python logging module, and a Python debugger can be set up as well with the debugpy or pydevd-pycharm modules.
Inspect NSO logs for hints. NSO features extensive logging functionality for different components, where you can see everything from user interactions with the system to low-level communications with managed devices. For best results, set the logging level to DEBUG or lower. To learn what types of logs there are and how to enable them, consult the Logging section in Administration.
Another useful option is to append a custom trace ID to your service commits. The trace ID can be used to follow the request in logs from its creation all the way to the configuration changes that get pushed to the device. In case no trace ID is specified, NSO will generate a random one, but custom trace IDs are useful for focused troubleshooting sessions.
Trace ID can also be provided as a commit parameter in your service code, or as a RESTCONF query parameter. See examples.ncs/development-guide/commit-parameters for an example.
Measuring the time it takes for specific commands to complete can also give you some hints about what is going on. You can do this by using the timecmd, which requires the dev tools to be enabled.
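As an illustration, a timing session might look like this (the timed command is an example only):

```
admin@ncs# devtools true
admin@ncs# timecmd show running-config devices device ce0
```

timecmd prints the elapsed time once the wrapped command completes.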
Another useful tool to examine how long a specific event or command takes is the progress trace.
Double-check your service points in the model, templates, and code. Configuration templates are not applied if their servicepoint attribute doesn't match the one defined in the service model, and callbacks are not invoked unless they are registered to the correct service point, so make sure they match and are not missing. Otherwise, you might notice errors such as the following.
Verify YANG imports and namespaces. If your service depends on NED or other YANG files, make sure their path is added where the compiler can find them. If you are using the standard service package skeleton, you can add to that path by editing your service package Makefile and adding a line like the following.
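For example, to let the compiler find YANG models shipped inside a NED package (the NED name and path here are purely illustrative):

```makefile
YANGPATH += ../../router-nc-1.0/src/yang
```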
Likewise, check the imports when you use data types from other YANG namespaces, either in your service model definition or by referencing them in XPath expressions.
Trace the southbound communication. If the service instance creation results in a different configuration than would be expected from the NSO point of view, especially with custom NED packages, you can try enabling the southbound tracing (either per device or globally).
Description of northbound NETCONF implementation in NSO.
This section describes the northbound NETCONF implementation in NSO. As of this writing, the server supports the following specifications:
RFC 4741: NETCONF Configuration Protocol
RFC 4742: Using the NETCONF Configuration Protocol over Secure Shell (SSH)
RFC 5277: NETCONF Event Notifications
RFC 5717: Partial Lock Remote Procedure Call (RPC) for NETCONF
RFC 6020: YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)
RFC 6021: Common YANG Data Types
RFC 6022: YANG Module for NETCONF Monitoring
RFC 6241: Network Configuration Protocol (NETCONF)
RFC 6242: Using the NETCONF Configuration Protocol over Secure Shell (SSH)
RFC 6243: With-defaults capability for NETCONF
RFC 6470: NETCONF Base Notifications
RFC 6536: NETCONF Access Control Model
RFC 6991: Common YANG Data Types
RFC 7895: YANG Module Library
RFC 7950: The YANG 1.1 Data Modeling Language
RFC 8071: NETCONF Call Home and RESTCONF Call Home
RFC 8342: Network Management Datastore Architecture (NMDA)
RFC 8525: YANG Library
RFC 8528: YANG Schema Mount
RFC 8526: NETCONF Extensions to Support the Network Management Datastore Architecture
RFC 8639: Subscription to YANG Notifications
RFC 8640: Dynamic Subscription to YANG Events and Datastores over NETCONF
RFC 8641: Subscription to YANG Notifications for Datastore Updates
The NSO NETCONF northbound API can be used by arbitrary NETCONF clients. A simple Python-based NETCONF client called netconf-console is shipped as source code in the distribution; see the netconf-console section below for details. Other NETCONF clients will work too, as long as they adhere to the NETCONF protocol. If you need a Java client, the open-source JNC client can be used.
When integrating NSO into larger OSS/NMS environments, the NETCONF API is a good choice of integration point.
The NETCONF server in NSO supports the following capabilities in both NETCONF 1.0 (RFC 4741) and NETCONF 1.1 (RFC 6241).
The following list of optional standard capabilities is also supported:
In addition to the protocol capabilities listed above, NSO also implements a set of YANG modules that are closely related to the protocol.
ietf-netconf-nmda: This module from RFC 8526 defines the NMDA extension to NETCONF. It defines the following features:
origin: Indicates that the server supports the origin annotation. It is not advertised by default. The support for origin can be enabled in ncs.conf (see ncs.conf(5) in Manual Pages). If it is enabled, the origin feature is advertised.
In addition to this, NSO does not support pre-configuration or monitoring of subtree filters, and thus advertises a deviation module that deviates /filters/stream-filter/filter-spec/stream-subtree-filter and /subscriptions/subscription/target/stream/stream-filter/within-subscription/filter-spec/stream-subtree-filter as "not-supported".
NSO does not generate subscription-modified notifications when the parameters of a subscription change, and there is currently no mechanism to suspend notifications, so subscription-suspended and subscription-resumed notifications are never generated.
There is basic support for monitoring subscriptions via the /subscriptions container. Currently, it is possible to view dynamic subscriptions' attributes: subscription-id, stream, encoding, receiver, stop-time, and stream-xpath-filter. Unsupported attributes are: stream-subtree-filter, receiver/sent-event-records, receiver/excluded-event-records, and receiver/state.
ietf-yang-push: This module from RFC 8641 extends operations, data nodes, and operational state defined in ietf-subscribed-notifications, and also introduces continuous and customizable notification subscriptions for updates from running and operational datastores. It defines the same features as ietf-subscribed-notifications and also the following feature:
on-change: Indicates that on-change triggered notifications are supported. This feature is advertised by NSO but only supported on the running datastore.
In addition to this, NSO does not support pre-configuration or monitoring of subtree filters and thus advertises a deviation module that deviates /filters/selection-filter/filter-spec/datastore-subtree-filter and /subscriptions/subscription/target/datastore/selection-filter/within-subscription/filter-spec/datastore-subtree-filter as "not-supported".
The monitoring of subscriptions via the subscriptions container currently does not support the attributes periodic/period, periodic/state, on-change/dampening-period, on-change/sync-on-start, and on-change/excluded-change.
All enabled NETCONF capabilities are advertised in the hello message that the server sends to the client.
A YANG module is supported by the NETCONF server if its fxs file is found in NSO's loadPath, and if the fxs file is exported to NETCONF.
The following YANG modules are built-in, which means that their fxs files need not be present in the loadPath. If they are found in the loadPath they are skipped.
ietf-netconf
ietf-netconf-with-defaults
ietf-yang-library
All built-in modules are always supported by the server.
All YANG version 1 modules supported by the server are advertised in the hello message, according to the rules defined in RFC 6020.
All YANG version 1 and version 1.1 modules supported by the server are advertised in the YANG library.
If a YANG module (any version) is supported by the server, and its .yang or .yin file is found in the fxs file or in the loadPath, then the module is also advertised in the schema list defined in ietf-netconf-monitoring, made available for download with the RPC operation get-schema, and, if RESTCONF is enabled, also advertised in the schema leaf in ietf-yang-library.
NSO uses YANG Schema Mount (RFC 8528) to mount the data models for the devices. There are two mount points: one for the configuration (in /devices/device/config) and one for operational state data (in /devices/device/live-status). As defined in RFC 8528, a client can read the module list from the YANG library in each of these mount points to learn which YANG models each device supports via NSO.
For example, to get the YANG library data for the device x0, we can do:
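A request along these lines retrieves it via netconf-console (the exact path and namespace prefixes depend on the mounted YANG library revision, so treat this as a sketch):

```
$ netconf-console --get -x "/devices/device[name='x0']/config/yang-library"
```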
The set of modules reported for a device is the set of modules that NSO knows, i.e., the set of modules compiled for the specific device type. This means that all devices of the same device type will report the same set of modules. Also, note that the device may support other modules that are not known to NSO. Such modules are not reported here.
The NETCONF server natively supports the mandatory SSH transport, i.e., SSH is supported without the need for an external SSH daemon (such as sshd). It also supports integration with OpenSSH.
NSO is delivered with a program netconf-subsys which is an OpenSSH subsystem program. It is invoked by the OpenSSH daemon after successful authentication. It functions as a relay between the ssh daemon and NSO; it reads data from the ssh daemon from standard input and writes the data to NSO over a loopback socket, and vice versa. This program is delivered as source code in $NCS_DIR/src/ncs/netconf/netconf-subsys.c. It can be modified to fit the needs of the application. For example, it could be modified to read the group names for a user from an external LDAP server.
When using OpenSSH, the users are authenticated by OpenSSH, i.e., the user names are not stored in NSO. To use OpenSSH, compile the netconf-subsys program, and put the executable in e.g. /usr/local/bin. Then add the following line to the ssh daemon's config file, sshd_config:
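The subsystem entry typically looks like this (adjust the path to wherever you installed the compiled binary):

```
Subsystem netconf /usr/local/bin/netconf-subsys
```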
The connection from netconf-subsys to NSO can be arranged in one of two different ways:
Make sure NSO is configured to listen to TCP traffic on localhost, port 2023, and disable SSH in ncs.conf (see ncs.conf(5) in Manual Pages). (Re)start sshd and NSO. Or:
Compile netconf-subsys to use a connection to the IPC port instead of the NETCONF TCP transport (see the netconf-subsys.c source for details), and disable both TCP and SSH in ncs.conf. (Re)start sshd and NSO. This method may be preferable since it makes it possible to use the IPC Access Check to restrict the unauthenticated access to NSO that is needed by netconf-subsys.
By default, the netconf-subsys program sends the names of the UNIX groups the authenticated user belongs to. To test this, make sure that NSO is configured to give access to the group(s) the user belongs to. The easiest way to test is to give access to all groups.
NSO itself is configured through a configuration file called ncs.conf. For a description of the parameters in this file, please see the ncs.conf(5) man page in Manual Pages.
When NSO processes <get>, <get-config>, and <copy-config> requests, the resulting data set can be very large. To avoid buffering huge amounts of data, NSO streams the reply to the client as it traverses the data tree and calls data provider functions to retrieve the data.
If a data provider fails to return the data it is supposed to return, NSO can take one of two actions. Either it simply closes the NETCONF transport (the default), or it can reply with an inline RPC error and continue to process the next data element. This behavior can be controlled with the /ncs-config/netconf/rpc-errors configuration parameter (see ncs.conf(5) in Manual Pages).
An inline error is always generated as a child element to the parent of the faulty element. For example, if an error occurs when retrieving the leaf element mac-address of an interface the error might be:
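As a sketch, such an inline error could look like the following (the element names besides rpc-error are illustrative):

```xml
<interface>
  <name>eth0</name>
  <!-- the error replaces the faulty mac-address leaf -->
  <rpc-error>
    <error-type>application</error-type>
    <error-tag>operation-failed</error-tag>
    <error-severity>error</error-severity>
    <error-message>Failed to retrieve mac-address</error-message>
  </rpc-error>
</interface>
```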
If a get_next call fails in the processing of a list, a reply might look like this:
netconf-console
The netconf-console program is a simple NETCONF client. It is delivered as Python source code and can be used as-is or modified.
When NSO has been started, we can use netconf-console to query the configuration of the NETCONF Access Control groups:
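The query might look as follows (assuming NSO is running with its default NETCONF settings):

```
$ netconf-console --get-config -x /nacm/groups
```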
With the -x flag an XPath expression can be specified, to retrieve only data matching that expression. This is a very convenient way to extract portions of the configuration from the shell or from shell scripts.
RFC 6022 defines a YANG module, ietf-netconf-monitoring, for monitoring of the NETCONF server. It contains statistics objects such as the number of RPCs received, status objects such as user sessions, and an operation to retrieve data models from the NETCONF server.
This data model defines an RPC operation, get-schema, which is used to retrieve YANG modules from the NETCONF server. NSO will report the YANG modules for all fxs files that are reported as capabilities, and for which the corresponding YANG or YIN file is stored in the fxs file or found in the loadPath. If a file is found in the loadPath, it has priority over a file stored in the fxs file. Note that by default, the module and its submodules are stored in the fxs file by the compiler.
If the YANG (or YIN) files are copied into the loadPath, they can be stored as-is or compressed with gzip. The filename extension MUST be .yang, .yin, .yang.gz, or .yin.gz.
Also available is a Tail-f-specific data model, tailf-netconf-monitoring, which augments ietf-netconf-monitoring with additional data about files available for usage with the <copy-config> command with a file <url> source or target. /ncs-config/netconf-north-bound/capabilities/url/enabled and /ncs-config/netconf-north-bound/capabilities/url/file/enabled must both be set to true. If rollbacks are enabled, those files are listed as well, and they can be loaded using <copy-config>.
This data model also adds data about which notification streams are present in the system and data about sessions that subscribe to the streams.
This section describes how NETCONF notifications are implemented within NSO, and how the applications generate these events.
Central to NETCONF notifications is the concept of a stream. The stream serves two purposes. It works like a high-level filtering mechanism for the client. For example, if the client subscribes to notifications on the security stream, it can expect to get security-related notifications only. Second, each stream may have its own log mechanism. For example, by keeping all debug notifications in a debug stream, they can be logged separately from the security stream.
NSO has built-in support for the well-known stream NETCONF, defined in RFC 5277 and RFC 8639. NSO supports the notifications defined in RFC 6470 on this stream. If the application needs to send any additional notifications on this stream, it can do so.
NSO can be configured to listen to notifications from devices and send those notifications to northbound NETCONF clients. The stream device-notifications is used for this purpose. To enable this, the stream device-notifications must be configured in ncs.conf, and additionally, subscriptions must be created in /ncs:devices/device/notifications.
It is up to the application to define which streams it supports. In NSO, this is done in ncs.conf (see in Manual Pages). Each stream must be listed, and whether it supports replay or not. The following example enables the built-in stream device-notifications with replay support, and an additional, application-specific stream debug without replay support:
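A corresponding ncs.conf fragment might look like this (the directory, sizes, and descriptions are illustrative; the element names follow the /ncs-config/notifications/event-streams structure):

```xml
<notifications>
  <event-streams>
    <stream>
      <name>device-notifications</name>
      <description>Notifications forwarded from managed devices</description>
      <replay-support>true</replay-support>
      <builtin-replay-store>
        <enabled>true</enabled>
        <dir>./state</dir>
        <max-size>S10M</max-size>
        <max-files>50</max-files>
      </builtin-replay-store>
    </stream>
    <stream>
      <name>debug</name>
      <description>Application debug notifications</description>
      <replay-support>false</replay-support>
    </stream>
  </event-streams>
</notifications>
```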
The well-known stream NETCONF does not have to be listed, but if it isn't listed, it will not support replay.
NSO has built-in support for logging of notifications: if replay support has been enabled for a stream, NSO automatically stores all notifications on disk, ready to be replayed should a NETCONF client ask for logged notifications. In the ncs.conf fragment above, the device-notifications stream has been set up to use the built-in notification log/replay store. The replay store uses a set of wrapping log files on disk (of a certain number and size) to store the stream's notifications.
The reason for using a wrap log is to improve replay performance whenever a NETCONF client asks for notifications in a certain time range. Any problems with log files not being properly closed due to hard power failures, etc., are also kept to a minimum, i.e., automatically taken care of by NSO.
This section describes how Subscribed Notifications are implemented for NETCONF within NSO.
Subscribed Notifications is defined in RFC 8639, and the NETCONF transport binding is defined in RFC 8640. Subscribed Notifications builds upon NETCONF notifications defined in RFC 5277 and has a number of key improvements:
Multiple subscriptions on a single transport session
Support for dynamic and configured subscriptions
Modification of an existing subscription in progress
Per-subscription operational counters
Both NETCONF notifications and Subscribed Notifications can be used at the same time and are configured the same way in ncs.conf. However, there are some differences and limitations.
For Subscribed Notifications, a new subscription is requested by invoking the RPC establish-subscription. For NETCONF notifications, the corresponding RPC is create-subscription.
A NETCONF session can use either subscriptions started with create-subscription or subscriptions established with establish-subscription, but not both simultaneously.
If a session has subscribers established with establish-subscription and receives a request to create subscriptions with create-subscription, an <rpc-error> is sent containing <error-tag> operation-not-supported.
If a session has subscribers created with create-subscription and receives a request to establish subscriptions with establish-subscription, an <rpc-error> is sent containing <error-tag> operation-not-supported.
Dynamic subscriptions send all notifications on the transport session where they were established.
Existing subscriptions and their configuration can be found in the /subscriptions container.
For example, for viewing all established subscriptions, we can do:
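One way to do this is with netconf-console (assuming a default NSO setup):

```
$ netconf-console --get -x /subscriptions
```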
It is not possible to establish a subscription with a stored filter from /filters.
The support for monitoring subscriptions has basic functionality. It is possible to read subscription-id, stream, stream-xpath-filter, replay-start-time, stop-time, encoding, receivers/receiver/name, and receivers/receiver/state.
The leaf stream-subtree-filter is deviated as "not-supported", and hence cannot be read.
The unsupported leafs in the subscriptions container are the following: stream-subtree-filter, receiver/sent-event-records, and receiver/excluded-event-records.
This section describes how YANG-Push is implemented for NETCONF within NSO.
YANG-Push is defined in RFC 8641, and the NETCONF transport binding is defined in RFC 8640. The YANG-Push implementation in NSO introduces a subscription service that provides updates from a datastore. This implementation supports dynamic subscriptions on updates of datastore nodes. A subscribed receiver is provided with update notifications according to the terms of the subscription. Two types of notification messages are defined to provide updates, and they are used according to the subscription terms.
push-update notification is a complete, filtered update that reflects the data of the subscribed datastore. It is the type of notification that is used for periodic subscriptions. A push-update notification can also be used for on-change subscriptions in case a receiver asks for synchronization, either at the start of a new subscription or by sending a resync request for an established subscription.
An example push-update notification:
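The original example is not reproduced here; a push-update notification generally has this shape (the timestamp, subscription id, and datastore contents are placeholders):

```xml
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2024-01-01T00:00:00Z</eventTime>
  <push-update xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
    <id>1011</id>
    <datastore-contents>
      <!-- filtered snapshot of the subscribed datastore goes here -->
    </datastore-contents>
  </push-update>
</notification>
```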
push-change-update notification is an incremental update that describes only the changes to the subscribed datastore since the previous update. It is the type of notification that is used for on-change subscriptions.
For periodic subscriptions, updates are triggered periodically according to the specified time interval. Optionally, a reference anchor-time can be provided for a specified period.
For on-change subscriptions, updates are triggered whenever a change is detected on the subscribed information. In the case of rapidly changing data, instead of receiving frequent notifications for every change, a receiver may specify a dampening-period to receive update notifications at a lower frequency. A receiver may request synchronization at the start of a subscription by using the sync-on-start option. A receiver may filter out specific types of changes by providing a list of excluded-change parameters.
To provide updates for on-change subscriptions on the operational datastore, data provider applications are required to implement push-on-change callbacks. For more details, see the Manual Pages section.
In addition to the RPCs defined in Subscribed Notifications, YANG-Push defines the resync-subscription RPC. Upon receipt of resync-subscription, if the subscription is an on-change triggered type, a push-update notification is sent to the receiver according to the terms of the subscription. Otherwise, an appropriate error response is sent.
resync-subscription
YANG-Push subscriptions can be monitored in a similar way to Subscribed Notifications through /subscriptions container. For more information, see .
YANG-Push filters differ from the filters of Subscribed Notifications and are specified as datastore-xpath-filter and datastore-subtree-filter. The leaf datastore-subtree-filter is deviated as "not-supported" and hence cannot be monitored. Also, the YANG-Push-specific update trigger parameters periodic/period, periodic/anchor-time, on-change/dampening-period, on-change/sync-on-start, and on-change/excluded-change are not supported for monitoring.
The modify-subscriptions operation does not support changing a subscription's update trigger type from periodic to on-change or vice versa.
on-change subscriptions do not work for changes that are made through the CDB-API.
on-change subscriptions do not work on internal callpoints such as
This capability introduces a new RPC operation that is used to invoke actions defined in the data model. When an action is invoked, the instance on which the action is invoked is explicitly identified by a hierarchy of configuration or state data.
Here is a simple example that invokes the action sync-from on the device ce1. It uses the netconf-console command:
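A sketch of such a request follows. The <action> element and its <data> child come from the Tail-f actions capability; the hierarchy below it identifies device ce1 and the sync-from action, and the file name used with netconf-console is hypothetical:

```xml
<rpc message-id="1" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <action xmlns="http://tail-f.com/ns/netconf/actions/1.0">
    <data>
      <devices xmlns="http://tail-f.com/ns/ncs">
        <device>
          <name>ce1</name>
          <sync-from/>
        </device>
      </devices>
    </data>
  </action>
</rpc>
```

Saved as, for example, sync-from.xml, the request can be sent with netconf-console --rpc=sync-from.xml.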
The actions capability is identified by the following capability string:
This capability introduces four new RPC operations that are used to control a two-phase commit transaction on the NETCONF server. The normal <edit-config> operation is used to write data in the transaction, but the modifications are not applied until an explicit <commit-transaction> is sent.
This capability is formally defined in the YANG module tailf-netconf-transactions. It is recommended that this module be enabled.
A typical sequence of operations looks like this:
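As a minimal sketch (assuming http://tail-f.com/ns/netconf/transactions/1.0 as the capability's namespace), the sequence is:

```xml
<!-- 1. start a transaction towards the running datastore -->
<start-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0">
  <target><running/></target>
</start-transaction>

<!-- 2. send one or more <edit-config> operations with <target> running -->

<!-- 3. prepare the transaction, then commit (or abort) it -->
<prepare-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
<commit-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
```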
None.
The transactions capability is identified by the following capability string:
<start-transaction>
Starts a transaction towards a configuration datastore. There can be a single ongoing transaction per session at any time.
When a transaction has been started, the client can send any NETCONF operation, but any <edit-config> or <copy-config> operation sent from the client must specify the same <target> as the <start-transaction>, and any <get-config> must specify the same <source> as <start-transaction>.
If the server receives an <edit-config> or <copy-config> with another <target>, or a <get-config> with another <source>, an error must be returned with an <error-tag> set to invalid-value.
The modifications sent in the <edit-config> operations are not immediately applied to the configuration datastore. Instead, they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.
The client sends a <prepare-transaction> when all modifications have been sent.
target:
Name of the configuration datastore towards which the transaction is started.
with-inactive:
If this parameter is given, the transaction will handle the inactive and active attributes. If given, it must also be given in the <edit-config> and <get-config> invocations in the transaction.
If the device can satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is an ongoing transaction for this session already, an error must be returned with <error-app-tag> set to bad-state.
<prepare-transaction>
Prepares the transaction state for commit. The server may reject the prepare request for any reason, for example, due to lack of resources or if the combined changes would result in an invalid configuration datastore.
After a successful <prepare-transaction>, the next transaction-related RPC operation must be <commit-transaction> or <abort-transaction>. Note that an <edit-config> cannot be sent before the transaction is either committed or aborted.
Care must be taken by the server to make sure that if <prepare-transaction> succeeds then the <commit-transaction> should not fail, since this might result in an inconsistent distributed state. Thus, <prepare-transaction> should allocate any resources needed to make sure the <commit-transaction> will succeed.
None.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, or if the ongoing transaction has already been prepared, an error must be returned with <error-app-tag> set to bad-state.
<commit-transaction>
Applies the changes made in the transaction to the configuration datastore. The transaction is closed after a <commit-transaction>.
None.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, or if the ongoing transaction has not yet been prepared, an error must be returned with <error-app-tag> set to bad-state.
<abort-transaction>
Aborts the ongoing transaction, and all pending changes are discarded. <abort-transaction> can be given at any time during an ongoing transaction.
None.
If the device was able to satisfy the request, an <rpc-reply> is sent that contains an <ok> element.
An <rpc-error> element is included in the <rpc-reply> if the request cannot be completed for any reason.
If there is no ongoing transaction in this session, an error must be returned with <error-app-tag> set to bad-state.
The <edit-config> operation is modified so that if it is received during an ongoing transaction, the modifications are not immediately applied to the configuration target. Instead, they are kept in the transaction state of the server. The transaction state is only applied when a <commit-transaction> is received.
Note that it doesn't matter if the <test-option> is 'set' or 'test-then-set' in the <edit-config>, since nothing is actually set when the <edit-config> is received.
This capability is used by the NETCONF server to indicate that it supports marking nodes as being inactive. A node that is marked as inactive exists in the data store but is not used by the server. Any node can be marked as inactive.
To avoid confusing clients that do not understand this attribute, the client has to instruct the server to display and handle inactive nodes. An inactive node is marked with an inactive XML attribute, and to make it active, the active XML attribute is used.
This capability is formally defined in the YANG module tailf-netconf-inactive.
None.
The inactive capability is identified by the following capability string:
None.
A new parameter, <with-inactive>, is added to the <get>, <get-config>, <edit-config>, <copy-config>, and <start-transaction> operations.
The <with-inactive> element is defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace, and takes no value.
If this parameter is present in <get>, <get-config>, or <copy-config>, the NETCONF server will mark inactive nodes with the inactive attribute.
If this parameter is present in <edit-config> or <copy-config>, the NETCONF server will treat inactive nodes as existing so that an attempt to create a node that is inactive will fail, and an attempt to delete a node that is inactive will succeed. Further, the NETCONF server accepts the inactive and active attributes in the data hierarchy, to make nodes inactive or active, respectively.
If the parameter is present in <start-transaction>, it must also be present in any <edit-config>, <copy-config>, <get>, or <get-config> operations within the transaction. If it is not present in <start-transaction>, it must not be present in any <edit-config> operation within the transaction.
The inactive and active attributes are defined in the http://tail-f.com/ns/netconf/inactive/1.0 namespace. The inactive attribute's value is the string inactive, and the active attribute's value is the string active.
This request creates an inactive interface:
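A sketch of such a request, using a hypothetical interface data model: the inactive attribute is set in the http://tail-f.com/ns/netconf/inactive/1.0 namespace, and <with-inactive> is included so the server handles inactive nodes:

```xml
<edit-config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <target><running/></target>
  <with-inactive xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
  <config>
    <interfaces xmlns="http://example.com/interfaces">
      <interface xmlns:ia="http://tail-f.com/ns/netconf/inactive/1.0"
                 ia:inactive="inactive">
        <name>eth0</name>
      </interface>
    </interfaces>
  </config>
</edit-config>
```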
This request shows the inactive interface:
This request shows that inactive data is not returned unless the client asks for it:
This request activates the interface:
This module extends existing operations with a with-rollback-id parameter. When the parameter is set, the result is extended with information about the rollback, if any, that was generated for the operation.
The rollback ID returned is the ID from within the rollback file which is stable with regards to new rollbacks being created.
None.
This capability is identified by the following capability string:
This module adds a parameter with-rollback-id to the following RPCs:
If with-rollback-id is given, rollbacks are enabled, and the operation results in a rollback file being created, the response will contain a rollback reference.
NETCONF supports the IETF draft that adapts the W3C Trace Context standard. Trace Context standardizes the format of trace-id, parent-id, and key-value pairs sent between distributed entities. The parent-id will become the parent-span-id for the next generated span-id in NSO.
Trace Context consists of two XML attributes traceparent and tracestate corresponding to the capabilities urn:ietf:params:xml:ns:yang:traceparent:1.0 and urn:ietf:params:xml:ns:yang:tracestate:1.0 respectively. The attributes belong to the start XML element rpc in a NETCONF request.
Attribute traceparent must be of the format:
where version = "00" and flags = "01". The support for the values of version and flags may change in the future depending on the extension of the standard or functionality.
Attribute tracestate is a vendor-specific list of key-value pairs and must be of the format:
Where a value may contain space characters but not end with a space.
Here is an example of the usage of the attributes traceparent and tracestate:
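A sketch of an rpc element carrying both attributes (the hex values are made up but follow the version-traceid-parentid-flags format described above):

```xml
<rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
     message-id="1"
     traceparent="00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
     tracestate="vendor1=opaque-value,vendor2=another value">
  <get-config>
    <source><running/></source>
  </get-config>
</rpc>
```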
NSO implements Trace Context alongside the legacy way of handling trace-id found in . The support of Trace Context covers the same scenarios as the legacy trace-id functionality, except for the scenario where both trace-id and Trace Context are absent in a request, in which case legacy trace-id is generated. The two different ways of handling trace-id cannot be used at the same time. If both are used, the request generates an error response. Read about trace-id legacy functionality in .
NETCONF also lets LSA clusters be part of Trace Context handling. A top LSA node passes the Trace Context down to all LSA nodes beneath it. For NSO to consider the Trace Context attributes in a NETCONF request, the trace-id element in the configuration file must be enabled. As Trace Context is handled by the progress trace functionality, see also .
The YANG module tailf-netconf-ncs augments some NETCONF operations with additional parameters to control the behavior in NSO over NETCONF. See that YANG module for all the details. In this section, the options are summarized.
To control the commit behavior of NSO the following input parameters are available:
no-revision-drop
NSO will not run its data model revision algorithm, which requires all participating managed devices to have all parts of the data models for all data contained in this transaction. Thus, this flag forces NSO to never silently drop any data set operations towards a device.
no-overwrite
NSO will check that the data that should be modified has not changed on the device compared to NSO's view of the data.
no-networking
Do not send any data to the devices. This is a way to manipulate CDB in NSO without generating any southbound traffic.
These optional input parameters are augmented into the following NETCONF operations:
commit
edit-config
copy-config
prepare-transaction
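As a sketch of how one of these parameters is supplied (the augmented parameter's namespace is assumed here; check the tailf-netconf-ncs module for the exact URI), a commit that generates no southbound traffic could look like:

```xml
<commit xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <no-networking xmlns="http://tail-f.com/ns/netconf/ncs"/>
</commit>
```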
The operation prepare-transaction is also augmented with an optional parameter dry-run, which can be used to show the effects that would have taken place, but not actually commit anything to the datastore or to the devices. dry-run takes an optional parameter outformat, which can be used to select in which format the result is returned. Possible formats are xml (default), cli, and native. The optional reverse parameter can be used together with the native format to display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are done later on the same data, the reverse device commands returned are invalid.
FASTMAP attributes such as back pointers and reference counters are typically internal to NSO and are not shown by default. The optional parameter with-service-meta-data can be used to include these in the NETCONF reply. The parameter is augmented into the following NETCONF operations:
get
get-config
get-data
The Query API consists of several RPC operations to start queries, fetch chunks of the result from a query, restart a query, and stop a query.
In the installed release, there are two YANG files, tailf-netconf-query.yang and tailf-common-query.yang, that define these operations. An easy way to find the files is to run the following command from the top directory of the release installation:
The API consists of the following operations:
start-query: Start a query and return a query handle.
fetch-query-result: Use a query handle to repeatedly fetch chunks of the result.
immediate-query: Start a query and return the entire result immediately.
reset-query: Restart an established query, optionally at a different position in the result.
stop-query: Stop the query and release its resources.
In the following examples, the following data model is used:
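The model can be sketched roughly as the following YANG fragment; the names are reconstructed from the example query below, and the leaf types are assumptions:

```yang
container x {
  list host {
    key name;
    leaf name    { type string; }
    leaf enabled { type boolean; }
    leaf address { type string; }  // an address type is assumed here
  }
}
```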
Here is an example of a start-query operation:
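Based on the parameters discussed below, and assuming http://tail-f.com/ns/netconf/query as the namespace of tailf-netconf-query, the request has roughly this shape:

```xml
<start-query xmlns="http://tail-f.com/ns/netconf/query">
  <foreach>
    /x/host[enabled = 'true']
  </foreach>
  <select>
    <label>Name</label>
    <expression>name</expression>
    <result-type>string</result-type>
  </select>
  <select>
    <label>Address</label>
    <expression>address</expression>
    <result-type>string</result-type>
  </select>
  <sort-by>name</sort-by>
  <limit>100</limit>
  <offset>1</offset>
</start-query>
```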
An informal interpretation of this query is:
For each /x/host where enabled is true, select its name and address, and return the result sorted by name, in chunks of 100 results at a time.
Let us discuss the various pieces of this request.
The actual XPath query to run is specified by the foreach element. The example below will search for all /x/host nodes that have the enabled node set to true:
Now we need to define what we want to have returned from the node set by using one or more select sections. What to actually return is defined by the XPath expression.
We must also choose how the result should be represented: essentially, as the actual value or as the path leading to the value. This is specified per select chunk. The possible result types are string, path, leaf-value, and inline.
The difference between string and leaf-value is somewhat subtle. In the case of string, the result will be processed by the XPath function string() (which, if the result is a node-set, will concatenate all the values). The leaf-value type will return the value of the first node in the result. As long as the result is a leaf node, string and leaf-value will return the same result. In the example above, we are using string as shown below. At least one result-type must be specified.
The result-type inline makes it possible to return the full sub-tree of data in XML format. The data will be enclosed with a tag: data.
Finally, we can specify an optional label for a convenient way of labeling the returned data. In the example we have the following:
The returned result can be sorted. This is expressed as XPath expressions, which in most cases are very simple and refer to the found node-set. In this example, we sort the result by the content of the name node:
To limit the maximum number of results in each chunk that fetch-query-result will return, we can set the limit element. The default is to get all results in one chunk.
With the offset element we can specify at which node we should start to receive the result. The default is 1, i.e., the first node in the resulting node set.
Now, if we continue by putting the operation above in a file query.xml we can send a request, using the command netconf-console, like this:
The result would look something like this:
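Given that the next paragraph refers to handle 12345, the reply would be roughly of this shape (the element names here are an assumption; see tailf-netconf-query.yang for the exact definitions):

```xml
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
  <start-query-result xmlns="http://tail-f.com/ns/netconf/query">
    <query-handle>12345</query-handle>
  </start-query-result>
</rpc-reply>
```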
The query handle (in this example 12345) must be used in all subsequent calls. To retrieve the result, we can now send:
Which will result in something like the following:
If we try to get more data with the fetch-query-result we might get more result entries in return until no more data exists and we get an empty query result back:
If we want to send the query and get the entire result with only one request, we can do this by using immediate-query. This operation takes arguments similar to start-query and returns the entire result, analogous to fetch-query-result. Note that it is not possible to paginate or set an offset start node for the result list; i.e., the limit and offset options are ignored.
An example request and response:
If we want to go back in the "stream" of received data chunks and have them repeated, we can do that with the reset-query operation. In the example below, we ask to get results from the 42nd result entry:
Finally, when we are done we stop the query:
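A sketch of the request, reusing the handle from the earlier reply and assuming the same namespace as above:

```xml
<stop-query xmlns="http://tail-f.com/ns/netconf/query">
  <query-handle>12345</query-handle>
</stop-query>
```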
NSO supports three kinds of meta-data on data nodes: tags, annotations, and inactive.
An annotation is a string that acts as a comment. Any data node present in the configuration can get an annotation. An annotation does not affect the underlying configuration but can be set by a user to comment what the configuration does.
An annotation is encoded as an XML attribute annotation on any data node. To remove an annotation, set the annotation attribute to an empty string.
Any configuration data node can have a set of tags. Tags are set by the user for data organization and filtering purposes. A tag does not affect the underlying configuration.
All tags on a data node are encoded as a space-separated string in an XML attribute tags. To remove all tags, set the tags attribute to an empty string.
Annotation, tags, and inactive attributes can be present in <edit-config>, <copy-config>, <get-config>, and <get>. For example:
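The sketch below sets the annotation and tags attributes on a node of a hypothetical interface model; the attributes are shown unqualified because their namespace is not stated in this section:

```xml
<edit-config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <target><running/></target>
  <config>
    <interfaces xmlns="http://example.com/interfaces">
      <interface annotation="Uplink to core" tags="lab critical">
        <name>eth0</name>
      </interface>
    </interfaces>
  </config>
</edit-config>
```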
NSO adds an additional namespace which is used to define elements that are included in the <error-info> element. This namespace also describes which <error-app-tag/> elements the server might generate, as part of an <rpc-error/>.
Implement redundancy in your deployment using High Availability (HA) setup.
As a single NSO node can fail or lose network connectivity, you can configure multiple nodes in a highly available (HA) setup, which replicates the CDB configuration and operational data across participating nodes. It allows the system to continue functioning even when some nodes are inoperable.
The replication architecture is that of one active primary and a number of secondaries. This means all configuration write operations must occur on the primary, which distributes the updates to the secondaries.
Operational data in the CDB may be replicated or not based on the tailf:persistent statement in the data model. If replicated, operational data writes can only be performed on the primary, whereas non-replicated operational data can also be written on the secondaries.
Replication is supported in several different architectural setups. For example, two-node active/standby designs as well as multi-node clusters with runtime software upgrade.
This feature is independent of, but compatible with, the Layered Service Architecture (LSA), which also configures multiple NSO nodes to provide additional scalability. When the following text simply refers to a cluster, it identifies the set of NSO nodes participating in the same HA group, not an LSA cluster, which is a separate concept.
Concepts in usage of the Configuration Database (CDB).
When using CDB to store the configuration data, the applications need to be able to:
Read configuration data from the database.
React to changes to the database. There are several possible writers to the database, such as the CLI, NETCONF sessions, the Web UI, either of the NSO sync commands, alarms that get written into the alarm table, NETCONF notifications that arrive at NSO or the NETCONF agent.
The figure below illustrates the architecture when the CDB is used. The Application components read configuration data and subscribe to changes to the database using a simple RPC-based API. The API is part of the Java library and is fully documented in the Javadoc for CDB.
HostKeyAlgorithms=+ssh-dss
PubkeyAcceptedKeyTypes=+ssh-dss
<pam>
<enabled>true</enabled>
<service>common-auth</service>
</pam>
Since the elements of the path to a given node may be defined in different YANG modules when augmentation is used, rules that have a value other than `*` for the `module-name` leaf may require additional processing before a decision to permit or deny the access can be taken. Thus, if an XPath that completely identifies the nodes that the rule should apply to is given for the `path` leaf (see below), it may be best to leave the `module-name` leaf unset.
container authentication {
tailf:info "User management";
container users {
tailf:info "List of local users";
list user {
key name;
leaf name {
type string;
tailf:info "Login name of the user";
}
leaf uid {
type int32;
mandatory true;
tailf:info "User Identifier";
}
leaf gid {
type int32;
mandatory true;
tailf:info "Group Identifier";
}
leaf password {
type passwdStr;
mandatory true;
}
leaf ssh_keydir {
type string;
mandatory true;
tailf:info "Absolute path to directory where user's ssh keys
may be found";
}
leaf homedir {
type string;
mandatory true;
tailf:info "Absolute path to user's home directory";
}
}
}
}
# ssh-keygen -b 4096 -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/bob/.ssh/id_rsa):
Created directory '/home/bob/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/bob/.ssh/id_rsa.
Your public key has been saved in /home/bob/.ssh/id_rsa.pub.
The key fingerprint is:
ce:1b:63:0a:f9:d4:1d:04:7a:1d:98:0c:99:66:57:65 bob@buzz
# ls -lt ~/.ssh
total 8
-rw------- 1 bob users 3247 Apr 4 12:28 id_rsa
-rw-r--r-- 1 bob users 738 Apr 4 12:28 id_rsa.pub
<user>
<name>bob</name>
<uid>100</uid>
<gid>10</gid>
<password>$1$feedbabe$nGlMYlZpQ0bzenyFOQI3L1</password>
<ssh_keydir>/var/system/users/bob/.ssh</ssh_keydir>
<homedir>/var/system/users/bob</homedir>
</user>
# grep test /etc/group
operator:x:37:test
admin:x:1001:test
<INFO> 28-Jan-2009::16:05:55.663 buzz ncs[14658]: audit user: test/0 logged
in over ssh from 127.0.0.1 with authmeth:password
<INFO> 28-Jan-2009::16:05:55.670 buzz ncs[14658]: audit user: test/5 assigned
to groups: operator,admin
<INFO> 28-Jan-2009::16:05:57.655 buzz ncs[14658]: audit user: test/5 CLI 'exit'
ncs_cli --user admin
list group {
key name;
description
"One NACM Group Entry. This list will only contain
configured entries, not any entries learned from
any transport protocols.";
leaf name {
type group-name-type;
description
"Group name associated with this entry.";
}
leaf-list user-name {
type user-name-type;
description
"Each entry identifies the username of
a member of the group associated with
this entry.";
}
}
augment /nacm:nacm/nacm:groups/nacm:group {
leaf gid {
type int32;
description
"This leaf associates a numerical group ID with the group.
When an OS command is executed on behalf of a user,
supplementary group IDs are assigned based on 'gid' values
for the groups that the user is a member of.";
}
}
<group>
<name>admin</name>
<user-name>bob</user-name>
<user-name>joe</user-name>
<gid xmlns="http://tail-f.com/yang/acm">99</gid>
</group>
list rule-list {
key "name";
ordered-by user;
description
"An ordered collection of access control rules.";
leaf name {
type string {
length "1..max";
}
description
"Arbitrary name assigned to the rule-list.";
}
leaf-list group {
type union {
type matchall-string-type;
type group-name-type;
}
description
"List of administrative groups that will be
assigned the associated access rights
defined by the 'rule' list.
The string '*' indicates that all groups apply to the
entry.";
}
// ...
}
augment /nacm:nacm/nacm:rule-list {
list cmdrule {
key "name";
ordered-by user;
description
"One command access control rule. Command rules control access
to CLI commands and Web UI functions.
Rules are processed in user-defined order until a match is
found. A rule matches if 'context', 'command', and
'access-operations' match the request. If a rule
matches, the 'action' leaf determines if access is granted
or not.";
leaf name {
type string {
length "1..max";
}
description
"Arbitrary name assigned to the rule.";
}
leaf context {
type union {
type nacm:matchall-string-type;
type string;
}
default "*";
description
"This leaf matches if it has the value '*' or if its value
identifies the agent that is requesting access, i.e. 'cli'
for CLI or 'webui' for Web UI.";
}
leaf command {
type string;
default "*";
description
"Space-separated tokens representing the command. Refer
to the Tail-f AAA documentation for further details.";
}
leaf access-operations {
type union {
type nacm:matchall-string-type;
type nacm:access-operations-type;
}
default "*";
description
"Access operations associated with this rule.
This leaf matches if it has the value '*' or if the
bit corresponding to the requested operation is set.";
}
leaf action {
type nacm:action-type;
mandatory true;
description
"The access control action associated with the
rule. If a rule is determined to match a
particular request, then this object is used
to determine whether to permit or deny the
request.";
}
leaf log-if-permit {
type empty;
description
"If this leaf is present, access granted due to this rule
is logged in the developer log. Otherwise, only denied
access is logged. Mainly intended for debugging of rules.";
}
leaf comment {
type string;
description
"A textual description of the access rule.";
}
}
}
augment /nacm:nacm {
leaf cmd-read-default {
type nacm:action-type;
default "permit";
description
"Controls whether command read access is granted
if no appropriate cmdrule is found for a
particular command read request.";
}
leaf cmd-exec-default {
type nacm:action-type;
default "permit";
description
"Controls whether command exec access is granted
if no appropriate cmdrule is found for a
particular command exec request.";
}
leaf log-if-default-permit {
type empty;
description
"If this leaf is present, access granted due to one of
/nacm/read-default, /nacm/write-default, or /nacm/exec-default
/nacm/cmd-read-default, or /nacm/cmd-exec-default
being set to 'permit' is logged in the developer log.
Otherwise, only denied access is logged. Mainly intended
for debugging of rules.";
}
}
list rule {
key "name";
ordered-by user;
description
"One access control rule.
Rules are processed in user-defined order until a match is
found. A rule matches if 'module-name', 'rule-type', and
'access-operations' match the request. If a rule
matches, the 'action' leaf determines if access is granted
or not.";
leaf name {
type string {
length "1..max";
}
description
"Arbitrary name assigned to the rule.";
}
leaf module-name {
type union {
type matchall-string-type;
type string;
}
default "*";
description
"Name of the module associated with this rule.
This leaf matches if it has the value '*' or if the
object being accessed is defined in the module with the
specified module name.";
}
choice rule-type {
description
"This choice matches if all leafs present in the rule
match the request. If no leafs are present, the
choice matches all requests.";
case protocol-operation {
leaf rpc-name {
type union {
type matchall-string-type;
type string;
}
description
"This leaf matches if it has the value '*' or if
its value equals the requested protocol operation
name.";
}
}
case notification {
leaf notification-name {
type union {
type matchall-string-type;
type string;
}
description
"This leaf matches if it has the value '*' or if its
value equals the requested notification name.";
}
}
case data-node {
leaf path {
type node-instance-identifier;
mandatory true;
description
"Data Node Instance Identifier associated with the
data node controlled by this rule.
Configuration data or state data instance
identifiers start with a top-level data node. A
complete instance identifier is required for this
type of path value.
The special value '/' refers to all possible
data-store contents.";
}
}
}
leaf access-operations {
type union {
type matchall-string-type;
type access-operations-type;
}
default "*";
description
"Access operations associated with this rule.
This leaf matches if it has the value '*' or if the
bit corresponding to the requested operation is set.";
}
leaf action {
type action-type;
mandatory true;
description
"The access control action associated with the
rule. If a rule is determined to match a
particular request, then this object is used
to determine whether to permit or deny the
request.";
}
leaf comment {
type string;
description
"A textual description of the access rule.";
}
}
augment /nacm:nacm/nacm:rule-list/nacm:rule {
leaf context {
type union {
type nacm:matchall-string-type;
type string;
}
default "*";
description
"This leaf matches if it has the value '*' or if its value
identifies the agent that is requesting access, e.g. 'netconf'
for NETCONF, 'cli' for CLI, or 'webui' for Web UI.";
}
leaf log-if-permit {
type empty;
description
"If this leaf is present, access granted due to this rule
is logged in the developer log. Otherwise, only denied
access is logged. Mainly intended for debugging of rules.";
}
}
augment /nacm:nacm {
...
leaf log-if-default-permit {
type empty;
description
"If this leaf is present, access granted due to one of
/nacm/read-default, /nacm/write-default, /nacm/exec-default
/nacm/cmd-read-default, or /nacm/cmd-exec-default
being set to 'permit' is logged in the developer log.
Otherwise, only denied access is logged. Mainly intended
for debugging of rules.";
}
}
foreach rule {
if (match(rule, path)) {
return rule.action;
}
}
rules = select_rules_that_may_match(rules, path);
if (any_rule_is_permit(rules))
return permit;
else
return deny;
augment "/nacm:nacm/nacm:rule-list/nacm:rule/nacm:rule-type" {
case device-group-rule {
leaf device-group {
type leafref {
path "/ncs:devices/ncs:device-group/ncs:name";
}
description
"Which device group this rule applies to.";
}
}
}
<devices>
<device-group>
<name>us_east</name>
<device-name>cli0</device-name>
<device-name>gen0</device-name>
</device-group>
<device-group>
<name>us_west</name>
<device-name>nc0</device-name>
</device-group>
</devices>
<nacm>
<groups>
<group>
<name>us_east</name>
<user-name>us_east_oper</user-name>
</group>
</groups>
</nacm>
<nacm>
<rule-list>
<name>us_east</name>
<group>us_east</group>
<rule>
<name>us_east_read_permit</name>
<device-group xmlns="http://tail-f.com/yang/ncs-acm/device-group-authorization">us_east</device-group>
<access-operations>read</access-operations>
<action>permit</action>
</rule>
<rule>
<name>us_east_create_permit</name>
<device-group xmlns="http://tail-f.com/yang/ncs-acm/device-group-authorization">us_east</device-group>
<access-operations>create</access-operations>
<action>permit</action>
</rule>
<rule>
<name>us_east_update_permit</name>
<device-group xmlns="http://tail-f.com/yang/ncs-acm/device-group-authorization">us_east</device-group>
<access-operations>update</access-operations>
<action>permit</action>
</rule>
<rule>
<name>us_east_delete_permit</name>
<device-group xmlns="http://tail-f.com/yang/ncs-acm/device-group-authorization">us_east</device-group>
<access-operations>delete</access-operations>
<action>permit</action>
</rule>
</rule-list>
</nacm>
<rule-list>
<name>admin</name>
<group>admin</group>
<rule>
<name>tailf-aaa</name>
<module-name>tailf-aaa</module-name>
<path>/</path>
<access-operations>read create update delete</access-operations>
<action>permit</action>
</rule>
</rule-list>
<rule-list>
<name>oper</name>
<group>oper</group>
<rule>
<name>tailf-aaa</name>
<module-name>tailf-aaa</module-name>
<path>/</path>
<access-operations>read create update delete</access-operations>
<action>deny</action>
</rule>
</rule-list>

<rule-list>
<name>oper</name>
<group>oper</group>
<rule>
<name>edit-config</name>
<rpc-name>edit-config</rpc-name>
<context xmlns="http://tail-f.com/yang/acm">netconf</context>
<access-operations>exec</access-operations>
<action>deny</action>
</rule>
</rule-list>

<rule-list>
<name>admin</name>
<group>admin</group>
<rule>
<name>bob-password</name>
<path>/aaa/authentication/users/user[name='bob']/password</path>
<context xmlns="http://tail-f.com/yang/acm">cli</context>
<access-operations>read update</access-operations>
<action>permit</action>
</rule>
</rule-list>

<rule-list>
<name>admin</name>
<group>admin</group>
<rule>
<name>user-password</name>
<path>/aaa/authentication/users/user[name='$USER']/password</path>
<context xmlns="http://tail-f.com/yang/acm">cli</context>
<access-operations>read update</access-operations>
<action>permit</action>
</rule>
</rule-list>

container test {
action double {
input {
leaf number {
type uint32;
}
}
output {
leaf result {
type uint32;
}
}
}
}

<rule-list>
<name>oper</name>
<group>oper</group>
<rule>
<name>allow-netconf-rpc-action</name>
<rpc-name>action</rpc-name>
<context xmlns="http://tail-f.com/yang/acm">netconf</context>
<access-operations>exec</access-operations>
<action>permit</action>
</rule>
<rule>
<name>allow-read-test</name>
<path>/test</path>
<access-operations>read</access-operations>
<action>permit</action>
</rule>
<rule>
<name>allow-exec-double</name>
<path>/test/double</path>
<access-operations>exec</access-operations>
<action>permit</action>
</rule>
</rule-list>

<rule-list>
<name>oper</name>
<group>oper</group>
<rule>
<name>allow-netconf-rpc-action</name>
<rpc-name>action</rpc-name>
<context xmlns="http://tail-f.com/yang/acm">netconf</context>
<access-operations>exec</access-operations>
<action>permit</action>
</rule>
<rule>
<name>allow-exec-double</name>
<path>/test</path>
<access-operations>read exec</access-operations>
<action>permit</action>
</rule>
</rule-list>

<rule-list>
<name>oper</name>
<group>oper</group>
<cmdrule xmlns="http://tail-f.com/yang/acm">
<name>request-system-reboot</name>
<context>cli</context>
<command>request system reboot</command>
<access-operations>exec</access-operations>
<action>deny</action>
</cmdrule>
<!-- The following rule is required since the user can -->
<!-- do "edit system" -->
<cmdrule xmlns="http://tail-f.com/yang/acm">
<name>request-reboot</name>
<context>cli</context>
<command>request reboot</command>
<access-operations>exec</access-operations>
<action>deny</action>
</cmdrule>
<rule>
<name>netconf-reboot</name>
<rpc-name>reboot</rpc-name>
<context xmlns="http://tail-f.com/yang/acm">netconf</context>
<access-operations>exec</access-operations>
<action>deny</action>
</rule>
</rule-list>

<rule>
<name>permit-acme-config</name>
<path xmlns:acme="http://example.com/ns/netconf">
/acme:acme-netconf/acme:config-parameters
</path>
...

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# devices device c1 config
admin@ncs(config-config)# ip name-server 192.0.2.1
admin@ncs(config-config)# top
admin@ncs(config)#

admin@ncs(config)# commit dry-run outformat xml
result-xml {
local-node {
data <devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c1</name>
<config>
<ip xmlns="urn:ios">
<name-server>192.0.2.1</name-server>
</ip>
</config>
</device>
</devices>
}
}

admin@ncs# show running-config devices device c1 config ip name-server | display xml
<config xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c1</name>
<config>
<ip xmlns="urn:ios">
<name-server>192.0.2.1</name-server>
</ip>
</config>
</device>
</devices>
</config>admin@ncs# show running-config devices device c1 config ip name-server | display xml\
| save dns-template.xml

ncs-make-package --build --no-test --service-skeleton template dns

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<!-- ... more statements here ... -->
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c1</name>
<config>
<ip xmlns="urn:ios">
<name-server>192.0.2.1</name-server>
</ip>
</config>
</device>
</devices>
</config-template>

$ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v1
$ make demo

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/name}</name>
<config>
<ip xmlns="urn:ios">
<name-server>192.0.2.1</name-server>
</ip>
</config>
</device>
</devices>
</config-template>

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# dns c2
admin@ncs(config-dns-c2)# commit dry-run
cli {
local-node {
data devices {
device c2 {
config {
ip {
+ name-server 192.0.2.1;
}
}
}
}
+dns c2 {
+}
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/name}</name>
<config>
<ip xmlns="urn:ios">
<?if {starts-with(/name, 'c1')}?>
<name-server>192.0.2.1</name-server>
<?else?>
<name-server>192.0.2.2</name-server>
<?end?>
</ip>
</config>
</device>
</devices>
</config-template>

list servicename {
key name;
uses ncs:service-data;
ncs:servicepoint "servicename";
leaf name {
type string;
}
// ... other statements ...
}

list dns {
key name;
uses ncs:service-data;
ncs:servicepoint "dns";
leaf name {
type string;
}
leaf target-device {
type string;
}
}

$ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v2
$ make demo

leaf target-device {
mandatory true;
type string {
length "2";
pattern "c[0-2]";
}
}

leaf dns-server-ip {
type inet:ipv4-address {
pattern "192\\.0\\.2\\..*";
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/target-device}</name>
<config>
<ip xmlns="urn:ios">
<?if {/dns-server-ip}?>
<!-- If dns-server-ip is set, use that. -->
<name-server>{/dns-server-ip}</name-server>
<?else?>
<!-- Otherwise, use the default one. -->
<name-server>192.0.2.1</name-server>
<?end?>
</ip>
</config>
</device>
</devices>
</config-template>

$ cd $NCS_DIR/examples.ncs/implement-a-service/dns-v2.1
$ make demo

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# devices device c1 config
admin@ncs(config-config)# interface GigabitEthernet 0/0
admin@ncs(config-if)# ip address 192.168.5.1 255.255.255.0

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="iface-servicepoint">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>c1</name>
<config>
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>0/0</name>
<ip>
<address>
<primary>
<address>192.168.5.1</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
</interface>
</config>
</device>
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="iface-servicepoint">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>{/interface}</name>
<ip>
<address>
<primary>
<address>{/ip-address}</address>
<mask>255.255.255.0</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
</interface>
</config>
</device>
</devices>
</config-template>

list iface {
key name;
uses ncs:service-data;
ncs:servicepoint "iface-servicepoint";
leaf name {
type string;
}
leaf device { ... }
leaf interface { ... }
leaf ip-address { ... }
}

leaf device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}

leaf interface {
mandatory true;
type string {
pattern "[0-9]/[0-9]+";
}
}
leaf ip-address {
mandatory true;
type inet:ipv4-address;
}
}

list iface {
key name;
uses ncs:service-data;
ncs:servicepoint "iface-servicepoint";
leaf name {
type string;
}
leaf device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
mandatory true;
type string {
pattern "[0-9]/[0-9]+";
}
}
leaf ip-address {
mandatory true;
type inet:ipv4-address;
}
}

list iface {
key name;
uses ncs:service-data;
ncs:servicepoint "iface-servicepoint";
leaf name {
type string;
}
leaf device {
mandatory true;
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
leaf interface {
mandatory true;
type string {
pattern "[0-9]/[0-9]+";
}
}
leaf ip-address {
mandatory true;
type inet:ipv4-address;
}
leaf cidr-netmask {
default 24;
type uint8 {
range "0..32";
}
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>{/interface}</name>
<ip>
<address>
<primary>
<address>{/ip-address}</address>
<mask>{$NETMASK}</mask>
</primary>
</address>
</ip>
</GigabitEthernet>
</interface>
</config>
</device>
</devices>
</config-template>

ncs-make-package --no-test --service-skeleton python-and-template iface

def cb_create(self, tctx, root, service, proplist):
    cidr_mask = service.cidr_netmask

quad_mask = ipaddress.IPv4Network((0, cidr_mask)).netmask

vars = ncs.template.Variables()
vars.add('NETMASK', quad_mask)
template = ncs.template.Template(service)
template.apply('iface-template', vars)

def cb_create(self, tctx, root, service, proplist):
    cidr_mask = service.cidr_netmask
    # e.g. ipaddress.IPv4Network((0, 24)).netmask == IPv4Address('255.255.255.0')
    quad_mask = ipaddress.IPv4Network((0, cidr_mask)).netmask
    vars = ncs.template.Variables()
    vars.add('NETMASK', quad_mask)
    template = ncs.template.Template(service)
    template.apply('iface-template', vars)

ncs-make-package --no-test --service-skeleton java-and-template iface

public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws ConfException {
String cidr_mask_str = service.leaf("cidr-netmask").valueAsString();
int cidr_mask = Integer.parseInt(cidr_mask_str);

long tmp_mask = 0xffffffffL << (32 - cidr_mask);
String quad_mask =
((tmp_mask >> 24) & 0xff) + "." +
((tmp_mask >> 16) & 0xff) + "." +
((tmp_mask >> 8) & 0xff) + "." +
((tmp_mask >> 0) & 0xff);

Template myTemplate = new Template(context, "iface-template");
TemplateVariables myVars = new TemplateVariables();
myVars.putQuoted("NETMASK", quad_mask);
myTemplate.apply(service, myVars);

public Properties create(ServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque)
throws ConfException {
try {
String cidr_mask_str = service.leaf("cidr-netmask").valueAsString();
int cidr_mask = Integer.parseInt(cidr_mask_str);
long tmp_mask = 0xffffffffL << (32 - cidr_mask);
String quad_mask = ((tmp_mask >> 24) & 0xff) +
"." + ((tmp_mask >> 16) & 0xff) +
"." + ((tmp_mask >> 8) & 0xff) +
"." + ((tmp_mask) & 0xff);
Template myTemplate = new Template(context, "iface-template");
TemplateVariables myVars = new TemplateVariables();
myVars.putQuoted("NETMASK", quad_mask);
myTemplate.apply(service, myVars);
} catch (Exception e) {
throw new DpCallbackException(e.getMessage(), e);
}
return opaque;
}

leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="servicename">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<!-- ... -->
</config>
</device>
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="servicename">
<devices xmlns="http://tail-f.com/ns/ncs">
<?foreach {/device}?>
<device>
<name>{.}</name>
<config>
<!-- ... -->
</config>
</device>
<?end?>
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<!-- Part for device with the cisco-ios NED -->
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>{/interface}</name>
<!-- ... -->
</GigabitEthernet>
</interface>
<!-- Part for device with the router-nc NED -->
<sys xmlns="http://example.com/router">
<interfaces>
<interface>
<name>{/interface}</name>
<!-- ... -->
</interface>
</interfaces>
</sys>
</config>
</device>
</devices>
</config-template>

<config-template xmlns="http://tail-f.com/ns/config/1.0">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/device}</name>
<config>
<?if-ned-id cisco-ios-cli-3.0:cisco-ios-cli-3.0?>
<interface xmlns="urn:ios">
<GigabitEthernet>
<name>{/interface}</name>
<!-- ... -->
</GigabitEthernet>
</interface>
<?end?>
</config>
</device>
</devices>
</config-template>

container dns-options {
list dns-option {
key name;
leaf name {
type string;
}
leaf-list servers {
type inet:ipv4-address;
}
}
}

container dns-options {
// ...
}
list dns {
key name;
uses ncs:service-data;
ncs:servicepoint "dns";
// ...
}

admin@ncs(config)# dns-options dns-option lon servers 192.0.2.3
admin@ncs(config-dns-option-lon)# top
admin@ncs(config)# dns-options dns-option sto servers 192.0.2.3
admin@ncs(config-dns-option-sto)# top
admin@ncs(config)# dns-options dns-option sjc servers [ 192.0.2.5 192.0.2.6 ]
admin@ncs(config-dns-option-sjc)# commit

list dns {
key name;
uses ncs:service-data;
ncs:servicepoint "dns";
leaf name {
type string;
}
leaf target-device {
type string;
}
// Replace the old, explicit IP with a reference to shared data
// leaf dns-server-ip {
  // type inet:ipv4-address {
  //   pattern "192\\.0\\.2\\..*";
// }
// }
leaf dns-servers {
mandatory true;
type leafref {
path "/dns-options/dns-option/name";
}
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="dns">
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>{/target-device}</name>
<config>
<ip xmlns="urn:ios">
<name-server>{deref(/dns-servers)/../servers}</name-server>
</ip>
</config>
</device>
</devices>
</config-template>

<ip xmlns="urn:ios">
<?set dns_option = {/dns-servers}?> <!-- Set $dns_option to e.g. 'lon' -->
<?set-root-node {/}?> <!-- Make '/' point to datastore root,
instead of service instance -->
<name-server>{/dns-options/dns-option[name=$dns_option]/servers}</name-server>
</ip>

list iface {
key name;
uses ncs:service-data;
ncs:servicepoint "iface-servicepoint";
leaf name { /* ... */ }
leaf device { /* ... */ }
leaf interface { /* ... */ }
// ... other statements omitted ...
action test-enabled {
tailf:actionpoint iface-test-enabled;
output {
leaf status {
type enumeration {
enum up;
enum down;
enum unknown;
}
}
}
}
}

class IfaceActions(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output, trans):
        ...

root = ncs.maagic.get_root(trans)
service = ncs.maagic.cd(root, kp)

class IfaceActions(Action):
@Action.action
def cb_action(self, uinfo, name, kp, input, output, trans):
root = ncs.maagic.get_root(trans)
service = ncs.maagic.cd(root, kp)
device = root.devices.device[service.device]
status = 'unknown' # Replace with your own code that checks
# e.g. operational status of the interface
        output.status = status

class Main(ncs.application.Application):
def setup(self):
...
        self.register_action('iface-test-enabled', IfaceActions)

@ActionCallback(callPoint="iface-test-enabled",
callType=ActionCBType.ACTION)
public ConfXMLParam[] test_enabled(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
// ...
}

NavuContext context = new NavuContext(maapi);
NavuContainer service =
    (NavuContainer)KeyPath2NavuNode.getNode(kp, context);

@ActionCallback(callPoint="iface-test-enabled",
callType=ActionCBType.ACTION)
public ConfXMLParam[] test_enabled(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
int port = NcsMain.getInstance().getNcsPort();
// Ensure socket gets closed on errors, also ending any ongoing
// session and transaction
try (Socket socket = new Socket("localhost", port)) {
Maapi maapi = new Maapi(socket);
maapi.startUserSession("admin", InetAddress.getByName("localhost"),
"system", new String[] {}, MaapiUserSessionFlag.PROTO_TCP);
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer root = new NavuContainer(context);
NavuContainer service =
(NavuContainer)KeyPath2NavuNode.getNode(kp, context);
String status = "unknown"; // Replace with your own code that
// checks e.g. operational status of
// the interface
String nsPrefix = name.getPrefix();
return new ConfXMLParam[] {
new ConfXMLParamValue(nsPrefix, "status", new ConfBuf(status)),
};
} catch (Exception e) {
throw new DpCallbackException(name.toString() + " action failed",
e);
}
}

list iface {
key name;
uses ncs:service-data;
ncs:servicepoint "iface-servicepoint";
// ... other statements omitted ...
action test-enabled {
tailf:actionpoint iface-test-enabled;
output {
leaf status {
type enumeration {
enum up;
enum down;
enum unknown;
}
}
}
}
leaf last-test-result {
config false;
type enumeration {
enum up;
enum down;
enum unknown;
}
}
}

typedef iface-status-type {
type enumeration {
enum up;
enum down;
enum unknown;
}
}

leaf last-test-status {
config false;
type iface-status-type;
}
action test-enabled {
tailf:actionpoint iface-test-enabled;
output {
leaf status {
type iface-status-type;
}
}

admin@ncs# show iface test-instance1 last-test-status
iface test-instance1 last-test-status up

with contextlib.closing(socket.socket()) as s:
_ncs.cdb.connect(s, _ncs.cdb.DATA_SOCKET, ip='127.0.0.1', port=_ncs.PORT)
_ncs.cdb.start_session(s, _ncs.cdb.OPERATIONAL)
    _ncs.cdb.set_elem(s, 'up', '/iface{test-instance1}/last-test-status')

with ncs.maapi.single_write_trans('admin', 'python', db=ncs.OPERATIONAL) as t:
root = ncs.maagic.get_root(t)
root.iface['test-instance1'].last_test_status = 'up'
    t.apply()

def cb_action(self, uinfo, name, kp, input, output, trans):
with ncs.maapi.single_write_trans('admin', 'python',
db=ncs.OPERATIONAL) as t:
root = ncs.maagic.get_root(t)
service = ncs.maagic.cd(root, kp)
# ...
service.last_test_status = status
t.apply()
        output.status = status

leaf last-test-status {
config false;
type iface-status-type;
tailf:cdb-oper {
tailf:persistent true;
}
}

class ServiceApp(Application):
def setup(self):
...
self.register_fun(init_oper_data, lambda _: None)
def init_oper_data(state):
state.log.info('Populating operational data')
with ncs.maapi.single_write_trans('admin', 'python',
db=ncs.OPERATIONAL) as t:
root = ncs.maagic.get_root(t)
# ...
t.apply()
    return state

int port = NcsMain.getInstance().getNcsPort();
// Ensure socket gets closed on errors, also ending any ongoing session/lock
try (Socket socket = new Socket("localhost", port)) {
Cdb cdb = new Cdb("IfaceServiceOperWrite", socket);
CdbSession session = cdb.startSession(CdbDBType.CDB_OPERATIONAL);
String status = "up";
ConfPath path = new ConfPath("/iface{%s}/last-test-status",
"test-instance1");
session.setElem(ConfEnumeration.getEnumByLabel(path, status), path);
session.endSession();
}

int port = NcsMain.getInstance().getNcsPort();
// Ensure socket gets closed on errors, also ending any ongoing
// session and transaction
try (Socket socket = new Socket("localhost", port)) {
Maapi maapi = new Maapi(socket);
maapi.startUserSession("admin", InetAddress.getByName("localhost"),
"system", new String[] {}, MaapiUserSessionFlag.PROTO_TCP);
NavuContext context = new NavuContext(maapi);
context.startOperationalTrans(Conf.MODE_READ_WRITE);
NavuContainer root = new NavuContainer(context);
NavuContainer service =
(NavuContainer)KeyPath2NavuNode.getNode(kp, context);
// ...
service.leaf("last-test-status").set(status);
context.applyClearTrans();
}

leaf last-check-status {
config false;
type iface-status-type;
tailf:cdb-oper {
tailf:persistent true;
}
}

/devices/device/config/interface[name="eth0"]

A wildcard at the end, as in /services/web-site/*, does not match the web-site service instances themselves, but rather all children of those instances. Thus, the path in a rule is matched against the path of the attempted data access: if the attempted access has a path that is equal to or longer than the rule path, we have a match.

If none of the leafs rpc-name, notification-name, or path are set, the rule matches any RPC, notification, data, or action access.
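The prefix-matching behavior described above can be sketched as follows. This is an illustrative helper under simplifying assumptions (plain string paths, no keys), not NSO's actual implementation:

```python
def path_matches(rule_path: str, access_path: str) -> bool:
    """Illustrative sketch of NACM rule-path matching; not NSO code.

    A rule matches when the accessed path equals the rule path or
    lies below it. A trailing /* matches all children of a node,
    but not the node itself.
    """
    if rule_path.endswith("/*"):
        base = rule_path[:-2]
        return access_path != base and access_path.startswith(base + "/")
    return access_path == rule_path or access_path.startswith(rule_path + "/")

# A longer access path under the rule path matches:
print(path_matches("/devices/device/config",
                   "/devices/device/config/interface"))            # True
# The wildcard matches children, not the node itself:
print(path_matches("/services/web-site/*", "/services/web-site"))  # False
```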
# cd $NCS_DIR/lib/ncs/lib/core/pam/priv/
# chown root:root epam
# chmod u+s epam

yang/demo.yang:32: error: expected keyword 'type' as substatement to 'leaf'
make: *** [Makefile:41: ../load-dir/demo.fxs] Error 1

[javac] /nso-run/packages/demo/src/java/src/com/example/demo/demoRFS.java:52: error: ';' expected
[javac] Template myTemplate = new Template(context, "demo-template")
[javac] ^
[javac] 1 error
[javac] 1 warning
BUILD FAILED

admin@ncs# packages reload
Error: Failed to load NCS package: demo; requires NCS version 6.3

admin@ncs# packages reload
reload-result {
package demo
result false
info SyntaxError: invalid syntax
}

admin@ncs# packages reload
reload-result {
package demo1
result false
info demo-template.xml:87 missing tag: name
}
reload-result {
package demo2
result false
info demo-template.xml:11 Unknown namespace: 'ios-xr'
}
reload-result {
package demo3
result false
info demo-template.xml:12: The XML stream is broken. Run-away < character found.
}

admin@ncs# devtools true
admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# xpath eval /devices/device
admin@ncs(config)# xpath eval /devices/device[name='r0']

admin@ncs(config)# commit trace-id myTrace1
Commit complete.

admin@ncs# devtools true
admin@ncs(config)# timecmd commit
Commit complete.
Command executed in 5.31 seconds.

admin@ncs# packages reload
reload-result {
package demo
result false
info demo-template.xml:2 Unknown servicepoint: notdemo
}

admin@ncs(config-demo-s1)# commit dry-run
Aborted: no registration found for callpoint demo/service_create of type=external

YANGPATH += ../../my-dependency/src/yang \

// The following XPath might trigger an error if there is a collision for the 'interfaces' node with other modules
path "/ncs:devices/ncs:device['r0']/config/interfaces/interface";
yang/demo.yang:25: error: the node 'interfaces' from module 'demo' (in node 'config' from 'tailf-ncs') is not found
// And the following XPath will not, since it uses namespace prefixes
path "/ncs:devices/ncs:device['r0']/config/iosxr:interfaces/iosxr:interface";

admin@ncs(config)# devices global-settings trace pretty
admin@ncs(config)# devices global-settings trace-dir ./my-trace
admin@ncs(config)# commit

:url
The URL schemes supported are file, ftp, and sftp (SSH File Transfer Protocol). There is no standard URL syntax for the sftp scheme, but NSO supports the syntax used by curl:
Note that user name and password must be given for sftp URLs. NSO does not support validate from a URL.
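For illustration, a curl-style sftp URL carrying the required user name and password could look like the following (user, password, host, and path are placeholders):

```
sftp://myuser:mypassword@host.example.com/home/myuser/dns-template.xml
```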
:xpath
The NETCONF server supports XPath according to the W3C XPath 1.0 specification.
with-defaults: Advertised if the server supports the :with-defaults capability, which NSO does.

ietf-subscribed-notifications: This module from RFC 8639 defines operations, configuration data nodes, and operational state data nodes related to notification subscriptions. It defines the following features:
configured: Indicates that the server supports configured subscriptions. This feature is not advertised.
dscp: Indicates that the server supports the ability to set the Differentiated Services Code Point (DSCP) value in outgoing packets. This feature is not advertised.
encode-json: Indicates that the server supports JSON encoding of notifications. This is not applicable to NETCONF, and this feature is not advertised.
encode-xml: Indicates that the server supports XML encoding of notifications. This feature is advertised by NSO.
interface-designation: Indicates that a configured subscription can be configured to send notifications over a specific interface. This feature is not advertised.
qos: Indicates that a publisher supports absolute dependencies of one subscription's traffic over another as well as weighted bandwidth sharing between subscriptions. This feature is not advertised.
replay: Indicates that historical event record replay is supported. This feature is advertised by NSO.
subtree: Indicates that the server supports subtree filtering of notifications. This feature is advertised by NSO.
supports-vrf: Indicates that a configured subscription can be configured to send notifications from a specific VRF. This feature is not advertised.
xpath: Indicates that the server supports XPath filtering of notifications. This feature is advertised by NSO.
ietf-yang-types
ietf-inet-types
ietf-restconf
ietf-datastores
ietf-yang-patch
netconf-subsys

Negotiation of subscription parameters (through the use of hints returned as part of declined subscription requests)
Subscription state change notifications (e.g., publisher-driven suspension, parameter modification)
Independence from transport
operation-not-supported

on-change

YANG-Patch Media Type
An example push-change-update notification:
ncs-state
ncs-high-availability
live-status

no-out-of-sync-check
Continue with the transaction even if NSO detects that a device's configuration is out of sync.
no-deploy
Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be redeployed to write the changes of the service(s) to the network.
reconcile/keep-non-service-config
Reconcile the service data. All data which existed before the service was created will now be owned by the service. When the service is removed that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree that data is kept.
reconcile/discard-non-service-config
Reconcile the service data but do not keep manually configured data that exists below in the configuration tree.
use-lsa
Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are dry-run, no-networking, no-out-of-sync-check, no-overwrite and no-revision-drop.
no-lsa
Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.
commit-queue/async
Commit the transaction data to the commit queue. The operation returns successfully if the transaction data has been successfully placed in the queue.
commit-queue/sync/timeout
Commit the transaction data to the commit queue. The operation does not return until the transaction data has been sent to all devices, or a timeout occurs. The timeout value specifies a maximum number of seconds to wait for the completion.
commit-queue/sync/infinity
Commit the transaction data to the commit queue. The operation does not return until the transaction data has been sent to all devices.
commit-queue/bypass
If /devices/global-settings/commit-queue/enabled-by-default is true the data in this transaction will bypass the commit queue. The data will be written directly to the devices.
commit-queue/atomic
Sets the atomic behavior of the resulting queue item. Possible values are: true and false. If this is set to false, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.
commit-queue/block-others
The resulting queue item will block subsequent queue items, which use any of the devices in this queue item, from being queued.
commit-queue/lock
Place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked, see the actions unlock and lock in /devices/commit-queue/queue-item. No following queue items, using the same devices, will be allowed to execute as long as the lock is in place.
commit-queue/tag
The value is a user-defined opaque tag. The tag is present in all notifications and events sent referencing the specific queue item.
commit-queue/error-option
The error option to use. Depending on the selected error option NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the /devices/commit-queue/completed tree from where it can be viewed and invoked with the rollback action. When invoked the data will be removed. Possible values are: continue-on-error, rollback-on-error, and stop-on-error. The continue-on-error value means that the commit queue will continue on errors. No rollback data will be created. The rollback-on-error value means that the commit queue item will roll back on errors. The commit queue will place a lock with block-others on the devices and services in the failed queue item. The rollback action will then automatically be invoked when the queue item has finished its execution. The lock will be removed as part of the rollback. The stop-on-error means that the commit queue will place a lock with block-others on the devices and services in the failed queue item. The lock must then either manually be released when the error is fixed or the rollback action under /devices/commit-queue/completed be invoked.
Read about error recovery for a more detailed explanation.
trace-id
Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign a trace ID to the processing.
Note: trace-id within NETCONF extensions is deprecated from NSO version 6.3. Capabilities within Trace Context will provide support for trace-id, see the section Trace Context.
reset-query: (Re)set where the next fetched result will begin from.

stop-query: Stop (and close) the query.
:writable-running
This capability is always advertised.
:candidate
Not supported by NSO.
:confirmed-commit
Not supported by NSO.
:rollback-on-error
This capability allows the client to set the <error-option> parameter to rollback-on-error. The other permitted values are stop-on-error (default) and continue-on-error. Note that the meaning of the word "error" in this context is not defined in the specification. Instead, the meaning of this word must be defined by the data model. Also, note that if stop-on-error or continue-on-error is triggered by the server, it means that some parts of the edit operation succeeded, and some parts didn't. The error partial-operation must be returned in this case. partial-operation is obsolete and should not be returned by a server. If some other error occurs (i.e. an error not covered by the meaning of "error" above), the server generates an appropriate error message, and the data store is unaffected by the operation.
The NSO server never allows partial configuration changes, since it might result in inconsistent configurations, and recovery from such a state can be very difficult for a client. This means that regardless of the value of the <error-option> parameter, NSO will always behave as if it had the value rollback-on-error. So in NSO, the meaning of the word "error" in stop-on-error and continue-on-error, is something that never can happen.
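For reference, a client selects the option in the edit-config request itself. The following is a sketch following RFC 6241; the config payload is elided:

```xml
<rpc message-id="101" xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <edit-config>
    <target><running/></target>
    <error-option>rollback-on-error</error-option>
    <config>
      <!-- configuration changes go here -->
    </config>
  </edit-config>
</rpc>
```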
It is possible to configure the NETCONF server to generate an operation-not-supported error if the client asks for the error-option continue-on-error. See the Manual Pages.
:validate
NSO supports both version 1.0 and 1.1 of this capability.
:startup
Not supported by NSO.
:notification
NSO implements the urn:ietf:params:netconf:capability:notification:1.0 capability, including support for the optional replay feature. See Notification Capability for details.
:with-defaults
NSO implements the urn:ietf:params:netconf:capability:with-defaults:1.0 capability, which is used by the server to inform the client how default values are handled by the server, and by the client to control whether default values should be generated to replies or not.
If the capability is enabled, NSO also implements the urn:ietf:params:netconf:capability:with-operational-defaults:1.0 capability, which targets the operational state datastore while the :with-defaults capability targets configuration data stores.
:yang-library:1.0
NSO implements the urn:ietf:params:netconf:capability:yang-library:1.0 capability, which informs the client that the server implements the YANG module library RFC 7895, and informs the client about the current module-set-id.
:yang-library:1.1
NSO implements the urn:ietf:params:netconf:capability:yang-library:1.1 capability, which informs the client that the server implements the YANG library RFC 8525, and informs the client about the current content-id.
NSO supports the following options for implementing an HA setup to cater to the widest possible range of use cases (only one can be used at a time):
HA Raft: Using a modern, consensus-based algorithm, it offers a robust, hands-off solution that works best in the majority of cases.
Rule-based HA: A less sophisticated solution that allows you to influence the primary selection but may require occasional manual operator action.
External HA: NSO only provides data replication; all other functions, such as primary selection and group membership management, are performed by an external application, using the HA framework (HAFW).
In addition to data replication, having a fixed address to connect to the current primary in an HA group greatly simplifies access for operators, users, and other systems alike. Use Tail-f HCC Package or an external load balancer to manage it.
Raft is a consensus algorithm that reliably distributes a set of changes to a group of nodes and robustly handles network and node failure. It can operate in the face of multiple, subsequent failures, while also allowing a previously failed or disconnected node to automatically rejoin the cluster without risk of data conflicts.
Compared to traditional fail-over HA solutions, Raft relies on the consensus of the participating nodes, which addresses the so-called “split-brain” problem, where multiple nodes assume a primary role. This problem is especially characteristic of two-node systems, where it is impossible for a single node on its own to distinguish between losing network connectivity itself versus the other node malfunctioning. For this reason, Raft requires at least three nodes in the cluster.
Raft achieves robustness by requiring at least three nodes in the HA cluster. Three is the recommended cluster size, allowing the cluster to operate in the face of a single node failure. In case you need to tolerate two nodes failing simultaneously, you can add two additional nodes, for a 5-node cluster. However, permanently having more than five nodes in a single cluster is currently not recommended since Raft requires the majority of the currently configured nodes in the cluster to reach consensus. Without the consensus, the cluster cannot function.
You can start a sample HA Raft cluster using the examples.ncs/high-availability/raft-cluster example to test it out. The scripts in the example show various aspects of cluster setup and operation, which are further described in the rest of this section.
Optionally, examples using separate containers for each HA Raft cluster member with NSO system installations are available; they are referenced from the examples.ncs/development-guide/high-availability/hcc example in the NSO example set.
The Raft algorithm works with the concept of (election) terms. In each term, nodes in the cluster vote for a leader. The leader is elected when it receives the majority of the votes. Since each node only votes for a single leader in a given term, there can only be one leader in the cluster for this term.
Once elected, the leader becomes responsible for distributing the changes and ensuring consensus in the cluster for that term. Consensus means that the majority of the participating nodes must confirm a change before it is accepted. This is required for the system to ensure no changes ever get overwritten and provide reliability guarantees. On the other hand, it also means more than half of the nodes must be available for normal operation.
Changes can only be performed on the leader, which accepts the change after the majority of the cluster nodes confirm it. This is the reason a typical Raft cluster has an odd number of nodes: exactly half of the nodes agreeing on a change is not sufficient. It also makes a two-node cluster (or any cluster with an even number of nodes) impractical; the system as a whole is no more available than it is with one fewer node.
If the connection to the leader is broken, such as during a network partition, the nodes start a new term and a new election. Another node can become a leader if it gets the majority of the votes of all nodes initially in the cluster. While gathering votes, the node has the status of a candidate. In case multiple nodes assume candidate status, a split-vote scenario may occur, which is resolved by starting a fresh election until a candidate secures the majority vote.
If there aren't enough reachable nodes to obtain a majority, a candidate can stay in the candidate state indefinitely. Otherwise, when a node votes for a candidate, it becomes a follower and stays a follower for this term, regardless of whether the candidate is elected or not.
Additionally, the NSO node can also be in the stalled state, if HA Raft is enabled but the node has not joined a cluster.
Each node in an HA Raft cluster needs a unique name. Names are usually in the ADDRESS format, where ADDRESS identifies a network host where the NSO process is running, such as a fully qualified domain name (FQDN) or an IPv4 address.
Other nodes in the cluster must be able to resolve and reach the ADDRESS, which creates a dependency on the DNS if you use domain names instead of IP addresses.
Limitations of the underlying platform place a constraint on the format of ADDRESS: it cannot be a simple short name (without a dot), even if the system is able to resolve such a name using the hosts file or a similar mechanism.
You specify the node address in the ncs.conf file as the value for node-address, under the listen container. You can also use the full node name (with the “@” character); however, that is usually unnecessary, as the system prepends ncsd@ as needed.
Another aspect in which ADDRESS plays a role is authentication. The HA system uses mutual TLS to secure communication between cluster nodes. This requires you to configure a trusted Certificate Authority (CA) and a key/certificate pair for each node. When nodes connect, they check that the certificate of the peer validates against the CA and matches the ADDRESS of the peer.
In most cases, this means the ADDRESS must appear in the node certificate's Subject Alternative Name (SAN) extension, as dNSName (see RFC 2459).
Create and use a self-signed CA to secure the NSO HA Raft cluster. A self-signed CA is the only secure option. The CA should only be used to sign the certificates of the member nodes in one NSO HA Raft cluster. It is critical for security that the CA is not used to sign any other certificates. Any certificate signed by the CA can be used to gain complete control of the NSO HA Raft cluster.
See the examples.ncs/high-availability/raft-cluster example for one way to set up a self-signed CA and provision individual node certificates. The example uses a shell script gen_tls_certs.sh that invokes the openssl command. Consult the section Recipe for a Self-signed CA for using it independently of the example.
Examples using separate containers for each HA Raft cluster member with NSO system installations that use a variant of the gen_tls_certs.sh script are available and referenced in the examples.ncs/development-guide/high-availability/hcc example in the NSO example set.
The following is a HA Raft configuration snippet for ncs.conf that includes certificate settings and a sample ADDRESS:
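A sketch of such a snippet is shown below. The element names follow the settings described in this section, but should be verified against the ncs.conf(5) man page for your NSO version; the file paths, cluster name, and ADDRESS are examples only:

```xml
<ha-raft>
  <enabled>true</enabled>
  <cluster-name>lower-west</cluster-name>
  <listen>
    <node-address>node1.example.org</node-address>
  </listen>
  <ssl>
    <ca-cert-file>/etc/ncs/ssl/ca.crt</ca-cert-file>
    <cert-file>/etc/ncs/ssl/host.crt</cert-file>
    <key-file>/etc/ncs/ssl/host.key</key-file>
  </ssl>
</ha-raft>
```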
HA Raft uses a standard TLS protocol with public key cryptography for securing cross-node communication, where each node requires a separate public/private key pair and a corresponding certificate. Key and certificate management is a broad topic and is critical to the overall security of the system.
The following text provides a recipe for generating certificates using a self-signed CA. It uses strong cryptography and algorithms that are deemed suitable for production use. However, it makes a few assumptions that may not be appropriate for all environments. Always consider how they affect your own deployment and consult a security professional if in doubt.
The recipe makes the following assumptions:
You use a secured workstation or server to run these commands and handle the generated keys with care. In particular, you must copy the generated keys to NSO nodes in a secure fashion, such as using scp.
The CA is used solely for a single NSO HA Raft cluster, with certificates valid for 10 years, and provides no CRL. If a single key or host is compromised, a new CA and all key/certificate pairs must be recreated and reprovisioned in the cluster.
Keys and signatures based on ecdsa-with-sha384/P-384 are sufficiently secure for the vast majority of environments. However, if your organization has specific requirements, be sure to follow those.
To use this recipe, first prepare a working environment on a secure host by creating a new directory and copying the gen_tls_certs.sh script from $NCS_DIR/examples.ncs/high-availability/raft-cluster into it. Additionally, ensure that the openssl command, version 1.1 or later, is available and that the system time is set correctly. Supposing that you have a cluster named lower-west, you might run:
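The preparation steps might look like the following sketch (the directory name is arbitrary; $NCS_DIR must point at your NSO installation):

```
mkdir raft-ca-lower-west && cd raft-ca-lower-west
cp $NCS_DIR/examples.ncs/high-availability/raft-cluster/gen_tls_certs.sh .
```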
The recipe relies on the gen_tls_certs.sh script to generate individual certificates. For clusters using FQDN node addresses, invoke the script with full hostnames of all the participating nodes. For example:
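For a three-node cluster using FQDNs, the invocation might look like this (hostnames are examples):

```
./gen_tls_certs.sh node1.example.org node2.example.org node3.example.org
```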
If your HA cluster is using IP addresses instead, add the -a option to the command and list the IPs:
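For instance, using addresses from the documentation range:

```
./gen_tls_certs.sh -a 192.0.2.1 192.0.2.2 192.0.2.3
```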
The script outputs the location of the relevant files and you should securely transfer each set of files to the corresponding NSO node. For each node, transfer only the three files: ca.crt, host.crt, and host.key.
Once certificates are deployed, you can check their validity with the openssl verify command:
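For example, for a node named node1.example.org (the certificate locations below are assumptions based on the script's output layout):

```
openssl verify -CAfile ssl/certs/ca.crt ssl/certs/node1.example.org.crt
```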
This command takes into account the current time and can be used during troubleshooting. It can also display information contained in the certificate if you use the openssl x509 -text -in ssl/certs/node1.example.org.crt -noout variant. The latter form allows you to inspect the incorporated hostname/IP address and certificate validity dates.
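To see the individual steps such a script performs, the whole flow can be sketched with raw openssl commands: create a throwaway CA, issue a node certificate with the ADDRESS as a SAN dNSName, and verify it against the CA. File names, subjects, and the hostname below are illustrative, not the actual outputs of gen_tls_certs.sh:

```shell
# Work in a throwaway directory
cd "$(mktemp -d)"

# 1. Self-signed CA: ecdsa P-384 key and a 10-year CA certificate
openssl ecparam -name secp384r1 -genkey -noout -out ca.key
openssl req -new -x509 -key ca.key -sha384 -days 3650 \
  -subj "/CN=raft-ca-lower-west" -out ca.crt

# 2. Node key and certificate signing request
openssl ecparam -name secp384r1 -genkey -noout -out host.key
openssl req -new -key host.key -sha384 \
  -subj "/CN=node1.example.org" -out host.csr

# 3. Sign the CSR, embedding the node ADDRESS as a SAN dNSName
printf 'subjectAltName=DNS:node1.example.org\n' > san.cnf
openssl x509 -req -in host.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -sha384 -days 3650 -extfile san.cnf -out host.crt

# 4. Validate the node certificate against the CA
openssl verify -CAfile ca.crt host.crt   # prints: host.crt: OK
```

The ca.crt, host.crt, and host.key files correspond to the three files transferred to each node in the scripted recipe.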
NSO HA Raft can be controlled through several actions. All actions are found under /ha-raft/. In the best-case scenario, you will only need the create-cluster action to initialize the cluster and the read-only and create-cluster actions when upgrading the NSO version. The available actions are listed below:
create-cluster
Initialize an HA Raft cluster. This action should only be invoked once, to form a new cluster when no HA Raft log exists.
The members of the HA Raft cluster consist of the NSO node on which the /ha-raft/create-cluster action is invoked, which becomes the leader of the cluster, and the members specified by the member parameter.
adjust-membership
Add or remove an HA node from the HA Raft cluster.
disconnect
Disconnect an HA node from all remaining nodes. In the event of revoking a TLS certificate, invoke this action to disconnect the already established connections to the node with the revoked certificate. A disconnected node with a valid TLS certificate may re-establish the connection.
reset
Reset the (disabled) local node to make the leader perform a full sync to this local node if an HA Raft cluster exists. If reset is performed on the leader node, the node will step down from leadership and it will be synced by the next leader node.
An HA Raft member changes its role to disabled if its ncs.conf has changes incompatible with the ncs.conf on the leader; a member also changes its role to disabled if there are non-recoverable failures upon opening a snapshot.
See the /ha-raft/status/disable-reason leaf for the reason.
Set force to true to perform the reset even when /ha-raft/status/role is not set to disabled.
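A hypothetical invocation from the NSO CLI might look like this (the exact parameter syntax may vary between NSO versions):

```
admin@ncs# ha-raft reset force true
```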
handover
Handover leadership to another member of the HA Raft cluster or step down from leadership and start a new election.
read-only
Toggle read-only mode. When the mode is set to true, no configuration changes can occur.
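For example, to enable read-only mode from the NSO CLI (setting mode to false again is assumed to re-enable writes):

```
admin@ncs# ha-raft read-only mode true
```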
In addition to the network connectivity required for the normal operation of a standalone NSO node, nodes in the HA Raft cluster must be able to initiate TCP connections from a random ephemeral client port to the following ports on other nodes:
Port 4369
Ports in the range 4370-4399 (configurable)
You can change the ports in the second listed range from the default of 4370-4399. Use the min-port and max-port settings of the ha-raft/listen container.
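For example, to narrow the range to ten ports, the ncs.conf fragment might be sketched as follows (element nesting per the ha-raft/listen container described above; verify against the ncs.conf(5) man page):

```xml
<ha-raft>
  <listen>
    <min-port>4370</min-port>
    <max-port>4379</max-port>
  </listen>
</ha-raft>
```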
The Raft implementation does not impose any other hard limits on the network but you should keep in mind that consensus requires communication with other nodes in the cluster. A high round-trip latency between cluster nodes is likely to negatively impact the transaction throughput of the system.
The HA Raft cluster also requires compatible ncs.conf files among the member nodes. In particular, /ncs-config/cdb/operational/enabled and /ncs-config/rollback/enabled values affect replication behavior and must match. Likewise, each member must have the same set of encryption keys and the keys cannot be changed while the cluster is in operation.
To update the ncs.conf configuration, you must manually update the copy on each member node, making sure the new versions contain compatible values. Then perform the reload on the leader and the follower members will automatically reload their copies of the configuration file as well.
If a node is a cluster member but has been configured with a new, incompatible ncs.conf file, it gets automatically disabled. See the /ha-raft/status/disable-reason leaf for the reason. You can re-enable the node with the ha-raft reset command once you have reconciled the incompatibilities.
Raft has a notion of cluster configuration, in particular, how many and which members the cluster has. You define member nodes when you first initialize the cluster with the create-cluster command, or change them later with the adjust-membership command. Knowing the member nodes allows the cluster to determine how many nodes are needed for consensus, among other things.
However, not all cluster members may be reachable or alive all the time. Raft implementation in NSO uses TCP connections between nodes to transport data. The TCP connections are authenticated and encrypted using TLS by default (see Security Considerations). A working connection between nodes is essential for the cluster to function but a number of factors, such as firewall rules or expired/invalid certificates, can prevent the connection from establishing.
Therefore, NSO distinguishes between configured member nodes and nodes to which it has established a working transport connection. The latter are called connected nodes. In a normal, fully working, and properly configured cluster, the connected nodes will be the same as member nodes (except for the current node).
To help troubleshoot connectivity issues without affecting cluster operation, the connected nodes list also shows nodes that are not actively participating in the cluster but have established a transport connection to nodes in the cluster. The optional discovery mechanism, described next, relies on this functionality.
NSO includes a mechanism that simplifies the initial cluster setup by enumerating known nodes. This mechanism uses a set of seed nodes to discover all connectable nodes, which can then be used with the create-cluster command to form a Raft cluster.
When you specify one or more nodes with the /ha-raft/seed-nodes/seed-node setting in the ncs.conf file, the current node tries to establish a connection to these seed nodes, in order to discover the list of all nodes potentially participating in the cluster. For the discovery to work properly, all other nodes must also use seed nodes and the set of seed nodes must overlap. The recommended practice is to use the same set of seed nodes on every participating node.
Along with providing an autocompletion list for the create-cluster command, this feature streamlines the discovery of node names when using NSO in containerized or other dynamic environments, where node addresses are not known in advance.
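In ncs.conf, the seed nodes could be listed along the lines of the following sketch (element nesting inferred from the /ha-raft/seed-nodes/seed-node path above; addresses are examples):

```xml
<ha-raft>
  <!-- other ha-raft settings omitted -->
  <seed-nodes>
    <seed-node>node1.example.org</seed-node>
    <seed-node>node2.example.org</seed-node>
  </seed-nodes>
</ha-raft>
```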
Creating a new HA cluster consists of two parts: configuring the individual nodes and running the create-cluster action.
First, you must update the ncs.conf configuration file for each node. All HA Raft configuration comes under the /ncs-config/ha-raft element.
As part of the configuration, you must:
Enable HA Raft functionality through the enabled leaf.
Set node-address and the corresponding TLS parameters (see Node Names and Certificates).
Identify the cluster this node belongs to with cluster-name.
Reload or restart the NSO process (if already running).
Repeat the preceding steps for every participating node.
Enable read-only mode on the designated leader to avoid potential sync issues during cluster formation.
Invoke the create-cluster action.
The cluster name is simply a character string that uniquely identifies this HA cluster. The nodes in the cluster must use the same cluster name or they will refuse to establish a connection. This setting helps prevent mistakenly adding a node to the wrong cluster when multiple clusters are in operation, such as in an LSA setup.
With all the nodes configured and running, connect to the node that you would like to become the initial leader and invoke the ha-raft create-cluster action. The action takes a list of nodes identified by their names. If you have configured seed-nodes, you will get auto-completion support, otherwise, you have to type in the names of the nodes yourself.
This action makes the current node a cluster leader and joins the other specified nodes to the newly created cluster. For example:
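A hypothetical CLI invocation on the initial leader might look as follows (node names are examples, and the exact parameter syntax may vary between NSO versions):

```
admin@ncs# ha-raft create-cluster member [ ncsd@node2.example.org ncsd@node3.example.org ]
```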
You can use the show ha-raft command on any node to inspect the status of the HA Raft cluster. The output includes the current cluster leader and members according to this node, as well as information about the local node, such as node name (local-node) and role. The status/connected-node list contains the names of the nodes with which this node has active network connections.
In case you get an error, such as Error: NSO can't reach member node 'ncsd@ADDRESS', verify all of the following:
The node at the ADDRESS is reachable. You can use the ping ADDRESS command, for example.
The problematic node has the correct ncs.conf configuration, especially cluster-name and node-address. The latter should match the ADDRESS and should contain at least one dot.
Nodes use compatible configuration. For example, make sure the ncs.crypto_keys file (if used) or the encrypted-strings configuration in ncs.conf is identical across nodes.
HA Raft is enabled, using the show ha-raft command on the unreachable node.
The firewall configuration on the OS and on the network level permits traffic on the required ports (see Network and ncs.conf Prerequisites).
The node uses a certificate that the CA can validate. For example, copy the certificates to the same location and run openssl verify -CAfile CA_CERT NODE_CERT to verify this.
Verify the epmd -names command on each node shows the ncsd process. If not, stop NSO, run epmd -kill, and then start NSO again.
In addition to the above, you may also examine the logs/raft.log file for detailed information on the error message and overall operation of the Raft algorithm. The amount of information in the file is controlled by the /ncs-config/logs/raft-log configuration in the ncs.conf.
After the initial cluster setup, you can add new nodes or remove existing nodes from the cluster with the help of the ha-raft adjust-membership action. For example:
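The following are hypothetical invocations; the exact parameter names may differ between NSO versions:

```
admin@ncs# ha-raft adjust-membership add-node node ncsd@node4.example.org
admin@ncs# ha-raft adjust-membership remove-node node ncsd@node3.example.org
```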
When removing a node using the ha-raft adjust-membership remove-node command, the removed node is not made aware of its removal and continues signaling the other nodes. This is a limitation of the algorithm, which must also handle situations where the removed node is down or unreachable. To prevent further communication with the cluster, it is important that you ensure the removed node is shut down. You should shut down the to-be-removed node prior to removing it from the cluster, or immediately after. The former is recommended, but the latter is required if there are only two nodes left in the cluster, since shutting down prior to removal would prevent the cluster from reaching consensus.
Additionally, you can force an existing follower node to perform a full re-sync from the leader by invoking the ha-raft reset action with the force option. Using this action on the leader will make the node give up the leader role and perform a sync with the newly elected leader.
As leader selection during the Raft election is not deterministic, NSO provides the ha-raft handover action, which allows you to either trigger a new election if called with no arguments or transfer leadership to a specific node. The latter is especially useful when, for example, one of the nodes resides in a different location and more traffic between locations may incur extra costs or additional latency, so you prefer this node is not the leader under normal conditions.
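For instance, transferring leadership to a specific node might look like this (the parameter name and syntax are assumptions and may differ between NSO versions):

```
admin@ncs# ha-raft handover new-leader ncsd@node2.example.org
```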
If you have an existing HA cluster using the rule-based built-in HA, you can migrate it to use HA Raft instead. This procedure is performed in four distinct high-level steps:
Ensuring the existing cluster meets migration prerequisites.
Preparing the required HA Raft configuration files.
Switching to HA Raft.
Adding additional nodes to the cluster.
The procedure does not perform an NSO version upgrade, so the cluster remains on the same version. It also does not perform any schema upgrades, it only changes the type of the HA cluster.
The migration is performed in place; that is, the existing nodes are disconnected from the old cluster and connected to the new one. This results in a temporary disruption of service, so the migration should be performed during a service window.
First, you should ensure the cluster meets migration prerequisites. The cluster must use:
NSO 6.1.2 or later
tailf-hcc 6.0 or later (if used)
In case these prerequisites are not met, follow the standard upgrade procedures to upgrade the existing cluster to supported versions first.
Additionally, ensure that all used packages are compatible with HA Raft, as NSO uses some new or updated notifications about HA state changes. Also, verify the network supports the new cluster communications (see Network and ncs.conf Prerequisites).
Secondly, prepare all the ncs.conf and related files for each node, such as certificates and keys. Create a copy of all the ncs.conf files and disable or remove the existing <ha> section in the copies. Then add the required configuration items to the copies, as described in Initial Cluster Setup and Node Names and Certificates. Do not update the ncs.conf files used by the nodes yet.
It is recommended but not necessary that you set the seed nodes in ncs.conf to the designated primary and fail-over primary. Do this for all ncs.conf files for all nodes.
With the new configurations at hand and verified, start the switch to HA Raft. The cluster nodes should be in their nominal, designated roles. If not, perform a failover first.
On the designated (actual) primary, called node1, enable read-only mode.
Then take a backup of all nodes.
Once the backup successfully completes, stop the designated fail-over primary (actual secondary) NSO process, update its ncs.conf and the related (certificate) files for HA Raft, and then start it again. Connect to this node's CLI, here called node2, and verify HA Raft is enabled with the show ha-raft command.
Now repeat the same for the designated primary (node1). If you have set the seed nodes, you should see the fail-over primary show under connected-node.
On the old designated primary (node1) invoke the ha-raft create-cluster action and create a two-node Raft cluster with the old fail-over primary (node2, actual secondary). The action takes a list of nodes identified by their names. If you have configured seed-nodes, you will get auto-completion support, otherwise you have to type in the name of the node yourself.
In case of errors running the action, refer to the troubleshooting steps in Initial Cluster Setup for possible causes.
Raft requires at least three nodes to operate effectively (as described earlier in this section) and currently, there are only two in the cluster. If the initial cluster had only two nodes, you must provision an additional node and set it up for HA Raft. If the cluster initially had three nodes, there is the remaining secondary node, node3, which you must stop, update its configuration as you did with the other two nodes, and start it up again.
Finally, on the old designated primary and current HA Raft leader, use the ha-raft adjust-membership add-node action to add this third node to the cluster.
Communication between the NSO nodes in an HA Raft cluster takes place over Distributed Erlang, an RPC protocol transported over TLS (unless explicitly disabled by setting /ncs-config/ha-raft/ssl/enabled to 'false').
TLS (Transport Layer Security) provides authentication and privacy by only allowing NSO nodes to connect using certificates and keys issued from the same Certificate Authority (CA). Distributed Erlang is transported over TLS 1.2. Access for a host can be revoked by the CA by means of a CRL (Certificate Revocation List). To enforce certificate revocation within an HA Raft cluster, invoke the /ha-raft/disconnect action to terminate the pre-existing connections to the node with the revoked certificate. A connection to the node can be re-established once the node's certificate is valid again.
Please ensure the CA key is kept in a safe place since it can be used to generate new certificates and key pairs for peers.
Distributed Erlang allows multiple NSO nodes to run on the same host; their node addresses are resolved by the epmd (Erlang Port Mapper Daemon) service. Once resolved, the NSO nodes communicate directly.
The ports that epmd and the NSO nodes listen on can be found in Network and ncs.conf Prerequisites. epmd binds to the wildcard IPv4 address 0.0.0.0 and the IPv6 address ::.
If epmd is exposed to a DoS attack, the HA Raft members may be unable to resolve addresses and communication could be disrupted. Please ensure that traffic on these ports is only accepted between the HA Raft members, by using firewall rules or other means.
Two NSO nodes can only establish a connection if a shared secret "cookie" matches. The cookie is optionally configured via /ncs-config/ha-raft/cluster-name. Please note that the cookie is not a security feature, but a way to isolate HA Raft clusters and avoid accidental misuse.
NSO contains a mechanism for distributing packages to nodes in a Raft cluster, greatly simplifying package management in a highly-available setup.
You perform all package management operations on the current leader node. To identify the leader node, you can use the show ha-raft status leader command on a running cluster.
Invoking the packages reload command makes the leader node update its currently loaded packages, identical to a non-HA, single-node setup. At the same time, the leader also distributes these packages to the followers to load. However, the load paths on the follower nodes, such as /var/opt/ncs/packages/, are not updated. This means that if a leader election took place, a different leader was elected, and you performed another packages reload, the system would try to load the versions of the packages present on this other leader, which may be out of date or not even present.
The recommended approach is, therefore, to use the packages ha sync and-reload command instead, unless a load path is shared between NSO nodes, such as the same network drive. This command distributes and updates packages in the load paths on the follower nodes, as well as loading them.
For the full procedure, first, ensure all cluster nodes are up and operational, then follow these steps on the leader node:
Perform a full backup of the NSO instance, such as running ncs-backup.
Add, replace, or remove packages on the filesystem. The exact location depends on the type of NSO deployment, for example /var/opt/ncs/packages/.
Invoke the packages ha sync and-reload or packages ha sync and-add command to start the upgrade process.
Note that while the upgrade is in progress, writes to the CDB are not allowed and will be rejected.
For a packages ha sync and-reload example see the raft-upgrade-l2 NSO system installation-based example referenced by the examples.ncs/development-guide/high-availability/hcc example in the NSO example set.
For more details, troubleshooting, and general upgrade recommendations, see NSO Packages and Upgrade.
Currently, the only supported and safe way of upgrading the Raft HA cluster NSO version requires that the cluster be taken offline since the nodes must, at all times, run the same software version.
Do not attempt an upgrade unless all cluster member nodes are up and actively participating in the cluster. Verify the current cluster state with the show ha-raft status command. All member nodes must also be present in the connected-node list.
The procedure differentiates between the current leader node and the follower nodes. To identify the leader, you can use the show ha-raft status leader command on a running cluster.
Procedure 2. Cluster version upgrade
On the leader, first enable read-only mode using the ha-raft read-only mode true command and then verify that all cluster nodes are in sync with the show ha-raft status log replications state command.
Before embarking on the upgrade procedure, it is imperative to back up each node. This ensures that you have a safety net in case of any unforeseen issues. For example, you can use the $NCS_DIR/bin/ncs-backup command.
Delete the $NCS_RUN_DIR/cdb/compact.lock file and compact the CDB write log on all nodes using, for example, the $NCS_DIR/bin/ncs --cdb-compact $NCS_RUN_DIR/cdb command.
On all nodes, delete the $NCS_RUN_DIR/state/raft/ directory with a command such as rm -rf $NCS_RUN_DIR/state/raft/.
Stop NSO on all the follower nodes, for example, invoking the $NCS_DIR/bin/ncs --stop or systemctl stop ncs command on each node.
Stop NSO on the leader node only after you have stopped all the follower nodes in the previous step. Alternatively, NSO can be stopped on all nodes before deleting the HA Raft state and compacting the CDB write log, in which case the compact.lock file does not need to be deleted.
Upgrade the NSO packages on the leader to support the new NSO version.
Install the new NSO version on all nodes.
Start NSO on all nodes.
Re-initialize the HA cluster using the ha-raft create-cluster action on the node to become the leader.
Finally, verify the cluster's state through the show ha-raft status command. Ensure that all data has been correctly synchronized across all cluster nodes and that the leader is no longer read-only. The latter happens automatically after re-initializing the HA cluster.
For a standard System Install, the single-node procedure is described in Single Instance Upgrade, but in general depends on the NSO deployment type. For example, it will be different for containerized environments. For specifics, please refer to the documentation for the deployment type.
For an example see the raft-upgrade-l2 NSO system installation-based example referenced by the examples.ncs/development-guide/high-availability/hcc example in the NSO example set.
If the upgrade fails before or during the upgrade of the original leader, start up the original followers to restore service and then restore the original leader, using backup as necessary.
However, if the upgrade fails after the original leader was successfully upgraded, you should still be able to complete the cluster upgrade. If you are unable to upgrade a follower node, you may provision a (fresh) replacement and the data and packages in use will be copied from the leader.
NSO can manage the HA groups based on a set of predefined rules. This functionality was added in NSO 5.4 and is sometimes referred to simply as the built-in HA. However, since NSO 6.1, HA Raft (which is also built-in) is available as well, and is likely a better choice in most situations.
Rule-based HA allows administrators to:
Configure HA group members with IP addresses and default roles
Configure failover behavior
Configure start-up behavior
Configure HA group members with IP addresses and default roles
Assign roles, join HA group, enable/disable rule-based HA through actions
View the state of the current HA setup
NSO rule-based HA is defined in tailf-ncs-high-availability.yang, with data residing under the /high-availability/ container.
NSO rule-based HA does not manage any virtual IP addresses, or advertise any BGP routes or similar. This must be handled by an external package. Tail-f HCC 5.x and greater has this functionality compatible with NSO rule-based HA. You can read more about the HCC package in the following chapter.
To use NSO rule-based HA, HA must first be enabled in ncs.conf - See Mode of Operation.
All HA group members are defined under /high-availability/ha-node. Each configured node must have a unique IP address configured and a unique HA ID. Additionally, nominal roles and fail-over settings may be configured on a per-node basis.
The HA node ID is a unique identifier used to identify NSO instances in an HA group. The HA ID of the local node, relevant among other things when an action is invoked, is determined by matching the configured HA node IP addresses against the IP addresses assigned to the host machine of the NSO instance. As the HA ID is crucial to NSO HA, NSO rule-based HA will not function if the local node cannot be identified.
To join a HA group, a shared secret must be configured on the active primary and any prospective secondary. This is used for a CHAP-2-like authentication and is specified under /high-availability/token/.
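For example, the shared token could be configured from the NSO CLI on each node as follows; the token value is of course a placeholder:

```
admin@ncs(config)# high-availability token MySecretToken
admin@ncs(config)# commit
```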
The token configured on the secondary node is overwritten with the encrypted token of type aes-256-cfb-128-encrypted-string from the primary node when the secondary node connects to the primary. If there is a mismatch between the encrypted-string configuration on the nodes, NSO will not be able to decrypt the HA token to match the token presented. As a result, the primary node denies the secondary node access the next time the HA connection needs to be reestablished, with a "Token mismatch, secondary is not allowed" error.
See the upgrade-l2 example, referenced from examples.ncs/development-guide/high-availability/hcc, for an example setup and the Deployment Example for a description of the example.
Also, note that the ncs.crypto_keys file is highly sensitive. The file contains the encryption keys for all CDB data that is encrypted on disk. Besides the HA token, this often includes passwords for various entities, such as login credentials to managed devices.
NSO can assume HA roles primary, secondary and none. Roles can be assigned directly through actions, or at startup or failover. See HA Framework Requirements for the definition of these roles.
NSO rule-based HA distinguishes between the concepts of nominal role and assigned role. Nominal-role is configuration data that applies when an NSO instance starts up and at failover. The assigned role is the role that the NSO instance has been ordered to assume either by an action or as a result of startup or failover.
Failover may occur when a secondary node loses the connection to the primary node. A secondary may then take over the primary role. Failover behavior is configurable and controlled by the parameters:
/high-availability/ha-node{id}/failover-primary
/high-availability/settings/enable-failover
For automatic failover to function, /high-availability/settings/enable-failover must be set to true. It is then possible to enable at most one node with a nominal role of secondary as failover-primary, by setting the parameter /high-availability/ha-node{id}/failover-primary. The failover works in both directions: if a nominal primary is currently connected to the failover-primary as a secondary and loses the connection, it will attempt to take over as primary.
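A minimal sketch of this configuration, assuming two nodes with the hypothetical IDs paris (nominal primary) and london (nominal secondary acting as failover-primary):

```
admin@ncs(config)# high-availability settings enable-failover true
admin@ncs(config)# high-availability ha-node london failover-primary true
admin@ncs(config)# commit
```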
Before failover happens, a failover-primary-enabled secondary node may attempt to reconnect to the previous primary before assuming the primary role. This behavior is configured by the parameters denoting how many reconnect attempts will be made, and with which interval, respectively.
/high-availability/settings/reconnect-attempts
/high-availability/settings/reconnect-interval
HA members that are assigned as secondaries, but are neither failover-primaries nor configured with nominal-role primary, may attempt to rejoin the HA group after losing the connection to the primary.
This is controlled by /high-availability/settings/reconnect-secondaries. If this is true, secondary nodes will query the nodes configured under /high-availability/ha-node for an NSO instance that currently has the primary role. Any configured nominal roles will not be considered. If no primary node is found, subsequent attempts to rejoin the HA setup will be issued with an interval defined by /high-availability/settings/reconnect-interval.
If a network partition (net-split) provokes a failover, it is possible to end up with two primaries, both accepting writes. The primaries are then no longer synchronized and the cluster ends up in a split-brain state. Once one of the primaries joins the other as a secondary, the HA cluster is consistent again, but any out-of-sync changes on that node are overwritten.
To prevent split-brain from occurring, NSO 5.7 and later includes a rule-based consensus algorithm. The algorithm is enabled by default; it can be disabled or changed through the parameters:
/high-availability/settings/consensus/enabled [true]
/high-availability/settings/consensus/algorithm [ncs:rule-based]
The rule-based algorithm can be used in either of the two HA constellations:
Two nodes: one nominal primary and one nominal secondary configured as failover-primary.
Three nodes: one nominal primary, one nominal secondary configured as failover-primary, and one perpetual secondary.
On failover:
Failover-primary: become primary but enable read-only mode. Once the secondary joins, disable read-only.
Nominal primary: on loss of all secondaries, change role to none. If one secondary node is connected, stay primary.
To restore the HA cluster one may need to manually invoke the /high-availability/be-secondary-to action.
Read-write mode can be manually re-enabled by invoking the /high-availability/read-only action with the mode parameter set to false.
When any node loses connection, this can also be observed in high-availability alarms as either a ha-primary-down or a ha-secondary-down alarm.
Startup behavior is defined by a combination of the parameters /high-availability/settings/start-up/assume-nominal-role and /high-availability/settings/start-up/join-ha as well as the node's nominal role:
| assume-nominal-role | join-ha | nominal-role | behaviour |
| --- | --- | --- | --- |
| true | false | primary | Assume primary role. |
| true | false | secondary | Attempt to connect as secondary to the node (if any) which has nominal-role primary. If this fails, make no retry attempts and assume none role. |
| true | false | none | Assume none role. |
NSO rule-based HA can be controlled through several actions. All actions are found under /high-availability/. The available actions are listed below:
| Action | Description |
| --- | --- |
| be-primary | Order the local node to assume HA role primary. |
| be-none | Order the local node to assume HA role none. |
| be-secondary-to | Order the local node to connect as secondary to the provided HA node. This is an asynchronous operation; the result can be found under /high-availability/status/be-secondary-result. |
| local-node-id | Identify which of the nodes in /high-availability/ha-node (if any) corresponds to the local NSO instance. |
| enable | Enable NSO rule-based HA and optionally assume an HA role according to the /high-availability/settings/start-up/ parameters. |
| disable | Disable NSO rule-based HA and assume HA role none. |
The current state of NSO rule-based HA can be monitored by observing /high-availability/status/. Information can be found about the current active HA mode and the current assigned role. For nodes with active mode primary, a list of connected nodes and their source IP addresses is shown. For nodes with assigned role secondary the latest result of the be-secondary operation is listed. All NSO rule-based HA status information is non-replicated operational data - the result here will differ between nodes connected in an HA setup.
The Tail-f HCC package extends the built-in HA functionality by providing virtual IP addresses (VIPs) that can be used to connect to the NSO HA group primary node. HCC ensures that the VIP addresses are always bound by the HA group primary and never bound by a secondary. Each time a node transitions between primary and secondary states HCC reacts by binding (primary) or unbinding (secondary) the VIP addresses.
HCC manages IP addresses at the link layer (OSI layer 2) for Ethernet interfaces, and optionally, also at the network layer (OSI layer 3) using BGP router advertisements. The layer-2 and layer-3 functions are mostly independent and this document describes the details of each one separately. However, the layer-3 function builds on top of the layer-2 function. The layer-2 function is always necessary, otherwise, the Linux kernel on the primary node would not recognize the VIP address or accept traffic directed to it.
Both the HCC layer-2 VIP and layer-3 BGP functionality depend on the iproute2 utilities and awk. An optional dependency is arping (either from iputils or Thomas Habets' arping implementation), which allows HCC to announce the VIP-to-MAC mapping to all nodes in the network by sending gratuitous ARP requests.
The HCC layer-3 BGP functionality depends on the GoBGP daemon version 2.x being installed on each NSO host that is configured to run HCC in BGP mode.
GoBGP is open-source software originally developed by NTT Communications and released under the Apache License 2.0. GoBGP can be obtained directly from https://osrg.github.io/gobgp/ and is also packaged for mainstream Linux distributions.
The HCC layer-3 DNS Update functionality depends on the command line utility nsupdate.
Tools Dependencies are listed below:
| Tool | Package | Required | Description |
| --- | --- | --- | --- |
| ip | iproute2 | yes | Adds and deletes the virtual IP from the network interface. |
| awk | mawk or gawk | yes | Installed with most Linux distributions. |
| sed | sed | yes | |
Same as with built-in HA functionality, all NSO instances must be configured to run in HA mode. See the following instructions on how to enable HA on NSO instances.
GoBGP uses TCP port 179 for its communications and binds to it at startup. As port 179 is considered a privileged port it is normally required to run gobgpd as root.
When NSO runs as a non-root user, the gobgpd command is executed as that same user, which prevents gobgpd from binding to port 179.
There are multiple ways of handling this; two are listed here:
Set capability CAP_NET_BIND_SERVICE on the gobgpd file. May not be supported by all Linux distributions.
Set the owner to root and the setuid bit of the gobgpd file. Works on all Linux distributions.
The vipctl script, included in the HCC package, uses sudo to run the ip and arping commands when NSO is not running as root. If sudo is used, you must ensure it does not require password input. For example, if NSO runs as admin user, the sudoers file can be edited similarly to the following:
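As a sketch, assuming NSO runs as the admin user and the utilities live in /usr/sbin (paths vary between distributions), the sudoers entry could look like this:

```
# /etc/sudoers.d/nso: let the admin user run ip and arping without a password
admin ALL = (root) NOPASSWD: /usr/sbin/ip *, /usr/sbin/arping *
```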
Tail-f HCC 5.x or later does not participate in decisions on which NSO node is primary or secondary. These decisions are taken by NSO's built-in HA and then pushed as notifications to HCC. The NSO built-in HA functionality is available starting with NSO version 5.4; older NSO versions are not compatible with HCC 5.x or later.
HCC 5.x or later operates a GoBGP daemon as a subprocess completely managed by NSO. The old HCC function pack interacted with an external Quagga BGP daemon using a NED interface.
HCC 5.x or later automatically associates VIP addresses with Linux network interfaces using the ip utility from the iproute2 package. VIP addresses are also treated as /32 without defining a new subnet. The old HCC function pack used explicit configuration to associate VIPs with existing addresses on each NSO host and define IP subnets for VIP addresses.
Since version 5.0, HCC relies on the NSO built-in HA for cluster management and only performs address or route management in reaction to cluster changes. Therefore, no special measures are necessary if using HCC when performing an NSO version upgrade or a package upgrade. Instead, you should follow the standard best practice HA upgrade procedure from NSO HA Version Upgrade.
A reference to upgrade examples can be found in the NSO example set under examples.ncs/development-guide/high-availability/hcc/README.
The purpose of the HCC layer-2 functionality is to ensure that the configured VIP addresses are bound in the Linux kernel of the NSO primary node only. This ensures that the primary node (and only the primary node) will accept traffic directed toward the VIP addresses.
HCC also notifies the local layer-2 network when VIP addresses are bound by sending Gratuitous ARP (GARP) packets. Upon receiving the Gratuitous ARP, all the nodes in the network update their ARP tables with the new mapping so they can continue to send traffic to the non-failed, now primary node.
HCC binds the VIP addresses as additional (alias) addresses on existing Linux network interfaces (e.g. eth0). The network interface for each VIP is chosen automatically by performing a kernel routing lookup on the VIP address. That is, the VIP will automatically be associated with the same network interface that the Linux kernel chooses to send traffic to the VIP.
This means that you can map each VIP onto a particular interface by defining a route for a subnet that includes the VIP. If no such specific route exists the VIP will automatically be mapped onto the interface of the default gateway.
The layer-2 functionality is configured by providing a list of IPv4 and/or IPv6 VIP addresses and enabling HCC. The VIP configuration parameters are found under /hcc:hcc.
Global Layer-2 Configuration:
| Parameter | Type | Description |
| --- | --- | --- |
| enabled | boolean | If set to 'true', the primary node in an HA group automatically binds the set of Virtual IPv[46] addresses. |
| vip-address | list of inet:ip-address | The list of virtual IPv[46] addresses to bind on the primary node. The addresses are automatically unbound when a node becomes secondary. The addresses can therefore be used externally to reliably connect to the HA group primary node. |
The purpose of the HCC layer-3 BGP functionality is to operate a BGP daemon on each NSO node and to ensure that routes for the VIP addresses are advertised by the BGP daemon on the primary node only.
The layer-3 functionality is an optional add-on to the layer-2 functionality. When enabled, the set of BGP neighbors must be configured separately for each NSO node. Each NSO node operates an embedded BGP daemon and maintains connections to peers but only the primary node announces the VIP addresses.
The layer-3 functionality relies on the layer-2 functionality to assign the virtual IP addresses to one of the host's interfaces. One notable difference in assigning virtual IP addresses when operating in Layer-3 mode is that the virtual IP addresses are assigned to the loopback interface lo rather than to a specific physical interface.
HCC operates a GoBGP subprocess as an embedded BGP daemon. The BGP daemon is started, configured, and monitored by HCC. The HCC YANG model includes basic BGP configuration data and state data.
Operational data in the YANG model includes the state of the BGP daemon subprocess and the state of each BGP neighbor connection. The BGP daemon writes log messages directly to NSO where the HCC module extracts updated operational data and then repeats the BGP daemon log messages into the HCC log verbatim. You can find these log messages in the developer log (devel.log).
The layer-3 BGP functionality is configured as a list of BGP configurations with one list entry per node. Configurations are separate because each NSO node usually has different BGP neighbors with their own IP addresses, authentication parameters, etc.
The BGP configuration parameters are found under /hcc:hcc/bgp/node{id}.
Per-Node Layer-3 Configuration:
| Parameter | Type | Description |
| --- | --- | --- |
| node-id | string | Unique node ID. A reference to /ncs:high-availability/ha-node/id. |
| enabled | boolean | If set to true, this node uses BGP to announce VIP addresses when in the HA primary state. |
| as | inet:as-number | The BGP Autonomous System Number for the local BGP daemon. |
| router-id | inet:ip-address | The router ID for the local BGP daemon. |
Each NSO node can connect to a different set of BGP neighbors. For each node, the BGP neighbor list configuration parameters are found under /hcc:hcc/bgp/node{id}/neighbor{address}.
Per-Neighbor BGP Configuration:
| Parameter | Type | Description |
| --- | --- | --- |
| address | inet:ip-address | BGP neighbor IP address. |
| as | inet:as-number | BGP neighbor Autonomous System Number. |
| ttl-min | uint8 | Optional minimum TTL value for BGP packets. When configured, enables the BGP Generalized TTL Security Mechanism (GTSM). |
| password | string | Optional password to use for BGP authentication with this neighbor. |
The purpose of the HCC layer-3 DNS Update functionality is to notify a DNS server of the IP address change of the active primary NSO server, allowing the DNS server to update the DNS record for the given domain name.
A geographically redundant NSO setup typically relies on DNS support. To enable this use case, tailf-hcc can dynamically update DNS using the nsupdate utility on HA status change notifications.
The DNS server used should support dynamic updates through the nsupdate command (RFC 2136).
HCC listens on the underlying NSO HA notifications stream. When HCC receives a notification about an NSO node being Primary, it updates the DNS Server with the IP address of the Primary NSO for the given hostname. The HCC YANG model includes basic DNS configuration data and operational status data.
Operational data in the YANG model includes the result of the latest DNS update operation.
If the DNS Update is unsuccessful, an error message will be populated in operational data, for example:
The layer-3 DNS update functionality needs DNS-related information (DNS server IP address, port, zone, etc.) and information about the NSO nodes involved in HA: node, IP, and location.
The DNS configuration parameters are found under /hcc:hcc/dns.
Layer-3 DNS Configuration:
| Parameter | Type | Description |
| --- | --- | --- |
| enabled | boolean | If set to true, DNS updates will be enabled. |
| fqdn | inet:domain-name | DNS domain-name for the HA primary. |
| ttl | uint32 | Time to live for the DNS record, default 86400. |
| key-file | string | Specifies the file path for the nsupdate key file. |
Each NSO node can be placed in a separate Location/Site/Availability-Zone. This is configured as a list member configuration, with one list entry per node ID. The member list configuration parameters are found under /hcc:hcc/dns/member{node-id}.
| Parameter | Type | Description |
| --- | --- | --- |
| node-id | string | Unique NSO HA node ID. Valid values are /high-availability/ha-node when built-in HA is used, or /ha-raft/status/member for HA Raft. |
| ip-address | inet:ip-address | IP where NSO listens for incoming requests to any northbound interfaces. |
| location | string | Name of the Location/Site/Availability-Zone where the node is placed. |
Here is an example configuration for a setup of two dual-stack NSO nodes, node-1 and node-2, that have an IPv4 and an IPv6 address configured. The configuration also sets up an update signing with the specified key.
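A sketch of such a configuration from the NSO CLI; the node names, addresses, domain name, and key file path are illustrative, and the exact leaf syntax should be checked against tailf-hcc.yang:

```
admin@ncs(config)# hcc dns enabled true fqdn nso.example.com ttl 300
admin@ncs(config)# hcc dns key-file /etc/nso/Kdns-update.key
admin@ncs(config)# hcc dns member node-1 ip-address [ 192.0.2.1 2001:db8::1 ] location site-a
admin@ncs(config)# hcc dns member node-2 ip-address [ 192.0.2.2 2001:db8::2 ] location site-b
admin@ncs(config)# commit
```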
This section describes basic deployment scenarios for HCC. Layer-2 mode is demonstrated first and then the layer-3 BGP functionality is configured in addition:
A reference to container-based examples for the layer-2 and layer-3 deployment scenarios described here can be found in the NSO example set under examples.ncs/development-guide/high-availability/hcc.
Both scenarios consist of two test nodes: london and paris with a single IPv4 VIP address. For the layer-2 scenario, the nodes are on the same network. The layer-3 scenario also involves a BGP-enabled router node as the london and paris nodes are on two different networks.
The layer-2 operation is configured by simply defining the VIP addresses and enabling HCC. The HCC configuration on both nodes should match; otherwise, the primary node's configuration will overwrite the secondary node's configuration when the secondary connects to the primary node.
Addresses:
| Hostname | Address | Description |
| --- | --- | --- |
| paris | 192.168.23.99 | Paris service node. |
| london | 192.168.23.98 | London service node. |
| vip4 | 192.168.23.122 | NSO primary node IPv4 VIP address. |
Configuring VIPs:
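A minimal sketch, using the vip4 address from the table above; the same configuration is entered on both nodes:

```
admin@ncs(config)# hcc enabled true
admin@ncs(config)# hcc vip-address [ 192.168.23.122 ]
admin@ncs(config)# commit
```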
Verifying VIP Availability:
Once enabled, HCC on the HA group primary node will automatically assign the VIP addresses to corresponding Linux network interfaces.
On the secondary node, HCC will not configure these addresses.
Layer-2 Example Implementation:
A reference to a container-based example of the layer-2 scenario can be found in the NSO example set. See the examples.ncs/development-guide/high-availability/hcc/README
Layer-3 operation is configured for each NSO HA group node separately. The HCC configuration on both nodes should match; otherwise, the primary node's configuration will overwrite the configuration on the secondary node.
Addresses:
| Node | Address | AS Number | Description |
| --- | --- | --- | --- |
| paris | 192.168.31.99 | 64512 | Paris node |
| london | 192.168.30.98 | 64513 | London node |
| router | 192.168.30.2, 192.168.31.2 | 64514 | |
Configuring BGP for Paris Node:
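As a sketch, using the addresses and AS numbers from the table above; verify the exact CLI syntax against the tailf-hcc model:

```
admin@ncs(config)# hcc bgp node paris enabled true as 64512 router-id 192.168.31.99
admin@ncs(config)# hcc bgp node paris neighbor 192.168.31.2 as 64514
admin@ncs(config)# commit
```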
Configuring BGP for London Node:
Check BGP Neighbor Connectivity:
Check neighbor connectivity on the paris primary node. Note that its connection to neighbor 192.168.31.2 (router) is ESTABLISHED.
Check neighbor connectivity on the london secondary node. Note that the primary node also has an ESTABLISHED connection to its neighbor 192.168.30.2 (router). The primary and secondary nodes both maintain their BGP neighbor connections at all times when BGP is enabled, but only the primary node announces routes for the VIPs.
Check Advertised BGP Routes Neighbors:
Check the BGP routes received by the router.
The VIP subnet is routed to the paris host, which is the primary node.
Layer-3 BGP Example Implementation:
A reference to a container-based example of the combined layer-2 and layer-3 BGP scenario can be found in the NSO example set. See the examples.ncs/development-guide/high-availability/hcc/README
If enabled before HA is established, HCC updates the DNS server with the IP address of the primary node once a primary is selected.
If HA is already operational and layer-3 DNS is enabled and configured afterward, HCC will not update the DNS server automatically; an automatic DNS update only happens on an HA switchover. HCC exposes an update action to manually trigger a DNS server update with the IP address of the primary node.
DNS Update Action:
The user can explicitly update DNS from the specific NSO node by running the update action.
Check the result of invoking the DNS update utility using the operational data in /hcc/dns:
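For example, from the NSO CLI (the output shape is indicative only):

```
admin@ncs# hcc dns update
admin@ncs# show hcc dns
```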
One way to verify DNS server updates is through the nslookup program. However, be mindful of the DNS caching mechanism, which may cache the old value for the amount of time controlled by the TTL setting.
DNS get-node-location Action:
/hcc/dns/member holds the information about all members involved in HA. The get-node-location action provides information on the location of an NSO node.
The HCC data model can be found in the HCC package (tailf-hcc.yang).
As an alternative to the HCC package, NSO built-in HA, either rule-based or HA Raft, can also be used in conjunction with a load balancer device in a reverse proxy configuration. Instead of managing the virtual IP address directly as HCC does, this setup relies on an external load balancer to route traffic to the currently active primary node.
The load balancer uses HTTP health checks to determine which node is currently the active primary. The example, found in the examples.ncs/development-guide/high-availability/load-balancer directory uses HTTP status codes on the health check endpoint to easily distinguish whether the node is currently primary or not.
In the example, the freely available HAProxy software is used as a load balancer to demonstrate the functionality. It is configured to steer connections on localhost, on either TCP port 2024 (SSH CLI) or TCP port 8080 (web UI and RESTCONF), to the active node in a 2-node HA cluster. The HAProxy software is required if you wish to run this example yourself.
You can start all the components in the example by running the make build start command. At the beginning, the first node n1 is the active primary. Connecting to the localhost port 2024 will establish a connection to this node:
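For example, using the OpenSSH client (the admin user name is an assumption of the example setup):

```
ssh -p 2024 admin@localhost
```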
Then, you can disable the high availability subsystem on n1 to simulate a node failure.
Disconnect and wait a few seconds for the built-in HA to perform the failover to node n2. The failover time depends on high-availability/settings/reconnect-interval, which is set quite aggressively in this example to complete the failover in about 6 seconds. Reconnect with the SSH client and observe that the connection is now made to the failover node, which has become the active primary:
Finally, shut down the example with the make stop clean command.
NSO can be configured for the HA primary to listen on additional ports for the northbound interfaces NETCONF, RESTCONF, the web server (including JSON-RPC), and the CLI over SSH. Once a different node transitions to the primary role, the configured listen addresses are brought up on that node instead.
When the following configuration is added to ncs.conf, the primary HA node will bind(2) and listen(2) on port 1830 on the wildcard IPv4 and IPv6 addresses.
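A sketch of such an ncs.conf fragment for NETCONF over SSH; verify the exact element placement against the ncs.conf(5) manual page:

```xml
<netconf-north-bound>
  <transport>
    <ssh>
      <ha-primary-listen>
        <ip>0.0.0.0</ip>
        <port>1830</port>
      </ha-primary-listen>
      <ha-primary-listen>
        <ip>::</ip>
        <port>1830</port>
      </ha-primary-listen>
    </ssh>
  </transport>
</netconf-north-bound>
```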
A similar configuration can be added for other NB interfaces, see the ha-primary-listen list under /ncs-config/{restconf,webui,cli}.
If an external HAFW is used, NSO only replicates the CDB data. NSO must be told by the HAFW which node should be primary and which nodes should be secondaries.
The HA framework must also detect when nodes fail and instruct NSO accordingly. If the primary node fails, the HAFW must elect one of the remaining secondaries and appoint it the new primary. The remaining secondaries must also be informed by the HAFW about the new primary situation.
NSO must be instructed through the ncs.conf configuration file that it should run in HA mode. The following configuration snippet enables HA mode:
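A sketch of the /ncs-config/ha part of ncs.conf; the port and the optional extra-listen entry are illustrative values:

```xml
<ha>
  <enabled>true</enabled>
  <ip>0.0.0.0</ip>
  <port>4570</port>
  <extra-listen>
    <ip>::</ip>
    <port>4570</port>
  </extra-listen>
  <tick-timeout>PT20S</tick-timeout>
</ha>
```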
Make sure to restart the ncs process for the changes to take effect.
The IP address and the port above indicate which IP and which port should be used for the communication between the HA nodes. extra-listen is an optional list of ip:port pairs that an HA primary also listens on for secondary connections. For IPv6 addresses, the syntax [ip]:port may be used. If the :port is omitted, the port configured under /ncs-config/ha is used. The tick-timeout is a duration indicating how often each secondary must send a tick message to the primary to indicate liveness. If the primary has not received a tick from a secondary within 3 times the configured tick timeout, the secondary is considered dead. Similarly, the primary sends tick messages to all the secondaries. If a secondary has not received any tick messages from the primary within 3 times the timeout, the secondary will consider the primary dead and report accordingly.
An HA node can be in one of three states: NONE, SECONDARY, or PRIMARY. Initially, a node is in the NONE state. This implies that the node will read its configuration from CDB, stored locally on file. Once the HA framework has decided whether the node should be a secondary or a primary, the HAFW must invoke either Ha.beSecondary(primary) or Ha.bePrimary().
When an NSO HA node starts, it always starts up in mode NONE. At this point, there are no other nodes connected. Each NSO node reads its configuration data from the locally stored CDB and applications on or off the node may connect to NSO and read the data they need. Although write operations are allowed in the NONE state it is highly discouraged to initiate southbound communication unless necessary. A node in NONE state should only be used to configure NSO itself or to do maintenance such as upgrades. When in NONE state, some features are disabled, including but not limited to:
commit queue
NSO scheduler
nano-service side effect queue
This is to avoid situations where multiple NSO nodes are trying to perform the same southbound operation simultaneously.
At some point, the HAFW will command some nodes to become secondary nodes of a named primary node. When this happens, each secondary node tracks changes and (logically or physically) copies all the data from the primary. Previous data at the secondary node is overwritten.
Note that the HAFW, by using NSO's start phases, can make sure that NSO does not start its northbound interfaces (NETCONF, CLI, ...) until the HAFW has decided what type of node it is. Furthermore, once a node has been set to the SECONDARY state, it is not possible to initiate new write transactions towards the node. It is thus never possible for an agent to write directly into a secondary node. Once a node is returned either to the NONE state or to the PRIMARY state, write transactions can once again be initiated towards the node.
The HAFW may command a secondary node to become primary at any time. The secondary node already has up-to-date data, so it simply stops receiving updates from the previous primary. Presumably, the HAFW also commands the primary node to become a secondary node or takes it down, or handles the situation somehow. If it has crashed, the HAFW tells the secondary to become primary, restarts the necessary services on the previous primary node, and gives it an appropriate role, such as secondary. This is outside the scope of NSO.
Each of the primary and secondary nodes has the same set of all callpoints and validation points locally on each node. The start sequence has to make sure the corresponding daemons are started before the HAFW starts directing secondary nodes to the primary, and before replication starts. The associated callbacks will however only be executed at the primary. If e.g. the validation executing at the primary needs to read data that is not stored in the configuration and only available on another node, the validation code must perform any needed RPC calls.
If the order from the HAFW is to become primary, the node starts to listen for incoming secondaries at the ip:port configured under /ncs-config/ha. The secondaries connect to the primary over TCP, and this socket is used by NSO to distribute the replicated data.
If the order is to be a secondary, the node will contact the primary and possibly copy the entire configuration from the primary. This copy is not performed if the primary and secondary decide that they have the same version of the CDB database loaded, in which case nothing needs to be copied. This mechanism is implemented by use of a unique token, the transaction id - it contains the node id of the node that generated it and a time stamp, but is effectively "opaque".
This transaction ID is generated by the cluster primary each time a configuration change is committed, and all nodes write the same transaction ID into their copy of the committed configuration. If the primary dies and one of the remaining secondaries is appointed the new primary, the other secondaries must be told to connect to the new primary. They will compare their last transaction ID to the one from the newly appointed primary. If they are the same, no CDB copy occurs. This will be the case unless a configuration change has sneaked in since both the new primary and the remaining secondaries will still have the last transaction ID generated by the old primary - the new primary will not generate a new transaction ID until a new configuration change is committed. The same mechanism works if a secondary node is simply restarted. No cluster reconfiguration will lead to a CDB copy unless the configuration has been changed in between.
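The copy-avoidance decision described above can be sketched as follows; TransactionId and needs_cdb_copy are illustrative names, not NSO APIs:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TransactionId:
    """Opaque sync token: the id of the node that generated it plus a timestamp."""
    node_id: str
    timestamp: float

def needs_cdb_copy(secondary_last_txid: TransactionId,
                   primary_txid: TransactionId) -> bool:
    # A full CDB copy is needed only when the secondary's last-seen
    # transaction ID differs from the primary's current one.
    return secondary_last_txid != primary_txid

# All nodes stored the token of the last commit made on the old primary.
last = TransactionId("old-primary", 1700000000.0)
# New primary appointed, no commits since: tokens match, so no copy.
assert not needs_cdb_copy(last, last)
# A commit on the new primary generates a fresh token: full copy required.
assert needs_cdb_copy(last, TransactionId("new-primary", 1700000100.0))
```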
Northbound agents should run on the primary; an agent cannot commit write operations at a secondary node.
When an agent commits its CDB data, CDB will stream the committed data out to all registered secondaries. If a secondary dies during the commit, nothing special happens; the commit will succeed anyway. When and if the secondary reconnects to the cluster, it will have to copy the entire configuration. All data on the HA sockets between NSO nodes flows only in the direction from the primary to the secondaries. A secondary that isn't reading its data will eventually lead to a situation with full TCP buffers at the primary. In principle, it is the responsibility of the HAFW to discover this situation and notify the primary NSO about the hanging secondary. However, if 3 times the tick timeout is exceeded, NSO will itself consider the node dead and notify the HAFW. The default value for the tick timeout is 20 seconds.
The primary node holds the active copy of the entire configuration data in CDB. All configuration data has to be stored in CDB for replication to work. At a secondary node, any request to read will be serviced, while write requests will be refused. Thus, the CDB subscription code works the same regardless of whether the CDB client is running at the primary or at any of the secondaries. Once a secondary has received the updates associated with a commit at the primary, all CDB subscribers at the secondary will be duly notified about any changes using the normal CDB subscription mechanism.
If the system has been set up to subscribe for NETCONF notifications, the secondaries will have all subscriptions as configured in the system, but the subscription will be idle. All NETCONF notifications are handled by the primary, and once the notifications get written into stable storage (CDB) at the primary, the list of received notifications will be replicated to all secondaries.
We specify in ncs.conf which IP address the primary should bind for incoming secondaries. If we choose the default value 0.0.0.0 it is the responsibility of the application to ensure that connection requests only arrive from acceptable trusted sources through some means of firewalling.
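As a sketch, the HA section of ncs.conf might look as follows. The element names should be checked against tailf-ncs-config.yang; the address and port shown here are illustrative:

```xml
<ha>
  <enabled>true</enabled>
  <!-- Bind a specific interface instead of 0.0.0.0 when the
       network cannot be firewalled off from untrusted sources -->
  <ip>192.0.2.10</ip>
  <port>4570</port>
  <tick-timeout>PT20S</tick-timeout>
</ha>
```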
A cluster is also protected by a token, a secret string only known to the application. The Ha.connect() method must be given the token. A secondary node that connects to a primary node negotiates with the primary using a CHAP-2-like protocol, thus both the primary and the secondary are ensured that the other end has the same token without ever revealing their own token. The token is never sent in clear text over the network. This mechanism ensures that a connection from an NSO secondary to a primary can only succeed if they both have the same token.
It is indeed possible to store the token itself in CDB, so that an application can initially read the token from the local CDB data, and then use that token in the constructor for the Ha class. In this case, it may very well be a good idea to have the token stored in CDB be of type tailf:aes-256-cfb-128-encrypted-string.
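A sketch of this pattern is shown below. It requires a running NSO and the NSO Java API; the keypath /my-ha-app/cluster-token is a hypothetical location for the token, and the exact Ha constructor signature should be checked against the com.tailf.ha Javadocs:

```java
// Sketch: read the cluster token from CDB, then hand it to the Ha instance.
Socket cdbSock = new Socket("localhost", Conf.NCS_PORT);
Cdb cdb = new Cdb("token-reader", cdbSock);
CdbSession sess = cdb.startSession(CdbDBType.CDB_RUNNING);
// Hypothetical location of the token in the data model:
String token = sess.getElem("/my-ha-app/cluster-token").toString();
sess.endSession();

Socket haSock = new Socket("localhost", Conf.NCS_PORT);
Ha ha = new Ha(haSock, token);  // token is verified CHAP-style on connect
```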
If the actual CDB data that is sent on the wire between cluster nodes is sensitive, and the network is untrusted, the recommendation is to use IPSec between the nodes. An alternative option is to decide exactly which configuration data is sensitive and then use the tailf:aes-256-cfb-128-encrypted-string type for that data. If the configuration data is of type tailf:aes-256-cfb-128-encrypted-string the encrypted data will be sent on the wire in update messages from the primary to the secondaries.
There are two APIs used by the HA framework to control the replication aspects of NSO. First, there is a synchronous API used to tell NSO what to do; second, the application may create a notifications socket and subscribe to HA-related events, where NSO notifies the application on certain HA-related events such as the loss of the primary. The HA-related notifications sent by NSO are crucial to how the HA framework is programmed.
The HA-related classes reside in the com.tailf.ha package, and the HA notification-related classes reside in the com.tailf.notif package; see the Javadocs for reference.
The configuration parameter /ncs-config/ha/tick-timeout is by default set to 20 seconds. This means that every 20 seconds each secondary will send a tick message on the socket leading to the primary. Similarly, the primary will send a tick message every 20 seconds on every secondary socket.
This aliveness detection mechanism is necessary for NSO. If a socket gets closed, all is well; NSO will clean up and notify the application accordingly using the notifications API. However, if a remote node freezes, the socket will not get properly closed at the other end. NSO distributes update data from the primary to the secondaries, and if a remote node is not reading the data, the TCP buffers will fill up and NSO will have to start buffering the data. NSO will buffer data for at most three tick timeouts. If a tick has not been received from a remote node within that time, the node will be considered dead. NSO will report accordingly over the notifications socket and either remove the hanging secondary or, if it is a secondary that loses contact with the primary, go into the initial NONE state.
If the HAFW can be fully trusted, it is possible to set this timeout to PT0S, i.e., zero, in which case the entire dead-node detection mechanism in NSO is disabled.
The normal setup of an NSO HA cluster is to have all secondaries connected directly to the primary. This is a configuration that is both conceptually simple and reasonably straightforward to manage for the HAFW. In some scenarios, in particular a cluster with multiple secondaries at a location that is network-wise distant from the primary, it can however be sub-optimal, since the replicated data will be sent to each remote secondary individually over a potentially low-bandwidth network connection.
To make this case more efficient, we can instruct a secondary to be a relay for other secondaries, by invoking the Ha.beRelay() method. This will make the secondary start listening on the IP address and port configured for HA in ncs.conf, and handle connections from other secondaries in the same manner as the cluster primary does. The initial CDB copy (if needed) to a new secondary will be done from the relay secondary, and when the relay secondary receives CDB data for replication from its primary, it will distribute the data to all its connected secondaries in addition to updating its own CDB copy.
To instruct a node to become a secondary connected to a relay secondary, we use the Ha.beSecondary() method as usual, but pass the node information for the relay secondary instead of the node information for the primary. I.e. the "sub-secondary" will in effect consider the relay secondary as its primary. To instruct a relay secondary to stop being a relay, we can invoke the Ha.beSecondary() method with the same parameters as in the original call. This is a no-op for a "normal" secondary, but it will cause a relay secondary to stop listening for secondary connections, and disconnect any already connected "sub-secondaries".
This setup requires special consideration by the HAFW. Instead of just telling each secondary to connect to the primary independently, it must set up the secondaries that are intended to be relays, and tell them to become relays, before telling the "sub-secondaries" to connect to the relay secondaries. Consider the case of a primary M and a secondary S0 in one location, and two secondaries S1 and S2 in a remote location, where we want S1 to act as relay for S2. The setup of the cluster then needs to follow this procedure:
1. Tell M to be primary.
2. Tell S0 and S1 to be secondaries with M as primary.
3. Tell S1 to be relay.
4. Tell S2 to be secondary with S1 as primary.
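The procedure above could be sketched as follows from the HAFW code. The variable names are illustrative, and the exact signatures of the bePrimary()/beSecondary()/beRelay() methods should be checked against the com.tailf.ha Javadocs:

```java
// haM/haS0/haS1/haS2 are Ha instances connected to the respective nodes;
// nodeM/nodeS1 carry the node id and address of M and S1.
haM.bePrimary(nodeIdM);                    // 1. M becomes primary
haS0.beSecondary(nodeIdS0, nodeM, true);   // 2. S0 and S1 connect to M
haS1.beSecondary(nodeIdS1, nodeM, true);
haS1.beRelay();                            // 3. S1 listens for secondaries
haS2.beSecondary(nodeIdS2, nodeS1, true);  // 4. S2 treats relay S1 as primary
```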
Conversely, the handling of network outages and node failures must also take the relay secondary setup into account. For example, if a relay secondary loses contact with its primary, it will transition to the NONE state just like any other secondary, and it will then disconnect its sub-secondaries, which will cause those to transition to NONE too, since they lost contact with "their" primary. Or, if a relay secondary dies in a way that is detected by its sub-secondaries, they will also transition to NONE. Thus, in the example above, S1 and S2 need to be handled differently. E.g., if S2 dies, the HAFW probably won't take any action, but if S1 dies, it makes sense to instruct S2 to be a secondary of M instead (and when S1 comes back, perhaps tell S2 to be a relay and S1 to be a secondary of S2).
Besides the use of Ha.beRelay(), the API is mostly unchanged when using relay secondaries. The HA event notifications reporting the arrival or the death of a secondary are still generated only by the "real" cluster primary. If the Ha.HaStatus() method is used towards a relay secondary, it will report the node state as SECONDARY_RELAY rather than just SECONDARY, and the array of nodes will have its primary as the first element (same as for a "normal" secondary), followed by its "sub-secondaries" (if any).
When HA is enabled in ncs.conf, CDB automatically replicates data written on the primary to the connected secondary nodes. Replication is done on a per-transaction basis to all the secondaries in parallel and is synchronous. When NSO is in secondary mode, the northbound APIs are in read-only mode; that is, the configuration cannot be changed on a secondary other than through replication updates from the primary. It is still possible to read on a secondary, via for example NETCONF or the CLI (if they are enabled). CDB subscriptions work as usual. When NSO is in the NONE state, CDB is unlocked and behaves as when NSO is not in HA mode at all.
Unlike configuration data, operational data is replicated only if it is defined as persistent in the data model (using the tailf:persistent extension).


See examples.ncs/getting-started/developing-with-ncs/6-extern-db for details. In the following, we will use the files in examples.ncs/service-provider/mpls-vpn as a source for our examples. Refer to the README in that directory for additional details.
NSO is designed to manage devices and services. NSO uses YANG as the overall modeling language. YANG models describe the NSO configuration, the device configurations, and the configuration of services. Therefore it is vital to understand the data model for NSO including these aspects. The YANG models are available in $NCS_DIR/src/ncs/yang and are structured as follows.
tailf-ncs.yang is the top module that includes the following sub-modules:
tailf-ncs-common.yang: Common definitions.
tailf-ncs-packages.yang: This sub-module defines the management of packages that are run by NSO. A package contains custom code, models, and documentation for any function added to the NSO platform. It can for example be a service application or a southbound integration to a device.
tailf-ncs-devices.yang: This is a core model of NSO. The device model defines everything a user can do with a device that NSO speaks to via a Network Element Driver, NED.
tailf-ncs-services.yang: Services represent anything that spans across devices. This can for example be MPLS VPN, MEF e-line, BGP peer, or website. NSO provides several mechanisms to handle services in general which are specified by this model. Also, it defines placeholder containers under which developers, as an option, can augment their specific services.
tailf-ncs-snmp-notification-receiver.yang: NSO can subscribe to SNMP notifications from the devices. The subscription is specified by this model.
tailf-ncs-java-vm.yang: Custom code that is part of a package is loaded and executed by the NSO Java VM. This is managed by this model.
Further, when browsing $NCS_DIR/src/ncs/yang you will find models for all aspects of NSO functionality, for example:
tailf-ncs-alarms.yang: This model defines how NSO manages alarms. The source of an alarm can be anything like an NSO state change, SNMP, or NETCONF notification.
tailf-ncs-snmp.yang: This model defines how to configure the NSO northbound SNMP agent.
tailf-ncs-config.yang: This model describes the layout of the NSO configuration file, usually called ncs.conf.
tailf-ncs-packages.yang: This model describes the layout of the file package-meta-data.xml. All user code, data models, MIBs, and Java code are always contained in an NSO package. The package-meta-data.xml file must always exist in a package and describes the package.
These models will be illustrated and briefly explained below. Note that the figures only contain some relevant aspects of the model and are far from complete. The details of the model are explained in the respective sections.
A good way to learn the model is to start the NSO CLI and use tab completion to navigate the model. Note that depending on whether you are in operational mode or configuration mode, different parts of the model will show up. Also, try using TAB to get a list of actions at the current level, for example, devices TAB.
Another way to learn and explore the NSO model is to use the Yanger tool to render a tree output from the NSO model: yanger -f tree --tree-depth=3 tailf-ncs.yang. This will show a tree for the complete model. Below is a truncated example:
As CDB stores hierarchical data as specified by a YANG model, data is addressed by a path to the key. We call this a keypath. A keypath provides a path through the configuration data tree. A keypath can be either absolute or relative. An absolute keypath starts from the root of the tree, while a relative path starts from the "current position" in the tree. They are differentiated by the presence or absence of a leading /. Navigating the configuration data tree is thus done in the same way as a directory structure. It is possible to change the current position with for example the CdbSession.cd() method. Several of the API methods take a keypath as a parameter.
YANG elements that are lists of other YANG elements can be traversed using two different path notations. Consider the following YANG model fragment:
We can use the method CdbSession.getNumberOfInstances() to find the number of elements a list has, and then traverse them using a standard index notation, i.e., <path to list>[integer]. The children of a list are numbered starting from 0. Looking at the example above (L3 VPN YANG Extract), the path /l3vpn:topology/connection[2]/endpoint-1 refers to the endpoint-1 leaf of the third connection. This numbering is only valid during the current CDB session, since CDB is always locked for writing during a read session.
We can also refer to list instances using the values of the keys of the list. In a YANG model, you specify which leafs (there can be several) are to be used for keys by using the key <name> statement at the beginning of the list. In our case a connection has the name leaf as the key. So the path /l3vpn:topology/connection{c1}/endpoint-2 refers to the endpoint-2 leaf of the connection whose name is “c1”.
A YANG list may have more than one key. The syntax for the keys is a space-separated list of key values enclosed within curly brackets: {Key1 Key2 ...}
Which form of list element referencing to use depends on the situation. Indexing with an integer is convenient when looping through all elements. As a convenience, all methods expecting keypaths accept formatting characters and accompanying data items. For example, you can use CdbSession.getElem("server[%d]/ifc{%s}/mtu", 2, "eth0") to fetch the MTU of the third server instance's interface named "eth0". Using relative paths and CdbSession.pushd() it is possible to write code that can be re-used for common sub-trees.
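A sketch combining these addressing forms is shown below. It assumes a running NSO and an existing Cdb instance; the paths refer to the L3 VPN model used in the text:

```java
CdbSession sess = cdb.startSession(CdbDBType.CDB_RUNNING);

// Index notation: loop over all instances of a list
int n = sess.getNumberOfInstances("/l3vpn:topology/connection");
for (int i = 0; i < n; i++) {
    ConfValue ep = sess.getElem("/l3vpn:topology/connection[%d]/endpoint-1", i);
}

// Key notation: address a list instance by its key value
ConfValue ep2 = sess.getElem("/l3vpn:topology/connection{c1}/endpoint-2");

// Relative paths with pushd()/popd() for reusable sub-tree code
sess.pushd("/l3vpn:topology/connection{c1}");
ConfValue ep1 = sess.getElem("endpoint-1");
sess.popd();

sess.endSession();  // releases the CDB write lock
```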
The current position also includes the namespace. To read elements from a different namespace use the prefix qualified tag for that element like in l3vpn:topology.
The CDB subscription mechanism allows an external program to be notified when some part of the configuration changes. When receiving a notification it is also possible to iterate through the changes written to CDB. Subscriptions are always towards the running data store (it is not possible to subscribe to changes to the startup data store). Subscriptions towards operational data (see Operational Data in CDB) kept in CDB are also possible, but the mechanism is slightly different.
The first thing to do is to inform CDB which paths we want to subscribe to. Registering a path returns a subscription point identifier. This is done by acquiring a subscriber (CdbSubscription) instance through the Cdb.newSubscription() method. On the subscriber, the paths are registered with CdbSubscription.subscribe(), which returns the actual subscription point identifier. A subscriber can have multiple subscription points, and there can be many different subscribers. Every point is defined through a path - similar to the paths we use for read operations, with the exception that instead of fully instantiated paths to list instances we can selectively use tagpaths.
When a client is done defining subscriptions it should inform NSO that it is ready to receive notifications by calling CdbSubscription.subscribeDone(), after which the subscription socket is ready to be polled.
We can subscribe either to specific leaves, or entire subtrees. Explaining this by example we get:
/ncs:devices/global-settings/trace: Subscription to a leaf. Only changes to this leaf will generate a notification.
/ncs:devices: Subscription to the subtree rooted at /ncs:devices. Any changes to this subtree will generate a notification. This includes additions or removals of device instances, as well as changes to already existing device instances.
/ncs:devices/device{"ex0"}/address: Subscription to a specific element in a list. A notification will be generated when the device ex0 changes its IP address.
/ncs:devices/device/address: Subscription to a leaf in a list. A notification will be generated when the leaf address is changed in any device instance.
When adding a subscription point the client must also provide a priority, which is an integer (a smaller number means a higher priority). When data in CDB is changed, this change is part of a transaction. A transaction can be initiated by a commit operation from the CLI or an edit-config operation in NETCONF resulting in the running database being modified. As the last part of the transaction CDB will generate notifications in lock-step priority order. First, all subscribers at the lowest numbered priority are handled, once they all have replied and synchronized by calling CdbSubscription.sync() the next set - at the next priority level - is handled by CDB. Not until all subscription points have been acknowledged is the transaction complete. This implies that if the initiator of the transaction was for example a commit command in the CLI, the command will hang until notifications have been acknowledged.
Note that even though the notifications are delivered within the transaction, a subscriber can't reject the changes (since this would break the two-phase commit protocol used by the NSO backplane towards all data providers).
As a subscriber has read its subscription notifications using CdbSubscription.read(), it can iterate through the changes that caused the particular subscription notification using the CdbSubscription.diffIterate() method. It is also possible to start a new read-session to the CdbDBType.CDB_PRE_COMMIT_RUNNING database to read the running database as it was before the pending transaction.
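Putting the pieces together, a minimal subscriber loop might look like the sketch below. It requires a running NSO; error handling is omitted, and the priority value, namespace class, and path are illustrative:

```java
CdbSubscription sub = cdb.newSubscription();
int point = sub.subscribe(1 /* priority */, new Ncs(),
                          "/devices/device/config");
sub.subscribeDone();  // ready to receive notifications

while (true) {
    int[] points = sub.read();  // blocks until a registered point changes
    sub.diffIterate(points[0], new CdbDiffIterate() {
        public DiffIterateResultFlag iterate(ConfObject[] kp,
                DiffIterateOperFlag op, ConfObject oldValue,
                ConfObject newValue, Object state) {
            // inspect the change here
            return DiffIterateResultFlag.ITER_RECURSE;
        }
    });
    // Acknowledge, or the ongoing transaction will hang
    sub.sync(CdbSubscriptionSyncType.DONE_PRIORITY);
}
```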
To view registered subscribers use the ncs --status command.
It is important to note that CDB is locked for writing during a read session using the Java API. A session starts with Cdb.startSession(), and the lock is not released until the CdbSession.endSession() (or Cdb.close()) call. CDB will also automatically release the lock if the socket is closed for some other reason, such as program termination.
When NSO starts for the first time, the CDB database is empty. The location of the database files used by CDB is given in ncs.conf. At first startup, when CDB is empty, i.e., no database files are found in the directory specified by <db-dir> (./ncs-cdb as given by the example below (CDB Init)), CDB will try to initialize the database from all XML documents found in the same directory.
This feature can be used to reset the configuration to factory settings.
Given the YANG model in the example above (L3 VPN YANG Extract), the initial data for topology can be found in topology.xml as seen in the example below (Initial Data for Topology).
Another example of using these features is when initializing the AAA database. This is described in AAA infrastructure.
All files ending in .xml will be loaded (in an undefined order) and committed in a single transaction when CDB enters start phase 1 (see Starting NSO for more details on start phases). The format of the init files is rather lax in that it is not required that a complete instance document following the data model is present, much like the NETCONF edit-config operation. It is also possible to wrap multiple top-level tags in the file with a surrounding config tag, as shown in the example below (Wrapper for Multiple Top-Level Tags) like this:
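As a sketch, an init file wrapping multiple top-level tags could look as follows. The config wrapper element is standard; the tag contents and the l3vpn namespace are illustrative:

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <devices xmlns="http://tail-f.com/ns/ncs">
    <!-- initial device entries -->
  </devices>
  <topology xmlns="http://example.com/l3vpn">
    <!-- initial topology entries -->
  </topology>
</config>
```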
In addition to handling configuration data, CDB can also take care of operational data such as alarms and traffic statistics. By default, operational data is not persistent and thus not kept between restarts. In the YANG model annotating a node with config false will mark the subtree rooted at that node as operational data. Reading and writing operational data is done similarly to ordinary configuration data, with the main difference being that you have to specify that you are working against operational data. Also, the subscription model is different.
Subscriptions towards the operational data in CDB are similar to the above, but because the operational data store is designed for light-weight access, does not have transactions, and normally avoids the use of any locks, there are several differences - in particular:
Subscription notifications are only generated if the writer obtains the “subscription lock”, by using the Cdb.startSession() method with the CdbLockType.LOCK_REQUEST flag.
Subscriptions are registered with the CdbSubscription.subscribe() method with the flag CdbSubscriptionType.SUB_OPERATIONAL rather than CdbSubscriptionType.SUB_RUNNING.
No priorities are used.
Neither the writer that generated the subscription notifications nor other writes to the same data are blocked while notifications are being delivered. However, the subscription lock remains in effect until notification delivery is complete.
The previous value of the modified leaf is not available when using the CdbSubscription.diffIterate() method.
Essentially a write operation towards the operational data store, combined with the subscription lock, takes on the role of a transaction for configuration data as far as subscription notifications are concerned. This means that if operational data updates are done with many single-element write operations, this can potentially result in a lot of subscription notifications. Thus it is a good idea to use the multi-element CdbSession.setObject() etc methods for updating operational data that applications subscribe to.
Since write operations that do not attempt to obtain the subscription lock are allowed to proceed even during notification delivery, it is the responsibility of the applications using the operational data store to obtain the lock as needed when writing. If subscribers should be able to reliably read the exact data that resulted from the write that triggered their subscription, the subscription lock must always be obtained when writing that particular set of data elements. One possibility is of course to obtain the lock for all writes to operational data, but this may have an unacceptable performance impact.
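A sketch of a writer that triggers operational subscriptions is shown below, following the differences listed above. It requires a running NSO; the path and values are illustrative:

```java
// Obtain the subscription lock so that oper subscribers are notified
CdbSession oper = cdb.startSession(
        CdbDBType.CDB_OPERATIONAL,
        EnumSet.of(CdbLockType.LOCK_REQUEST));

// Prefer one multi-element write over many single-element writes,
// to avoid a flood of subscription notifications
ConfValue[] values = new ConfValue[] {
    new ConfBuf("item1"), new ConfUInt32(42)
};
oper.setObject(values, "/t:test/stats-item{item1}");

oper.endSession();  // releases the subscription lock
```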
We will take a first look at the examples.ncs/getting-started/developing-with-ncs/1-cdb example. This example is an NSO project with two packages: cdb and router.
router: A NED package with a simple but still realistic model of a network device. The only component in this package is the NED component that uses NETCONF to communicate with the device. This package is used in many NSO examples including examples.ncs/getting-started/developing-with-ncs/0-router-network which is an introduction to NSO device manager, NSO netsim, and this router package.
cdb: This package has an even simpler YANG model to illustrate some aspects of CDB data retrieval. The package consists of five application components:
Plain CDB Subscriber: This CDB subscriber subscribes to changes under the path /devices/device{ex0}/config. Whenever a change occurs there, the code iterates through the change and prints the values.
CdbCfgSubscriber: A more advanced CDB subscriber that subscribes to changes under the path /devices/device/config/sys/interfaces/interface.
OperSubscriber: An operational data subscriber that subscribes to changes under the path /t:test/stats-item.
The cdb package includes the YANG shown in the example below (1-cdb Simple Config Data).
Let us now populate the database and look at the Plain CDB Subscriber and how it can use the Java API to react to changes to the data. This component subscribes to changes under the path /devices/device{ex0}/config which is configuration changes for the device named ex0 which is a device connected to NSO via the router NED.
Being an application component in the cdb package implies that this component is realized by a Java class that implements the com.tailf.ncs.ApplicationComponent Java interface. This interface inherits the standard Java Runnable interface, which requires the run() method to be implemented. In addition to this method, there are init() and finish() methods that have to be implemented. When the NSO Java-VM starts, this class will be started in a separate thread, with an initial call to init() before the thread starts. When the package is requested to stop execution, a call to finish() is performed, and this method is expected to end thread execution.
We will walk through the code and highlight different aspects. We start with how the Cdb instance is retrieved in this example. It is always possible to open a socket to NSO and create the Cdb instance with this socket. But with this comes the responsibility to manage that socket. In NSO, there is a resource manager that can take over this responsibility. In the code, the field that should contain the Cdb instance is simply annotated with a @Resource annotation. The resource manager will find this annotation and create the Cdb instance as specified. In this example below (Resource Annotation) Scope.INSTANCE implies that new instances of this example class should have unique Cdb instances (see more in The Resource Manager).
The init() method (shown in the example below (Plain Subscriber Init)) is called before this application component thread is started. For this subscriber, this is the place to set up the subscription. First, a CdbSubscription instance is created, and in this instance the subscription points are registered (one in this case). When all subscription points are registered, a call to CdbSubscription.subscribeDone() indicates that the registration is finished and the subscriber is ready to start.
The run() method comes from the standard Java Runnable interface and is executed when the application component thread is started. For this subscriber (see the example below (Plain CDB Subscriber)), a loop over the CdbSubscription.read() method drives the subscription. This call will block until data has changed for some of the registered subscription points, and the IDs of these subscription points will then be returned. In our example, since we only have one subscription point, we know that this is the one stored as subId. This subscriber chooses to find the changes by calling the CdbSubscription.diffIterate() method. It is important to acknowledge the subscription by calling CdbSubscription.sync(), or else this subscription will block the ongoing transaction.
The call to CdbSubscription.diffIterate() requires an object instance implementing an iterate() method. To do this, the CdbDiffIterate interface is implemented by a suitable class. In our example, this is done by a private inner class called Iter (see the example below (Plain Subscriber Iterator Implementation)). The iterate() method is called for all changes, and the path, type of change, and data are provided as arguments. In the end, iterate() should return a flag that controls whether the iteration should continue or stop. Our example iterate() method just logs the changes.
The finish() method (Example below (Plain Subscriber finish)) is called when the NSO Java-VM wants the application component thread to stop execution. An orderly stop of the thread is expected. Here the subscription will stop if the subscription socket and underlying Cdb instance are closed. This will be done by the ResourceManager when we tell it that the resources retrieved for this Java object instance could be unregistered and closed. This is done by a call to the ResourceManager.unregisterResources() method.
We will now compile and start the 1-cdb example, populate some config data, and look at the result. The example below (Plain Subscriber Startup) shows how to do this.
By far, the easiest way to populate the database with some actual data is to run the CLI (see the example below (Populate Data using CLI)).
We have now added a server to the Syslog. What remains is to check what our 'Plain CDB Subscriber' ApplicationComponent got as a result of this update. In the logs directory of the 1-cdb example there is a file named PlainCdbSub.out which contains the log data from this application component. At the beginning of this file, a lot of logging is performed which emanates from the sync-from of the device. At the end of this file, we can find the three log rows that come from our update. See the extract in the example below (Plain Subscriber Output) (with each row split over several to fit on the page).
We will turn to look at another subscriber which has a more elaborate diff iteration method. In our example cdb package, we have an application component named CdbCfgSubscriber. This component consists of a subscriber for the subscription point /ncs:devices/device/config/r:sys/interfaces/interface. The iterate() method is here implemented as an inner class called DiffIterateImpl.
The code for this subscriber is left out but can be found in the file ConfigCdbSub.java.
The example below (Run CdbCfgSubscriber Example) shows how to build and run the example.
If we look at the file logs/ConfigCdbSub.out, we will find log records from the subscriber (see the example below (Subscriber Output)). At the end of this file the last DUMP DB will show only one remaining interface.
We will look once again at the YANG model for the cdb package in the examples.ncs/getting-started/developing-with-ncs/1-cdb example. Inside the test.yang YANG model, there is a test container. As a child of this container, there is a list stats-item (see the example below (1-cdb Simple Operational Data)).
Note the list stats-item has the substatement config false; and below it, we find a tailf:cdb-oper; statement. A standard way to implement operational data is to define a callpoint in the YANG model and write instrumentation callback methods for retrieval of the operational data (see more on data callbacks in DP API). Here on the other hand we use the tailf:cdb-oper; statement which implies that these instrumentation callbacks are automatically provided internally by NSO. The downside is that we must populate this operational data in CDB from the outside.
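The pattern described above looks roughly like this in YANG. The leaf names are illustrative; the actual model is in test.yang in the example:

```yang
container test {
  list stats-item {
    config false;    // operational data
    tailf:cdb-oper;  // NSO provides the data callbacks internally
    key "name";
    leaf name { type string; }
    leaf counter { type uint64; }
  }
}
```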
An example of Java code that creates operational data using the Navu API is shown in the example below (Creating Operational Data using Navu API)).
An example of Java code that deletes operational data using the CDB API is shown in the example below (Deleting Operational Data using CDB API).
In the 1-cdb example in the CDB package, there is also an application component with an operational data subscriber that subscribes to data from the path "/t:test/stats-item" (see the example below (CDB Operational Subscriber Java code)).
Notice that the CdbOperSubscriber is very similar to the CdbCfgSubscriber described earlier.
In the 1-cdb examples, there are two shell scripts setoper and deloper that will execute the above CreateEntry() and DeleteEntry() respectively. We can use these to populate the operational data in CDB for the test.yang YANG model (see the example below (Populating Operational Data)).
And if we look at the output from the 'CDB Operational Subscriber' that is found in the logs/OperCdbSub.out, we will see output similar to the example below (Operational subscription Output).
Software upgrades and downgrades represent one of the main problems in managing the configuration data of network devices. Each software release for a network device is typically associated with a certain version of configuration data layout, i.e., a schema. In NSO the schema is the data model stored in the .fxs files. Once CDB has initialized, it also stores a copy of the schema associated with the data it holds.
Every time NSO starts, CDB will check the current contents of the .fxs files against its own copy of the schema files. If CDB detects any changes in the schema, it initiates an upgrade transaction. In the simplest case, CDB automatically resolves the changes and commits the new data before NSO reaches start phase one.
The CDB upgrade can be followed by checking the devel.log. The development log is meant to be used as support while the application is developed. It is enabled in ncs.conf as shown in the example below (Enabling Developer Logging).
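For reference, a developer-log fragment in ncs.conf typically looks along these lines (exact file path and log level are installation-specific; treat this as a sketch):

```xml
<logs>
  <developer-log>
    <enabled>true</enabled>
    <file>
      <name>./logs/devel.log</name>
      <enabled>true</enabled>
    </file>
  </developer-log>
  <developer-log-level>trace</developer-log-level>
</logs>
```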
CDB can automatically handle the following changes to the schema:
Deleted elements: When an element is deleted from the schema, CDB simply deletes it (and any children) from the database.
Added elements: If a new element is added to the schema it needs to either be optional, dynamic, or have a default value. New elements with a default are added and set to their default value. New dynamic or optional elements are simply noted as a schema change.
Re-ordering elements: An element with the same name, but in a different position on the same level, is considered to be the same element. If its type hasn't changed it will retain its value, but if the type has changed it will be upgraded as described below.
Type changes: If a leaf is still present but its type has changed, automatic coercion is performed; for example, integers may be transformed to their string representation if the type changed from e.g. int32 to string. Automatic type conversion succeeds as long as the string representation of the current value can be parsed into the new type. (This also implies that a change from a smaller integer type, e.g. int8, to a larger type, e.g. int32, succeeds for any value, while the opposite direction only succeeds for values that fit in the smaller type.)
If the coercion fails, any supplied default value will be used. If no default value is present in the new schema, the automatic upgrade will fail and the leaf will be deleted after the CDB upgrade.
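To make the parsing-based coercion concrete, here is a small standalone Java sketch (purely illustrative, not part of the NSO API) that mimics coercion via the string representation of a value:

```java
// Purely illustrative: mimics CDB's parse-the-string-representation coercion.
public class CoercionDemo {

    // Try to coerce a value's string representation into int32.
    // Returns null when coercion fails (CDB would then fall back to the
    // new default value, or delete the leaf if there is none).
    public static Integer coerceToInt32(String stringRep) {
        try {
            return Integer.valueOf(stringRep);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    // Same idea for int8 (Java byte): only succeeds if the value fits.
    public static Byte coerceToInt8(String stringRep) {
        try {
            return Byte.valueOf(stringRep);
        } catch (NumberFormatException e) {
            return null;
        }
    }

    public static void main(String[] args) {
        System.out.println(coerceToInt32("100"));  // int8 -> int32 always fits
        System.out.println(coerceToInt8("100"));   // fits in int8: succeeds
        System.out.println(coerceToInt8("1000"));  // does not fit: null
    }
}
```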
Note: The conversion between the empty and boolean types deviates from the rule above. Consider a scenario where a leaf of type boolean is upgraded to a leaf of type empty. If the original leaf is set to true, it will be upgraded to a set empty leaf. Conversely, if the original leaf is set to false, it will be deleted after the upgrade. In the other direction, a set empty leaf will be upgraded to a leaf of type boolean set to true.
Node type changes: CDB can handle automatic type changes between a container and a list. When converting from a container to a list, the child nodes of the container are mapped to the child nodes of the list, applying type coercion on the nodes when necessary. Conversely, a list can be automatically transformed into a container provided the list contains at most one list entry. Node attributes will remain intact, with the exception of the list key entry. Attributes set on a container will be transferred to the list key entry and vice versa. However, attributes on the container child node corresponding to the list key value will be lost in the upgrade. Additionally, type changes between leaf and leaf-list are allowed, and the data is kept intact if the number of entries in the leaf-list is exactly one. If a leaf-list has more than one entry, all entries will be deleted when upgrading to leaf. Type changes to and from empty leaf are possible to some extent. A type change from any type is allowed to empty leaf, but an empty leaf can only be changed to a presence container. Node attributes will only be preserved for node changes between empty leaf and container.
Hash changes: When a hash value of a particular element has changed (due to an addition of, or a change to, a tailf:id-value statement) CDB will update that element.
Key changes: When a key of a list is modified, CDB tries to upgrade the key using the same rules as explained above for adding, deleting, re-ordering, change of type, and change of hash value. If an automatic upgrade of a key fails the entire list entry will be deleted. When individual entries upgrade successfully but result in an invalid list, all list entries will be deleted. This can happen, e.g., when an upgrade removes a leaf from the key, resulting in several entries having the same key.
Default values: If a leaf has a default value, that has not been changed from its default, then the automatic upgrade will use the new default value (if any). If the leaf value has been changed from the old default, then that value will be kept.
Adding / Removing namespaces: If a namespace is no longer present after an upgrade, CDB removes all data in that namespace. When CDB detects a new namespace, it is initialized with default values.
Changing to/from operational: Elements that previously had config false set that are changed into database elements will be treated as added elements. In the opposite case, where data elements in the new data model are tagged with config false, the elements will be deleted from the database.
Callpoint changes: CDB only considers the part of the data model in YANG modules that do not have external data callpoints. But while upgrading, CDB handles moving subtrees into CDB from a callpoint and vice versa. CDB simply considers these as added and deleted schema elements. Thus an application can be developed using CDB in the first development cycle. When the external database component is ready it can easily replace CDB without changing the schema.
Should the automatic upgrade fail, exit codes and log entries will indicate the reason (see Disaster Management).
As described earlier, when NSO starts with an empty CDB database, CDB will load all instantiated XML documents found in the CDB directory and use these to initialize the database. We can also use this mechanism for CDB upgrade since CDB will again look for files in the CDB directory ending in .xml when doing an upgrade.
This allows for handling many of the cases that the automatic upgrade can not do by itself, e.g., the addition of mandatory leaves (without default statements), or multiple instances of new dynamic containers. Most of the time we can probably simply use the XML init file that is appropriate for a fresh install of the new version and also for the upgrade from a previous version.
When using XML files for the initialization of CDB, the complete contents of the files are used. On upgrade, however, doing this could lead to modification of the user's existing configuration - e.g., we could end up resetting data that the user has modified since CDB was first initialized. For this reason, two restrictions are applied when loading the XML files on upgrade:
Only data for elements that are new as of the upgrade, i.e., elements that did not exist in the previous schema, will be considered.
The data will only be loaded if all old, i.e., previously existing, optional/dynamic parent elements and instances exist in the current configuration.
To clarify this, let's make up the following example. Some ServerManager package was developed and delivered. It was realized that the data model had a serious shortcoming in that there was no way to specify the protocol to use, TCP or UDP. To fix this, in a new version of the package, another leaf was added to the /servers/server list, and the new YANG module can be seen in the example below (New YANG module for the ServerManager Package).
The differences from the earlier version of the YANG module can be seen in the example below (Difference between YANG Modules).
Since it was considered important that the user explicitly specify the protocol, the new leaf was made mandatory. The XML init file must include this leaf, and the result can be seen in the example below (Protocol Upgrade Init File):
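As a sketch, such an init file could look like this (the namespace, server names, and other leafs are hypothetical; only the newly mandatory protocol leaf is the point here):

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <servers xmlns="http://example.com/ns/servers">
    <server>
      <name>www</name>
      <ip>192.0.2.1</ip>
      <port>80</port>
      <protocol>tcp</protocol>   <!-- the new mandatory leaf -->
    </server>
  </servers>
</config>
```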
We can then just use this new init file for the upgrade, and the existing server instances in the user's configuration will get the new /servers/server/protocol leaf filled in as expected. However some users may have deleted some of the original servers from their configuration, and in those cases, we do not want those servers to get re-created during the upgrade just because they are present in the XML file - the above restrictions make sure that this does not happen. The configuration after the upgrade can be seen in the example below (Configuration After Upgrade).
Here is what the configuration looks like after the upgrade if the smtp server has been deleted before the upgrade:
This example also implicitly shows a limitation of this method. If the user has created additional servers, the new XML file will not specify what protocol to use for those servers, and the upgrade cannot succeed unless the package upgrade component method is used, see below. However, the example is a bit contrived. In practice, this limitation is rarely a problem. It does not occur for new lists or optional elements, nor for new mandatory elements that are not children of old lists. In fact, correctly adding this protocol leaf for user-created servers would require user input; it cannot be done by any fully automated procedure.
It is always possible to write a package-specific upgrade component to change the data belonging to a package before the upgrade transaction is committed. This will be explained in the following section.
One case the system does not handle directly is the addition of new custom validation points using the tailf:validate statement during an upgrade. The issue that surfaces is that the schema upgrade is performed before the (new) user code gets deployed and therefore the code required for validation is not yet available. It results in an error similar to no registration found for callpoint NEW-VALIDATION/validate or simply application communication failure.
One way to solve this problem is to first redeploy the package with the custom validation code and then perform the schema upgrade through the full packages reload action. For example, suppose you are upgrading the package test-svc. Then you first perform packages package test-svc redeploy, followed by packages reload. The main downside to this approach is that the new code must work with the old data model, which may require extra effort when there are major data model changes.
An alternative is to temporarily disable the validation by starting the NSO with the --ignore-initial-validation option. In this case, you should stop the ncs process and start it using --ignore-initial-validation and --with-package-reload options to perform the schema upgrade without custom validation. However, this may result in data in the CDB that would otherwise not pass custom validation. If you still want to validate the data, you can write an upgrade component to do this one-time validation.
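Sketched as commands, the two approaches described above look roughly like this (prompts and the package name test-svc are illustrative; adapt to your installation):

```
# Alternative 1: redeploy the new validation code first, then upgrade the schema
admin@ncs# packages package test-svc redeploy
admin@ncs# packages reload

# Alternative 2: restart NSO with initial validation disabled
$ ncs --stop
$ ncs --ignore-initial-validation --with-package-reload
```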
In previous sections, we showed how automatic upgrades and XML initialization files can help in upgrading CDB when YANG models have changed. In some situations, this is not sufficient. For instance, if a YANG model is changed and new mandatory leaves are introduced that need calculations to set the values then a programmatic upgrade is needed. This is when the upgrade component of a package comes into play.
An upgrade component is a Java class with a standard main() method, compiled into a standalone program that is run as part of the package reload action.
As with any package component type, the upgrade component has to be defined in the package-meta-data.xml file for the package (see the example below (Upgrade Package Components)).
Let's recapitulate how packages are loaded and reloaded. NSO can search the /ncs-config/load-path for packages to run and will copy these to a private directory tree under /ncs-config/state-dir with root directory packages-in-use.cur. However, NSO will only do this search when packages-in-use.cur is empty or when a reload is requested. This scheme makes package upgrades controlled and predictable, for more on this, see Loading Packages.
So in preparation for a package upgrade, the new packages replace the old ones in the load path. In our scenario, the YANG model changes are such that the automatic schema upgrade that CDB performs is not sufficient, therefore the new packages also contain upgrade components. At this point, NSO is still running with the old package definitions.
When the package reload is requested, the packages in the load path are copied to the state directory. The old state directory is scratched, so that packages that no longer exist in the load path are removed and new packages are added; unchanged packages are kept as-is. Automatic CDB schema upgrades are performed, and afterward, for every package that has an upgrade component and for which at least one YANG model was changed, this upgrade component is executed. The upgrade component is also executed for newly added packages that have one. Hence, the upgrade component must be programmed to handle both the newly added and the upgraded package scenarios.
So how should an upgrade component be implemented? In the previous section, we described how CDB can perform an automatic upgrade. But this means that CDB has deleted all values that are no longer part of the schema. Well, not quite yet. At the initial phase of the NSO startup procedure (called start-phase0), it is possible to use all the CDB Java API calls to access the data using the schema from the database as it looked before the automatic upgrade. That is, the complete database as it stood before the upgrade is still available to the application. It is under this condition that the upgrade components are executed and this is the reason why they are standalone programs and not executed by the NSO Java-VM as all other Java code for components are.
So the CDB Java API can be used to read data defined by the old YANG models. To write new config data Maapi has a specific method Maapi.attachInit(). This method attaches a Maapi instance to the upgrade transaction (or init transaction) during phase0. This special upgrade transaction is only available during phase0. NSO will commit this transaction when the phase0 is ended, so the user should only write config data (not attempt to commit, etc.).
We take a look at the example $NCS_DIR/examples.ncs/getting-started/developing-with-ncs/14-upgrade-service to see how an upgrade component can be implemented. Here the vlan package has an original version which is replaced with a version vlan_v2. See the README and play with examples to get acquainted.
The complete YANG model for the version 2 of the VLAN service looks as follows:
If we diff the changes between the two YANG models for the service, we see that in version 2, a new mandatory leaf has been added (see the example below (YANG Service diff)).
We need to create a Java class with a main() method that connects to CDB and MAAPI. This main will be executed as a separate program and all private and shared jars defined by the package will be in the classpath. To upgrade the VLAN service, the following Java code is needed:
Let's go through the code and point out the different aspects of writing an upgrade component. First (see the example below (Upgrade Init)), we open a socket and connect to NSO. We pass this socket to a Java API Cdb instance and call Cdb.setUseForCdbUpgrade(). This method prepares CDB sessions for reading old data from the CDB database and should only be called in this context. At the end of this first code fragment, we start the CDB upgrade session:
We then open and connect a second socket to NSO and pass this to a Java API Maapi instance. We call the Maapi.attachInit() method to get the init transaction (see the example below (Upgrade Get Transaction)).
Using the CdbSession instance, we read the number of service instances that exist in the CDB database. We will work on all these instances. If the number of instances is zero, the loop is not entered; this is a simple way to prevent the upgrade component from doing any harm when the package is added to NSO for the first time:
Via the CdbUpgradeSession, the old service data is retrieved:
The value for the new leaf introduced in the new version of the YANG model is calculated, and the value is set using Maapi and the init transaction:
At the end of the program, the sockets are closed. Important to note is that no commits or other handling of the init transaction is done. This is NSO's responsibility:
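Tying the fragments above together, the overall shape of such an upgrade component might be sketched like this (a sketch only: the API classes and methods are those named in the text, but exact signatures may differ, the service path is hypothetical, and error handling is omitted):

```java
import java.net.Socket;

import com.tailf.cdb.Cdb;
import com.tailf.cdb.CdbDBType;
import com.tailf.cdb.CdbUpgradeSession;
import com.tailf.conf.ConfPath;
import com.tailf.maapi.Maapi;

public class UpgradeService {
    public static void main(String[] args) throws Exception {
        // Connect to NSO and prepare CDB for reading pre-upgrade data.
        Socket cdbSock = new Socket("localhost", 4569); // default NSO IPC port
        Cdb cdb = new Cdb("upgrade-cdb", cdbSock);
        cdb.setUseForCdbUpgrade();
        CdbUpgradeSession cdbsess =
            cdb.startUpgradeSession(CdbDBType.CDB_RUNNING);

        // Attach Maapi to the init (upgrade) transaction of start-phase0.
        Socket maapiSock = new Socket("localhost", 4569);
        Maapi maapi = new Maapi(maapiSock);
        int th = maapi.attachInit();

        // Read old data with the CDB session; write new data with Maapi.
        // (The path below is hypothetical.)
        int n = cdbsess.getNumberOfInstances(new ConfPath("/services/vlan"));
        for (int i = 0; i < n; i++) {
            // Read old leafs with cdbsess.getElem(...), compute the new
            // value, then write it with maapi.setElem(th, ...) -- but never
            // commit: NSO commits the init transaction when phase0 ends.
        }

        // Close sockets; no commit or other transaction handling.
        cdbSock.close();
        maapiSock.close();
    }
}
```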
More complicated service package upgrade scenarios occur when a YANG model containing a service point is renamed, or moved and augmented to a new place in the NSO model. This is because not only does the complete config data set need to be recreated at the new position, but a service also has hidden private data that is part of the FASTMAP algorithm and necessary for the service to be valid. For this reason, a specific MAAPI method Maapi.ncsMovePrivateData() exists that takes both the old and the new positions for the service point and moves the service data between these positions.
In the 14-upgrade-service example, this more complicated scenario is illustrated with the tunnel package. The tunnel package YANG model maps the vlan_v2 package one-to-one but completely renames the model containers and all leaves:
To upgrade from the vlan_v2 to the tunnel package, a new upgrade component for the tunnel package has to be implemented:
We will walk through this code as well and point out the aspects that differ from the earlier, simpler scenario. First, we want to create the Cdb instance and get the CdbSession. However, in this scenario, the old namespace is removed and the Java API cannot retrieve it from NSO. To be able to use CDB to read and interpret the old YANG model, the old generated (and since removed) Java namespace classes have to be temporarily reinstalled. This is solved by adding a jar (Java archive) containing these removed namespaces to the private-jar directory of the tunnel package. The removed namespace can then be instantiated and passed to Cdb via an overloaded version of the Cdb.setUseForCdbUpgrade() method:
As an alternative to including the old namespace file in the package, a ConfNamespaceStub can be constructed for each old model that is to be accessed:
Since the old YANG model with the service point is removed, the new service container with the new service has to be created before any config data can be written to this position:
The complete config for the old service is read via the CdbUpgradeSession. Note in particular that the path oldPath is constructed as a ConfCdbUpgradePath. These are the paths that allow access to nodes that are not available in the current schema (i.e., nodes in deleted models).
The new data structure with the service data is created and written to NSO via Maapi and the init transaction:
Last the service private data is moved from the old position to the new position via the method Maapi.ncsMovePrivateData():

$ netconf-console --get -x '/devices/device[name="x0"]/config/yang-library'
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>x0</name>
<config>
<yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<module-set>
<name>common</name>
<module>
<name>a</name>
<namespace>urn:a</namespace>
</module>
<module>
<name>b</name>
<namespace>urn:b</namespace>
</module>
</module-set>
<schema>
<name>common</name>
<module-set>common</module-set>
</schema>
<datastore>
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
ds:running\
</name>
<schema>common</schema>
</datastore>
<datastore>
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
ds:intended\
</name>
<schema>common</schema>
</datastore>
<datastore>
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">\
ds:operational\
</name>
<schema>common</schema>
</datastore>
<content-id>f0071b28c1e586f2e8609da036379a58</content-id>
</yang-library>
</config>
</device>
</devices>
</data>
</rpc-reply>

Subsystem netconf /usr/local/bin/netconf-subsys

<interface>
<name>atm1</name>
<rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<error-type>application</error-type>
<error-tag>operation-failed</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en">Failed to talk to hardware</error-message>
<error-info>
<bad-element>mac-address</bad-element>
</error-info>
</rpc-error>
...
</interface><interface>
<!-- successfully retrieved list entry -->
<name>eth0</name>
<mtu>1500</mtu>
<!-- more leafs here -->
</interface>
<rpc-error xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<error-type>application</error-type>
<error-tag>operation-failed</error-tag>
<error-severity>error</error-severity>
<error-message xml:lang="en">Failed to talk to hardware</error-message>
<error-info>
<bad-element>interface</bad-element>
</error-info>
</rpc-error>

$ netconf-console --get-config -x /nacm/groups
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm">
<groups>
<group>
<name>admin</name>
<user-name>admin</user-name>
<user-name>private</user-name>
</group>
<group>
<name>oper</name>
<user-name>oper</user-name>
<user-name>public</user-name>
</group>
</groups>
</nacm>
</data>
</rpc-reply>

<notifications>
<event-streams>
<stream>
<name>device-notifications</name>
<description>Notifications received from devices</description>
<replay-support>true</replay-support>
<builtin-replay-store>
<enabled>true</enabled>
<dir>/var/log</dir>
<max-size>S10M</max-size>
<max-files>50</max-files>
</builtin-replay-store>
</stream>
<stream>
<name>debug</name>
<description>Debug notifications</description>
<replay-support>false</replay-support>
</stream>
</event-streams>
</notifications>

$ netconf-console --get -x /subscriptions
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<subscriptions xmlns="urn:ietf:params:xml:ns:yang:ietf-subscribed-notifications">
<subscription>
<id>3</id>
<stream-xpath-filter>/if:interfaces/interface[name='eth0']/enabled</stream-xpath-filter>
<stream>interface</stream>
<stop-time>2030-10-04T14:00:00+02:00</stop-time>
<encoding>encode-xml</encoding>
<receivers>
<receiver>
<name>127.0.0.1:57432</name>
<state>active</state>
</receiver>
</receivers>
</subscription>
</subscriptions>
</data>
</rpc-reply>

<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2020-06-10T10:00:00.00Z</eventTime>
<push-update xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
<id>1</id>
<datastore-contents>
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
<interface>
<name>eth0</name>
<oper-status>up</oper-status>
</interface>
</interfaces>
</datastore-contents>
</push-update>
</notification>

$ cat ./sync-from-ce1.xml
<action xmlns="http://tail-f.com/ns/netconf/actions/1.0">
<data>
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce1</name>
<sync-from/>
</device>
</devices>
</data>
</action>
$ netconf-console --rpc sync-from-ce1.xml
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce1</name>
<sync-from>
<result>true</result>
</sync-from>
</device>
</devices>
</data>
</rpc-reply>

http://tail-f.com/ns/netconf/actions/1.0

C                           S
|                           |
|    capability exchange    |
|-------------------------->|
|<------------------------->|
|                           |
|    <start-transaction>    |
|-------------------------->|
|<--------------------------|
|           <ok/>           |
|                           |
|       <edit-config>       |
|-------------------------->|
|<--------------------------|
|           <ok/>           |
|                           |
|   <prepare-transaction>   |
|-------------------------->|
|<--------------------------|
|           <ok/>           |
|                           |
|    <commit-transaction>   |
|-------------------------->|
|<--------------------------|
|           <ok/>           |
|                           |

http://tail-f.com/ns/netconf/transactions/1.0

<rpc message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<start-transaction xmlns="http://tail-f.com/ns/netconf/transactions/1.0">
<target>
<running/>
</target>
</start-transaction>
</rpc>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

<rpc message-id="103"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<prepare-transaction
xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
</rpc>
<rpc-reply message-id="103"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

<rpc message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<commit-transaction
xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
</rpc>
<rpc-reply message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

<rpc message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<abort-transaction
xmlns="http://tail-f.com/ns/netconf/transactions/1.0"/>
</rpc>
<rpc-reply message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

http://tail-f.com/ns/netconf/inactive/1.0

<rpc message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<with-inactive
xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
<config>
<top xmlns="http://example.com/schema/1.2/config">
<interface inactive="inactive">
<name>Ethernet0/0</name>
<mtu>1500</mtu>
</interface>
</top>
</config>
</edit-config>
</rpc>
<rpc-reply message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

<rpc message-id="102"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
<with-inactive
xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
</get-config>
</rpc>
<rpc-reply message-id="102"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
<top xmlns="http://example.com/schema/1.2/config">
<interface inactive="inactive">
<name>Ethernet0/0</name>
<mtu>1500</mtu>
</interface>
</top>
</data>
</rpc-reply>

<rpc message-id="103"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<get-config>
<source>
<running/>
</source>
</get-config>
</rpc>
<rpc-reply message-id="103"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<data>
</data>
</rpc-reply>

<rpc message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<with-inactive
xmlns="http://tail-f.com/ns/netconf/inactive/1.0"/>
<config>
<top xmlns="http://example.com/schema/1.2/config">
<interface active="active">
<name>Ethernet0/0</name>
</interface>
</top>
</config>
</edit-config>
</rpc>
<rpc-reply message-id="104"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<ok/>
</rpc-reply>

http://tail-f.com/ns/netconf/with-rollback-id

o edit-config
o copy-config
o commit
o commit-transaction

traceparent = <version>-<trace-id>-<parent-id>-<flags>

tracestate = key1=value1,key2=value2

<rpc message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
xmlns:w3ctc="urn:ietf:params:xml:ns:netconf:w3ctc:1.0"
w3ctc:traceparent="00-100456789abcde10123456789abcde10-001006789abcdef0-01"
w3ctc:tracestate="key1=value1,key2=value2">
<edit-config>
<target>
<running/>
</target>
<config>
<interfaces xmlns="http://example.com/ns/if">
<interface>
<name>eth0</name>
...
</interface>
</interfaces>
</config>
</edit-config>
</rpc>

$ find . -name tailf-netconf-query.yang

container x {
list host {
key number;
leaf number {
type int32;
}
leaf enabled {
type boolean;
}
leaf name {
type string;
}
leaf address {
type inet:ip-address;
}
}
}

<start-query xmlns="http://tail-f.com/ns/netconf/query">
<foreach>
/x/host[enabled = 'true']
</foreach>
<select>
<label>Host name</label>
<expression>name</expression>
<result-type>string</result-type>
</select>
<select>
<expression>address</expression>
<result-type>string</result-type>
</select>
<sort-by>name</sort-by>
<limit>100</limit>
<offset>1</offset>
</start-query>

<foreach>
/x/host[enabled = 'true']
</foreach>

<select>
<label>Host name</label>
<expression>name</expression>
<result-type>string</result-type>
</select>
<select>
<expression>address</expression>
<result-type>string</result-type>
</select>

<sort-by>name</sort-by>

<limit>100</limit>

<offset>1</offset>

$ netconf-console --rpc query.xml

<start-query-result>
<query-handle>12345</query-handle>
</start-query-result>

<fetch-query-result xmlns="http://tail-f.com/ns/netconf/query">
<query-handle>12345</query-handle>
</fetch-query-result>

<query-result xmlns="http://tail-f.com/ns/netconf/query">
<result>
<select>
<label>Host name</label>
<value>One</value>
</select>
<select>
<value>10.0.0.1</value>
</select>
</result>
<result>
<select>
<label>Host name</label>
<value>Three</value>
</select>
<select>
<value>10.0.0.1</value>
</select>
</result>
</query-result>

<query-result xmlns="http://tail-f.com/ns/netconf/query">
</query-result>

<immediate-query xmlns="http://tail-f.com/ns/netconf/query">
<foreach>
/x/host[enabled = 'true']
</foreach>
<select>
<label>Host name</label>
<expression>name</expression>
<result-type>string</result-type>
</select>
<select>
<expression>address</expression>
<result-type>string</result-type>
</select>
<sort-by>name</sort-by>
<timeout>600</timeout>
</immediate-query>

<query-result xmlns="http://tail-f.com/ns/netconf/query">
<result>
<select>
<label>Host name</label>
<value>One</value>
</select>
<select>
<value>10.0.0.1</value>
</select>
</result>
<result>
<select>
<label>Host name</label>
<value>Three</value>
</select>
<select>
<value>10.0.0.3</value>
</select>
</result>
</query-result>

<reset-query xmlns="http://tail-f.com/ns/netconf/query">
<query-handle>12345</query-handle>
<offset>42</offset>
</reset-query>

<stop-query xmlns="http://tail-f.com/ns/netconf/query">
<query-handle>12345</query-handle>
</stop-query>

<rpc message-id="101"
xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
<edit-config>
<target>
<running/>
</target>
<config>
<interfaces xmlns="http://example.com/ns/if">
<interface annotation="this is the management interface"
tags=" important ethernet ">
<name>eth0</name>
...
</interface>
</interfaces>
</config>
</edit-config>
</rpc>

<?xml version="1.0" encoding="UTF-8"?>
<xs:schema targetNamespace="http://tail-f.com/ns/netconf/params/1.1"
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xml:lang="en">
<xs:annotation>
<xs:documentation>
Tail-f's namespace for additional error information.
This namespace is used to define elements which are included
in the 'error-info' element.
The following are the app-tags used by the NETCONF agent:
o not-writable
Means that an edit-config or copy-config operation was
attempted on an element which is read-only
(i.e. non-configuration data).
o missing-element-in-choice
Like the standard error missing-element, but generated when
one of a set of elements in a choice is missing.
o pending-changes
Means that a lock operation was attempted on the candidate
database, and the candidate database has uncommitted
changes. This is not allowed according to the protocol
specification.
o url-open-failed
Means that the URL given was correct, but that it could not
be opened. This can e.g. be due to a missing local file, or
bad ftp credentials. An error message string is provided in
the <error-message> element.
o url-write-failed
Means that the URL given was opened, but write failed. This
could e.g. be due to lack of disk space. An error message
string is provided in the <error-message> element.
o bad-state
Means that an rpc is received when the session is in a state
which does not accept this rpc. An example is
<prepare-transaction> before <start-transaction>
</xs:documentation>
</xs:annotation>
<xs:element name="bad-keyref">
<xs:annotation>
<xs:documentation>
This element will be present in the 'error-info' container when
'error-app-tag' is "instance-required".
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="bad-element" type="xs:string">
<xs:annotation>
<xs:documentation>
Contains an absolute XPath expression pointing to the element
whose value refers to a non-existing instance.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="missing-element" type="xs:string">
<xs:annotation>
<xs:documentation>
Contains an absolute XPath expression pointing to the missing
element referred to by 'bad-element'.
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:element name="bad-instance-count">
<xs:annotation>
<xs:documentation>
This element will be present in the 'error-info' container when
'error-app-tag' is "too-few-elements" or "too-many-elements".
</xs:documentation>
</xs:annotation>
<xs:complexType>
<xs:sequence>
<xs:element name="bad-element" type="xs:string">
<xs:annotation>
<xs:documentation>
Contains an absolute XPath expression pointing to an
element which exists in too few or too many instances.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="instances" type="xs:unsignedInt">
<xs:annotation>
<xs:documentation>
Contains the number of existing instances of the element
referred to by 'bad-element'.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:choice>
<xs:element name="min-instances" type="xs:unsignedInt">
<xs:annotation>
<xs:documentation>
Contains the minimum number of instances that must
exist in order for the configuration to be consistent.
This element is present only if 'app-tag' is
'too-few-elems'.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="max-instances" type="xs:unsignedInt">
<xs:annotation>
<xs:documentation>
Contains the maximum number of instances that can
exist in order for the configuration to be consistent.
This element is present only if 'app-tag' is
'too-many-elems'.
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:choice>
</xs:sequence>
</xs:complexType>
</xs:element>
<xs:attribute name="annotation" type="xs:string">
<xs:annotation>
<xs:documentation>
This attribute can be present on any configuration data node. It
acts as a comment for the node. The annotation does not affect the
underlying configuration data.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="tags" type="xs:string">
<xs:annotation>
<xs:documentation>
This attribute can be present on any configuration data node. It
is a space-separated string of tags for the node. The tags of a
node do not affect the underlying configuration data, but can
be used by a user for data organization and data filtering.
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:schema>

<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2020-06-10T10:05:00.00Z</eventTime>
<push-change-update
xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-push">
<id>2</id>
<datastore-changes>
<yang-patch>
<patch-id>s2-p4</patch-id>
<edit>
<edit-id>edit1</edit-id>
<operation>merge</operation>
<target>/ietf-interfaces:interfaces</target>
<value>
<interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
<interface>
<name>eth0</name>
<oper-status>down</oper-status>
</interface>
</interfaces>
</value>
</edit>
</yang-patch>
</datastore-changes>
</push-change-update>
</notification>

admin@node1# high-availability read-only mode true

admin@ncs# high-availability read-only mode true

admin@ncs# high-availability read-only mode false

$ sudo setcap 'cap_net_bind_service=+ep' /usr/bin/gobgpd

$ sudo chown root /usr/bin/gobgpd
$ sudo chmod u+s /usr/bin/gobgpd

admin@paris:~$ ip route get 192.168.123.22

<ha-raft>
<!-- ... -->
<listen>
<node-address>198.51.100.10</node-address>
</listen>
<ssl>
<ca-cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/myca.crt</ca-cert-file>
<cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/node-100-10.crt</cert-file>
<key-file>${NCS_CONFIG_DIR}/dist/ssl/cert/node-100-10.key</key-file>
</ssl>
</ha-raft>

$ mkdir raft-ca-lower-west
$ cd raft-ca-lower-west
$ cp $NCS_DIR/examples.ncs/high-availability/raft-cluster/gen_tls_certs.sh .
$ openssl version
$ date

$ ./gen_tls_certs.sh node1.example.org node2.example.org node3.example.org

$ ./gen_tls_certs.sh -a 192.0.2.1 192.0.2.2 192.0.2.3

$ openssl verify -CAfile ssl/certs/ca.crt ssl/certs/node1.example.org.crt

<ha-raft>
<enabled>true</enabled>
<cluster-name>sherwood</cluster-name>
<listen>
<node-address>ash.example.org</node-address>
</listen>
<ssl>
<ca-cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/myca.crt</ca-cert-file>
<cert-file>${NCS_CONFIG_DIR}/dist/ssl/cert/ash.crt</cert-file>
<key-file>${NCS_CONFIG_DIR}/dist/ssl/cert/ash.key</key-file>
</ssl>
<seed-nodes>
<seed-node>birch.example.org</seed-node>
</seed-nodes>
</ha-raft>

admin@ncs# request ha-raft read-only-mode true
admin@ncs# ha-raft create-cluster member [ birch.example.org cedar.example.org ]
admin@ncs# show ha-raft
ha-raft status role leader
ha-raft status leader ash.example.org
ha-raft status member [ ash.example.org birch.example.org cedar.example.org ]
ha-raft status connected-node [ birch.example.org cedar.example.org ]
ha-raft status local-node ash.example.org
...
admin@ncs# request ha-raft read-only-mode false

admin@ncs# show ha-raft status member
ha-raft status member [ ash.example.org birch.example.org cedar.example.org ]
admin@ncs# ha-raft adjust-membership remove-node birch.example.org
admin@ncs# show ha-raft status member
ha-raft status member [ ash.example.org cedar.example.org ]
admin@ncs# ha-raft adjust-membership add-node dollartree.example.org
admin@ncs# show ha-raft status member
ha-raft status member [ ash.example.org cedar.example.org dollartree.example.org ]

alarms alarm-list alarm ncs ha-primary-down /high-availability/ha-node[id='paris']
is-cleared false
last-status-change 2022-05-30T10:02:45.706947+00:00
last-perceived-severity critical
last-alarm-text "Lost connection to primary due to: Primary closed connection"
status-change 2022-05-30T10:02:45.706947+00:00
received-time 2022-05-30T10:02:45.706947+00:00
perceived-severity critical
alarm-text "Lost connection to primary due to: Primary closed connection"alarms alarm-list alarm ncs ha-secondary-down /high-availability/ha-node[id='london'] ""
is-cleared false
last-status-change 2022-05-30T10:04:33.231808+00:00
last-perceived-severity critical
last-alarm-text "Lost connection to secondary"
status-change 2022-05-30T10:04:33.231808+00:00
received-time 2022-05-30T10:04:33.231808+00:00
perceived-severity critical
alarm-text "Lost connection to secondary"admin@ncs(config)# hcc enabled
admin@ncs(config)# hcc vip 192.168.123.22
admin@ncs(config)# hcc vip 2001:db8::10
admin@ncs(config)# commit

admin@ncs# show hcc
NODE           BGPD     BGPD
ID      PID    STATUS   ADDRESS        STATE        CONNECTED
-------------------------------------------------------------
london  -      -        192.168.30.2   -            -
paris   827    running  192.168.31.2   ESTABLISHED  true

admin@ncs(config)# hcc bgp node paris enabled
admin@ncs(config)# hcc bgp node paris as 64512
admin@ncs(config)# hcc bgp node paris router-id 192.168.31.99
admin@ncs(config)# hcc bgp node paris neighbor 192.168.31.2 as 64514
admin@ncs(config)# ... repeated for each neighbor if more than one ...
... repeated for each node ...
admin@ncs(config)# commit

admin@ncs# show hcc dns
hcc dns status time 2023-10-20T23:16:33.472522+00:00
hcc dns status exit-code 0

admin@ncs# show hcc dns
hcc dns status time 2023-10-20T23:36:33.372631+00:00
hcc dns status exit-code 2
hcc dns status error-message "; Communication with 10.0.0.10#53 failed: timed out"admin@ncs(config)# hcc dns enabled
admin@ncs(config)# hcc dns fqdn example.com
admin@ncs(config)# hcc dns ttl 120
admin@ncs(config)# hcc dns key-file /home/cisco/DNS-testing/good.key
admin@ncs(config)# hcc dns server 10.0.0.10
admin@ncs(config)# hcc dns port 53
admin@ncs(config)# hcc dns zone zone1.nso
admin@ncs(config)# hcc dns member node-1 ip-address [ 10.0.0.20 ::10 ]
admin@ncs(config)# hcc dns member node-1 location SanJose
admin@ncs(config)# hcc dns member node-2 ip-address [ 10.0.0.30 ::20 ]
admin@ncs(config)# hcc dns member node-2 location NewYork
admin@ncs(config)# commit

admin@ncs(config)# hcc enabled
admin@ncs(config)# hcc vip 192.168.23.122
admin@ncs(config)# commit

root@paris:/var/log/ncs# ip address list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:fa:61:99 brd ff:ff:ff:ff:ff:ff
inet 192.168.23.99/24 brd 192.168.23.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet 192.168.23.122/32 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fefa:6199/64 scope link
valid_lft forever preferred_lft forever

root@london:~# ip address list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
link/ether 52:54:00:fa:61:98 brd ff:ff:ff:ff:ff:ff
inet 192.168.23.98/24 brd 192.168.23.255 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fefa:6198/64 scope link
valid_lft forever preferred_lft forever

admin@ncs(config)# hcc bgp node paris enabled
admin@ncs(config)# hcc bgp node paris as 64512
admin@ncs(config)# hcc bgp node paris router-id 192.168.31.99
admin@ncs(config)# hcc bgp node paris neighbor 192.168.31.2 as 64514
admin@ncs(config)# commit

admin@ncs(config)# hcc bgp node london enabled
admin@ncs(config)# hcc bgp node london as 64513
admin@ncs(config)# hcc bgp node london router-id 192.168.30.98
admin@ncs(config)# hcc bgp node london neighbor 192.168.30.2 as 64514
admin@ncs(config)# commit

admin@ncs# show hcc
          BGPD     BGPD
NODE ID   PID      STATUS   ADDRESS        STATE        CONNECTED
----------------------------------------------------------------
london    -        -        192.168.30.2   -            -
paris     2486     running  192.168.31.2   ESTABLISHED  true

admin@ncs# show hcc
          BGPD     BGPD
NODE ID   PID      STATUS   ADDRESS        STATE        CONNECTED
----------------------------------------------------------------
london    494      running  192.168.30.2   ESTABLISHED  true
paris     -        -        192.168.31.2   -            -

admin@ncs# show ip bgp
...
   Network            Next Hop         Metric LocPrf Weight Path
*> 192.168.23.122/32
                      192.168.31.99                 0 64513 ?

admin@ncs# hcc dns update

admin@ncs# show hcc dns
hcc dns status time 2023-10-10T20:47:31.733661+00:00
hcc dns status exit-code 0
hcc dns status error-message ""cisco@node-2:~$ nslookup example.com
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: example.com
Address: 10.0.0.20
Name: example.com
Address: ::10

admin@ncs(config)# hcc dns get-node-location
location SanJose

$ make build start
Setting up run directory for nso-node1
... make output omitted ...
Waiting for n2 to connect: .
$ ssh -p 2024 admin@localhost
admin@localhost's password: admin
admin connected from 127.0.0.1 using ssh on localhost
admin@n1> switch cli
admin@n1# show high-availability
high-availability enabled
high-availability status mode primary
high-availability status current-id n1
high-availability status assigned-role primary
high-availability status read-only-mode false
ID ADDRESS
---------------
n2 127.0.0.1

admin@n1# high-availability disable
result NSO Built-in HA disabled
admin@n1# exit
Connection to localhost closed.

$ ssh -p 2024 admin@localhost
admin@localhost's password: admin
admin connected from 127.0.0.1 using ssh on localhost
admin@n2> switch cli
admin@n2# show high-availability
high-availability enabled
high-availability status mode primary
high-availability status current-id n2
high-availability status assigned-role primary
high-availability status read-only-mode false

<netconf-north-bound>
<transport>
<ssh>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>830</port>
<ha-primary-listen>
<ip>0.0.0.0</ip>
<port>1830</port>
</ha-primary-listen>
<ha-primary-listen>
<ip>::</ip>
<port>1830</port>
</ha-primary-listen>
</ssh>
</transport>
</netconf-north-bound>

<ha>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>4570</port>
<extra-listen>
<ip>::</ip>
<port>4569</port>
</extra-listen>
<tick-timeout>PT20S</tick-timeout>
</ha>

$ yanger -f tree --tree-depth=3 tailf-ncs.yang
module: tailf-ncs
+--rw ssh
| +--rw host-key-verification? ssh-host-key-verification-level
| +--rw private-key* [name]
| +--rw name string
| +--rw key-data ssh-private-key
| +--rw passphrase? tailf:aes-256-cfb-128-encrypted-string
+--rw cluster
| +--rw remote-node* [name]
| | +--rw name node-name
| | +--rw address? inet:host
| | +--rw port? inet:port-number
| | +--rw ssh
| | +--rw authgroup -> /cluster/authgroup/name
| | +--rw trace? trace-flag
| | +--rw username? string
| | +--rw notifications
| | +--ro device* [name]
| +--rw authgroup* [name]
| | +--rw name string
| | +--rw default-map!
| | +--rw umap* [local-user]
| +--rw commit-queue
| | +--rw enabled? boolean
| +--ro enabled? boolean
| +--ro connection*
| +--ro remote-node? -> /cluster/remote-node/name
| +--ro address? inet:ip-address
| +--ro port? inet:port-number
| +--ro channels? uint32
| +--ro local-user? string
| +--ro remote-user? string
| +--ro status? enumeration
| +--ro trace? enumeration
...

module l3vpn {
namespace "http://com/example/l3vpn";
prefix l3vpn;
...
container topology {
list role {
key "role";
tailf:cli-compact-syntax;
leaf role {
type enumeration {
enum ce;
enum pe;
enum p;
}
}
leaf-list device {
type leafref {
path "/ncs:devices/ncs:device/ncs:name";
}
}
}
list connection {
key "name";
leaf name {
type string;
}
container endpoint-1 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
container endpoint-2 {
tailf:cli-compact-syntax;
uses connection-grouping;
}
leaf link-vlan {
type uint32;
}
}
}

<!-- Where the database (and init XML) files are kept -->
<cdb>
<db-dir>./ncs-cdb</db-dir>
</cdb>

<config xmlns="http://tail-f.com/ns/config/1.0">
<topology xmlns="http://com/example/l3vpn">
<role>
<role>ce</role>
<device>ce0</device>
<device>ce1</device>
<device>ce2</device>
...
</role>
<role>
<role>pe</role>
<device>pe0</device>
<device>pe1</device>
<device>pe2</device>
<device>pe3</device>
</role>
...
<connection>
<name>c0</name>
<endpoint-1>
<device>ce0</device>
<interface>GigabitEthernet0/8</interface>
<ip-address>192.168.1.1/30</ip-address>
</endpoint-1>
<endpoint-2>
<device>pe0</device>
<interface>GigabitEthernet0/0/0/3</interface>
<ip-address>192.168.1.2/30</ip-address>
</endpoint-2>
<link-vlan>88</link-vlan>
</connection>
<connection>
<name>c1</name>
...

<config xmlns="http://tail-f.com/ns/config/1.0">
...
</config>

module test {
namespace "http://example.com/test";
prefix t;
import tailf-common {
prefix tailf;
}
description "This model is used as a simple example model
illustrating some aspects of CDB subscriptions
and CDB operational data";
revision 2012-06-26 {
description "Initial revision.";
}
container test {
list config-item {
key ckey;
leaf ckey {
type string;
}
leaf i {
type int32;
}
}
list stats-item {
config false;
tailf:cdb-oper;
key skey;
leaf skey {
type string;
}
leaf i {
type int32;
}
container inner {
leaf l {
type string;
}
}
}
}
}

public class PlainCdbSub implements ApplicationComponent {
private static final Logger LOGGER
= LogManager.getLogger(PlainCdbSub.class);
@Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
qualifier = "plain")
private Cdb cdb;
private CdbSubscription sub;
private int subId;
private boolean requestStop;
public PlainCdbSub() {
}
public void init() {
try {
LOGGER.info(" init cdb subscriber ");
sub = new CdbSubscription(cdb);
String str = "/devices/device{ex0}/config";
subId = sub.subscribe(1, new Ncs(), str);
sub.subscribeDone();
LOGGER.info("subscribeDone");
requestStop = false;
} catch (Exception e) {
throw new RuntimeException("FAIL in init", e);
}
}
public void run() {
try {
while (!requestStop) {
try {
sub.read();
sub.diffIterate(subId, new Iter());
} finally {
sub.sync(CdbSubscriptionSyncType.DONE_SOCKET);
}
}
} catch (ConfException e) {
if (e.getErrorCode() == ErrorCode.ERR_EOF) {
// Triggered by finish method
// if we throw further NCS JVM will try to restart
// the package
LOGGER.warn(" Socket Closed!");
} else {
throw new RuntimeException("FAIL in run", e);
}
} catch (Exception e) {
LOGGER.warn("Exception:" + e.getMessage());
throw new RuntimeException("FAIL in run", e);
} finally {
requestStop = false;
LOGGER.warn(" run end ");
}
}
public void finish() {
requestStop = true;
LOGGER.warn(" PlainSub in finish () =>");
try {
// ResourceManager will close the resource (cdb) used by this
// instance that triggers ConfException with ErrorCode.ERR_EOF
// in run method
ResourceManager.unregisterResources(this);
} catch (Exception e) {
throw new RuntimeException("FAIL in finish", e);
}
LOGGER.warn(" PlainSub in finish () => ok");
}
private class Iter implements CdbDiffIterate {
public DiffIterateResultFlag iterate(ConfObject[] kp,
DiffIterateOperFlag op,
ConfObject oldValue,
ConfObject newValue,
Object state) {
try {
String kpString = Conf.kpToString(kp);
LOGGER.info("diffIterate: kp= " + kpString + ", OP=" + op
+ ", old_value=" + oldValue + ", new_value="
+ newValue);
return DiffIterateResultFlag.ITER_RECURSE;
} catch (Exception e) {
return DiffIterateResultFlag.ITER_CONTINUE;
}
}
}
}

@Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
          qualifier = "plain")
private Cdb cdb;

public void init() {
try {
LOGGER.info(" init cdb subscriber ");
sub = new CdbSubscription(cdb);
String str = "/devices/device{ex0}/config";
subId = sub.subscribe(1, new Ncs(), str);
sub.subscribeDone();
LOGGER.info("subscribeDone");
requestStop = false;
} catch (Exception e) {
throw new RuntimeException("FAIL in init", e);
}
}

public void run() {
try {
while (!requestStop) {
try {
sub.read();
sub.diffIterate(subId, new Iter());
} finally {
sub.sync(CdbSubscriptionSyncType.DONE_SOCKET);
}
}
} catch (ConfException e) {
if (e.getErrorCode() == ErrorCode.ERR_EOF) {
// Triggered by finish method
// if we throw further NCS JVM will try to restart
// the package
LOGGER.warn(" Socket Closed!");
} else {
throw new RuntimeException("FAIL in run", e);
}
} catch (Exception e) {
LOGGER.warn("Exception:" + e.getMessage());
throw new RuntimeException("FAIL in run", e);
} finally {
requestStop = false;
LOGGER.warn(" run end ");
}
}

private class Iter implements CdbDiffIterate {
public DiffIterateResultFlag iterate(ConfObject[] kp,
DiffIterateOperFlag op,
ConfObject oldValue,
ConfObject newValue,
Object state) {
try {
String kpString = Conf.kpToString(kp);
LOGGER.info("diffIterate: kp= " + kpString + ", OP=" + op
+ ", old_value=" + oldValue + ", new_value="
+ newValue);
return DiffIterateResultFlag.ITER_RECURSE;
} catch (Exception e) {
return DiffIterateResultFlag.ITER_CONTINUE;
}
}
}

public void finish() {
requestStop = true;
LOGGER.warn(" PlainSub in finish () =>");
try {
// ResourceManager will close the resource (cdb) used by this
// instance that triggers ConfException with ErrorCode.ERR_EOF
// in run method
ResourceManager.unregisterResources(this);
} catch (Exception e) {
throw new RuntimeException("FAIL in finish", e);
}
LOGGER.warn(" PlainSub in finish () => ok");
}

$ make clean all
$ ncs-netsim start
DEVICE ex0 OK STARTED
DEVICE ex1 OK STARTED
DEVICE ex2 OK STARTED
$ ncs

$ ncs_cli -u admin
admin connected from 127.0.0.1 using console on ncs
admin@ncs# config exclusive
Entering configuration mode exclusive
Warning: uncommitted changes will be discarded on exit
admin@ncs(config)# devices sync-from
sync-result {
device ex0
result true
}
sync-result {
device ex1
result true
}
sync-result {
device ex2
result true
}
admin@ncs(config)# devices device ex0 config r:sys syslog server 4.5.6.7 enabled
admin@ncs(config-server-4.5.6.7)# commit
Commit complete.
admin@ncs(config-server-4.5.6.7)# top
admin@ncs(config)# exit
admin@ncs# show devices device ex0 config r:sys syslog
NAME
----------
4.5.6.7
10.3.4.5

<INFO> 05-Feb-2015::13:24:55,760 PlainCdbSub$Iter
(cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7},
OP=MOP_CREATED, old_value=null, new_value=null
<INFO> 05-Feb-2015::13:24:55,761 PlainCdbSub$Iter
(cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7}/name,
OP=MOP_VALUE_SET, old_value=null, new_value=4.5.6.7
<INFO> 05-Feb-2015::13:24:55,762 PlainCdbSub$Iter
(cdb-examples:Plain CDB Subscriber) -Run-4: - diffIterate:
kp= /ncs:devices/device{ex0}/config/r:sys/syslog/server{4.5.6.7}/enabled,
OP=MOP_VALUE_SET, old_value=null, new_value=true

$ make clean all
$ ncs-netsim start
DEVICE ex0 OK STARTED
DEVICE ex1 OK STARTED
DEVICE ex2 OK STARTED
$ ncs
$ ncs_cli -u admin
admin@ncs# devices sync-from suppress-positive-result
admin@ncs# config
admin@ncs(config)# no devices device ex* config r:sys interfaces
admin@ncs(config)# devices device ex0 config r:sys interfaces \
> interface en0 mac 3c:07:54:71:13:09 mtu 1500 duplex half unit 0 family inet \
> address 192.168.1.115 broadcast 192.168.1.255 prefix-length 32
admin@ncs(config-address-192.168.1.115)# commit
Commit complete.
admin@ncs(config-address-192.168.1.115)# top
admin@ncs(config)# exit

...
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex0}
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - INTERFACE
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - name: {en0}
<INFO> 05-Feb-2015::16:10:23,346 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - description:null
<INFO> 05-Feb-2015::16:10:23,350 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - speed:null
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - duplex:half
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - mtu:1500
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - mac:<<60,7,84,113,19,9>>
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - UNIT
<INFO> 05-Feb-2015::16:10:23,354 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - name: {0}
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - descripton: null
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - vlan-id:null
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - ADDRESS-FAMILY
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - key: {192.168.1.115}
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - prefixLength: 32
<INFO> 05-Feb-2015::16:10:23,355 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - broadCast:192.168.1.255
<INFO> 05-Feb-2015::16:10:23,356 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex1}
<INFO> 05-Feb-2015::16:10:23,356 ConfigCdbSub
(cdb-examples:CdbCfgSubscriber)-Run-1: - Device {ex2}

list stats-item {
config false;
tailf:cdb-oper;
key skey;
leaf skey {
type string;
}
leaf i {
type int32;
}
container inner {
leaf l {
type string;
}
}
}

public static void createEntry(String key)
throws IOException, ConfException {
Socket socket = new Socket("127.0.0.1", Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("system", InetAddress.getByName(null),
"system", new String[]{},
MaapiUserSessionFlag.PROTO_TCP);
NavuContext operContext = new NavuContext(maapi);
int th = operContext.startOperationalTrans(Conf.MODE_READ_WRITE);
NavuContainer mroot = new NavuContainer(operContext);
LOGGER.debug("ROOT --> " + mroot);
ConfNamespace ns = new test();
NavuContainer testModule = mroot.container(ns.hash());
NavuList list = testModule.container("test").list("stats-item");
LOGGER.debug("LIST: --> " + list);
List<ConfXMLParam> param = new ArrayList<>();
param.add(new ConfXMLParamValue(ns, "skey", new ConfBuf(key)));
param.add(new ConfXMLParamValue(ns, "i",
new ConfInt32(key.hashCode())));
param.add(new ConfXMLParamStart(ns, "inner"));
param.add(new ConfXMLParamValue(ns, "l", new ConfBuf("test-" + key)));
param.add(new ConfXMLParamStop(ns, "inner"));
list.setValues(param.toArray(new ConfXMLParam[0]));
maapi.applyTrans(th, false);
maapi.finishTrans(th);
maapi.endUserSession();
socket.close();
}

public static void deleteEntry(String key)
throws IOException, ConfException {
Socket s = new Socket("127.0.0.1", Conf.NCS_PORT);
Cdb c = new Cdb("writer", s);
CdbSession sess = c.startSession(CdbDBType.CDB_OPERATIONAL,
EnumSet.of(CdbLockType.LOCK_REQUEST,
CdbLockType.LOCK_WAIT));
ConfPath path = new ConfPath("/t:test/stats-item{%x}",
new ConfKey(new ConfBuf(key)));
sess.delete(path);
sess.endSession();
s.close();
}

public class OperCdbSub implements ApplicationComponent, CdbDiffIterate {
private static final Logger LOGGER = LogManager.getLogger(OperCdbSub.class);
// let our ResourceManager inject Cdb sockets to us
// no explicit creation and opening of sockets needed
@Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
qualifier = "sub-sock")
private Cdb cdbSub;
@Resource(type = ResourceType.CDB, scope = Scope.INSTANCE,
qualifier = "data-sock")
private Cdb cdbData;
private boolean requestStop;
private int point;
private CdbSubscription cdbSubscription;
public OperCdbSub() {
}
public void init() {
LOGGER.info(" init oper subscriber ");
try {
cdbSubscription = cdbSub.newSubscription();
String path = "/t:test/stats-item";
point = cdbSubscription.subscribe(
CdbSubscriptionType.SUB_OPERATIONAL,
1, test.hash, path);
cdbSubscription.subscribeDone();
LOGGER.info("subscribeDone");
requestStop = false;
} catch (Exception e) {
LOGGER.error("Fail in init", e);
}
}
public void run() {
try {
while (!requestStop) {
try {
int[] points = cdbSubscription.read();
CdbSession cdbSession
= cdbData.startSession(CdbDBType.CDB_OPERATIONAL);
EnumSet<DiffIterateFlags> diffFlags
= EnumSet.of(DiffIterateFlags.ITER_WANT_PREV);
cdbSubscription.diffIterate(points[0], this, diffFlags,
cdbSession);
cdbSession.endSession();
} finally {
cdbSubscription.sync(
CdbSubscriptionSyncType.DONE_OPERATIONAL);
}
}
} catch (Exception e) {
LOGGER.error("Fail in run shouldrun", e);
}
requestStop = false;
}
public void finish() {
requestStop = true;
try {
ResourceManager.unregisterResources(this);
} catch (Exception e) {
LOGGER.error("Fail in finish", e);
}
}
@Override
public DiffIterateResultFlag iterate(ConfObject[] kp,
DiffIterateOperFlag op,
ConfObject oldValue,
ConfObject newValue,
Object initstate) {
LOGGER.info(op + " " + Arrays.toString(kp) + " value: " + newValue);
switch (op) {
case MOP_DELETED:
break;
case MOP_CREATED:
case MOP_MODIFIED: {
break;
}
default:
break;
}
return DiffIterateResultFlag.ITER_RECURSE;
}
}

$ make clean all
$ ncs
$ ./setoper eth0
$ ./setoper ethX
$ ./deloper ethX
$ ncs_cli -u admin
admin@ncs# show test
SKEY  I        L
--------------------------
eth0  3123639  test-eth0

<INFO> 05-Feb-2015::16:27:46,583 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_CREATED [{eth0}, t:stats-item, t:test] value: null
<INFO> 05-Feb-2015::16:27:46,584 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:skey, {eth0}, t:stats-item, t:test] value: eth0
<INFO> 05-Feb-2015::16:27:46,584 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:l, t:inner, {eth0}, t:stats-item, t:test] value: test-eth0
<INFO> 05-Feb-2015::16:27:46,585 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:i, {eth0}, t:stats-item, t:test] value: 3123639
<INFO> 05-Feb-2015::16:27:52,429 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_CREATED [{ethX}, t:stats-item, t:test] value: null
<INFO> 05-Feb-2015::16:27:52,430 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:skey, {ethX}, t:stats-item, t:test] value: ethX
<INFO> 05-Feb-2015::16:27:52,430 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:l, t:inner, {ethX}, t:stats-item, t:test] value: test-ethX
<INFO> 05-Feb-2015::16:27:52,431 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_VALUE_SET [t:i, {ethX}, t:stats-item, t:test] value: 3123679
<INFO> 05-Feb-2015::16:28:00,669 OperCdbSub
(cdb-examples:OperSubscriber)-Run-0:
- MOP_DELETED [{ethX}, t:stats-item, t:test] value: null

<developer-log>
<enabled>true</enabled>
<file>
<name>./logs/devel.log</name>
<enabled>true</enabled>
</file>
<syslog>
<enabled>true</enabled>
</syslog>
</developer-log>
<developer-log-level>trace</developer-log-level>

module servers {
namespace "http://example.com/ns/servers";
prefix servers;
import ietf-inet-types {
prefix inet;
}
revision "2007-06-01" {
description "added protocol.";
}
revision "2006-09-01" {
description "Initial servers data model";
}
/* A set of server structures */
container servers {
list server {
key name;
max-elements 64;
leaf name {
type string;
}
leaf ip {
type inet:ip-address;
mandatory true;
}
leaf port {
type inet:port-number;
mandatory true;
}
leaf protocol {
type enumeration {
enum tcp;
enum udp;
}
mandatory true;
}
}
}
}

diff ../servers1.5.yang ../servers1.4.yang
9,12d8
< revision "2007-06-01" {
< description "added protocol.";
< }
<
31,37d26
< mandatory true;
< }
< leaf protocol {
< type enumeration {
< enum tcp;
< enum udp;
< }

<servers:servers xmlns:servers="http://example.com/ns/servers">
<servers:server>
<servers:name>www</servers:name>
<servers:ip>192.168.3.4</servers:ip>
<servers:port>88</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>www2</servers:name>
<servers:ip>192.168.3.5</servers:ip>
<servers:port>80</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>smtp</servers:name>
<servers:ip>192.168.3.4</servers:ip>
<servers:port>25</servers:port>
<servers:protocol>tcp</servers:protocol>
</servers:server>
<servers:server>
<servers:name>dns</servers:name>
<servers:ip>192.168.3.5</servers:ip>
<servers:port>53</servers:port>
<servers:protocol>udp</servers:protocol>
</servers:server>
</servers:servers>

<servers xmlns="http://example.com/ns/servers">
<server>
<name>dns</name>
<ip>192.168.3.5</ip>
<port>53</port>
<protocol>udp</protocol>
</server>
<server>
<name>www</name>
<ip>192.168.3.4</ip>
<port>88</port>
<protocol>tcp</protocol>
</server>
<server>
<name>www2</name>
<ip>192.168.3.5</ip>
<port>80</port>
<protocol>tcp</protocol>
</server>
</servers>

<ncs-package xmlns="http://tail-f.com/ns/ncs-packages">
....
<component>
<name>do-upgrade</name>
<upgrade>
<java-class-name>com.example.DoUpgrade</java-class-name>
</upgrade>
</component>
</ncs-package>

module vlan-service {
  namespace "http://example.com/vlan-service";
  prefix vl;

  import tailf-common {
    prefix tailf;
  }
  import tailf-ncs {
    prefix ncs;
  }

  description
    "This service creates a vlan iface/unit on all routers in our network.";

  revision 2013-08-30 {
    description
      "Added mandatory leaf global-id.";
  }
  revision 2013-01-08 {
    description
      "Initial revision.";
  }

  augment /ncs:services {
    list vlan {
      key name;

      leaf name {
        tailf:info "Unique service id";
        tailf:cli-allow-range;
        type string;
      }

      uses ncs:service-data;
      ncs:servicepoint vlanspnt_v2;

      tailf:action self-test {
        tailf:info "Perform self-test of the service";
        tailf:actionpoint vlanselftest;
        output {
          leaf success {
            type boolean;
          }
          leaf message {
            type string;
            description
              "Free format message.";
          }
        }
      }

      leaf global-id {
        type string;
        mandatory true;
      }
      leaf iface {
        type string;
        mandatory true;
      }
      leaf unit {
        type int32;
        mandatory true;
      }
      leaf vid {
        type uint16;
        mandatory true;
      }
      leaf description {
        type string;
        mandatory true;
      }
    }
  }
}

$ diff vlan/src/yang/vlan-service.yang \
vlan_v2/src/yang/vlan-service.yang
16a18,22
>   revision 2013-08-30 {
>     description
>       "Added mandatory leaf global-id.";
>   }
>
48a55,58
>       leaf global-id {
>         type string;
>         mandatory true;
>       }
68c78

public class UpgradeService {
    public UpgradeService() {
    }

    public static void main(String[] args) throws Exception {
        Socket s1 = new Socket("localhost", Conf.NCS_PORT);
        Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
        cdb.setUseForCdbUpgrade();
        CdbUpgradeSession cdbsess =
            cdb.startUpgradeSession(
                CdbDBType.CDB_RUNNING,
                EnumSet.of(CdbLockType.LOCK_SESSION,
                           CdbLockType.LOCK_WAIT));

        Socket s2 = new Socket("localhost", Conf.NCS_PORT);
        Maapi maapi = new Maapi(s2);
        int th = maapi.attachInit();

        int no = cdbsess.getNumberOfInstances("/services/vlan");
        for (int i = 0; i < no; i++) {
            Integer offset = Integer.valueOf(i);
            ConfBuf name = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
                                                    offset);
            ConfBuf iface = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/iface",
                                                     offset);
            ConfInt32 unit =
                (ConfInt32)cdbsess.getElem("/services/vlan[%d]/unit", offset);
            ConfUInt16 vid =
                (ConfUInt16)cdbsess.getElem("/services/vlan[%d]/vid", offset);
            String nameStr = name.toString();
            System.out.println("SERVICENAME = " + nameStr);
            String globId = String.format("%1$s-%2$s-%3$s", iface.toString(),
                                          unit.toString(), vid.toString());
            ConfPath gidpath = new ConfPath("/services/vlan{%s}/global-id",
                                            name.toString());
            maapi.setElem(th, new ConfBuf(globId), gidpath);
        }
        s1.close();
        s2.close();
    }
}

module tunnel-service {
  namespace "http://example.com/tunnel-service";
  prefix tl;

  import tailf-common {
    prefix tailf;
  }
  import tailf-ncs {
    prefix ncs;
  }

  description
    "This service creates a tunnel assembly on all routers in our network.";

  revision 2013-01-08 {
    description
      "Initial revision.";
  }

  augment /ncs:services {
    list tunnel {
      key tunnel-name;

      leaf tunnel-name {
        tailf:info "Unique service id";
        tailf:cli-allow-range;
        type string;
      }

      uses ncs:service-data;
      ncs:servicepoint tunnelspnt;

      tailf:action self-test {
        tailf:info "Perform self-test of the service";
        tailf:actionpoint tunnelselftest;
        output {
          leaf success {
            type boolean;
          }
          leaf message {
            type string;
            description
              "Free format message.";
          }
        }
      }

      leaf gid {
        type string;
        mandatory true;
      }
      leaf interface {
        type string;
        mandatory true;
      }
      leaf assembly {
        type int32;
        mandatory true;
      }
      leaf tunnel-id {
        type uint16;
        mandatory true;
      }
      leaf descr {
        type string;
        mandatory true;
      }
    }
  }
}

public class UpgradeService {
    public UpgradeService() {
    }

    public static void main(String[] args) throws Exception {
        ArrayList<ConfNamespace> nsList = new ArrayList<ConfNamespace>();
        nsList.add(new vlanService());

        Socket s1 = new Socket("localhost", Conf.NCS_PORT);
        Cdb cdb = new Cdb("cdb-upgrade-sock", s1);
        cdb.setUseForCdbUpgrade(nsList);
        CdbUpgradeSession cdbsess =
            cdb.startUpgradeSession(
                CdbDBType.CDB_RUNNING,
                EnumSet.of(CdbLockType.LOCK_SESSION,
                           CdbLockType.LOCK_WAIT));

        Socket s2 = new Socket("localhost", Conf.NCS_PORT);
        Maapi maapi = new Maapi(s2);
        int th = maapi.attachInit();

        int no = cdbsess.getNumberOfInstances("/services/vlan");
        for (int i = 0; i < no; i++) {
            ConfBuf name = (ConfBuf)cdbsess.getElem("/services/vlan[%d]/name",
                                                    Integer.valueOf(i));
            String nameStr = name.toString();
            System.out.println("SERVICENAME = " + nameStr);

            ConfCdbUpgradePath oldPath =
                new ConfCdbUpgradePath("/ncs:services/vl:vlan{%s}",
                                       name.toString());
            ConfPath newPath = new ConfPath("/services/tunnel{%x}", name);
            maapi.create(th, newPath);

            ConfXMLParam[] oldparams = new ConfXMLParam[] {
                new ConfXMLParamLeaf("vl", "global-id"),
                new ConfXMLParamLeaf("vl", "iface"),
                new ConfXMLParamLeaf("vl", "unit"),
                new ConfXMLParamLeaf("vl", "vid"),
                new ConfXMLParamLeaf("vl", "description"),
            };
            ConfXMLParam[] data =
                cdbsess.getValues(oldparams, oldPath);
            ConfXMLParam[] newparams = new ConfXMLParam[] {
                new ConfXMLParamValue("tl", "gid", data[0].getValue()),
                new ConfXMLParamValue("tl", "interface", data[1].getValue()),
                new ConfXMLParamValue("tl", "assembly", data[2].getValue()),
                new ConfXMLParamValue("tl", "tunnel-id", data[3].getValue()),
                new ConfXMLParamValue("tl", "descr", data[4].getValue()),
            };
            maapi.setValues(th, newparams, newPath);
            maapi.ncsMovePrivateData(th, oldPath, newPath);
        }
        s1.close();
        s2.close();
    }
}

nsList.add(new ConfNamespaceStub(500805321,
                                 "http://example.com/vlan-service",
                                 "http://example.com/vlan-service",
                                 "vl"));


false | true | primary | Attempt to join HA setup as secondary by querying for current primary. Retries will be attempted. Retry attempt interval is defined by /high-availability/settings/reconnect-interval.
false | true | secondary | Attempt to join HA setup as secondary by querying for current primary. Retries will be attempted. Retry attempt interval is defined by /high-availability/settings/reconnect-interval. If all retry attempts fail, assume none role.
false | true | none | Assume none role.
true | true | primary | Query HA setup once for a node with primary role. If found, attempt to connect as secondary to that node. If no current primary is found, assume primary role.
true | true | secondary | Attempt to join HA setup as secondary by querying for current primary. Retries will be attempted. Retry attempt interval is defined by /high-availability/settings/reconnect-interval. If all retry attempts fail, assume none role.
true | true | none | Assume none role.
false | false | - | Assume none role.
Installed with most Linux distributions.

arping | iputils or arping | optional | Installation recommended. Will reduce the propagation of changes to the virtual IP for layer-2 configurations.
gobgpd and gobgp | GoBGP 2.x | optional | Required for layer-3 configurations. gobgpd is started by the HCC package and advertises the virtual IP using BGP. gobgp is used to get advertised routes.
nsupdate | bind-tools or knot-dnsutils | optional | Required for layer-3 DNS update functionality and is used to submit Dynamic DNS Update requests to a name server.
enabled | boolean | If set to true, then an outgoing BGP connection to this neighbor is established by the HA group primary node.
server | inet:ip-address | DNS Server IP Address.
port | uint32 | DNS Server port, default 53.
zone | inet:host | DNS Zone to update on the server.
timeout | uint32 | Timeout for nsupdate command, default 300.
BGP-enabled router

vip4 | 192.168.23.122 | Primary node IPv4 VIP address


admin@node2# show ha-raft
ha-raft status role stalled
ha-raft status local-node node2.example.org
... output omitted ...

admin@node1# show ha-raft
ha-raft status role stalled
ha-raft status connected-node [ node2.example.org ]
ha-raft status local-node node1.example.org
... output omitted ...

admin@node1# ha-raft create-cluster member [ node2.example.org ]
admin@node1# show ha-raft
ha-raft status role leader
ha-raft status leader node1.example.org
ha-raft status member [ node1.example.org node2.example.org ]
ha-raft status connected-node [ node2.example.org ]
ha-raft status local-node node1.example.org
... output omitted ...

admin@node1# ha-raft adjust-membership add-node node3.example.org
admin@node1# show ha-raft status member
ha-raft status member [ node1.example.org node2.example.org node3.example.org ]

$ echo "admin ALL = (root) NOPASSWD: /bin/ip" | sudo tee -a /etc/sudoers
$ echo "admin ALL = (root) NOPASSWD: /path/to/arping" | sudo tee -a /etc/sudoers

sftp://<user>:<password>@<host>/<path>

Implement staged provisioning in your network using nano services.
Typical NSO services perform the necessary configuration by using the create() callback, within a transaction tracking the changes. This approach greatly simplifies service implementation, but it also introduces some limitations. For example, all provisioning is done at once, which may not be possible or desired in all cases. In particular, network functions implemented by containers or virtual machines often require provisioning in multiple steps.
Another limitation is that the service mapping code must not produce any side effects. Side effects are not tracked by the transaction and therefore cannot be automatically reverted. For example, imagine that there is an API call to allocate an IP address from an external system as part of the create() code. The same code runs for every service change or a service re-deploy, even during a commit dry-run, unless you take special precautions. So, a new IP address would be allocated every time, resulting in a lot of waste, or worse, provisioning failures.
Nano services help you overcome these limitations. They implement a service as several smaller (nano) steps or stages, by using a technique called reactive FASTMAP (RFM), and provide a framework to safely execute actions with side effects. Reactive FASTMAP can also be implemented directly, using the CDB subscribers, but nano services offer a more streamlined and robust approach for staged provisioning.
The section starts by gradually introducing the nano service concepts in a typical use case. To aid readers working with nano services for the first time, some of the finer points are omitted in this part and discussed later on. That later material is designed as a reference to aid you during implementation, so it focuses on recapitulating the workings of nano services at the expense of examples. The rest of the chapter covers individual features with associated use cases and complete working examples, which you can find in the examples.ncs folder.
Services ideally perform the configuration all at once, with all the benefits of a transaction, such as automatic rollback and cleanup on errors. For nano services, this is not possible in the general case. Instead, a nano service performs as much configuration as possible at the moment and leaves the rest for later. When an event occurs that allows more work to be done, the nano service instance restarts provisioning, by using a re-deploy action called reactive-re-deploy. It allows the service to perform additional configuration that was not possible before. The process of automatic re-deploy, called reactive FASTMAP, is repeated until the service is fully provisioned.
This is most evident with, for example, virtual machine (VM) provisioning, during virtual network function (VNF) orchestration. Consider a service that deploys and configures a router in a VM. When the service is first instantiated, it starts provisioning a router VM. However, it will likely take some time before the router has booted up and is ready to accept a new configuration. In turn, the service cannot configure the router just yet. The service must wait for the router to become ready. That is the event that triggers a re-deploy and the service can finish configuring the router, as the following figure illustrates:
While each step of provisioning happens inside a transaction and is still atomic, the whole service is not. Instead of a simple fully-provisioned or not-provisioned-at-all status, a nano service can be in a number of other states, depending on how far in the provisioning process it is.
The figure shows that the router VM goes through multiple states internally; however, only two states are important for the service. These two are shown as arrows in the lower part of the figure. When a new service is configured, it requests a new VM deployment. Having completed this first step, it enters the “VM is requested but still provisioning” state. In the following step, the VM is configured and so enters the second state, where the router VM is deployed and fully configured. The states follow individual provisioning steps and are used to report progress. What is more, each state tracks whether an error occurred during provisioning.
For these reasons, service states are central to the design of a nano service. A list of different states, their order, and transitions between them is called a plan outline and governs the service behavior.
By default, the plan outline consists of a single component, the self component, with the two states init and ready. It can be used to track the progress of the service as a whole. You can add any number of additional components and states to form the nano service.
The following YANG snippet, also part of the examples.ncs/development-guide/nano-services/basic-vrouter example, shows a plan outline with the two VM-provisioning states presented above:
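A sketch of what such a plan outline could look like; the vr prefix, the vrouter component-type name, and the vrouter-plan name are illustrative assumptions, not copied from the example source:

```yang
// States are modeled as identities derived from ncs:plan-state.
identity vm-requested {
  base ncs:plan-state;
}

identity vm-configured {
  base ncs:plan-state;
}

// Component types derive from ncs:plan-component-type.
identity vrouter {
  base ncs:plan-component-type;
}

ncs:plan-outline vrouter-plan {
  description "Plan outline for the VM-based router service.";
  ncs:component-type "vr:vrouter" {
    ncs:state "vr:vm-requested";
    ncs:state "vr:vm-configured";
  }
}
```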
The first part contains a definition of states as identities, deriving from the ncs:plan-state base. These identities are then used with the ncs:plan-outline, inside an ncs:component-type statement. Also, note that it is customary to use past tense for state names, for example, configured-vm or vm-configured instead of configure-vm and configuring-vm.
At present, the plan contains one component and two states but no logic. If you wish to do any provisioning for a state, the state must declare a special nano create callback, otherwise, it just acts as a checkpoint. The nano create callback is similar to an ordinary create service callback, allowing service code or templates to perform configuration. To add a callback for a state, extend the definition in the plan outline:
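Extending the sketch above, each state declares its callback with the ncs:create and ncs:nano-callback statements (names remain illustrative assumptions):

```yang
ncs:plan-outline vrouter-plan {
  description "Plan outline for the VM-based router service.";
  ncs:component-type "vr:vrouter" {
    ncs:state "vr:vm-requested" {
      ncs:create {
        // Invoke a registered template or service code for this state.
        ncs:nano-callback;
      }
    }
    ncs:state "vr:vm-configured" {
      ncs:create {
        ncs:nano-callback;
      }
    }
  }
}
```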
The service automatically enters each state one by one when a new service instance is configured. However, for the vm-configured state, the service should wait until the router VM has had the time to boot and is ready to accept a new configuration. An ncs:pre-condition statement in YANG provides this functionality. Until the condition becomes fulfilled, the service will not advance to that state.
The following YANG code instructs the nano service to check the value of the vm-up-and-running leaf, before entering and performing the configuration for a state.
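For instance, the state definition might monitor the service instance itself, along these lines (the vr prefix is an assumption):

```yang
ncs:state "vr:vm-configured" {
  ncs:create {
    // Do not enter this state until the leaf has the value 'true'.
    ncs:pre-condition {
      ncs:monitor "$SERVICE" {
        ncs:trigger-expr "vm-up-and-running = 'true'";
      }
    }
    ncs:nano-callback;
  }
}
```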
The main reason for defining multiple nano service states is to specify what part of the overall configuration belongs in each state. For the VM-router example, that entails splitting the configuration into a part for deploying a VM on a virtual infrastructure and a part for configuring it. In this case, a router VM is requested simply by adding an entry to a list of VM requests, while making the API calls is left to an external component, such as the VNF Manager.
If a state defines a nano callback, you can register a configuration template to it. The XML template file is very similar to an ordinary service template but requires additional componenttype and state attributes in the config-template root element. These attributes identify which component and state in the plan outline the template belongs to, for example:
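An illustrative template skeleton; the component type and state values are assumptions matching the earlier sketches:

```xml
<config-template xmlns="http://tail-f.com/ns/config/1.0"
                 componenttype="vr:vrouter"
                 state="vr:vm-requested">
  <!-- Configuration to apply when the vm-requested state is reached -->
</config-template>
```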
Likewise, you can implement a callback in the service code. The registration requires you to specify the component and state, as the following Python example demonstrates:
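A minimal sketch of such a registration in a package's Application class, assuming a service point named vrouter-servicepoint and a callback class named VRouterCallbacks:

```python
import ncs

class Main(ncs.application.Application):
    def setup(self):
        # Register nano service callbacks for one component/state pair.
        # All names here are illustrative assumptions.
        self.register_nano_service('vrouter-servicepoint',  # service point
                                   'vr:vrouter',            # component
                                   'vr:vm-configured',      # state
                                   VRouterCallbacks)
```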
The selected NanoServiceCallbacks class then receives callbacks in the cb_nano_create() function:
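A sketch of the callback class, with the parameter list as defined by the NSO Python API and an illustrative placeholder body:

```python
class VRouterCallbacks(ncs.application.NanoService):
    @ncs.application.NanoService.create
    def cb_nano_create(self, tctx, root, service, plan,
                       component, state, proplist, component_proplist):
        # Dispatch on the state for which the callback was invoked.
        if state == 'vr:vm-configured':
            # Perform the configuration for this particular state.
            pass
```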
The component and state parameters allow the function to distinguish calls for different callbacks when registered for more than one.
For most flexibility, each state defines a separate callback, allowing you to implement some with a template and others with code, all as part of the same service. You may even use Java instead of Python.
The set of states used in the plan outline describes the stages that a service instance goes through during provisioning. Naturally, these are service-specific, which presents a problem if you just want to tell whether a service instance is still provisioning or has already finished. It requires the knowledge of which state is the last, final one, making it hard to check in a generic way.
That is why each service component must have the built-in ncs:init state as the first state and ncs:ready as the last state. Using the two built-in states allows for interoperability with other services and tools. The following is a complete four-state plan outline for the VM-based router service, with the two states added:
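Putting the pieces together, the complete outline could look like the following sketch (names are assumptions carried over from the earlier fragments):

```yang
ncs:plan-outline vrouter-plan {
  description "Plan outline for the VM-based router service.";
  ncs:component-type "vr:vrouter" {
    ncs:state "ncs:init";
    ncs:state "vr:vm-requested" {
      ncs:create {
        ncs:nano-callback;
      }
    }
    ncs:state "vr:vm-configured" {
      ncs:create {
        ncs:pre-condition {
          ncs:monitor "$SERVICE" {
            ncs:trigger-expr "vm-up-and-running = 'true'";
          }
        }
        ncs:nano-callback;
      }
    }
    ncs:state "ncs:ready";
  }
}
```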
For the service to use it, the plan outline must be linked to a service point with the help of a behavior tree. The main purpose of a behavior tree is to allow a service to dynamically instantiate components, based on service parameters. Dynamic instantiation is not always required and the behavior tree for a basic, static, single-component scenario boils down to the following:
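A sketch of such a static behavior tree, assuming the service point and plan outline names used in the earlier fragments:

```yang
ncs:service-behavior-tree vrouter-servicepoint {
  description "Static behavior tree for the vrouter service.";
  ncs:plan-outline-ref "vr:vrouter-plan";
  ncs:selector {
    // Unconditionally create a single component named "vrouter".
    ncs:create-component "'vrouter'" {
      ncs:component-type-ref "vr:vrouter";
    }
  }
}
```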
This behavior tree always creates a single “vrouter” component for the service. The service point is provided as an argument to the ncs:service-behavior-tree statement, while the ncs:plan-outline-ref statement provides the name for the plan outline to use.
The following figure visualizes the resulting service plan and its states.
Along with the behavior tree, a nano service also relies on the ncs:nano-plan-data grouping in its service model. It is responsible for storing state and other provisioning details for each service instance. Other than that, the nano service model follows the standard YANG definition of a service:
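A service model along these lines would fit the preceding fragments; the list name and leaves besides vm-up-and-running are illustrative assumptions:

```yang
augment /ncs:services {
  list vrouter {
    key name;

    // Plan and diff-set storage for nano services.
    uses ncs:nano-plan-data;
    uses ncs:service-data;
    ncs:servicepoint vrouter-servicepoint;

    leaf name {
      type string;
    }
    // Operational leaf that the example pre-condition monitors.
    leaf vm-up-and-running {
      type boolean;
      config false;
    }
  }
}
```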
This model includes the operational vm-up-and-running leaf that the example plan outline depends on. In practice, however, a plan outline is more likely to reference values provided by another part of the system, such as the actual, externally provided, state of the provisioned VM.
A nano service does not directly use its service point for configuration. Instead, the service point invokes a behavior tree to generate a plan, and the service starts executing according to this plan. As it reaches a certain state, it performs the relevant configuration for that state.
For example, when you create a new instance of the VM-router service, the vm-up-and-running leaf is not set, so only the first part of the service runs. Inspecting the service instance plan reveals the following:
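An illustrative rendering of the plan at this point; the instance name and exact table layout are assumptions, and actual output varies between NSO versions:

```
admin@ncs# show vrouter vr-01 plan component vrouter state
NAME           STATUS
------------------------------
init           reached
vm-requested   reached
vm-configured  not-reached
ready          not-reached
```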
Since neither the init nor the vm-requested states have any pre-conditions, they are reached right away. In fact, NSO can optimize it into a single transaction (this behavior can be disabled if you use forced commits, discussed later on).
But the process has stopped at the vm-configured state, denoted by the not-reached status in the output. It is waiting for the pre-condition to become fulfilled with the help of a kicker. The job of the kicker is to watch the value and perform an action, the reactive re-deploy, when the conditions are satisfied. The kickers are managed by the nano service subsystem: when an unsatisfied precondition is encountered, a kicker is configured, and when the precondition becomes satisfied, the kicker is removed.
You may also verify, through the get-modifications action, that only the first part, the creation of the VM, was performed:
At the same time, a kicker was installed under the kickers container, but you may need to use the unhide debug command to inspect it. More information on kickers in general is available in the documentation on kickers.
At a later point in time, the router VM becomes ready, and the vm-up-and-running leaf is set to a true value. The installed kicker notices the change and automatically calls the reactive-re-deploy action on the service instance. In turn, the service gets fully deployed.
The get-modifications output confirms this fact. It contains the additional IP address configuration, performed as part of the vm-configured step:
The ready state has no additional pre-conditions, allowing NSO to reach it along with the vm-configured state. This effectively breaks the provisioning process into two steps. To break it down further, simply add more states with corresponding pre-conditions and create logic.
Other than staged provisioning, nano services act the same as other services, allowing you to use the service check-sync and similar actions, for example. But please note the un-deploy and re-deploy actions may behave differently than expected, as they deal with provisioning. Chiefly, a re-deploy reevaluates the pre-conditions, possibly generating a different configuration if a pre-condition depends on operational values that have changed. The un-deploy action, on the other hand, removes all of the recorded modifications, along with the generated plan.
Every service in NSO has a YANG definition of the service parameters, a service point name, and an implementation of the service point create() callback. Normally, when a service is committed, the FASTMAP algorithm removes all previous data changes internally and presents the service data to the create() callback as if this was the initial create. When the create() callback returns, the FASTMAP algorithm compares the result and calculates a reverse diff-set from the data changes. This reverse diff-set contains the operations that are needed to restore the configuration data to the state as it was before the service was created. The reverse diff-set is required, for instance, if the service is deleted or modified.
This fundamental principle is what makes the implementation of services and the create() callback simple. In turn, a lot of the NSO functionality relies on this mechanism.
However, in the reactive FASTMAP pattern, the create() callback is re-entered several times by using the subsequent reactive-re-deploy calls. Storing all changes in a single reverse diff-set then becomes an impediment. For instance, if a staged delete is necessary, there is no way to single out which changes each RFM step performed.
A nano service abandons the single reverse diff-set by introducing nano-plan-data and the nano create() callback. The nano-plan-data YANG grouping represents an executable plan that the system can follow to provision the service. It provides additional storage for a reverse diff-set and pre-conditions per state, for each component of the plan.
This is illustrated in the following figure:
You can still use the service get-modifications action to visualize all data changes performed by the service as an aggregate. In addition, each state also has its own get-modifications action that visualizes the data changes for that particular state. It allows you to more easily identify the state and, by extension, the code that produced those changes.
Before nano services became available, RFM services could only be implemented by creating a CDB subscriber. With the subscriber approach, the service can still leverage the plan-data grouping, which nano-plan-data is based on, to report the progress of the service under the resulting plan container. But the create() callback becomes responsible for creating the plan components, their states, and setting the status of the individual states as the service creation progresses.
Moreover, implementing a staged delete with a subscriber often requires keeping the configuration data outside of the service. The code is then distributed between the service create() callback and the correlated CDB subscriber. This all results in several sources that potentially contain errors that are complicated to track down. Nano services, on the other hand, do not require any use of CDB subscribers or other mechanisms outside of the service code itself to support the full-service life cycle.
Resource de-provisioning is an important part of the service life cycle. The FASTMAP algorithm ensures that no longer needed configuration changes in NSO are removed automatically but that may be insufficient by itself. For example, consider the case of a VM-based router, such as the one described earlier. Perhaps provisioning of the router also involves assigning a license from a central system to the VM and that license must be returned when the VM is decommissioned. If releasing the license must be done by the VM itself, simply destroying it will not work.
Another example is the management of a web server VM for a web application. Here, each VM is part of a larger pool of servers behind a load balancer that routes client requests to these servers. During de-provisioning, simply stopping the VM interrupts the currently processing requests and results in client timeouts. This can be avoided with a graceful shutdown, which stops the load balancer from sending new connections to the server and waits for the current ones to finish, before removing the VM.
Both examples require two distinct steps for de-provisioning. Can nano services be of help in this case? Certainly. In addition to the state-by-state provisioning of the defined components, the nano service system in NSO is responsible for back-tracking during their removal. This process traverses all reached states in the reverse order, removing the changes previously done for each state one by one.
In doing so, the back-tracking process checks for a 'delete pre-condition' of a state. A delete pre-condition is similar to the create pre-condition, but only relevant when back-tracking. If the condition is not fulfilled, the back-tracking process stops and waits until it becomes satisfied. Behind the scenes, a kicker is configured to restart the process when that happens.
If the state's delete pre-condition is fulfilled, back-tracking first removes the state's 'create' changes recorded by FASTMAP and then invokes the nano delete() callback, if defined. The main use of the callback is to override or veto the default status calculation for a back-tracking state. That is why you can't implement the delete() callback with a template, for example. Very importantly, delete() changes are not kept in a service's reverse diff-set and may stay even after the service is completely removed. In general, you are advised to avoid writing any configuration data because this callback is called under a removal phase of a plan component where new configuration is seldom expected.
Since the 'create' configuration is automatically removed, without the need for a separate delete() callback, these callbacks are used only in specific cases and are not very common. Regardless, the delete() callback may run as part of the commit dry-run command, so it must not invoke further actions or cause side effects.
Backtracking is invoked when a component of a nano service is removed, such as when deleting a service. It is also invoked when evaluating a plan and a reached state's 'create' pre-condition is no longer satisfied. In this case, the affected component is temporarily set to a back-tracking mode for as long as it contains such nonconforming states. It allows the service to recover and return to a well-defined state.
To implement the delete pre-condition or the delete() callback, you must add the ncs:delete statement to the relevant state in the plan outline. Applying it to the web server example above, you might have:
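A sketch of such a state definition; the ws prefix, the vm-deployed state, and the active-connections leaf are hypothetical names for the web server scenario:

```yang
ncs:state "ws:vm-deployed" {
  ncs:create {
    ncs:nano-callback;
  }
  ncs:delete {
    // Delay removal until ongoing requests have drained.
    ncs:pre-condition {
      ncs:monitor "$SERVICE" {
        ncs:trigger-expr "active-connections = '0'";
      }
    }
    // Invoke the nano delete() callback while back-tracking.
    ncs:nano-callback;
  }
}
```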
While, in general, the delete() callback should not produce any configuration, the graceful shutdown scenario is one of the few exceptional cases where this may be required. Here, the delete() callback allows you to re-configure the load balancer to remove the server from actively accepting new connections, such as marking it 'under maintenance'. The 'delete' pre-condition allows you to further delay the VM removal until the ongoing requests are completed.
Similar to the create() callback, the ncs:nano-callback statement instructs NSO to also process a delete() callback. A Python class that you have registered for the nano service must then implement the following method:
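A sketch of the corresponding Python method, with the parameter list as defined by the NSO Python API and a placeholder body:

```python
class ServerCallbacks(ncs.application.NanoService):
    @ncs.application.NanoService.delete
    def cb_nano_delete(self, tctx, root, service, plan,
                       component, state, proplist, component_proplist):
        # Invoked while back-tracking, after the state's recorded
        # 'create' changes have been removed.
        pass
```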
As explained, there are some uncommon cases where additional configuration with the delete() callback is required. However, a more frequent use of the ncs:delete statement is in combination with side-effect actions.
In some scenarios, side effects are an integral part of the provisioning process and cannot be avoided. The aforementioned example on license management may require calling a specific device action. Even so, the create() or delete() callbacks, nano service or otherwise, are a bad fit for such work. Since these callbacks are invoked during the transaction commit, no RPCs or other access outside of the NSO datastore are allowed. If allowed, they would break the core NSO functionality, such as a dry run, where side effects are not expected.
A common solution is to perform these actions outside of the configuration transaction. Nano services provide this functionality through the post-actions mechanism, using a post-action-node statement for a state. It is a definition of an action that should be invoked after the state has been reached and the commit performed. To ensure the latter, NSO will commit the current transaction before executing the post-action and advancing to the next state.
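For example, a state could invoke the service's own self-test action after being reached; the state and action names are assumptions based on the earlier fragments:

```yang
ncs:state "vr:vm-configured" {
  ncs:create {
    ncs:nano-callback;
    // Run the service's self-test action after the state is reached
    // and the transaction has been committed.
    ncs:post-action-node "$SERVICE" {
      ncs:action-name "self-test";
      ncs:result-expr "success = 'true'";
      // Add "ncs:sync;" to make later states wait for the action.
    }
  }
}
```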
The service's plan state data also carries a post-action status leaf, which reflects whether the action was executed and if it was successful. The leaf will be set to not-reached, create-reached, delete-reached, or failed, depending on the case and result. If the action is still executing, then the leaf will show either a create-init or delete-init status instead.
Moreover, post-actions can run either asynchronously (the default) or synchronously. To run a post-action synchronously, add a sync statement to the post-action statement. When a post-action runs asynchronously, further states do not wait for the action to finish, unless you define an explicit post-action-status pre-condition. With a synchronous post-action, later states in the same component are invoked only after the post-action has completed successfully.
The exception to this setting is when a component switches to backtracking mode. In that case, the system does not wait for any create post-action to complete (synchronous or not) but starts backtracking right away. This means that a delete callback or a delete post-action for a state may run before its synchronous create post-action has finished executing.
The side-effect-queue and a corresponding kicker are responsible for invoking the actions on behalf of the nano service and reporting the result in the respective state's post-action-status leaf. The following figure shows how an entry is made in the side-effect-queue (2) after the state is reached (1), and how the post-action status is updated (3) once the action finishes executing.
You can use the show side-effect-queue command to inspect the queue. The queue will run multiple actions in parallel and keep the failed ones for you to inspect. Please note that High Availability (HA) setups require special consideration: the side-effect queue is disabled when HA is enabled and the HA mode is NONE. See the documentation on High Availability for more details.
In case of a failure, a post-action sets the post-action-status accordingly and, if the action is synchronous, the nano service stops progressing. To retry the failed action, you can invoke the reschedule action. Alternatively, execute a (reactive) re-deploy, which will also restart the nano service if it was stopped.
Using the post-action mechanism, it is possible to define side effects for a nano service in a safe way. A post-action is only executed once: if the post-action-status is already create-reached in the create case, or delete-reached in the delete case, new invocations of the post-action are suppressed. During dry-run operations, post-actions are never called.
These properties make post actions useful in a number of scenarios. A widely applicable use case is invoking a service self-test as part of initial service provisioning.
Another example, requiring the use of post-actions, is the IP address allocation scenario from the chapter introduction. By its nature, the allocation or assignment call produces a side effect in an external system: it marks the assigned IP address in use. The same is true for releasing the address. Since NSO doesn't know how to reverse these effects on its own, they can't be part of any create() callback. Instead, the API calls can be implemented as post-actions.
The following snippet of a plan outline defines a create and delete post-action to handle IP management:
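A reconstructed sketch of such a plan outline follows. The plan and module names (ipam prefix) are assumptions; the allocate-ip and release-ip actions and the ip-allocated state come from the surrounding text:

```yang
ncs:plan-outline ipam:service-plan {
  ncs:component-type "ncs:self" {
    ncs:state "ncs:init" {
      ncs:create {
        // Runs after the init state is committed; ncs:sync makes
        // the next state wait for the allocation to complete.
        ncs:post-action-node "$SERVICE" {
          ncs:action-name "allocate-ip";
          ncs:sync;
        }
      }
    }
    ncs:state "ipam:ip-allocated" {
      ncs:create {
        // Configure devices using the now-available IP address.
        ncs:nano-callback;
      }
      ncs:delete {
        // Runs during backtracking, after this state's
        // configuration has been reverted.
        ncs:post-action-node "$SERVICE" {
          ncs:action-name "release-ip";
        }
      }
    }
    ncs:state "ncs:ready";
  }
}
```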
Let's see how this plan manifests during provisioning. After the first (init) state is reached and committed, it fires off an allocation action on the service instance, called allocate-ip. The job of the allocate-ip action is to communicate with the external system, the IP Address Management (IPAM), and allocate an address for the service instance. This process may take a while; however, it does not tie up NSO, since it runs outside of the configuration transaction, and other configuration sessions can proceed in the meantime.
The $SERVICE XPath variable is automatically populated by the system and allows you to easily reference the service instance. There are other automatic variables defined. You can find the complete list inside the tailf-ncs-plan.yang submodule, in the $NCS_DIR/src/ncs/yang/ folder.
Due to the ncs:sync statement, service provisioning can continue only after the allocation process (the action) completes. Once that happens, the service resumes processing in the ip-allocated state, with the IP value now available for configuration.
On service deprovisioning, the back-tracking mechanism works backwards through the states. When it is the ip-allocated state's turn to deprovision, NSO reverts any configuration done as part of this state, and then runs the release-ip action, defined inside the ncs:delete block. Of course, this only happens if the state previously had a reached status. Implemented as a post-action, release-ip can safely use the external IPAM API to deallocate the IP address, without impacting other sessions.
The actions, as defined in the example, do not take any parameters. When needed, you may pass additional parameters from the service's opaque and component_proplist object. These parameters must be set in advance, for example in some previous create callback. For details, please refer to the YANG definition of post-action-input-params in the tailf-ncs-plan.yang file.
The discussion on basic concepts briefly mentions the role of a nano behavior tree but it does not fully explore its potential. Let's now consider in which situations you may find a non-trivial behavior tree beneficial.
Suppose that you are implementing a service that requires not one but two VMs. While you can always add more states to the component, these states are processed sequentially. However, you might want to provision the two VMs in parallel, since they take a comparatively long time, and it makes little sense having to wait until the first one is finished before starting with the second one. Nano services provide an elegant solution to this challenge in the form of multiple plan components: provisioning of each VM can be tracked by a separate plan component, allowing the two to advance independently, in parallel.
If the two VMs go through the same states, you can use a single component type in the plan outline for both. It is the job of the behavior tree to create or synthesize actual components for each service instance. Therefore, you could use a behavior tree similar to the following example:
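A sketch of such a behavior tree (the servicepoint and plan-outline names are assumptions; the component names and the vr:router-vm type follow the discussion below):

```yang
ncs:service-behavior-tree vr:vrouter-servicepoint {
  ncs:plan-outline-ref "vr:vrouter-plan";
  ncs:selector {
    // Single quotes: the component name is an XPath
    // expression, quoted so it is used verbatim.
    ncs:create-component "'vm1'" {
      ncs:component-type-ref "vr:router-vm";
    }
    ncs:create-component "'vm2'" {
      ncs:component-type-ref "vr:router-vm";
    }
  }
}
```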
The two ncs:create-component statements instruct NSO to create two components, named vm1 and vm2, of the same vr:router-vm type. Note the required use of single quotes around component names, because the value is actually an XPath expression. The quotes ensure the name is used verbatim when the expression is evaluated.
With multiple components in place, the implicit self component reflects the cumulative status of the service. The ready state of the self component will never have its status set to reached until all other components have the ready state status set to reached and all post-actions have been run, too. Likewise, during backtracking, the init state will never be set to not-reached until all other components have been fully backtracked and all delete post-actions have been run. Additionally, the self component's ready or init state status will be set to failed if any other component has a state with a failed status.
As you can see, all the ncs:create-component statements are placed inside an ncs:selector block. A selector is a so-called control flow node. It selects a group of components and allows you to decide whether they are created or not, based on a pre-condition. The pre-condition can reference a service parameter, which in turn controls if the relevant components are provisioned for this service instance. The mechanism enables you to dynamically produce just the necessary plan components.
The pre-condition is not very useful on the top selector node, but selectors can also be nested. For example, having a use-virtual-devices configuration leaf in the service YANG model, you could modify the behavior tree to the following:
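A sketch of the modified tree, assuming a vr:router component type for the always-present router component:

```yang
ncs:selector {
  ncs:create-component "'router'" {
    ncs:component-type-ref "vr:router";
  }
  ncs:selector {
    // Synthesize the VM components only when the service
    // instance requests virtual devices.
    ncs:pre-condition {
      ncs:monitor "$SERVICE" {
        ncs:trigger-expr "use-virtual-devices = 'true'";
      }
    }
    ncs:create-component "'vm1'" {
      ncs:component-type-ref "vr:router-vm";
    }
    ncs:create-component "'vm2'" {
      ncs:component-type-ref "vr:router-vm";
    }
  }
}
```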
The described behavior tree always synthesizes the router component and evaluates the child selector. However, the child selector only synthesizes the two VM components if the service configuration requests it, by setting use-virtual-devices to true.
What is more, if the pre-condition value changes, the system re-evaluates the behavior tree and starts the backtracking operation for any removed components.
For even more complex cases, where a variable number of components needs to be synthesized, the ncs:multiplier control flow node becomes useful. Its ncs:foreach statement selects a set of elements and each element is processed in the following way:
If the optional when statement is not satisfied, the element is skipped.
All variable statements are evaluated as XPath expressions for this element, to produce a unique name for the component and any other element-specific values.
All ncs:create-component and other control flow nodes are processed, creating the necessary components for this element.
The multiplier node is often used to create a component for each item in a list. For example, if the service model contains a list of VMs, with a key name, then the following code creates a component for each of the items:
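A sketch of such a multiplier, assuming the service model has a vm list with a name key:

```yang
ncs:multiplier {
  // Iterate over the service's vm list entries.
  ncs:foreach "vm" {
    ncs:variable "VM_NAME" {
      // The list key provides a unique component name.
      ncs:value-expr "name";
    }
    ncs:create-component "$VM_NAME" {
      ncs:component-type-ref "vr:router-vm";
    }
  }
}
```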
In this particular case, it might be possible to avoid the variable altogether, by using the expression for the create-component statement directly. However, defining a variable also makes it available to service create() callbacks.
This is extremely useful, since you can access these values, as well as the ones from the service opaque object, directly in the nano service XML templates. The opaque, especially, allows you to separate the logic in code from applying the XML templates.
The examples.ncs/development-guide/nano-services/netsim-vrouter folder contains a complete implementation of a service that provisions a netsim device instance, onboards it to NSO, and pushes a sample interface configuration to the device. Netsim device creation is neither instantaneous nor side-effect-free and thus requires the use of a nano service. It more closely resembles a real-world use case for nano services.
To see how the service is used through a prearranged scenario, execute the make demo command from the example folder. The scenario provisions and de-provisions multiple netsim devices to show different states and behaviors, characteristic of nano services.
The service, called vrouter, defines two component types in the src/yang/vrouter.yang file:
vr:vrouter: A “day-0” component that creates and initializes a netsim process as a virtual router device.
vr:vrouter-day1: A “day-1” component for configuring the created device and tracking NETCONF notifications.
As the name implies, the day-0 component must be provisioned before the day-1 component. Since the two provision in sequence, a single component would, in general, suffice. However, the components are kept separate to illustrate component dependencies.
The behavior tree synthesizes each of the components for a service instance using some service-specific names. To do so, the example defines three variables to hold different names:
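The exact variable names are defined in the example's YANG file; the following sketch uses hypothetical names and value expressions to show the mechanism:

```yang
ncs:service-behavior-tree vr:vrouter-servicepoint {
  ncs:selector {
    // Hypothetical variable names; the example defines its own
    // set of three name variables in src/yang/vrouter.yang.
    ncs:variable "NAME" {
      ncs:value-expr "current()/name";
    }
    ncs:variable "DAY0_NAME" {
      ncs:value-expr "concat(current()/name, '-day0')";
    }
    ncs:variable "DAY1_NAME" {
      ncs:value-expr "concat(current()/name, '-day1')";
    }
    ncs:create-component "$DAY0_NAME" {
      ncs:component-type-ref "vr:vrouter";
    }
    ncs:create-component "$DAY1_NAME" {
      ncs:component-type-ref "vr:vrouter-day1";
    }
  }
}
```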
The vr:vrouter (day-0) component has a number of plan states that it goes through during provisioning:
ncs:init
vr:requested
vr:onboarded
ncs:ready
The init and ready states are required as the first and last state in all components for correct overall state tracking in ncs:self. They have no additional logic tied to them.
The vr:requested state represents the first step in virtual router provisioning. While it does not perform any configuration itself (no nano-callback statement), it calls a post-action that does all the work. The following is a snippet of the plan outline for this state:
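A reconstructed sketch of the state definition. The create-router action name is taken from the text below; the delete action name is hypothetical:

```yang
ncs:state "vr:requested" {
  ncs:create {
    // No nano-callback: all the work happens in the post-action,
    // outside the configuration transaction.
    ncs:post-action-node "$SERVICE" {
      ncs:action-name "create-router";
      ncs:sync;
    }
  }
  ncs:delete {
    // Hypothetical name for the analogous delete post-action
    // that stops and removes the netsim device.
    ncs:post-action-node "$SERVICE" {
      ncs:action-name "delete-router";
      ncs:sync;
    }
  }
}
```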
The create-router action calls the Python code inside the python/vrouter/main.py file, which runs a couple of system commands, such as the ncs-netsim create-device and the ncs-netsim start commands. These commands do the same thing as you would if you performed the task manually from the shell.
The vr:requested state also has a delete post-action, analogous to create, which stops and removes the netsim device during service de-provisioning or backtracking.
Inspecting the Python code for these post-actions will reveal that a semaphore is used to control access to the common netsim resource. It is needed because multiple vrouter instances may run the create and delete action callbacks in parallel. The Python semaphore is shared between the delete and create action processes using a Python multiprocessing manager, as the example configures the NSO Python VM to start the actions in multiprocessing mode. See the documentation on the NSO Python VM for details.
In vr:onboarded, the nano Python callback function from the main.py file adds the relevant NSO device entry for a newly created netsim device. It also configures NSO to receive notifications from this device through a NETCONF subscription. When the NSO configuration is complete, the state transitions into the reached status, denoting the onboarding has completed successfully.
The vr:vrouter component handles so-called day-0 provisioning. Alongside this component, the vr:vrouter-day1 component starts provisioning in parallel. During provisioning, it transitions through the following states:
ncs:init
vr:configured
vr:deployed
ncs:ready
The component reaches the init state right away. However, the vr:configured state has a precondition:
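A sketch of the precondition in plan-outline syntax; the exact monitor XPath in the example may differ, and the $NAME variable is assumed to hold the day-0 component name:

```yang
ncs:state "vr:configured" {
  ncs:create {
    // Wait until the day-0 component has executed its
    // vr:onboarded post-action successfully.
    ncs:pre-condition {
      ncs:monitor
        "$SERVICE/plan/component[name=$NAME]/state[name='vr:onboarded']" {
        ncs:trigger-expr "post-action-status = 'create-reached'";
      }
    }
    ncs:nano-callback;
  }
}
```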
Provisioning can continue only after the first component, vr:vrouter, has executed its vr:onboarded post-action. The precondition demonstrates how one component can depend on another component reaching some particular state or successfully executing a post-action.
The vr:onboarded post-action performs a sync-from command for the new device. After that happens, the vr:configured state can push the device configuration according to the service parameters, by using an XML template, templates/vrouter-configured.xml. The service simply configures an interface with a VLAN ID and a description.
Similarly, the vr:deployed state has its own precondition, which makes use of the ncs:any statement. It specifies that either (any) of the two monitor statements will satisfy the precondition.
One of them checks that the last received NETCONF notification contains a link-status value of up for the configured interface. In other words, it waits for the interface to become operational.
However, relying solely on notifications in the precondition can be problematic: the received-notifications list in NSO can be cleared, which would result in unintentional backtracking on a service re-deploy. For this reason, there is the other monitor statement, which checks the device live-status.
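The shape of such a precondition can be sketched as follows; the monitor XPaths are illustrative only, and the actual paths depend on the device model used in the example:

```yang
ncs:state "vr:deployed" {
  ncs:create {
    ncs:pre-condition {
      ncs:any {
        // Satisfied when a received NETCONF notification
        // reports the interface link-status as up...
        ncs:monitor
          "/devices/device[name=$NAME]/netconf-notifications" {
          ncs:trigger-expr
            "received-notifications/notification/data/link-status = 'up'";
        }
        // ...or when the device live-status shows it as up.
        ncs:monitor
          "/devices/device[name=$NAME]/live-status" {
          ncs:trigger-expr
            "interfaces/interface[name=$IF_NAME]/link-status = 'up'";
        }
      }
    }
  }
}
```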
Once either of the conditions is satisfied, it marks the end of provisioning. Perhaps the use of notifications in this case feels a little superficial, but it illustrates a possible approach to waiting for a steady state, such as waiting for routing adjacencies to form and the like.
Altogether, the example shows how to use different nano service mechanisms in a single, complex, multistage service that combines configuration and side effects. The example also includes a Python script that uses the RESTCONF protocol to configure a service instance and monitor its provisioning status. You are encouraged to configure a service instance yourself and explore the provisioning process in detail, including service removal. Regarding removal, have you noticed how nano services can de-provision in stages, but the service instance is gone from the configuration right away?
By removing the service instance configuration from NSO, you start a service de-provisioning process. For an ordinary service, a stored reverse diff-set is applied, ensuring that all of the service-induced configuration is removed in the same transaction. For nano services, with their staged, multi-step delete operation, that is not possible. The provisioned states must be backtracked one by one, often across multiple transactions. With the service instance deleted, NSO must track the de-provisioning progress elsewhere.
For this reason, NSO mutates a nano service instance when it is removed. The instance is transformed into a zombie service, which represents the original service that still requires de-provisioning. Once the de-provisioning is complete, with all the states backtracked, the zombie is automatically removed.
Zombie service instances are stored with their service data, their plan states, and diff-sets in a /ncs:zombies/services list. When a service mutates to a zombie, all plan components are set to back-tracking mode and all service pre-condition kickers are rewritten to reference the zombie service instead. Also, the nano service subsystem now updates the zombie plan states as de-provisioning progresses. You can use the show zombies service command to inspect the plan.
Under normal conditions, you should not see any zombies, except for the service instances that are actively de-provisioning. However, if an error occurs, the de-provisioning process will stop with an error status and a zombie will remain. With a zombie present, NSO will not allow creating the same service instance in the configuration tree. The zombie must be removed first.
After addressing the underlying problem, you can restart the de-provisioning process with the re-deploy or the reactive-re-deploy actions. The difference between the two is the user the action runs as: re-deploy uses the current user that initiated the action, while reactive-re-deploy keeps using the user that last modified the zombie service.
These zombie actions behave a bit differently than their normal service counterparts. In particular, the zombie variants perform the following steps to better serve the de-provisioning process:
Start a temporary transaction in which the service is reinstated (created). The service plan will have the same status as it had when it mutated.
Back-track plan components in a normal fashion, that is, removing device changes for states with delete pre-conditions satisfied.
If all components are completely back-tracked, the zombie is removed from the zombie list. Otherwise, the service and the current plan states are stored back into the zombie list, with new kickers waiting to activate the zombie when some delete pre-condition is satisfied.
In addition, zombie services support the resurrect action. The action reinstates the zombie back in the configuration tree as a real service, with the current plan status, and reverts plan components back from back-tracking to normal mode. It is an “undo” for a nano service delete.
In some situations, especially during nano service development, a zombie may get stuck because of a misconfigured precondition or similar issues. A re-deploy is unlikely to help in that case, and you may need to forcefully remove the problematic plan component. The force-back-track action performs this job and optionally lets you backtrack only to a specific state. Beware that the action skips any post-actions or delete callbacks for the forcefully backtracked states, even though the recorded configuration modifications are reverted. It can and will leave your systems in an inconsistent or broken state if you are not careful.
When a service is provisioned in stages, as nano services are, the success of the initial commit no longer indicates the service is provisioned. Provisioning may take a while and may fail later, requiring you to consult the service plan to observe the service status. This makes it harder to tell when a service finishes provisioning, for example. Fortunately, services provide a set of notifications that indicate important events in the service's life-cycle, including a successful completion. These events enable NETCONF and RESTCONF clients to subscribe to events instead of polling the plan and commit queue status.
The built-in service-state-changes NETCONF/RESTCONF stream is used by NSO to generate northbound notifications for services, including nano services. The event stream is enabled by default in ncs.conf; however, individual notification events must be explicitly configured to be sent.
plan-state-change Notification
When a service's plan component changes state, the plan-state-change notification is generated with the new state of the plan. It includes the status, which indicates one of not-reached, reached, or failed. The notification is sent when the state is created, modified, or deleted, depending on the configuration. For reference on the structure and all the fields present in the notification, please see the YANG model in the tailf-ncs-plan.yang file.
As a common use case, an event with status reached for the self component ready state signifies that all nano service components have reached their ready state and provisioning is complete. A simple example of this scenario is included in the examples.ncs/development-guide/nano-services/netsim-vrouter/demo.py Python script, using RESTCONF.
To enable the plan-state-change notifications to be sent, you must enable them for a specific service in NSO. For example, you can load the following configuration into the CDB as an XML initialization file:
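A sketch of such an initialization file, assuming a vrouter service with prefix vr; verify the exact leaf names and values against the plan-notifications subscription list in tailf-ncs-services.yang for your NSO version:

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <services xmlns="http://tail-f.com/ns/ncs">
    <plan-notifications>
      <subscription>
        <name>vrouter-ready</name>
        <!-- Only instances of this service type generate events. -->
        <service-type xmlns:vr="http://example.com/vrouter">/vr:vrouter</service-type>
        <component-type>self</component-type>
        <state xmlns:ncs="http://tail-f.com/ns/ncs">ncs:ready</state>
        <operation>modified</operation>
      </subscription>
    </plan-notifications>
  </services>
</config>
```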
This configuration enables notifications for the self component's ready state when created or modified.
service-commit-queue-event Notification
When a service is committed through the commit queue, this notification acts as a reference regarding the state of the service. Notifications are sent when the service commit queue item is waiting to run, executing, waiting to be unlocked, completed, failed, or deleted. More details on the service-commit-queue-event notification content can be found in the YANG model inside tailf-ncs-services.yang.
For example, the failed event can be used to detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. Measures to resolve the issue can then be taken and the nano service instance can be re-deployed. A simple example of this scenario is included in the examples.ncs/development-guide/nano-services/netsim-vrouter/demo.py Python script where the service is committed through the commit queue, using RESTCONF. By design, the configuration commit to a device fails, resulting in a commit-queue-notification with the failed event status for the commit queue item.
To enable the service-commit-queue-event notifications to be sent, you can load the following example configuration into NSO, as an XML initialization file or some other way:
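A sketch of the corresponding configuration, again assuming a vrouter service with prefix vr; verify the container and leaf names against the commit-queue-notifications subscription list in tailf-ncs-services.yang:

```xml
<config xmlns="http://tail-f.com/ns/config/1.0">
  <services xmlns="http://tail-f.com/ns/ncs">
    <commit-queue-notifications>
      <subscription>
        <name>vrouter-cq-events</name>
        <service-type xmlns:vr="http://example.com/vrouter">/vr:vrouter</service-type>
      </subscription>
    </commit-queue-notifications>
  </services>
</config>
```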
service-state-changes Stream Subscriptions
The following examples demonstrate the usage and sample events for the notification functionality described in this section, using the RESTCONF, NETCONF, and CLI northbound interfaces.
RESTCONF subscription request using curl:
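A typical invocation, assuming NSO serves RESTCONF on localhost port 8080 with default admin credentials:

```bash
# Subscribe to the service-state-changes stream as server-sent
# events, JSON-encoded; the connection stays open and prints
# each notification as it arrives.
curl -is -u admin:admin \
  -H "Accept: text/event-stream" \
  "http://localhost:8080/restconf/streams/service-state-changes/json"
```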
See Northbound APIs for further reference.
NETCONF creates subscription using netconf-console:
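With netconf-console, a subscription to the stream can be created as follows (assuming default connection parameters):

```bash
# Open a NETCONF session and send <create-subscription> for the
# stream; received notifications are printed to standard output.
netconf-console --create-subscription=service-state-changes
```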
See Northbound APIs for further reference.
CLI shows received notifications using ncs_cli:
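For example, in the J-style CLI (command availability may vary between NSO versions):

```
$ ncs_cli -u admin
admin@ncs> show notification stream service-state-changes
```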
trace-id in the Notification
You have likely noticed the trace-id field at the end of the example notifications above. The trace ID is an optional but very useful parameter when committing the service configuration. It helps you trace the commit in the emitted log messages and the service-state-changes stream notifications. The above notifications, taken from the examples.ncs/development-guide/nano-services/netsim-vrouter example, are emitted after applying a RESTCONF plain patch:
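A sketch of such a patch request; the resource path and payload are illustrative, and the trace-id query parameter is the relevant part:

```bash
# Plain patch with an explicit trace ID in the URL; if the
# parameter is omitted, NSO generates a trace ID on its own.
curl -is -u admin:admin -X PATCH \
  -H "Content-Type: application/yang-data+xml" \
  "http://localhost:8080/restconf/data?trace-id=my-trace-1" \
  -d '<vrouter xmlns="http://example.com/vrouter"><name>vr1</name></vrouter>'
```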
Note that the trace ID is specified as part of the URL. If missing, NSO will generate and assign one on its own.
At times, especially when you use an iterative development approach or simply due to changing requirements, you might need to update (change) an existing nano service and its implementation. In addition to other service update best practices, such as model upgrades, you must carefully consider the nano-service-specific aspects. The following discussion mostly focuses on migrating an already provisioned service instance to a newer version; however, the same concepts also apply while you are initially developing the service.
In the simple case, updating the model of a nano service and getting the changes to show up in an already created instance is a matter of executing a normal re-deploy. This will synthesize any new components and provision them, along with the new configuration, just like you would expect from a non-nano service.
A major difference occurs if a service instance has been deleted and is in a zombie state when the nano service is updated: no synthesis is done for that service instance. The only goal of a deleted service is to revert any changes the instance made, so synthesis is not needed. This means that, if you've made changes to callbacks, post-actions, or pre-conditions, those changes will not be applied to zombies of the nano service. If a service instance requires the new changes, you must re-deploy it before it is deleted.
When updating nano services, also be aware that any old callbacks, post-actions, and other models that the service depends on must remain available in the new nano service package until all service instances created before the update have either been updated (through a re-deploy) or fully deleted. Therefore, take great care with any updates to a service if there are still zombies left in the system.
Adding new components to the behavior tree will create the new components during the next re-deploy (synthetization) and execute the states in the new components as is normally done.
When removing components from the behavior tree, the components that are removed are set to backtracking and are backtracked fully before they are removed from the plan.
When you remove a component, do so carefully so that any callbacks, post actions or any other model data that the component depends on are not removed until all instances of the old component are removed.
If the identity for a component type is removed, then NSO removes the component from the database when upgrading the package. If this happens, the component is not backtracked and the reverse diffsets are not applied.
Replacing components in the behavior tree is the same as having unrelated components that are deleted and added in the same update. The deleted components are backtracked as far as possible, and then the added components are created and their states executed in order.
In some cases, this is not the desired behavior when replacing a component. For example, if you only want to rename a component, backtracking and then adding the component again might make NSO push unnecessary changes to the network or run delete callbacks and post-actions that should not be run. To remedy this, you can add the ncs:deprecates-component statement to the new component, detailing which components it replaces. NSO then skips the backtracking of the old component and just applies all reverse diff-sets of the deprecated component. In the same re-deploy, it then executes the new component as usual. Therefore, if the new component produces the same configuration as the old component, nothing is pushed to the network.
If any of the deprecated components are backtracking, the backtracking is handled before the component is removed. When multiple components are deprecated in the same update, none of them is removed, as detailed above, until all of them are done backtracking (if any of them is backtracking).
When adding or removing states in a component, the old component is fully backtracked before a new component with the new states is added and executed. If the updated component produces the same configuration as the old one (and no preconditions halt the execution), no configuration should be pushed to the network. So, when changing states, take care when writing the preconditions and post-actions for a component if no new changes should be pushed to the network.
Any changes to the already present states that are kept in the updated component will not have their configuration updated until the new component is created, which happens after the old one has been fully backtracked.
For a component where only the configuration for one or more states have changed, the synthetization process will update the component with the new configuration and make sure that any new callbacks or similar are called during future execution of the component.
The text in this section sums up, and adds detail to, the way nano services operate; you will hopefully find it beneficial during implementation.
To reiterate, the purpose of a nano service is to break down a reactive FASTMAP (RFM) service into its isolated steps. It extends the normal ncs:servicepoint YANG mechanism and requires the following:
A YANG definition of the service input parameters, with a service point name and the additional nano-plan-data grouping.
A YANG definition of the plan component types and their states in a plan outline.
A YANG definition of a behavior tree for the service. The behavior tree defines how and when to instantiate components in the plan.
Code or templates for individual state transfers in the plan.
When a nano service is committed, the system evaluates its behavior tree. The result of this evaluation is a set of components that form the current plan for the service. This set of components is compared with the previous plan (before the commit). If there are new components, they are processed one by one.
For each component in the plan, it is executed state by state in the defined order. Before entering a new state, the create pre-condition for the state is evaluated if it exists. If a create pre-condition exists and if it is not satisfied, the system stops progressing this component and jumps to the next one. A kicker is then defined for the pre-condition that was not satisfied. Later, when this kicker triggers and the pre-condition is satisfied, it performs a reactive-re-deploy and the kicker is removed. This kicker mechanism becomes a self-sustained RFM loop.
If a state's pre-conditions are met, the callback function or template associated with the state is invoked, if it exists. If the callback is successful, the state is marked as reached, and the next state is executed.
A component that is no longer present but was in the previous plan goes into back-tracking mode, during which the goal is to remove all reached states and eventually remove the component from the plan. Removing state data changes is performed in strict reverse order, beginning with the last reached state and taking into account the delete pre-condition, if defined.
A nano service is expected to have at least one component. All components are expected to have ncs:init as their first state and ncs:ready as their last state. A component type can have any number of specific states in between ncs:init and ncs:ready.
Back-tracking is completely automatic and occurs in the following scenarios:
State pre-condition not satisfied: A reached state's pre-condition is no longer satisfied, and there are subsequent states that are reached and contain reverse diff-sets.
Plan component is removed: When a plan component is removed and has reached states that contain reverse diff-sets.
Service is deleted: When a service is deleted, NSO will set all plan components to back-tracking mode before deleting the service.
For each RFM loop, NSO traverses each component and state in order. For each non-satisfied create pre-condition, a kicker is started that monitors and triggers when the pre-condition becomes satisfied.
While traversing the states, a create pre-condition that was previously satisfied may become unsatisfied. If there are subsequent reached states that contain reverse diff-sets, then the component must be set to back-tracking mode. The goal of back-tracking mode is to revert all changes down to the state that originally failed to satisfy its create pre-condition. While back-tracking, the delete pre-condition for each state is evaluated, if it exists. If the delete pre-condition is satisfied, the state's reverse diff-set is applied, and the next state is considered. If the delete pre-condition is not satisfied, a kicker is created to monitor this delete pre-condition. When the kicker triggers, a reactive-re-deploy is called and the back-tracking continues until the goal is reached.
When the back-tracking plan component has reached its goal state, the component is set to normal mode again. The state's create pre-condition is evaluated and if it is satisfied the state is entered or otherwise a kicker is created as described above.
In some circumstances, a complete plan component is removed (for example, if the service input parameters change). If this happens, the plan component is checked for reached states that contain reverse diff-sets.
If the removed component contains reached states with reverse diff-sets, the deletion of the component is deferred and the component is set to back-tracking mode.
In this case, there is no specified goal state for the back-tracking. This means that when all the states have been reverted, the component is automatically deleted.
If a service is deleted, all components are set to back-tracking mode. The service becomes a zombie, storing away its plan states so that the service configuration can be removed.
All components of a deleted service are set in backtracking mode.
When a component becomes completely back-tracked, it is removed.
When all components in the plan are deleted, the service is removed.
A nano service behavior tree is a data structure defined for each service type. Without a behavior tree defined for the service point, the nano service cannot execute. It is the behavior tree that defines the currently executing nano-plan with its components.
The purpose of a behavior tree is to have a declarative way to specify how the service's input parameters are mapped to a set of component instances.
A behavior tree is a directed tree in which the nodes are classified as control flow nodes and execution nodes. For each pair of connected nodes, the outgoing node is called parent and the incoming node is called child. A control flow node has zero or one parent and at least one child and the execution nodes have one parent and no children.
There is exactly one special control flow node called the root, which is the only control flow node without a parent.
This definition implies that all interior nodes are control flow nodes, and all leaves are execution nodes. When creating, modifying, or deleting a nano service, NSO evaluates the behavior tree to render the current nano plan for the service. This process is called synthesizing the plan.
The control flow nodes behave differently, but in the end, they all synthesize their children in zero or more instances. When a control flow node is synthesized, the system executes its rules for synthesizing the node's children. Synthesizing an execution node adds the corresponding plan component instance to the nano service's plan.
All control flow and execution nodes may define pre-conditions, which must be satisfied to synthesize the node. If a pre-condition is not satisfied, a kicker is started to monitor the pre-condition.
All control flow and execution nodes may define an observe monitor which results in a kicker being started for the monitor when the node is synthesized.
If an invocation of an RFM loop (for example, a re-deploy) synthesizes the behavior tree and a pre-condition for a child is no longer satisfied, the sub-tree with its plan-components is removed (that is, the plan-components are set to back-tracking mode).
The following control flow nodes are defined:
Selector: A selector node has a set of children which are synthesized as described above.
Multiplier: A multiplier has a foreach mechanism that produces a list of elements. For each resulting element, the children are synthesized as described above. This can be used, for example, to create several plan-components of the same type.
There is just one type of execution node:
Create component: The create-component execution node creates an instance of the component type that it refers to in the plan.
It is recommended to keep the behavior tree as flat as possible. The most trivial case is when the behavior tree creates a static nano-plan, that is, all the plan-components are defined and never removed. The following is an example of such a behavior tree:
Having a selector at the root implies that all plan-components that either have no pre-conditions, or whose pre-conditions are satisfied, are created.
An example of a more elaborate behavior tree is the following:
This behavior tree has a selector node as the root. It always synthesizes the "base-config" plan component and then evaluates the pre-condition for the selector child. If that pre-condition is satisfied, it then creates four other plan-components.
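The selection logic just described can be modeled as a small sketch (illustrative Python, not an NSO API; the tree below uses two child components instead of four, and the use-virtual-devices flag is a hypothetical service leaf):

```python
# Toy model of behavior-tree synthesis (illustration only). A selector
# synthesizes every child whose pre-condition is satisfied; a
# create-component execution node adds one plan component.
def synthesize(node, env):
    if node["kind"] == "create-component":
        return {node["name"]}
    if node["kind"] == "selector":
        result = set()
        for child in node["children"]:
            pre = child.get("pre-condition")
            if pre is None or pre(env):
                result |= synthesize(child, env)
        return result
    raise ValueError("unknown node kind")

tree = {"kind": "selector", "children": [
    {"kind": "create-component", "name": "base-config"},
    {"kind": "selector",
     "pre-condition": lambda env: env["use-virtual-devices"],
     "children": [{"kind": "create-component", "name": "vm1"},
                  {"kind": "create-component", "name": "vm2"}]},
]}

synthesize(tree, {"use-virtual-devices": True})   # {"base-config", "vm1", "vm2"}
synthesize(tree, {"use-virtual-devices": False})  # {"base-config"}
```

If a later re-deploy evaluates the pre-condition to false, vm1 and vm2 are no longer synthesized, and their plan components are set to back-tracking mode.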
The multiplier control flow node is used when a plan component of a certain type should be cloned into several copies depending on some service input parameters. For this reason, the multiplier node defines a foreach, a when, and a variable. The foreach is evaluated, and for each node in the node-set that satisfies the when, the variable is evaluated. The resulting value is used, through parameter substitution, as a unique name for the duplicated plan component.
The value is also added to the nano service opaque which enables the individual state nano service create() callbacks to retrieve the value.
Variables might also have “when” expressions, which are used to decide if the variable should be added to the list of variables or not.
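The foreach/when/variable mechanism can be sketched as follows (illustrative Python; the endpoint fields mirror the VPN-link example discussed later in this section):

```python
# Illustrative model of the multiplier node: for each node in the
# foreach node-set that satisfies the when expression, the variable
# expression is evaluated to name one cloned plan component.
def multiply(nodeset, when, value_expr):
    return [value_expr(node) for node in nodeset if when(node)]

endpoints = [{"a-device": "ex1", "a-interface": "eth0",
              "b-device": "ex2", "b-interface": "eth0"}]

names = multiply(
    endpoints,
    when=lambda ep: True,  # no filtering in this sketch
    value_expr=lambda ep: "-".join((ep["a-device"], ep["a-interface"],
                                    ep["b-device"], ep["b-interface"])))
# names == ["ex1-eth0-ex2-eth0"]
```

Each resulting name yields one component of the referenced type; a changed name causes the old component to be back-tracked and a new one to be created.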
Pre-conditions are what drive the execution of a nano service. A pre-condition is a prerequisite for a state to be executed or a component to be synthesized. If the pre-condition is not satisfied, it is then turned into a kicker which in turn re-deploys the nano service once the condition is fulfilled.
When working with pre-conditions, be aware that they work slightly differently when used as a kicker to re-deploy the service versus during the execution of the service. When the pre-condition is used in the re-deploy kicker, it works as explained in the kicker documentation: the trigger expression is evaluated before and after the change-set of the commit when the monitored node-set is changed. When used during the execution of a nano service, the pre-condition can only be evaluated against the current state of the database, which means it only checks that the monitor returns a node-set of one or more nodes and that the trigger expression (if there is one) is fulfilled for any of the nodes in the node-set.
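The current-state evaluation described above amounts to the following check (a sketch, not NSO code; the node fields are illustrative):

```python
# A pre-condition evaluated against the current database state holds if
# the monitor expression returns a non-empty node-set and the trigger
# expression (when present) is true for at least one returned node.
def precondition_satisfied(nodeset, trigger_expr=None):
    if not nodeset:
        return False
    if trigger_expr is None:
        return True
    return any(trigger_expr(node) for node in nodeset)

# e.g. a monitor on $SERVICE with trigger "vm-up-and-running = 'true'":
precondition_satisfied([{"vm-up-and-running": "true"}],
                       lambda n: n["vm-up-and-running"] == "true")   # True
precondition_satisfied([], None)                                     # False
```

Note the last case: a deleted node yields an empty node-set, so without ncs:trigger-on-delete the pre-condition can never become true this way.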
Because of this difference in evaluation, pre-conditions that check whether a node has been deleted are handled specially. Kickers always trigger for changed nodes (added, deleted, or modified) and can check that the node was deleted in the commit that triggered the kicker. In the nano service evaluation, however, only the current state of the database is available: the monitor expression returns no nodes to evaluate the trigger expression against, so the pre-condition evaluates to false. To support deletes in both cases, create a pre-condition with a monitor expression and a child node ncs:trigger-on-delete. This both creates a kicker that checks for deletion of the monitored node and does the right thing in the nano service evaluation of the pre-condition. For example, you could have the following component:
The component would only trigger the init state's delete pre-condition when the device named test is deleted.
It is possible to add multiple monitors to a pre-condition by using the ncs:all or ncs:any extensions. Both extensions take one or more monitors as arguments. A pre-condition using the ncs:all extension is satisfied if all monitors given as arguments evaluate to true; a pre-condition using the ncs:any extension is satisfied if at least one of them evaluates to true. The following component uses the ncs:all and ncs:any extensions for its self state's create and delete pre-conditions, respectively:
The service opaque is a name-value list that can optionally be created/modified in some of the service callbacks, and then travels the chain of callbacks (pre-modification, create, post-modification). It is returned by the callbacks and stored persistently in the service private data. Hence, the next service invocation has access to the current opaque and can make subsequent read/write operations to the same object. The object is usually called opaque in Java and proplist in Python callbacks.
The nano services handle the opaque in a similar fashion, where a callback for every state has access to and can modify the opaque. However, the behavior tree can also define variables, which you can use in preconditions or to set component names. These variables are also available in the callbacks, as component properties. The mechanism is similar but separate from the opaque. While the opaque is a single service-instance-wide object set only from the service code, component variables are set in and scoped according to the behavior tree. That is, component properties contain only the behavior tree variables which are in scope when a component is synthesized.
For example, take the following behavior tree snippet:
The callbacks for states in the “base-config” component only see the VAR1 variable, while those in “component1” see both VAR1 and VAR2 as component properties.
Additionally, both the service opaque and component variables (properties) are used to look up substitutions in nano service XML templates and in the behavior tree. If used in the behavior tree, the same rules apply for the opaque as for component variables. So, a value needs to contain single quotes if you wish to use it verbatim in preconditions and similar constructs, for example:
Using this scheme at an early state, such as the “base-config” component's “ncs:init”, you can have a callback that sets name-value pairs for all other states that are then implemented solely with templates and preconditions.
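For example, an early-state callback might maintain the opaque with helpers like these (hypothetical helper names; the opaque itself is just a list of name-value string pairs):

```python
# The opaque (proplist in Python) is a list of (name, value) string
# pairs. These hypothetical helpers replace-or-append a pair, and quote
# a value for verbatim use in pre-condition XPath expressions.
def set_prop(proplist, name, value):
    proplist = [(n, v) for (n, v) in proplist if n != name]
    proplist.append((name, value))
    return proplist

def set_xpath_prop(proplist, name, value):
    # Single-quote the value so it compares verbatim in XPath
    return set_prop(proplist, name, "'" + value + "'")

proplist = []
proplist = set_prop(proplist, "MGMT_IP", "198.51.100.1")
proplist = set_xpath_prop(proplist, "READY", "true")
# proplist == [("MGMT_IP", "198.51.100.1"), ("READY", "'true'")]
```

The updated proplist is returned from the callback, persisted by NSO, and handed to the callbacks of subsequent states and re-deploys.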
The nano service can have several callback registrations, one for each plan component state. Note, however, that some states may have no callbacks at all. Such a state may simply act as a checkpoint that some condition is satisfied, expressed using pre-condition statements. A component's ncs:ready state is a good example of this.
The drawback with this flexible callback registration is that there must be a way for the NSO Service Manager to know if all expected nano service callbacks have been registered. For this reason, all nano service plan component states that require callbacks are marked with this information. When the plan is executed and the callback markings in the plan mismatch with the actual registrations, this results in an error.
All callback registrations in NSO require a daemon to be instantiated, such as a Python or Java process. For nano services, it is allowed to have many daemons where each daemon is responsible for a subset of the plan state callback registrations. The neat thing here is that it becomes possible to mix different callback types (Template/Python/Java) for different plan states.
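Conceptually, the check NSO performs at plan execution amounts to a set comparison: the states marked as requiring callbacks must be covered by the union of registrations from all daemons. A sketch with illustrative names:

```python
# States marked in the plan as requiring a callback:
required = {("vr:vrouter", "vr:vm-requested"),
            ("vr:vrouter", "vr:vm-configured")}

# Registrations contributed by different daemons (e.g. a template for
# one state and a Python daemon for the other):
template_regs = {("vr:vrouter", "vr:vm-configured")}
python_regs = {("vr:vrouter", "vr:vm-requested")}

missing = required - (template_regs | python_regs)
# missing == set(): every marked state has a registered callback, so
# plan execution does not report a registration error
```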
The mixed callback feature caters to the case where most of the callbacks are templates and only some are Java or Python. This works well because nano services try to resolve the template parameters using the nano service opaque when applying a template. This is a unique functionality for nano services that makes Java or Python apply-template callbacks unnecessary.
You can implement nano service callbacks as Templates as well as Python, Java, Erlang, and C code. The following examples cover the implementation of Template, Python and Java.
A plan state template, if defined, replaces the need for a create() callback. In this case, there are no delete() callbacks, and the status definitions must instead be handled by the state's delete pre-condition. In addition to the servicepoint attribute, the template must have a componenttype and a state attribute to be registered on the plan state:
Specific to nano services, you can use parameters, such as $SOMEPARAM in the template. The system searches for the parameter value in the service opaque and in the component properties. If it is not defined, applying the template will fail.
A Python create() callback is very similar to its ordinary service counterpart. The difference is that it has additional arguments. plan refers to the synthesized plan, while component and state specify the component and state for which it is invoked. The proplist argument is the nano service opaque (same naming as for ordinary services) and component_proplist contains component variables, along with their values.
In the majority of cases, you should not need to manage the status of nano states yourself. However, should you need to override the default behavior, you can set the status explicitly in the callback, using code similar to the following:
The Python nano service callback needs a registration call for the specific service point, componentType, and state that it should be invoked for.
For Java, annotations are used to define the callbacks for the component states. The registration of these callbacks is performed by the ncs-java-vm. The NanoServiceContext argument contains methods for retrieving the component and state for the invoked callback as well as methods for setting the resulting plan state status.
Several componentType and state callbacks can be defined in the same Java class and are then registered by the same daemon.
In some scenarios, there is a need to be able to register a callback for a certain state in several components with different component types. For this reason, it is possible to register a callback with a wildcard, using “*” as the component type. The invoked state sends the actual component name to the callback, allowing the callback to still distinguish component types if required.
In Python, the component type is provided as an argument to the callback (component) and a generic callback is registered with an asterisk for a component, such as:
In Java, you can perform the registration in the method annotation, as before. To retrieve the calling component type, use the NanoServiceContext.getComponent() method. For example:
The generic callback can then act for the registered state in any component type.
The ordinary service pre- and post-modification callbacks still exist for nano services. They are registered as for an ordinary service and are invoked before the behavior tree synthesis and after the last component/state invocation.
Registration of the ordinary create() callback will not fail for a nano service, but it will never be invoked.
When implementing a nano service, you might end up in a situation where a commit is needed between states in a component, to make sure that something has happened before the service can continue executing. One example is when the service depends on notifications from a device. In such a case, you can set up a notification kicker in the first state and then trigger a forced commit before any later states can proceed, thereby making sure that all future notifications are seen by the later states of the component.
To force a commit between two states of a component, add the ncs:force-commit tag inside an ncs:create or ncs:delete tag. See the following example:
When defining a nano service, it is assumed that the plan is stored under the service path, as ncs:plan-data is added to the service definition. When the service instance is deleted, the plan is moved to the zombie instead, since the instance has been removed and the plan cannot be stored under it anymore. When writing other services or when working with a nano service in general, you need to be aware that the plan for a service might be in one of these two places depending on if the service instance has been deleted or not.
To make it easier to work with a service, you can define a custom location for the plan and its history. In the ncs:service-behavior-tree, you can specify that the plan should be stored outside of the service by setting the ncs:plan-location tag to a custom location. The location where the plan is stored must be either a list or a container and include the ncs:plan-data tag. The plan data is then created in this location whether or not the service instance has been deleted (turned into a zombie), making it easy to base decisions on the state of the service, as all plan queries can query the same plan.
You can use XPath with the ncs:plan-location statement. The XPath is evaluated in the nano service context. When the list or container that contains the plan is nested under another list, the outer list instance must exist before the nano service is created. The outer list instance of the plan location must also remain intact for the service's further life-cycle management, such as redeployment, deletion, etc. Otherwise, an error is returned and logged, and any service interaction (create, re-deploy, delete, etc.) won't succeed.
The commit queue feature, described in , allows for increased overall throughput of NSO by committing configuration changes into an outbound queue item instead of directly to affected devices. Nano services are aware of the commit queue and will make use of it, however, this interaction requires additional consideration.
When the commit queue is enabled and there are outstanding commit queue items, the network is lagging behind the CDB. The CDB is forward-looking and shows the desired state of the network. Hence, the nano plan shows the desired state as well, since changes to reach this state may not have been pushed to the devices yet.
To keep the convergence of the nano service in sync with the commit queue, nano services behave more asynchronously:
A nano service state does not make any progression while the service has an outstanding commit queue item. The outstanding item is listed under plan/commit-queue for the service, in normal or in zombie mode.
On completion of the commit queue item, the nano plan comes in sync with the network. The outstanding commit queue item is removed from the list above and the system issues a reactive-re-deploy action to resume the progression of the nano service.
Post-actions are delayed, while there is an outstanding commit queue item.
The reason for such behavior is that commit queue items can fail. In case of a failure, the CDB and the network have diverged. In turn, the nano plan may have diverged and not reflect the actual network state if the failed commit queue item contained changes related to the nano service.
What is worse, the network may be left in an inconsistent state. To counter that, NSO supports multiple recovery options for the commit queue. Since NSO release 5.7, rollback-on-error is the recommended option, as it undoes all the changes that are part of the same transaction. If the transaction includes the initial service instance creation, the instance is removed as well, which is usually not desired for nano services. A nano service avoids such removal by only committing the service intent (the instance configuration) in the initial transaction. In this case, the service avoids a potential rollback, as it does not perform any device configuration in the same transaction but progresses solely through (reactive) re-deploy.
While error recovery helps keep the network consistent, the end result remains that the requested change was not deployed. If a commit queue item with nano service-related changes fails, that signifies a failure for the nano service and NSO does the following:
Service progression stops.
The nano plan is marked as failed by creating the failed leaf under the plan.
The scheduled post-actions are canceled. Canceled post-actions stay in the side-effect-queue with status canceled and are not executed.
After such an event, manual intervention is required. If not using the rollback-on-error option or the rollback transaction fails, consult for the correct procedure to follow. Once the cause of the commit queue failure is resolved, you can manually resume the service progression by invoking the reactive-re-deploy action on a nano service or a zombie.
The service-commit-queue-event helps detect that a nano service instance deployment failed because a configuration change committed through the commit queue has failed. See section for details.
You can find another nano service example under examples.ncs/getting-started/developing-with-ncs/20-nano-services. The example illustrates a situation with a simple VPN link that should be set up between two devices. The link is considered established only after it is tested and a test-passed leaf is set to true. If the VPN link changes, the new endpoints must be set up before removing the old endpoints, to avoid disturbing customer traffic during the operation.
The package named link contains the nano service definition. The service has a list containing at most one element, which constitutes the VPN link and is keyed on a-device a-interface b-device b-interface. The list element corresponds to a component type link:vlan-link in the nano service plan.
In the plan definition, note that there is only one nano service callback registered for the service. This callback is defined for the link:dev-setup state in the link:vlan-link component type. In the plan, it is represented as follows:
The callback is a template. You can find it under packages/link/templates as link-template.xml.
For the state ncs:ready in the link:vlan-link component type there are both a create and a delete pre-condition. The create pre-condition for this state is as follows:
This pre-condition implies that the components based on this component type are not considered finished until the test-passed leaf is set to a true value. The pre-condition implements the requirement that after the initial setup of a link configured by the link:dev-setup state, a manual test and setting of the test-passed leaf is performed before the link is considered finished.
The delete pre-condition for the same state is as follows:
This pre-condition implies that before you start deleting (back-tracking) an old component, the new component must have reached the ncs:ready state, that is, after being successfully tested. The first part of the pre-condition checks the status of the vlan-link components. Since there can be at most one link configured in the service instance, the only non-backtracking component, other than self, is the new link component. However, that condition on its own would prevent the component from being deleted when deleting the service. So, the second part, after the or statement, checks whether all components are back-tracking, which signifies service deletion. This approach illustrates a "create-before-break" scenario where the new link is created first, and only when it is set up is the old one removed.
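The logic of this delete pre-condition can be modeled as follows (illustrative Python; the component names and fields are hypothetical):

```python
# "Create-before-break": an old component may be torn down once another
# non-back-tracking component has reached ncs:ready, or once every
# component is back-tracking (i.e. the whole service is being deleted).
def may_delete(self_name, components):
    # components: name -> {"back-track": bool, "ready": bool}
    others_ready = any(not c["back-track"] and c["ready"]
                       for name, c in components.items()
                       if name != self_name)
    all_backtracking = all(c["back-track"] for c in components.values())
    return others_ready or all_backtracking

plan = {"old-link": {"back-track": True, "ready": True},
        "new-link": {"back-track": False, "ready": False}}
may_delete("old-link", plan)   # False: the new link is not ready yet

plan["new-link"]["ready"] = True
may_delete("old-link", plan)   # True: safe to remove the old link
```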
The ncs:service-behavior-tree is registered on the servicepoint link-servicepoint that is defined by the nano service. It refers to the plan definition named link:link-plan. The behavior tree has a selector on top, which chooses to synthesize its children depending on their pre-conditions. In this tree, there are no pre-conditions, so all children will be synthesized.
The multiplier control node chooses a node-set. For each node in that node-set, a variable named VALUE is created with a unique value, and a component of the link:vlan-link type is created. The name of each individual component is the value of the variable VALUE.
Since the chosen node-set is the "endpoints" list that can contain at most one element, it produces only one component. However, if the link in the service is changed, that is, the old list entry is deleted and a new one is created, then the multiplier creates a component with a new name.
This forces the old component (which is no longer synthesized) to be back-tracked and the plan definition above handles the "create-before-break" behavior of the back-tracking.
To run the example, do the following:
Build the example:
Start the example:
Run the example:
Now create a service that sets up a VPN link between devices ex1 and ex2. The service completes immediately, since the test-passed leaf is set to true.
You can inspect the result of the commit:
The service sets up the link between the devices. Inspect the plan:
All components in the plan have reached their ready state.
Now, change the link by changing the interface on one of the devices. To do this, you must remove the old list entry in "endpoints" and create a new one.
Commit a dry-run to inspect what happens:
Upon committing, the service just adds the new interface and does not remove anything at this point. The reason is that the test-passed leaf is not set to true for the new component. Commit this change and inspect the plan:
Notice that the new component ex1-eth0-ex2-eth1 has not reached its ready state yet. The old component ex1-eth0-ex2-eth0 therefore still exists, in back-tracking mode, waiting for the new component to finish.
If you check what the service has configured at this point, you get the following:
Both the old and the new link exist at this point. Now, set the test-passed leaf to true to force the new component to reach its ready state.
If you now check the service plan, you see the following:
The old component has been completely backtracked and removed because the new component is finished. Also check the service modifications; you should see that the old link endpoint is removed:
Deleting a nano service always (even without a commit queue) creates a zombie and schedules its re-deploy to perform backtracking. Again, the re-deploy and, consequently, the removal will not take place while there is an outstanding commit queue item.
module vrouter {
prefix vr;
identity vm-requested {
base ncs:plan-state;
}
identity vm-configured {
base ncs:plan-state;
}
identity vrouter {
base ncs:plan-component-type;
}
ncs:plan-outline vrouter-plan {
description "Plan for configuring a VM-based router";
ncs:component-type "vr:vrouter" {
ncs:state "vr:vm-requested";
ncs:state "vr:vm-configured";
}
}
}

ncs:state "vr:vm-requested" {
ncs:create {
ncs:nano-callback;
}
}

ncs:state "vr:vm-configured" {
ncs:create {
ncs:nano-callback;
ncs:pre-condition {
ncs:monitor "$SERVICE" {
ncs:trigger-expr "vm-up-and-running = 'true'";
}
}
}
}

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="vrouter-servicepoint"
componenttype="vr:vrouter"
state="vr:vm-configured">
<devices xmlns="http://tail-f.com/ns/ncs">
<!-- ... -->
</devices>
</config-template>

class NanoApp(ncs.application.Application):
def setup(self):
self.register_nano_service('vrouter-servicepoint', # Service point
'vr:vrouter', # Component
'vr:vm-requested', # State
NanoServiceCallbacks)

class NanoServiceCallbacks(ncs.application.NanoService):
@ncs.application.NanoService.create
def cb_nano_create(self, tctx, root, service, plan, component, state,
proplist, component_proplist):
...

ncs:plan-outline vrouter-plan {
description "Plan for configuring a VM-based router";
ncs:component-type "vr:vrouter" {
ncs:state "ncs:init";
ncs:state "vr:vm-requested" {
ncs:create {
ncs:nano-callback;
}
}
ncs:state "vr:vm-configured" {
ncs:create {
ncs:nano-callback;
ncs:pre-condition {
ncs:monitor "$SERVICE" {
ncs:trigger-expr "vm-up-and-running = 'true'";
}
}
}
}
ncs:state "ncs:ready";
}
}

ncs:service-behavior-tree vrouter-servicepoint {
description "A static, single component behavior tree";
ncs:plan-outline-ref "vr:vrouter-plan";
ncs:selector {
ncs:create-component "'vrouter'" {
ncs:component-type-ref "vr:vrouter";
}
}
}

list vrouter {
description "Trivial VM-based router nano service";
uses ncs:nano-plan-data;
uses ncs:service-data;
ncs:servicepoint vrouter-servicepoint;
key name;
leaf name {
type string;
}
leaf vm-up-and-running {
type boolean;
config false;
}
}

admin@ncs# show vrouter vr-01 plan
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
---------------------------------------------------------------------------------------------
self self false - init reached 2023-08-11T07:45:20 - -
ready not-reached - - -
vrouter vrouter false - init reached 2023-08-11T07:45:20 - -
vm-requested reached 2023-08-11T07:45:20 - -
vm-configured not-reached - - -
ready not-reached - - -

admin@ncs# vrouter vr-01 get-modifications
cli {
local-node {
data +vm-instance vr-01 {
+ type csr-small;
+}
}
}

admin@ncs# show vrouter vr-01 plan
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
-----------------------------------------------------------------------------------------
self self false - init reached 2023-08-11T07:45:20 - -
ready reached 2023-08-11T07:47:36 - -
vrouter vrouter false - init reached 2023-08-11T07:45:20 - -
vm-requested reached 2023-08-11T07:45:20 - -
vm-configured reached 2023-08-11T07:47:36 - -
ready reached 2023-08-11T07:47:36 - -

admin@ncs# vrouter vr-01 get-modifications
cli {
local-node {
data +vm-instance vr-01 {
+ type csr-small;
+ address 198.51.100.1;
+}
}
}

ncs:state "vr:vm-requested" {
ncs:create { ... }
ncs:delete {
ncs:pre-condition {
ncs:monitor "$SERVICE" {
ncs:trigger-expr "requests-in-processing = '0'";
}
}
}
}
ncs:state "vr:vm-configured" {
ncs:create { ... }
ncs:delete {
ncs:nano-callback;
}
}

@NanoService.delete
def cb_nano_delete(self, tctx, root, service, plan, component, state,
proplist, component_proplist):
...

$ ncs_cli -u admin
admin@ncs> show side-effect-queue side-effect status
ID STATUS
------------
2 failed
[ok][2023-08-15 11:01:10]
admin@ncs> request side-effect-queue side-effect 2 reschedule
side-effect-id 2
[ok][2023-08-15 11:01:18]

ncs:state "ncs:init" {
ncs:create {
ncs:post-action-node "$SERVICE" {
ncs:action-name "allocate-ip";
ncs:sync;
}
}
}
ncs:state "vr:ip-allocated" {
ncs:delete {
ncs:post-action-node "$SERVICE" {
ncs:action-name "release-ip";
}
}
}

ncs:service-behavior-tree multirouter-servicepoint {
description "A 2-VM behavior tree";
ncs:plan-outline-ref "vr:multirouter-plan";
ncs:selector {
ncs:create-component "'vm1'" {
ncs:component-type-ref "vr:router-vm";
}
ncs:create-component "'vm2'" {
ncs:component-type-ref "vr:router-vm";
}
}
}

ncs:service-behavior-tree multirouter-servicepoint {
description "A conditional 2-VM behavior tree";
ncs:plan-outline-ref "vr:multirouter-plan";
ncs:selector {
ncs:create-component "'router'" { ... }
ncs:selector {
ncs:pre-condition {
ncs:monitor "$SERVICE" {
ncs:trigger-expr "use-virtual-devices = 'true'";
}
}
ncs:create-component "'vm1'" { ... }
ncs:create-component "'vm2'" { ... }
}
}
}

ncs:multiplier {
ncs:foreach "vms" {
ncs:variable "NAME" {
ncs:value-expr "concat('vm-', name)";
}
ncs:create-component "$NAME" { ... }
}
}

// vrouter name
ncs:variable "NAME" {
ncs:value-expr "current()/name";
}
// vrouter day0 component name
ncs:variable "D0NAME" {
ncs:value-expr "concat(current()/name, '-day0')";
}
// vrouter day1 component name
ncs:variable "D1NAME" {
ncs:value-expr "concat(current()/name, '-day1')";
}

ncs:state "vr:requested" {
ncs:create {
// Call a Python action to create and start a netsim vrouter
ncs:post-action-node "$SERVICE" {
ncs:action-name "create-vrouter";
ncs:result-expr "result = 'true'";
ncs:sync;
}
}
}

ncs:state "vr:configured" {
ncs:create {
// Wait for the onboarding to complete
ncs:pre-condition {
ncs:monitor "$SERVICE/plan/component[type='vr:vrouter']" +
"[name=$D0NAME]/state[name='vr:onboarded']" {
ncs:trigger-expr "post-action-status = 'create-reached'";
}
}
// Invoke a service template to configure the vrouter
ncs:nano-callback;
}
}

<services xmlns="http://tail-f.com/ns/ncs">
<plan-notifications>
<subscription>
<name>nano1</name>
<service-type>/vr:vrouter</service-type>
<component-type>self</component-type>
<state>ready</state>
<operation>modified</operation>
</subscription>
<subscription>
<name>nano2</name>
<service-type>/vr:vrouter</service-type>
<component-type>self</component-type>
<state>ready</state>
<operation>created</operation>
</subscription>
</plan-notifications>
</services>

<services xmlns="http://tail-f.com/ns/ncs">
<commit-queue-notifications>
<subscription>
<name>nano1</name>
<service-type>/vr:vrouter</service-type>
</subscription>
</commit-queue-notifications>
</services>

$ curl -isu admin:admin -X GET -H "Accept: text/event-stream"
http://localhost:8080/restconf/streams/service-state-changes/json
data: {
data: "ietf-restconf:notification": {
data: "eventTime": "2021-11-16T20:36:06.324322+00:00",
data: "tailf-ncs:service-commit-queue-event": {
data: "service": "/vrouter:vrouter[name='vr7']",
data: "id": 1637135519125,
data: "status": "completed",
data: "trace-id": "vr7-1"
data: }
data: }
data: }
data: {
data: "ietf-restconf:notification": {
data: "eventTime": "2021-11-16T20:36:06.728911+00:00",
data: "tailf-ncs:plan-state-change": {
data: "service": "/vrouter:vrouter[name='vr7']",
data: "component": "self",
data: "state": "tailf-ncs:ready",
data: "operation": "modified",
data: "status": "reached",
data: "trace-id": "vr7-1"
data: }
data: }
data: }

$ netconf-console create-subscription=service-state-changes
<?xml version="1.0" encoding="UTF-8"?>
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2021-11-16T20:36:06.324322+00:00</eventTime>
<service-commit-queue-event xmlns="http://tail-f.com/ns/ncs">
<service xmlns:vr="http://com/example/vrouter">/vr:vrouter[vr:name='vr7']</service>
<id>1637135519125</id>
<status>completed</status>
<trace-id>vr7-1</trace-id>
</service-commit-queue-event>
</notification>
<?xml version="1.0" encoding="UTF-8"?>
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2021-11-16T20:36:06.728911+00:00</eventTime>
<plan-state-change xmlns="http://tail-f.com/ns/ncs">
<service xmlns:vr="http://com/example/vrouter">/vr:vrouter[vr:name='vr7']</service>
<component>self</component>
<state>ready</state>
<operation>modified</operation>
<status>reached</status>
<trace-id>vr7-1</trace-id>
</plan-state-change>
</notification>

$ ncs_cli -u admin -C <<<'show notification stream service-state-changes'
notification
eventTime 2021-11-16T20:36:06.324322+00:00
service-commit-queue-event
service /vrouter[name='vr7']
id 1637135519125
status completed
trace-id vr7-1
!
!
notification
eventTime 2021-11-16T20:36:06.728911+00:00
plan-state-change
service /vrouter[name='vr7']
component self
state ready
operation modified
status reached
trace-id vr7-1
!
!

$ curl -isu admin:admin -X PATCH \
    -H "Content-type: application/yang-data+json" \
    'http://localhost:8080/restconf/data?commit-queue=sync&trace-id=vr7-1' \
    -d '{ "vrouter:vrouter": [ { "name": "vr7" } ] }'

ncs:component "base-config" {
ncs:state "init" {
ncs:delete {
ncs:pre-condition {
ncs:monitor "/devices/device[name='test']" {
ncs:trigger-on-delete;
}
}
}
}
ncs:state "ready";
}

ncs:component "base-config" {
  ncs:state "init" {
    ncs:create {
      ncs:pre-condition {
        ncs:all {
          ncs:monitor "$SERVICE/syslog" {
            ncs:trigger-expr "current() = 'true'";
          }
          ncs:monitor "$SERVICE/dns" {
            ncs:trigger-expr "current() = 'true'";
          }
        }
      }
    }
    ncs:delete {
      ncs:pre-condition {
        ncs:any {
          ncs:monitor "$SERVICE/syslog" {
            ncs:trigger-expr "current() = 'false'";
          }
          ncs:monitor "$SERVICE/dns" {
            ncs:trigger-expr "current() = 'false'";
          }
        }
      }
    }
  }
  ncs:state "ready";
}

ncs:selector {
ncs:variable "VAR1" {
ncs:value-expr "'value1'";
}
ncs:create-component "'base-config'" {
ncs:component-type-ref "t:base-config";
}
ncs:selector {
ncs:variable "VAR2" {
ncs:value-expr "'value2'";
}
ncs:create-component "'component1'" {
ncs:component-type-ref "t:my-component";
}
}
}

proplist.append(('VARX', "'some value'"))

<config-template xmlns="http://tail-f.com/ns/config/1.0"
servicepoint="my-servicepoint"
componenttype="my:some-component"
state="my:some-state">
<devices xmlns="http://tail-f.com/ns/ncs">
<!-- ... -->
</devices>
</config-template>

class NanoServiceCallbacks(ncs.application.NanoService):
@ncs.application.NanoService.create
def cb_nano_create(self, tctx, root, service, plan, component, state,
proplist, component_proplist):
...
@ncs.application.NanoService.delete
def cb_nano_delete(self, tctx, root, service, plan, component, state,
proplist, component_proplist):
...

plan.component[component].state[state].status = 'failed'

class Main(ncs.application.Application):
def setup(self):
...
self.register_nano_service('my-servicepoint',
'my:some-component',
'my:some-state',
NanoServiceCallbacks)

public class myRFS {
@NanoServiceCallback(servicePoint="my-servicepoint",
componentType="my:some-component",
state="my:some-state",
callType=NanoServiceCBType.CREATE)
public Properties createSomeComponentSomeState(
NanoServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque,
Properties componentProperties)
throws DpCallbackException {
// ...
}
@NanoServiceCallback(servicePoint="my-servicepoint",
componentType="my:some-component",
state="my:some-state",
callType=NanoServiceCBType.DELETE)
public Properties deleteSomeComponentSomeState(
NanoServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque,
Properties componentProperties)
throws DpCallbackException {
// ...
}
}

self.register_nano_service('my-servicepoint', '*', state, ServiceCallbacks)

@NanoServiceCallback(servicePoint="my-servicepoint",
componentType="*", state="my:some-state",
callType=NanoServiceCBType.CREATE)
public Properties genericNanoCreate(NanoServiceContext context,
NavuNode service,
NavuNode ncsRoot,
Properties opaque,
Properties componentProperties)
throws DpCallbackException {
String currentComponent = context.getComponent();
// ...
}

ncs:component "base-config" {
ncs:state "init" {
ncs:create {
ncs:force-commit;
}
}
ncs:state "ready" {
ncs:delete {
ncs:force-commit;
}
}
}

identity base-config {
base ncs:plan-component-type;
}
list custom {
description "Custom plan location example service.";
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:service-data;
ncs:servicepoint custom-plan-servicepoint;
}
list custom-plan {
description "Custom plan location example plan.";
key name;
leaf name {
tailf:info "Unique service id";
tailf:cli-allow-range;
type string;
}
uses ncs:nano-plan-data;
}
ncs:plan-outline custom-plan {
description
"Custom plan location example outline";
ncs:component-type "p:base-config" {
ncs:state "ncs:init";
ncs:state "ncs:ready";
}
}
ncs:service-behavior-tree custom-plan-servicepoint {
description
"Custom plan location example service behaviour tree.";
ncs:plan-outline-ref custom:custom-plan;
ncs:plan-location "/custom-plan";
ncs:selector {
ncs:create-component "'base-config'" {
ncs:component-type-ref "p:base-config";
}
}
}

identity vlan-link {
base ncs:plan-component-type;
}
identity dev-setup {
base ncs:plan-state;
}
ncs:plan-outline link:link-plan {
description
"Make before break vlan plan";
ncs:component-type "link:vlan-link" {
ncs:state "ncs:init";
ncs:state "link:dev-setup" {
ncs:create {
ncs:nano-callback;
}
}
ncs:state "ncs:ready" {
ncs:create {
ncs:pre-condition {
ncs:monitor "$SERVICE/endpoints" {
ncs:trigger-expr "test-passed = 'true'";
}
}
}
ncs:delete {
ncs:pre-condition {
ncs:monitor "$SERVICE/plan" {
ncs:trigger-expr
"component[type = 'vlan-link'][back-track = 'false']"
+ "/state[name = 'ncs:ready'][status = 'reached']"
+ " or not(component[back-track = 'false'])";
}
}
}
}
}
}

ncs:state "link:dev-setup" {
ncs:create {
ncs:nano-callback;
}
}

ncs:create {
ncs:pre-condition {
ncs:monitor "$SERVICE/endpoints" {
ncs:trigger-expr "test-passed = 'true'";
}
}
}

ncs:delete {
ncs:pre-condition {
ncs:monitor "$SERVICE/plan" {
ncs:trigger-expr
"component[type = 'vlan-link'][back-track = 'false']"
+ "/state[name = 'ncs:ready'][status = 'reached']"
+ " or not(component[back-track = 'false'])";
}
}
}

ncs:service-behavior-tree link-servicepoint {
description
"Make before break vlan example";
ncs:plan-outline-ref "link:link-plan";
ncs:selector {
ncs:multiplier {
ncs:foreach "endpoints" {
ncs:variable "VALUE" {
ncs:value-expr "concat(a-device, '-', a-interface,
'-', b-device, '-', b-interface)";
}
ncs:create-component "$VALUE" {
ncs:component-type-ref "link:vlan-link";
}
}
}
}
}

$ cd examples.ncs/getting-started/developing-with-ncs/20-nano-services
$ make all

$ ncs-netsim restart
$ ncs

$ ncs_cli -C -u admin
admin@ncs(config)# devices sync-from
sync-result {
device ex0
result true
}
sync-result {
device ex1
result true
}
sync-result {
device ex2
result true
}
admin@ncs(config)# config
Entering configuration mode terminal

admin@ncs(config)# link t2 unit 17 vlan-id 1
admin@ncs(config-link-t2)# link t2 endpoints ex1 eth0 ex2 eth0 test-passed true
admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# commit
admin@ncs(config-endpoints-ex1/eth0/ex2/eth0)# top

admin@ncs(config)# exit
admin@ncs# link t2 get-modifications
cli devices {
device ex1 {
config {
r:sys {
interfaces {
interface eth0 {
+ unit 17 {
+ vlan-id 1;
+ }
}
}
}
}
}
device ex2 {
config {
r:sys {
interfaces {
interface eth0 {
+ unit 17 {
+ vlan-id 1;
+ }
}
}
}
}
}
}

admin@ncs# show link t2 plan component * state * status
NAME STATE STATUS
---------------------------------------
self init reached
ready reached
ex1-eth0-ex2-eth0 init reached
dev-setup reached
ready reached

admin@ncs# config
Entering configuration mode terminal
admin@ncs(config)# no link t2 endpoints ex1 eth0 ex2 eth0
admin@ncs(config)# link t2 endpoints ex1 eth0 ex2 eth1

admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit dry-run
cli devices {
device ex1 {
config {
r:sys {
interfaces {
interface eth0 {
}
}
}
}
}
device ex2 {
config {
r:sys {
interfaces {
+ interface eth1 {
+ unit 17 {
+ vlan-id 1;
+ }
+ }
}
}
}
}
}
link t2 {
- endpoints ex1 eth0 ex2 eth0 {
- test-passed true;
- }
+ endpoints ex1 eth0 ex2 eth1 {
+ }
}

admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit
admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# top
admin@ncs(config)# exit
admin@ncs# show link t2 plan
...
BACK ...
NAME TYPE TRACK GOAL STATE STATUS ...
-------------------------------------------------------------------...
self self false - init reached ...
ready reached ...
ex1-eth0-ex2-eth1 vlan-link false - init reached ...
dev-setup reached ...
ready not-reached ...
ex1-eth0-ex2-eth0 vlan-link true - init reached ...
dev-setup reached ...
ready reached ...

admin@ncs# link t2 get-modifications
cli devices {
device ex1 {
config {
r:sys {
interfaces {
interface eth0 {
+ unit 17 {
+ vlan-id 1;
+ }
}
}
}
}
}
device ex2 {
config {
r:sys {
interfaces {
interface eth0 {
+ unit 17 {
+ vlan-id 1;
+ }
}
+ interface eth1 {
+ unit 17 {
+ vlan-id 1;
+ }
+ }
}
}
}
}
}

admin@ncs(config)# link t2 endpoints ex1 eth0 ex2 eth1 test-passed true
admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# commit

admin@ncs(config-endpoints-ex1/eth0/ex2/eth1)# top
admin@ncs(config)# exit
admin@ncs# show link t2 plan
...
BACK ...
NAME TYPE TRACK GOAL STATE STATUS ...
---------------------------------------------------------------...
self self false - init reached ...
ready reached ...
ex1-eth0-ex2-eth1 vlan-link false - init reached ...
dev-setup reached ...
ready reached ...

admin@ncs# link t2 get-modifications
cli devices {
device ex1 {
config {
r:sys {
interfaces {
interface eth0 {
+ unit 17 {
+ vlan-id 1;
+ }
}
}
}
}
}
device ex2 {
config {
r:sys {
interfaces {
+ interface eth1 {
+ unit 17 {
+ vlan-id 1;
+ }
+ }
}
}
}
}
}
Learn about the NSO Java API and its usage.
The NSO Java library contains a variety of APIs for different purposes. In this section, we introduce these and explain their usage. The Java library deliverables are found as two jar files (ncs.jar and conf-api.jar). The jar files and their dependencies can be found under $NCS_DIR/java/jar/.
For convenience, the Java build tool Apache ant (https://ant.apache.org/) is used to run all of the examples. However, this tool is not a requirement for NSO.
General for all APIs is that they communicate with NSO using TCP sockets. This makes it possible to use all APIs from a remote location.
The following APIs are included in the library:
MAAPI (Management Agent API): a northbound interface that is transactional and user session-based. Using this interface, both configuration and operational data can be read; configuration data can be written and committed as one transaction. The API is complete in the sense that it is possible to write a new northbound agent using only this interface. It is also possible to attach to ongoing transactions in order to read uncommitted changes and/or modify data in these transactions.
In addition, the Conf API framework contains utility classes for data types, keypaths, etc.
The Management Agent API (MAAPI) provides an interface to the Transaction engine in NSO. As such it is very versatile. Here are some examples of how the MAAPI interface can be used.
Read and write configuration data stored by NSO or in an external database.
Write our own northbound interface.
We could access data inside a not yet committed transaction, e.g. as validation logic where our Java code can attach itself to a running transaction and read through the not yet committed transaction, and validate the proposed configuration change.
During database upgrade we can access and write data to a special upgrade transaction.
The first step of a typical sequence of MAAPI API calls when writing a management application would be to create a user session. Creating a user session is the equivalent of establishing an SSH connection from a NETCONF manager. It is up to the MAAPI application to authenticate users. The TCP connection between MAAPI and NSO is neither encrypted, nor authenticated. The Maapi Java package does however include an authenticate() method that can be used by the application to hook into the AAA framework of NSO and let NSO authenticate the user.
When a Maapi socket has been created the next step is to create a user session and supply the relevant information about the user for authentication.
When the user has been authenticated and a user session has been created the Maapi reference is now ready to establish a new transaction toward a data store. The following code snippet starts a read/write transaction towards the running data store.
The startTrans(int db,int mode) method of the Maapi class returns an integer that represents a transaction handler. This transaction handler is used when invoking the various Maapi methods.
An example of a typical transactional method is the getElem() method:
In getElem(int th, String fmt, Object ... arguments), the first parameter is the transaction handle, i.e. the integer that was returned by the startTrans() method. The fmt argument is a path leading to a leaf in the data model, expressed as a format string that contains fixed text with zero or more embedded format specifiers. For each specifier, one argument in the variable argument list is expected.
The currently supported format specifiers in the Java API are:
%d - requiring an integer parameter (type int) to be substituted.
%s - requiring a java.lang.String parameter to be substituted.
%x - requiring subclasses of type com.tailf.conf.ConfValue to be substituted.
The return value val contains a reference to a ConfValue, which is the superclass of all the ConfValue subclasses that map to specific YANG data types. If the YANG data type of ip in the YANG model is ietf-inet-types:ipv4-address, we can narrow it to the corresponding subclass com.tailf.conf.ConfIPv4.
The opposite operation of getElem() is the setElem() method, which sets a leaf to a specific value.
We have not yet committed the transaction so no modification is permanent. The data is only visible inside the current transaction. To commit the transaction we call:
The method applyTrans() commits the current transaction to the running datastore.
It is also possible to run the code above without lock(Conf.DB_RUNNING).
Calling the applyTrans() method also performs additional validation of the new data as required by the data model and may fail if the validation fails. You can perform the validation beforehand, using the validateTrans() method.
Additionally, applying a transaction can fail in case of a conflict with another, concurrent transaction. The best course of action in this case is to retry the transaction.
MAAPI can also attach to an already existing NSO transaction to inspect not yet committed data, for example if we want to implement validation logic in Java. See the example below (Attach Maapi to the Current Transaction).
This API provides an interface to the CDB Configuration database which stores all configuration data. With this API the user can:
Start a CDB Session to read configuration data.
Subscribe to changes in CDB - The subscription functionality makes it possible to receive events/notifications when changes occur in CDB.
CDB can also be used to store operational data, i.e., data which is designated with a "config false" statement in the YANG data model. Operational data is read/write through the CDB API. NETCONF and the other northbound agents can only read operational data.
The Java CDB API is intended to be fast and lightweight, and CDB read sessions are expected to be short-lived and fast. The CDB API bypasses the NSO transaction manager, and therefore write operations on configuration data are prohibited. If operational data is stored in CDB, both read and write operations on this data are allowed.
CDB is always locked for the duration of the session. It is therefore the responsibility of the programmer to keep CDB interactions short in time and to ensure that all CDB sessions are closed when the interaction has finished.
To initialize the CDB API a CDB socket has to be created and passed into the API base class com.tailf.cdb.Cdb:
After the cdb socket has been established, a user could either start a CDB Session or start a subscription of changes in CDB:
We can refer to an element in a model with an expression like /servers/server. This type of string reference to an element is called a keypath, or just path. To refer to an element underneath a list, we need to identify which instance of the list elements is of interest.
This can be done by pinpointing the sequence number in the ordered list, starting from 0. For instance, the path /servers/server[2]/port refers to the port leaf of the third server in the configuration. This numbering is only valid during the current CDB session. Note that the database is locked during this session.
We can also refer to list instances using the key values for the list. Remember that we specify in the data model which leaf or leafs in the list constitute the key. In our case, a server has the name leaf as key. The syntax for keys is a space-separated list of key values enclosed within curly brackets: { Key1 Key2 ... }. So, /servers/server{www}/ip refers to the ip leaf of the server whose name is www.
A YANG list may have more than one key. For example, the keypath /dhcp/subNets/subNet{192.168.128.0 255.255.255.0}/routers refers to the routers list of the subnet whose keys are 192.168.128.0 and 255.255.255.0.
The keypath syntax allows for formatting characters and accompanying substitution arguments. For example, getElem("server[%d]/ifc{%s}/mtu",2,"eth0") is using a keypath with a mix of sequence number and keyvalues with formatting characters and argument. Expressed in text the path will reference the MTU of the third server instance's interface named eth0.
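The two addressing styles and the format-specifier substitution can be illustrated with a small Python sketch. Python is used purely for illustration here, and expand_keypath is a made-up helper, not part of the NSO API; the real substitution is done internally by the Java library.

```python
def expand_keypath(fmt, *args):
    """Toy model: substitute %d/%s/%x arguments into a keypath format
    string, the way the Java CDB/MAAPI calls accept them."""
    out = []
    it = iter(args)
    i = 0
    while i < len(fmt):
        if fmt[i] == '%' and i + 1 < len(fmt) and fmt[i + 1] in 'dsx':
            out.append(str(next(it)))   # one vararg consumed per specifier
            i += 2
        else:
            out.append(fmt[i])
            i += 1
    return ''.join(out)

# Sequence-number addressing: third server instance, interface eth0
print(expand_keypath("/servers/server[%d]/ifc{%s}/mtu", 2, "eth0"))
# Key-value addressing: two keys, space-separated inside curly brackets
print(expand_keypath("/dhcp/subNets/subNet{%s %s}/routers",
                     "192.168.128.0", "255.255.255.0"))
```

In the Java API the equivalent call would be, for example, session.getElem("server[%d]/ifc{%s}/mtu", 2, "eth0").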
The CdbSession Java class has a number of methods to control the current position in the model.
CdbSession.cwd() to get current position.
CdbSession.cd() to change current position.
CdbSession.pushd() to change and push a new position to a stack.
Using relative paths and e.g. CdbSession.pushd(), it is possible to write code that can be re-used for common sub-trees.
The current position also includes the namespace. If an element of another namespace should be read, then the prefix of that namespace should be set in the first tag of the keypath, like: /smp:servers/server where smp is the prefix of the namespace. It is also possible to set the default namespace for the CDB session with the method CdbSession.setNamespace(ConfNamespace).
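The position-stack behavior behind cwd()/cd()/pushd() can be sketched with a toy Python model. PositionSketch is hypothetical and only mirrors the documented semantics; it is not NSO code, and the popd here simply undoes the matching pushd.

```python
import posixpath

class PositionSketch:
    """Toy model of CdbSession current-position handling."""
    def __init__(self):
        self._cwd = '/'
        self._stack = []

    def cwd(self):
        return self._cwd

    def cd(self, path):
        # Relative paths are resolved against the current position
        self._cwd = posixpath.normpath(posixpath.join(self._cwd, path))

    def pushd(self, path):
        self._stack.append(self._cwd)   # remember where we were
        self.cd(path)

    def popd(self):
        self._cwd = self._stack.pop()   # return to the saved position

s = PositionSketch()
s.cd('/servers/server{www}')
s.pushd('ifc{eth0}')          # descend into a reusable sub-tree
print(s.cwd())                # /servers/server{www}/ifc{eth0}
s.popd()
print(s.cwd())                # /servers/server{www}
```

This is what makes relative-path code re-usable for common sub-trees: the same routine can be pushed into any list instance and pop back out afterwards.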
The CDB subscription mechanism allows an external Java program to be notified when different parts of the configuration changes. For such a notification, it is also possible to iterate through the change set in CDB for that notification.
Subscriptions are primarily to the running data store. Subscriptions towards the operational data store in CDB are possible, but the mechanism is slightly different; see below.
The first thing to do is to register in CDB which paths should be subscribed to. This is accomplished with the CdbSubscription.subscribe(...) method. Each registered path returns a subscription point identifier. Each subscriber can have multiple subscription points, and there can be many different subscribers.
Every point is defined through a path - similar to the paths we use for read operations, with the difference that instead of fully instantiated paths to list instances, we can choose to use tag paths, i.e. leave out the key value parts, to subscribe to all instances. We can subscribe either to specific leaves or to entire subtrees. Assume a YANG data model of the form:
Explaining this by example we get:
A subscription on a leaf. Only changes to this leaf will generate a notification.
Means that we subscribe to any changes in the subtree rooted at /servers. This includes additions or removals of server instances, as well as changes to already existing server instances.
Means that we only want to be notified when the server "www" changes its ip address.
Means we want to be notified when the leaf ip is changed in any server instance.
When adding a subscription point the client must also provide a priority, which is an integer. As CDB is changed, the change is part of a transaction. For example, the transaction is initiated by a commit operation from the CLI or an edit-config operation in NETCONF resulting in the running database being modified. As the last part of the transaction, CDB will generate notifications in lock-step priority order. First, all subscribers at the lowest numbered priority are handled; once they all have replied and synchronized by calling sync(CdbSubscriptionSyncType synctype), the next set - at the next priority level - is handled by CDB. Not until all subscription points have been acknowledged, is the transaction complete.
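The lock-step, priority-ordered delivery described above can be sketched in Python. This is illustrative only: deliver and the subscriber names are invented, and in NSO the synchronization actually happens when each subscriber calls sync() on its subscription socket.

```python
from collections import defaultdict

def deliver(subscribers):
    """Toy model of CDB notification delivery: all subscription points
    at the lowest numbered priority are notified first, and the next
    priority level is not handled until they have all synchronized."""
    by_prio = defaultdict(list)
    for name, prio in subscribers.items():
        by_prio[prio].append(name)
    batches = []
    for prio in sorted(by_prio):          # lowest priority number first
        batch = sorted(by_prio[prio])
        # ... in NSO, CDB waits here until every subscriber in `batch`
        # has replied via sync() before moving on ...
        batches.append(batch)
    return batches

# Two subscribers at priority 100 are handled before the one at 200
print(deliver({'dns-app': 200, 'audit': 100, 'cfg-cache': 100}))
```

Until the last batch has acknowledged, the transaction (and, for example, the CLI commit command that initiated it) does not complete.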
This implies that if the initiator of the transaction was, for example, a commit command in the CLI, the command will hang until notifications have been acknowledged.
Note that even though the notifications are delivered within the transaction, a subscriber can't reject the changes (since this would break the two-phase commit protocol used by the NSO backplane towards all data providers).
When a client is done subscribing, it needs to inform NSO it is ready to receive notifications. This is done by first calling subscribeDone(), after which the subscription socket is ready to be polled.
As a subscriber has read its subscription notifications using read(), it can iterate through the changes that caused the particular subscription notification using the diffIterate() method.
It is also possible to start a new read-session to the CDB_PRE_COMMIT_RUNNING database to read the running database as it was before the pending transaction.
Subscriptions towards the operational data in CDB are similar to the above, but because the operational data store is designed for light-weight access (and thus, does not have transactions and normally avoids the use of any locks), there are several differences, in particular:
Subscription notifications are only generated if the writer obtains the subscription lock, by using startSession() with CdbLockType.LOCKREQUEST. In addition, when starting a session towards the operational data, we need to pass CdbDBType.CDB_OPERATIONAL when starting a CDB session:
No priorities are used.
Neither the writer that generated the subscription notifications nor other writers to the same data are blocked while notifications are being delivered. However, the subscription lock remains in effect until notification delivery is complete.
Essentially, a write operation towards the operational data store, combined with the subscription lock, takes on the role of a transaction for configuration data as far as subscription notifications are concerned. This means that if operational data updates are done with many single-element write operations, this can potentially result in a lot of subscription notifications. Thus, it is a good idea to use the multi-element setObject(), which takes an array of ConfValues and sets a complete container, or setValues(), which takes an array of ConfXMLParam and can set an arbitrary part of the model. This keeps the number of notifications to subscribers down when updating operational data.
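As a rough illustration of why batching matters, the following Python toy model counts one notification per write operation taken under the subscription lock. OpDataWriter and its method names are invented stand-ins; the real notification granularity is governed by the subscription lock, not by this model.

```python
class OpDataWriter:
    """Toy model: each locked write towards operational data yields one
    subscription notification; a multi-element write yields only one."""
    def __init__(self):
        self.notifications = 0

    def set_elem(self, path, value):
        self.notifications += 1      # one notification per single leaf

    def set_object(self, path, values):
        self.notifications += 1      # whole container in one notification

# Three single-element writes: three notification rounds
w1 = OpDataWriter()
for i, leaf in enumerate(['in-octets', 'out-octets', 'errors']):
    w1.set_elem('/stats/' + leaf, i)

# One setObject()-style write covering the same data: one round
w2 = OpDataWriter()
w2.set_object('/stats', [0, 1, 2])

print(w1.notifications, w2.notifications)
```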
Write operations that do not attempt to obtain the subscription lock, are allowed to proceed even during notification delivery. Therefore, it is the responsibility of the programmer to obtain the lock as needed when writing to the operational data store. E.g. if subscribers should be able to reliably read the exact data that resulted from the write that triggered their subscription, the subscription lock must always be obtained when writing that particular set of data elements. One possibility is of course to obtain the lock for all writes to operational data, but this may have an unacceptable performance impact.
To view registered subscribers, use the ncs --status command. For details on how to use the different subscription functions, see the Javadoc for NSO Java API.
The code in the example ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/1-cdb illustrates three different types of CDB subscribers.
A simple Cdb config subscriber that utilizes the low-level Cdb API directly to subscribe to changes in the subtree of the configuration.
Two Navu Cdb subscribers, one subscribing to configuration changes, and one subscribing to changes in operational data.
The DP API makes it possible to create callbacks which are called when certain events occur in NSO. As the name of the API indicates, it is possible to write data provider callbacks that provide data to NSO that is stored externally. However, this is only one of several callback types provided by this API. There exist callback interfaces for the following types:
Service Callbacks - invoked for service callpoints in the YANG model. Implements service to device information mappings. See for example ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/4-rfs-service
Action Callbacks - invoked for a certain action in the YANG model which is defined with a callpoint directive.
Authentication Callbacks - invoked for external authentication functions.
The callbacks are methods in ordinary java POJOs. These methods are adorned with a specific Java Annotations syntax for that callback type. The annotation makes it possible to add metadata information to NSO about the supplied method. The annotation includes information about which callType and, when necessary, which callpoint the method should be invoked for.
By default, NSO stores all configuration data in its CDB data store. We may wish to store and configure other data in NSO than what is defined by the NSO built-in YANG models, alternatively, we may wish to store parts of the NSO tree outside NSO (CDB) i.e. in an external database. Say, for example, that we have our customer database stored in a relational database disjunct from NSO. To implement this, we must do a number of things: We must define a callpoint somewhere in the configuration tree, and we must implement what is referred to as a data provider. Also, NSO executes all configuration changes inside transactions and if we want NSO (CDB) and our external database to participate in the same two-phase commit transactions, we must also implement a transaction callback. Altogether, it will appear as if the external data is part of the overall NSO configuration, thus the service model data can refer directly to this external data - typically to validate service instances.
The basic idea for a data provider is that it participates entirely in each NSO transaction, and it is also responsible for reading and writing all data in the configuration tree below the callpoint. Before explaining how to write a data provider and what the responsibilities of a data provider are, we must explain how the NSO transaction manager drives all participants in a lock-step manner through the phases of a transaction.
A transaction has a number of phases, the external data provider gets called in all the different phases. This is done by implementing a transaction callback class and then registering that class. We have the following distinct phases of an NSO transaction:
init(): In this phase, the transaction callback class init() method gets invoked. We use an annotation on the method to indicate that it is the init() method, as in:
Each different callback method we wish to register must be annotated with an annotation from TransCBType.
The callback is invoked when a transaction starts, but NSO delays the actual invocation as an optimization. For a data provider providing configuration data, init() is invoked just before the first data-reading callback, or just before the transLock() callback (see below), whichever comes first. When a transaction has started, it is in a state we refer to as READ.
The following picture illustrates the conceptual state machine an NSO transaction goes through.
All callback methods are optional. If a callback method is not implemented, it is the same as having an empty callback which simply returns.
Similar to how we have to register transaction callbacks, we must also register data callbacks. The transaction callbacks cover the life span of the transaction, and the data callbacks are used to read and write data inside a transaction. The data callbacks have access to what is referred to as the transaction context in the form of a DpTrans object.
We have the following data callbacks:
getElem(): This callback is invoked by NSO when NSO needs to read the actual value of a leaf element. We must also implement the getElem() callback for the keys. NSO invokes getElem() on a key as an existence test.
We define the getElem callback inside a class as:
existsOptional(): This callback is called for all typeless and optional elements, i.e. presence containers and leafs of type empty.
We also have two additional optional callbacks that may be implemented for efficiency reasons.
getObject(): If this optional callback is implemented, the work of the callback is to return an entire object, i.e., a list instance. This is not the same getObject() as the one that is used in combination with the iterator() callback.
numInstances(): When NSO needs to figure out how many instances we have of a certain element, by default NSO will repeatedly invoke the iterator() callback. If this callback is installed, it will be called instead.
The following example illustrates an external data provider. The example is possible to run from the examples collection. It resides under ${NCS_DIR}/examples.ncs/getting-started/developing-with-ncs/6-extern-db.
The example comes with a tailor-made database - MyDb. That source code is provided with the example but not shown here. However, the functionality will be obvious from the method names like newItem(), lock(), save(), etc.
Two classes are implemented, one for the transaction callbacks and another for the data callbacks.
The data model we wish to incorporate into NSO is a trivial list of work items. It looks like:
Note the callpoint directive in the model, it indicates that an external Java callback must register itself using that name. That callback will be responsible for all data below the callpoint.
To compile the work.yang data model and also generate Java code for it, we invoke make all in the example package src directory. The Makefile will compile the YANG files in the package, generate Java code for those data models, and then invoke ant in the Java src directory.
The Data callback class looks as follows:
First, we see how the Java annotations are used to declare the type of callback for each method. Secondly, we see how the getElem() callback inspects the keyPath parameter passed to it to figure out exactly which element NSO wants to read. The keyPath is an array of ConfObject values. Keypaths are central to the understanding of the NSO Java library since they are used to denote objects in the configuration. A keypath uniquely identifies an element in the instantiated configuration tree.
Furthermore, the getElem() switches on the tag keyPath[0] which is a ConfTag using symbolic constants from the class "work". The "work" class was generated through the call to ncsc --emit-java ....
The three write callbacks, setElem(), create(), and remove(), all return the value Conf.REPLY_ACCUMULATE. If our backend database has real support for aborting transactions, it is a good idea to initiate a new backend database transaction in the transaction callback init() (more on that later). If our backend database does not support proper transactions, we can emulate them by returning Conf.REPLY_ACCUMULATE instead of actually writing the data. Since the final verdict of the NSO transaction as a whole may well be to abort it, we must be prepared to undo all write operations. The Conf.REPLY_ACCUMULATE return value asks the library to cache the write for us.
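The accumulate-and-replay pattern can be sketched without the NSO classes. The WriteCache below is a simplified plain-Java stand-in (not the real com.tailf API) for a data provider whose backend lacks native transactions:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for a data provider backing a database without real
// transactions: writes are cached (as with Conf.REPLY_ACCUMULATE) and only
// applied to the store when the NSO transaction commits.
public class WriteCache {
    private final Map<String, String> db = new HashMap<>();          // the "real" data
    private final List<Map.Entry<String, String>> pending = new ArrayList<>();

    // Called from setElem()/create(): cache the write instead of applying it.
    public void accumulate(String keyPath, String value) {
        pending.add(Map.entry(keyPath, value));
    }

    // Called from commit(): replay the cached writes against the store.
    public void commit() {
        for (Map.Entry<String, String> op : pending) {
            db.put(op.getKey(), op.getValue());
        }
        pending.clear();
    }

    // Called from abort(): nothing was written yet, so undo is just a clear.
    public void abort() {
        pending.clear();
    }

    public String get(String keyPath) {
        return db.get(keyPath);
    }
}
```

With a backend that does support real transactions, init() would instead open a native transaction and abort() would roll it back.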
The transaction callback class looks like this:
We can see how the prepare() callback goes through all write operations and actually executes them towards our database MyDb.
Both service and action callbacks are fundamental in NSO.
Implementing a service callback is one way of creating a service type. This and other ways of creating service types are described in depth in the section.
Action callbacks are used to implement arbitrary operations in Java. These operations can be basically anything, e.g., downloading a file, performing some test, or resetting alarms, but they should not modify the modeled configuration.
The actions are defined in the YANG model by means of rpc or tailf:action statements. Input and output parameters can optionally be defined via input and output statements in the YANG model. To specify that the rpc or action is implemented by a callback, the model uses a tailf:actionpoint statement.
The action callbacks are:
init(): Similar to the transaction init() callback. Note, however, that unlike the case with transaction and data callbacks, both init() and action() are registered for each actionpoint (i.e., different action points can have different init() callbacks), and there is no finish() callback; the action is completed when the action() callback returns.
In the examples.ncs/service-provider/mpls-vpn example, we can define a self-test action. In the packages/l3vpn/src/yang/l3vpn.yang, we locate the service callback definition:
Beneath the service callback definition, we add an action callback definition so the resulting YANG looks like the following:
The packages/l3vpn/src/java/src/com/example/l3vpnRFS.java already contains an action implementation, but until now it has been inactive, since no actionpoint with the corresponding name had been defined in the YANG model.
In the VALIDATE state of a transaction, NSO will validate the new configuration. This consists of verifying that specific YANG constraints, such as min-elements and unique, as well as arbitrary constraints specified by must expressions, are satisfied. The use of must expressions is the recommended way to specify constraints on relations between different parts of the configuration, both due to its declarative and concise form and due to performance considerations, since the expressions are evaluated internally by the NSO transaction engine.
In some cases, it may still be motivated to implement validation logic via callbacks in code. The YANG model will then specify a validation point by means of a tailf:validate statement. By default, the callback registered for a validation point will be invoked whenever a configuration is validated, since the callback logic will typically be dependent on data in other parts of the configuration, and these dependencies are not known by NSO. Thus it is important from a performance point of view to specify the actual dependencies by means of tailf:dependency substatements to the validate statement.
Validation callbacks use the MAAPI API to attach to the current transaction. This makes it possible to read the configuration data that is to be validated, even though the transaction is not committed yet. The view of the data is effectively the pre-existing configuration "shadowed" by the changes in the transaction, and thus exactly what the new configuration will look like if it is committed.
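The "shadowed" view described above can be modeled in a few lines of plain Java. This is a simplified stand-in, not the real MAAPI: the committed configuration is overlaid by the transaction's uncommitted changes, so a read returns exactly what the configuration will look like if committed.

```java
import java.util.Map;

// Simplified model of the view a validation callback gets after attaching
// to the current transaction: the committed configuration "shadowed" by
// the not yet committed changes of the transaction.
public class ShadowedConfig {
    private final Map<String, String> committed;
    private final Map<String, String> txChanges; // a null value means deleted

    public ShadowedConfig(Map<String, String> committed,
                          Map<String, String> txChanges) {
        this.committed = committed;
        this.txChanges = txChanges;
    }

    // Reads see the effective new configuration, not the old one.
    public String getElem(String keyPath) {
        if (txChanges.containsKey(keyPath)) {
            return txChanges.get(keyPath);   // changed or deleted in the tx
        }
        return committed.get(keyPath);       // untouched by the tx
    }
}
```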
Similar to the case of transaction and data callbacks, there are transaction validation callbacks that are invoked when the validation phase starts and stops, and validation callbacks that are invoked for the specific validation points in the YANG model.
The transaction validation callbacks are:
init(): This callback is invoked when the validation phase starts. It will typically attach to the current transaction:
stop(): This callback is invoked when the validation phase ends. If init() attached to the transaction, stop() should detach from it.
The actual validation logic is implemented in a validation callback:
validate(): This callback is invoked for a specific validation point.
Transforms implement a mapping between one part of the data model - the front-end of the transform - and another part - the back-end of the transform. Typically the front-end is visible to northbound interfaces, while the back-end is not, but for operational data (config false in the data model), a transform may implement a different view (e.g. aggregation) of data that is also visible without going through the transform.
The implementation of a transform uses techniques already described in this section: Transaction and data callbacks are registered and invoked when the front-end data is accessed, and the transform uses the MAAPI API to attach to the current transaction and accesses the back-end data within the transaction.
To specify that the front-end data is provided by a transform, the data model uses the tailf:callpoint statement with a tailf:transform true substatement. Since transforms do not participate in the two-phase commit protocol, they only need to register the init() and finish() transaction callbacks. The init() callback attaches to the transaction and finish() detaches from it. Also, a transform for operational data only needs to register the data callbacks that read data, i.e. getElem(), existsOptional(), etc.
Hooks make it possible to have changes to the configuration trigger additional changes. In general, this should only be done when the data that is written by the hook is not visible to northbound interfaces, since otherwise the additional changes will make it difficult for e.g. EMS or NMS systems to manage the configuration: the complete configuration resulting from a given change cannot be predicted. However, one use case in NSO for hooks that trigger visible changes is precisely to model managed devices that have this behavior: hooks in the device model can emulate what the device does on certain configuration changes, and thus the device configuration in NSO remains in sync with the actual device configuration.
The implementation technique for a hook is very similar to that for a transform. Transaction and data callbacks are registered, and the MAAPI API is used to attach to the current transaction and write the additional changes into the transaction. As for transforms, only the init() and finish() transaction callbacks need to be registered, to do the MAAPI attach and detach. However only data callbacks that write data, i.e. setElem(), create(), etc need to be registered, and depending on which changes should trigger the hook invocation, it is possible to register only a subset of those. For example, if the hook is registered for a leaf in the data model, and only changes to the value of that leaf should trigger invocation of the hook, it is sufficient to register setElem().
To specify that changes to some part of the configuration should trigger a hook invocation, the data model uses the tailf:callpoint statement with a tailf:set-hook or tailf:transaction-hook substatement. A set-hook is invoked immediately when a northbound agent requests a write operation on the data, while a transaction-hook is invoked when the transaction is committed. For the NSO-specific use case mentioned above, a set-hook should be used. The tailf:set-hook and tailf:transaction-hook statements take an argument specifying the extent of the data model the hook applies to.
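The set-hook triggering behavior can be sketched in plain Java. This is a simplified stand-in, not the real MAAPI-based implementation, and the leaf paths used in the usage example are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

// Simplified model of a set-hook: a write to a hooked leaf immediately
// triggers an additional write into the same (not yet committed)
// transaction, just as a tailf:set-hook callback would do via MAAPI.
public class SetHookTransaction {
    private final Map<String, String> txData = new HashMap<>();
    private final Map<String, BiConsumer<SetHookTransaction, String>> hooks =
            new HashMap<>();

    // Register a hook to be invoked when the given leaf is written.
    public void registerSetHook(String keyPath,
            BiConsumer<SetHookTransaction, String> hook) {
        hooks.put(keyPath, hook);
    }

    public void setElem(String keyPath, String value) {
        txData.put(keyPath, value);
        BiConsumer<SetHookTransaction, String> hook = hooks.get(keyPath);
        if (hook != null) {
            hook.accept(this, value);   // the hook writes additional changes
        }
    }

    public String get(String keyPath) {
        return txData.get(keyPath);
    }
}
```

Usage (hypothetical leaf names): a hook on /sys/hostname that derives /sys/prompt, so `setElem("/sys/hostname", "r1")` also leaves `"r1> "` in /sys/prompt within the same transaction.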
NSO can speak southbound to an arbitrary management interface. This is of course not entirely automatic like with NETCONF or SNMP, and depending on the type of interface the device has for configuration, this may involve some programming. Devices with a Cisco-style CLI can however be managed by writing YANG models describing the data in the CLI, and a relatively thin layer of Java code to handle the communication to the devices. Refer to for more information.
The NAVU API provides a DOM-driven approach to navigate the NSO service and device models. The main features of the NAVU API are dynamic schema loading at start-up and lazy loading of instance data. The navigation model is based on the YANG language structure. In addition to navigation and reading of values, NAVU also provides methods to modify the data model. Furthermore, it supports the execution of actions modeled in the service model.
By using NAVU, it is easy to drill down through tree structures with minimal effort using the node-by-node navigation primitives. Alternatively, we can use the NAVU search feature. This feature is especially useful when we need to find information deep down in the model structures.
NAVU requires all models, i.e., the complete NSO service model with all its augmented sub-models. These are loaded at runtime from NSO, which in turn has acquired them from the loaded .fxs files. The .fxs files are produced by the ncsc tool, which compiles them from the .yang files.
The ncsc tool can also generate Java classes from the .yang files. These files, extending the ConfNamespace base class, are the Java representation of the models and contain all defined nametags and their corresponding hash values. These Java classes can, optionally, be used as help classes in the service applications to make NAVU navigation type-safe, e.g. eliminating errors from misspelled model container names.
The service models are loaded at start-up and are always the latest version. The models are always traversed in a lazy fashion i.e. data is only loaded when it is needed. This is to minimize the amount of data transferred between NSO and the service applications.
The most important classes of NAVU are the classes implementing the YANG node types. These are used to navigate the DOM. These classes are as follows.
NavuContainer: the NavuContainer is a container representing either the root of the model, a YANG module root, or a YANG container.
NavuList: the NavuList represents a YANG list node.
NavuListEntry: list node entry.
The remaining part of this section will guide us through the most useful features of the NAVU. Should further information be required, please refer to the corresponding Javadoc pages.
NAVU relies on MAAPI as the underlying interface to access NSO. The starting point in NAVU configuration is to create a NavuContext instance using the NavuContext(Maapi maapi) constructor. To read and/or write data a transaction has to be started in Maapi. There are methods in the NavuContext class to start and handle this transaction.
If data has to be written, the NAVU transaction has to be started differently depending on whether the data is configuration or operational data. Such a transaction is started by the methods NavuContext.startRunningTrans() or NavuContext.startOperationalTrans() respectively. The Javadoc describes this in more detail.
When navigating using NAVU, we always start by creating a NavuContainer and passing in the NavuContext instance; this is a base container from which navigation can be started. Furthermore, we need to create a root NavuContainer, which is the top of the YANG module in which to navigate down. This is done using the NavuContainer.container(int hash) method, where the argument is the hash value of the module namespace.
NAVU maps the YANG node types container, list, leaf, and leaf-list onto its own structure. As mentioned previously, NavuContainer is used to represent both the module and the container node type. A NavuListEntry is used to represent a list node instance, i.e., an element of a list node (NavuListEntry in fact extends NavuContainer).
Consider the YANG excerpt below.
If the purpose is to directly access a list node, we would typically do a direct navigation to the list element using the NAVU primitives.
Or if we want to iterate over all elements of a list we could do as follows.
The above example uses select(), which performs a recursive regexp match against its children.
Alternatively, if the purpose is to drill down deep into a structure, we should use select(). select() offers a wildcard-based search. The search is relative and can be performed from any node in the structure.
All of the above are valid ways of traversing the lists, depending on the purpose. If we know what we want, we use direct access. If we want to apply something to a large number of nodes, we use select().
An alternative method is to use the xPathSelect() where an XPath query could be issued instead.
NavuContainer and NavuList are structural nodes in NAVU, i.e., they have no values. Values are always kept by NavuLeaf. A NavuLeaf represents the YANG leaf node type and can be both read and set. NavuLeafList represents the YANG leaf-list node type and has some features in common with both NavuLeaf (which it inherits from) and NavuList.
To read and update a leaf, we simply navigate to the leaf and request the value. And in the same manner, we can update the value.
In addition to the standard YANG node types, NAVU also supports the Tailf proprietary node type action. An action is represented as a NavuAction. It differs from an ordinary container in that it can be executed using the call() primitive. Input and output parameters are represented as ordinary nodes. The action extension of YANG allows an arbitrary structure to be defined both for input and output parameters.
Consider the excerpt below. It represents a module on a managed device. When connected and synchronized to the NSO, the module will appear in the /devices/device/config container.
To execute the action below we need to access a device with this module loaded. This is done in a similar way to non-action nodes.
Or, we could do it with xPathSelect().
The examples above have described how to attach to the NSO module and navigate through the data model using the NAVU primitives. When using NAVU in the scope of the NSO Service manager, we normally don't have to worry about attaching the NavuContainer to the NSO data model. NSO does this for us providing NavuContainer nodes pointing at the nodes of interest.
Since this API can both produce and consume alarms, it can be used both northbound and eastbound. It adheres to the NSO alarm model.
For more information see .
The com.tailf.ncs.alarmman.consumer.AlarmSource class is used to subscribe to alarms. This class establishes a listener towards an alarm subscription server called com.tailf.ncs.alarmman.consumer.AlarmSourceCentral. The AlarmSourceCentral needs to be instantiated and started prior to the instantiation of the AlarmSource listener. The NSO Java VM takes care of starting the AlarmSourceCentral so any use of the ALARM API inside the NSO Java VM can expect this server to be running.
For situations where alarm subscription outside of the NSO Java VM is desired, starting the AlarmSourceCentral is performed by opening a Cdb socket, passing this Cdb to the AlarmSourceCentral class, and then calling the start() method.
To retrieve alarms from the AlarmSource listener, an initial startListening() call is required. Then either a blocking takeAlarm() or a timeout-based pollAlarm() can be used to retrieve the alarms. The first method will wait indefinitely for new alarms to arrive, while the second will time out if an alarm has not arrived within the stipulated time. When a listener is no longer needed, a stopListening() call should be issued to deactivate it.
Both the takeAlarm() and the pollAlarm() methods return an Alarm object from which all alarm information can be retrieved.
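The blocking versus timeout semantics above map directly onto standard blocking-queue behavior. The class below is a plain-Java stand-in illustrating the consumption pattern, not the real AlarmSource class:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Stand-in illustrating the AlarmSource consumption semantics:
// takeAlarm() blocks until an alarm arrives, while pollAlarm() gives up
// after a stipulated timeout and returns null.
public class AlarmConsumer {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // In the real API, delivery is done by the alarm subscription server.
    public void deliver(String alarm) {
        queue.add(alarm);
    }

    public String takeAlarm() throws InterruptedException {
        return queue.take();                            // waits indefinitely
    }

    public String pollAlarm(long timeoutMillis) throws InterruptedException {
        return queue.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```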
The com.tailf.ncs.alarmman.producer.AlarmSink is used to persistently store alarms in NSO. This can be performed either directly or by the use of an alarm storage server called com.tailf.ncs.alarmman.producer.AlarmSinkCentral.
To directly store alarms an AlarmSink instance is created using the AlarmSink(Maapi maapi) constructor.
On the other hand, if the alarms are to be stored using the AlarmSinkCentral, then the AlarmSink() constructor without arguments is used.
However, this case requires that the AlarmSinkCentral is started prior to the instantiation of the AlarmSink. The NSO Java VM will take care of starting this server, so any use of the ALARM API inside the NSO Java VM can expect this server to be running. If it is desired to store alarms in an application outside of the NSO Java VM, the AlarmSinkCentral needs to be started like in the following example:
To store an alarm using the AlarmSink, an Alarm instance must be created. This Alarm instance is then stored by a call to the submitAlarm() method.
Applications can subscribe to certain events generated by NSO. The event types are defined by the com.tailf.notif.NotificationType enumeration. The following notifications can be subscribed to:
NotificationType.NOTIF_AUDIT: all audit log events are sent from NSO on the event notification socket.
NotificationType.NOTIF_COMMIT_SIMPLE: an event indicating that a user has somehow modified the configuration.
NotificationType.NOTIF_COMMIT_DIFF: an event indicating that a user has somehow modified the configuration. The main difference between this event and the above-mentioned NOTIF_COMMIT_SIMPLE is that this event is synchronous, i.e. the entire transaction hangs until we have explicitly called Notif.diffNotificationDone(). A subscriber can attach to the transaction with Maapi.attach() and use Maapi.diffIterate() to read the configuration changes before acknowledging.
To receive events from the NSO the application opens a socket and passes it to the notification base class com.tailf.notif.Notif together with an EnumSet of NotificationType for all types of notifications that should be received. Looping over the Notif.read() method will read and deliver notifications which are all subclasses of the com.tailf.notif.Notification base class.
The HA API is used to set up and control High-Availability cluster nodes. This package is used to connect to the High Availability (HA) subsystem. Configuration data can then be replicated on several nodes in a cluster. (see )
The following example configures three nodes in a HA cluster. One is set as primary and the other two as secondaries.
This section describes the types and how these types map to various YANG types and Java classes.
All types inherit the base class com.tailf.conf.ConfObject.
Following the type hierarchy, subclasses of ConfObject are distinguished by:
Value: concrete value classes that inherit ConfValue, which in turn is a subclass of ConfObject.
TypeDescriptor: a class representing the type of a ConfValue. A type descriptor is represented as an instance of ConfTypeDescriptor. Its primary usage is to map a ConfValue to its internal integer value representation, or vice versa.
The class ConfObject defines public int constants for the different value types. Each value type is mapped to a specific YANG type and is also represented by a specific subtype of ConfValue. Having a ConfValue instance it is possible to retrieve its integer representation by the use of the static method getConfTypeDescriptor() in class ConfTypeDescriptor. This function returns a ConfTypeDescriptor instance representing the value from which the integer representation can be retrieved. The values represented as integers are:
The table lists ConfValue types.
An important class in the com.tailf.conf package, not inheriting ConfObject, is ConfPath. ConfPath is used to represent a keypath that can point to any element in an instantiated model. It is constructed from an array of ConfObject instances (ConfObject[]), where each element is expected to be either a ConfTag or a ConfKey.
As an example take the keypath /ncs:devices/device{d1}/iosxr:interface/Loopback{lo0}. The following code snippets show the instantiating of a ConfPath object representing this keypath:
Another more commonly used option is to use the format string + arguments constructor from ConfPath. Where ConfPath parsers and creates the ConfTag/ConfKey representation from the string representation instead.
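The parsing that the format-string constructor performs can be sketched in plain Java. This is an illustration only, not the real com.tailf.conf.ConfPath code: the path is split into alternating tag elements (ConfTag in the real API) and key elements (ConfKey), here rendered as prefixed strings.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of what ConfPath's string parsing produces: the path
// /ncs:devices/device{d1}/iosxr:interface/Loopback{lo0} becomes a sequence
// of tag elements and key elements.
public class KeyPathParser {
    public static List<String> parse(String path) {
        List<String> elems = new ArrayList<>();
        for (String part : path.substring(1).split("/")) {
            int brace = part.indexOf('{');
            if (brace >= 0) {
                // list node followed by its key, e.g. device{d1}
                elems.add("tag:" + part.substring(0, brace));
                elems.add("key:" + part.substring(brace + 1, part.length() - 1));
            } else {
                elems.add("tag:" + part);
            }
        }
        return elems;
    }
}
```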
ConfXMLParam is used in tagged value arrays, ConfXMLParam[], whose elements are subtypes of ConfXMLParam. Together, these can represent an arbitrary YANG model subtree. Rather than viewing a node as a path, this representation behaves like an XML instance document. There are four subtypes of ConfXMLParam:
ConfXMLParamStart: Represents an opening tag. Opening node of a container or list entry.
ConfXMLParamStop: Represents a closing tag. The closing tag of a container or a list entry.
ConfXMLParamValue: Represents a tag and a value. A leaf tag with the corresponding value.
ConfXMLParamLeaf: Represents a leaf tag without the leaf's value.
Each element in the array is associated with a node in the data model.
The array corresponding to /servers/server{www} is a representation of the instance XML document:
The list entry above could be populated as:
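The correspondence between the tagged value array and the XML instance document can be sketched with simplified stand-in classes (not the real com.tailf.conf types): a Start/Value/Stop sequence renders directly to the document it represents.

```java
import java.util.List;

// Simplified stand-ins for ConfXMLParamStart/Value/Stop, showing how a
// tagged value array corresponds to an XML instance document.
public class XmlParams {
    public interface Param { String render(); }

    public record Start(String tag) implements Param {
        public String render() { return "<" + tag + ">"; }
    }
    public record Stop(String tag) implements Param {
        public String render() { return "</" + tag + ">"; }
    }
    public record Value(String tag, String value) implements Param {
        public String render() {
            return "<" + tag + ">" + value + "</" + tag + ">";
        }
    }

    // Concatenate the rendered elements into the XML document.
    public static String render(List<Param> params) {
        StringBuilder sb = new StringBuilder();
        for (Param p : params) {
            sb.append(p.render());
        }
        return sb.toString();
    }
}
```

For the /servers/server{www} entry, the array Start("server"), Value("name", "www"), Stop("server") renders the corresponding instance document.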
A namespace class represents the namespace for a YANG module. As such it maps the symbol name of each element in the YANG module to its corresponding hash value.
A namespace class is a subclass of ConfNamespace and comes in one of two forms: either created at compile time using the ncsc compiler, or created at runtime with the use of Maapi.loadSchemas(). These two forms also reflect the two main usages of namespace classes. The first is in programming, where the symbol names are used, e.g. in NAVU navigation; this is where the compiled namespaces are used. The other is for internal mapping between symbol names and hash values; this is where the runtime form is normally used, although compiled namespace classes can be used for these mappings too.
The compiled namespace classes are generated from compiled .fxs files by ncsc (ncsc --emit-java).
Runtime namespace classes are created by calling Maapi.loadSchemas(); the rest is dynamic. All namespaces known by NSO are downloaded and runtime namespace classes are created. These can be retrieved by calling Maapi.getAutoNsList().
The schema information is loaded automatically at the first connect of the NSO server, so no manual method call to Maapi.loadSchemas() is needed.
With all schemas loaded, the Java engine can make mappings between hash codes and symbol names on the fly. Also, the ConfPath class can find and add namespace information when parsing keypaths provided that the namespace prefixes are added in the start element for each namespace.
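The symbol-name-to-hash mapping a namespace class provides can be modeled as a simple bidirectional map. This is a plain-Java illustration of the concept, not the real ConfNamespace class, and the hash value in the test is purely illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of what a ConfNamespace subclass provides: a
// bidirectional mapping between symbol names and their hash values.
public class NsMapping {
    private final Map<String, Integer> nameToHash = new HashMap<>();
    private final Map<Integer, String> hashToName = new HashMap<>();

    public void add(String symbol, int hash) {
        nameToHash.put(symbol, hash);
        hashToName.put(hash, symbol);
    }

    public int hash(String symbol) { return nameToHash.get(symbol); }
    public String name(int hash)   { return hashToName.get(hash); }
}
```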
As an option, several APIs e.g. MAAPI can set the default namespace which will be the expected namespace for paths without prefixes. For example, if the namespace class smp is generated with the legal path /smp:servers/server an option in Maapi could be the following:
Description of the RESTCONF API.
RESTCONF is an HTTP-based protocol as defined in . RESTCONF standardizes a mechanism to allow Web applications to access the configuration data, state data, data-model-specific Remote Procedure Call (RPC) operations, and event notifications within a networking device.
RESTCONF uses HTTP methods to provide Create, Read, Update, Delete (CRUD) operations on a conceptual datastore containing YANG-defined data, which is compatible with a server that implements NETCONF datastores as defined in .
Configuration data and state data are exposed as resources that can be retrieved with the GET method. Resources representing configuration data can be modified with the DELETE, PATCH, POST, and PUT methods. Data is encoded with either XML () or JSON ().
This section describes the NSO implementation, and its extensions to or deviations from each of the respective specifications.
As of this writing, the server supports the following specifications:
CdbSession.popd() is used to change back to a previously stacked position. Note that the previous value of a modified leaf is not available when using the diffIterate() method.
Data Callbacks - invoked for data provision and manipulation for certain data elements in the YANG model which is defined with a callpoint directive.
DB Callbacks - invoked for external database stores.
Range Action Callbacks - A variant of action callback where ranges are defined for the key values.
Range Data Callbacks - A variant of data callback where ranges are defined for the data values.
SNMP Inform Response Callbacks - invoked to handle responses to SNMP inform requests on a certain element in the YANG model which is defined by a callpoint directive.
Transaction Callbacks - invoked for external participants in the two-phase commit protocol.
Transaction Validation Callbacks - invoked for external transaction validation in the validation phase of a two-phase commit.
Validation Callbacks - invoked for validation of certain elements in the YANG model which are defined with a validation point directive.
READ: Any write operations performed by the management station are accumulated by NSO, and the data provider does not see them while in the READ state.
transLock(): This callback gets invoked by NSO at the end of the transaction. NSO has accumulated a number of write operations and will now initiate the final write phases. Once the transLock() callback has returned, the transaction is in the VALIDATE state. In the VALIDATE state, NSO will (possibly) execute a number of read operations to validate the new configuration. Following the read operations for validation comes the invocation of either the writeStart() or the transUnlock() callback.
transUnlock(): This callback gets invoked by NSO if the validation fails or if the validation was done separately from the commit (e.g. by giving a validate command in the CLI). Depending on where the transaction originated, the behavior after a call to transUnlock() differs. If the transaction originated from the CLI, the CLI reports to the user that the configuration is invalid and the transaction remains in the READ state. If it originated from a NETCONF client, the NETCONF operation fails and a NETCONF rpc error is reported to the NETCONF client/manager.
writeStart(): If the validation succeeded, the writeStart() callback will be called and the transaction will enter the WRITE state. While in WRITE state, a number of calls to the write data callbacks setElem(), create() and remove() will be performed.
If the underlying database supports real atomic transactions, this is a good place to start such a transaction.
The application should not modify the real running data here. If, later, the abort() callback is called, all write operations performed in this state must be undone.
prepare(): Once all write operations are executed, the prepare() callback is executed. This callback ensures that all participants have succeeded in writing all elements. The purpose of the callback is merely to indicate to NSO that the data provider is ok, and has not yet encountered any errors.
abort(): If any of the participants die or fail to reply in the prepare() callback, the remaining participants all get invoked in the abort() callback. All data written so far in this transaction should be disposed of.
commit(): If all participants successfully replied in their respective prepare() callbacks, all participants get invoked in their respective commit() callbacks. This is the place to make all data written by the write callbacks in WRITE state permanent.
finish(): And finally, the finish() callback gets invoked at the end. This is a good place to deallocate any local resources for the transaction. The finish() callback can be called from several different states.
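The prepare/commit/abort flow described above can be sketched as a small coordinator over the transaction participants. This is a plain-Java sketch of the two-phase commit pattern, not NSO's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the two-phase commit flow NSO runs over its transaction
// participants: if every participant acknowledges prepare(), all are
// committed; if any fails, the already-prepared participants are aborted.
public class TwoPhaseCommit {
    public interface Participant {
        boolean prepare();   // return false to veto the transaction
        void commit();
        void abort();
    }

    public static boolean run(List<Participant> participants) {
        List<Participant> prepared = new ArrayList<>();
        for (Participant p : participants) {
            if (!p.prepare()) {
                // One participant failed: the remaining prepared
                // participants must dispose of their pending writes.
                for (Participant q : prepared) {
                    q.abort();
                }
                return false;            // transaction aborted
            }
            prepared.add(p);
        }
        for (Participant p : participants) {
            p.commit();                  // make the writes permanent
        }
        return true;                     // transaction committed
    }
}
```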
An example of a data model could be:
The above YANG fragment has three nodes that may or may not exist and that do not have a type. If we do not have any such elements, nor any operational data lists without keys (see below), we do not need to implement the existsOptional() callback.
If we have the above data model, we must implement the existsOptional(), and our implementation must be prepared to reply to calls of the function for the paths /bs, /bs/b/opt, and /bs/b/foo. The leaf /bs/b/opt/ii is not mandatory, but it does have a type namely int32, and thus the existence of that leaf will be determined through a call to the getElem() callback.
The existsOptional() callback may also be invoked by NSO as an "existence test" for an entry in an operational data list without keys. Normally this existence test is done with a getElem() request for the first key, but since there are no keys, this callback is used instead. Thus, if we have such lists, we must also implement this callback, and handle a request where the keypath identifies a list entry.
iterator() and getKey(): This pair of callbacks is used when NSO wants to traverse a YANG list. The job of the iterator() callback is to return an Iterator object that is invoked by the library. For each Object returned by the iterator, the NSO library will invoke the getKey() callback on the returned object. The getKey() callback shall return a ConfKey value.
An alternative to the getKey() callback is to register the optional getObject() callback whose job it is to return not just the key, but the entire YANG list entry. It is possible to register both getKey() and getObject() or either. If the getObject() is registered, NSO will attempt to use it only when bulk retrieval is executed.
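How the library drives iterator() and getKey() can be sketched with simplified stand-in types (the real API uses opaque Objects and ConfKey values; here keys are plain strings):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Stand-in for how NSO traverses an external list: the data provider's
// iterator() yields one object per list entry, and the library calls
// getKey() on each to obtain the entry's key (or getObject() for whole
// entries when bulk retrieval is executed).
public class ListTraversal {
    public interface ListCb<T> {
        Iterator<T> iterator();
        String getKey(T obj);   // returns a ConfKey in the real API
    }

    public static <T> List<String> keys(ListCb<T> cb) {
        List<String> result = new ArrayList<>();
        Iterator<T> it = cb.iterator();
        while (it.hasNext()) {
            result.add(cb.getKey(it.next()));
        }
        return result;
    }
}
```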
action(): This callback is invoked to actually execute the rpc or action. It receives the input parameters (if any) and returns the output parameters (if any).
NavuLeaf: the NavuLeaf represents a YANG leaf node.
NotificationType.NOTIF_COMMIT_FAILED: This event is generated when a data provider fails in its commit callback. NSO executes a two-phase commit procedure towards all data providers when committing transactions. When a provider fails to commit, the system is in an unknown state. If the provider is "external", the name of the failing daemon is provided. If the provider is another NETCONF agent, the IP address and port of that agent are provided.
NotificationType.NOTIF_COMMIT_PROGRESS: This event provides progress information about the commit of a transaction.
NotificationType.NOTIF_PROGRESS: This event provides progress information about the commit of a transaction or an action being applied. Subscribing to this notification type means that all notifications of the type NotificationType.NOTIF_COMMIT_PROGRESS are subscribed to as well.
NotificationType.NOTIF_CONFIRMED_COMMIT: This event is generated when a user has started a confirmed commit, when a confirming commit is issued, or when a confirmed commit is aborted; represented by ConfirmNotification.confirm_type. For a confirmed commit, the timeout value is also present in the notification.
NotificationType.NOTIF_FORWARD_INFO: This event is generated whenever the server forwards (proxies) a northbound agent.
NotificationType.NOTIF_HA_INFO: an event related to NSO's perception of the current cluster configuration.
NotificationType.NOTIF_HEARTBEAT: This event can be used by applications that wish to monitor the health and liveness of the server itself. It needs to be requested through a Notif instance which has been constructed with a heartbeat_interval. The server will continuously generate heartbeat events on the notification socket. If the server fails to do so, the server is hung. The timeout interval is measured in milliseconds. The recommended value is 10000 milliseconds to cater for truly high load situations. Values less than 1000 are changed to 1000.
NotificationType.NOTIF_SNMPA: This event is generated whenever an SNMP PDU is processed by the server. The application receives an SnmpaNotification with a list of all varbinds in the PDU. Each varbind contains subclasses that are internal to the SnmpaNotification.
NotificationType.NOTIF_SUBAGENT_INFO: Only sent if NSO runs as a primary agent with subagents enabled. This event is sent when the subagent connection is lost or reestablished. There are two event types, defined in SubagentNotification.subagent_info_type: "subagent up" and "subagent down".
NotificationType.NOTIF_DAEMON: All log events that also go to the /NCSConf/logs/ncsLog log are sent from NSO on the event notification socket.
NotificationType.NOTIF_NETCONF: All log events that also go to the /NCSConf/logs/netconfLog log are sent from NSO on the event notification socket.
NotificationType.NOTIF_DEVEL: All log events that also go to the /NCSConf/logs/develLog log are sent from NSO on the event notification socket.
NotificationType.NOTIF_TAKEOVER_SYSLOG: If this flag is present, NSO will stop syslogging. The idea behind the flag is that we want to configure syslogging for NSO to let NSO log its startup sequence. Once NSO is started, we wish to subsume the syslogging done by NSO. Typical applications that use this flag want to pick up all log messages, reformat them, and use some local logging method. Once all subscriber sockets with this flag set are closed, NSO will resume syslogging.
NotificationType.NOTIF_UPGRADE_EVENT: This event is generated for the different phases of an in-service upgrade, i.e. when the data model is upgraded while the server is running. The application receives an UpgradeNotification where the UpgradeNotification.event_type gives the specific upgrade event. The events correspond to the invocation of the Maapi functions that drive the upgrade.
NotificationType.NOTIF_COMPACTION: This event is generated after each CDB compaction performed by NSO. The application receives a CompactionNotification where CompactionNotification.dbfile indicates which datastore was compacted, and CompactionNotification.compaction_type indicates whether the compaction was triggered manually or automatically by the system.
NotificationType.NOTIF_USER_SESSION: An event related to user sessions. There are 6 different user session-related event types, defined in UserSessNotification.user_sess_type: session starts/stops, session locks/unlocks database, and session starts/stops database transaction.
Tag: A tag is a representation of an element in the YANG model. A Tag is represented as an instance of com.tailf.conf.Tag. The primary usage of tags is in the representation of keypaths.
Key: A key is a representation of the instance key for an element instance. A key is represented as an instance of com.tailf.conf.ConfKey. A ConfKey is constructed from an array of values (ConfValue[]). The primary usage of keys is in the representation of keypaths.
XMLParam: subclasses of ConfXMLParam which are used to represent a, possibly instantiated, subtree of a YANG model. Useful in several APIs where multiple values can be set or retrieved in one function call.
Type constant | YANG type | Java value class | Description
J_INT16 | int16 | ConfInt16 | 16-bit signed integer
J_INT32 | int32 | ConfInt32 | 32-bit signed integer
J_INT64 | int64 | ConfInt64 | 64-bit signed integer
J_UINT8 | uint8 | ConfUInt8 | 8-bit unsigned integer
J_UINT16 | uint16 | ConfUInt16 | 16-bit unsigned integer
J_UINT32 | uint32 | ConfUInt32 | 32-bit unsigned integer
J_UINT64 | uint64 | ConfUInt64 | 64-bit unsigned integer
J_IPV4 | inet:ipv4-address | ConfIPv4 | IP v4 address
J_IPV6 | inet:ipv6-address | ConfIPv6 | IP v6 address
J_BOOL | boolean | ConfBoolean | Boolean value
J_QNAME | xs:QName | ConfQName | A namespace/tag pair
J_DATETIME | yang:date-and-time | ConfDateTime | Date and time value
J_DATE | xs:date | ConfDate | XML schema date
J_ENUMERATION | enum | ConfEnumeration | An enumeration value
J_BIT32 | bits | ConfBit32 | 32-bit value
J_BIT64 | bits | ConfBit64 | 64-bit value
J_LIST | leaf-list | - | -
J_INSTANCE_IDENTIFIER | instance-identifier | ConfObjectRef | YANG builtin
J_OID | tailf:snmp-oid | ConfOID | -
J_BINARY | tailf:hex-list, tailf:octet-list | ConfBinary, ConfHexList | -
J_IPV4PREFIX | inet:ipv4-prefix | ConfIPv4Prefix | -
J_IPV6PREFIX | inet:ipv6-prefix | ConfIPv6Prefix | -
J_DEFAULT | - | ConfDefault | Default value indicator
J_NOEXISTS | - | ConfNoExists | No value indicator
J_DECIMAL64 | decimal64 | ConfDecimal64 | YANG builtin
J_IDENTITYREF | identityref | ConfIdentityRef | YANG builtin
ConfXMLParamLeaf: Represents a leaf tag without the leaf's value.
CDB API The southbound interface provides access to the CDB configuration database. Using this interface, configuration data can be read. In addition, operational data that is stored in CDB can be read and written. This interface has a subscription mechanism to subscribe to changes. A subscription is specified on a path that points to an element in a YANG model or an instance in the instance tree; any change under this point will trigger the subscription. CDB also has functions to iterate through the configuration changes when a subscription has been triggered.
DP API Southbound interface that enables callbacks, hooks, and transforms. This API makes it possible to provide the service callbacks that handle service-to-device mapping logic. Other common use cases are external data providers for operational data or action callback implementations. There are also transaction and validation callbacks, etc. Hooks are callbacks that are fired when certain data is written, and the hook is expected to do additional modifications of data. Transforms are callbacks that are used when complete mediation between two different models is necessary.
NED API (Network Element Driver) Southbound interface that mediates communication for devices that speak neither NETCONF nor SNMP. All prepackaged NEDs for different devices are written using this interface. It is possible to use the same interface to write your own NED. There are two types of NEDs: CLI NEDs and Generic NEDs. CLI NEDs can be used for devices that can be controlled by a Cisco-style CLI syntax; in this case, the NED is developed primarily by building a YANG model, with a relatively small part in Java. In other cases, a Generic NED can be used for any type of communication protocol.
NAVU API (Navigation Utilities) API that resides on top of the Maapi and Cdb APIs. It provides schema model navigation and instance data handling (read/write). It uses either a Maapi or Cdb context for data access and incorporates a subset of functionality from these (navigational and data read/write calls). Its major use is in service implementations, which normally involve navigating device models and setting device data.
ALARM API Eastbound API that is used both to consume and produce alarms in alignment with the NSO alarm model. To consume alarms, the AlarmSource interface is used. To produce a new alarm, the AlarmSink interface is used. It is also possible to buffer produced alarms and make asynchronous writes to CDB to improve alarm performance.
NOTIF API Northbound API that is used to subscribe to system events from NSO. These events are generated for audit log events, different transaction states, HA state changes, upgrade events, user sessions, etc.
HA API (High Availability) Northbound API used to manage a high-availability cluster of NSO instances. An NSO instance can be in one of three states: NONE, PRIMARY, or SECONDARY. With the HA API, the state can be queried and changed for NSO instances in the cluster.
Type constant | YANG type | Java value class | Description
J_STR | string | ConfBuf | Human-readable string
J_BUF | string | ConfBuf | Human-readable string
J_INT8 | int8 | ConfInt8 | 8-bit signed integer
RFC 6021 - Common YANG Data Types
RFC 6470 - NETCONF Base Notifications
RFC 6536 - NETCONF Access Control Model
RFC 6991 - Common YANG Data Types
RFC 7950 - The YANG 1.1 Data Modeling Language
RFC 7951 - JSON Encoding of Data Modeled with YANG
RFC 7952 - Defining and Using Metadata with YANG
RFC 8040 - RESTCONF Protocol
RFC 8072 - YANG Patch Media Type
RFC 8341 - Network Configuration Access Control Model
RFC 8525 - YANG Library
RFC 8528 - YANG Schema Mount
I-D.draft-ietf-netconf-restconf-trace-ctx-headers-00 - RESTCONF Extension to support Trace Context Headers
To enable RESTCONF in NSO, RESTCONF must be enabled in the ncs.conf configuration file. The web server configuration for RESTCONF is shared with the WebUI's config, but you may define a separate RESTCONF transport section. The WebUI does not have to be enabled for RESTCONF to work.
Here is a minimal example of what is needed in the ncs.conf.
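A minimal sketch of what that could look like (element names follow the ncs.conf schema; verify the details against the ncs.conf man page for your NSO version, and adjust the address and port to your environment):

```xml
<restconf>
  <enabled>true</enabled>
</restconf>

<webui>
  <transport>
    <tcp>
      <enabled>true</enabled>
      <ip>0.0.0.0</ip>
      <port>8080</port>
    </tcp>
  </transport>
</webui>
```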
If you want to run RESTCONF with a different transport configuration than what the WebUI is using, you can specify a separate RESTCONF transport section.
It is now possible to do RESTCONF requests towards NSO. Any HTTP client can be used; in the following examples, curl will be used. The example below shows what a typical RESTCONF request could look like.
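The same kind of request can also be sketched in Python with the standard library's urllib instead of curl; the host, port, and admin credentials below are assumptions:

```python
# Sketch of a typical RESTCONF GET request; host, port, and
# credentials are assumptions for a local NSO instance.
import base64
import urllib.request

url = "http://localhost:8080/restconf/data"
auth = base64.b64encode(b"admin:admin").decode()

req = urllib.request.Request(url, headers={
    "Accept": "application/yang-data+xml",   # ask for XML-encoded YANG data
    "Authorization": "Basic " + auth,        # HTTP basic authentication
})

# urllib.request.urlopen(req) would perform the GET against a running
# server; here we only show how the request is formed.
print(req.get_method())                      # GET
print(req.get_header("Accept"))              # application/yang-data+xml
```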
In the rest of the document, in order to simplify the presentation, the example above will be expressed as:
Note the HTTP return code (200 OK) in the example, which will be displayed together with any relevant HTTP headers returned and a possible body of content.
Send a RESTCONF query to get a representation of the top-level resource, which is accessible through the path: /restconf.
As can be seen from the result, the server exposes three additional resources:
data: This mandatory resource represents the combined configuration and state data resources that can be accessed by a client.
operations: This optional resource is a container that provides access to the data-model-specific RPC operations supported by the server.
yang-library-version: This mandatory leaf identifies the revision date of the ietf-yang-library YANG module that is implemented by this server. This resource exposes which YANG modules are in use by the NSO system.
To fetch configuration, operational data, or both, from the server, a request to the data resource is made. To restrict the amount of returned data, the following example will prune the amount of output to only consist of the topmost nodes. This is achieved by using the depth query argument as shown in the example below:
Let's assume we are interested in the dhcp/subnet resource in our configuration. In the following examples, assume that it is defined by a corresponding YANG module that we have named dhcp.yang, looking like this:
We can issue an HTTP GET request to retrieve the value content of the resource. In this case, we find that there is no such data, which is indicated by the HTTP return code 204 No Content.
Note also how we have prefixed the dhcp:dhcp resource. This is how RESTCONF handles namespaces, where the prefix is the YANG module name and the namespace is as defined by the namespace statement in the YANG module.
We can now create the dhcp/subnet resource by sending an HTTP POST request + the data that we want to store. Note the Content-Type HTTP header, which indicates the format of the provided body. Two formats are supported: XML or JSON. In this example, we are using XML, which is indicated by the Content-Type value: application/yang-data+xml.
Note the HTTP return code (201 Created) indicating that the resource was successfully created. We also got a Location header, which is always returned in a reply to a successful creation of a resource, stating the resulting URI leading to the created resource.
If we now want to modify a part of our dhcp/subnet config, we can use the HTTP PATCH method, as shown below. Note that the URI used in the request needs to be URL-encoded, such that the key value: 10.254.239.0/27 is URL-encoded as: 10.254.239.0%2F27.
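The URL-encoding of the key can be reproduced with Python's urllib:

```python
# The subnet key "10.254.239.0/27" must be percent-encoded before it
# is placed in the request URI.
from urllib.parse import quote

key = "10.254.239.0/27"
encoded = quote(key, safe="")   # safe="" so "/" is encoded as well
print(encoded)                  # 10.254.239.0%2F27
```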
Also, note the difference of the PATCH URI compared to the earlier POST request. With the latter, since the resource does not yet exist, we POST to the parent resource (dhcp:dhcp), while with the PATCH request we address the (existing) resource (10.254.239.0%2F27).
We can also replace the subnet with some new configuration. To do this, we make use of the PUT HTTP method as shown below. Since the operation was successful and no body was returned, we will get a 204 No Content return code.
To delete the subnet, we make use of the DELETE HTTP method as shown below. Since the operation was successful and no body was returned, we will get a 204 No Content return code.
RESTCONF makes it possible to specify where the RESTCONF API is located, as described in the RESTCONF RFC 8040.
By default, the RESTCONF API root is /restconf. Typically, there is no need to change the default value, although it is possible to change it by configuring the RESTCONF API root in the ncs.conf file as:
The RESTCONF API root will now be /my_own_restconf_root.
A client may discover the root resource by getting the /.well-known/host-meta resource as shown in the example below:
A RESTCONF capability is a set of functionality that supplements the base RESTCONF specification. The capability is identified by a uniform resource identifier (URI). The RESTCONF server includes a capability URI leaf-list entry identifying each supported protocol feature. This includes the basic-mode default-handling mode, optional query parameters, and may also include other, NSO-specific, capability URIs.
To view currently enabled capabilities, use the ietf-restconf-monitoring YANG model, which is available as: /restconf/data/ietf-restconf-monitoring:restconf-state.
This Capability identifies the basic-mode default-handling mode that is used by the server for processing default leafs in requests for data resources.
The capability URL will contain a query parameter named basic-mode whose value tells us what the default behavior of the RESTCONF server is when it returns a leaf. The possible values are shown in the table below (basic-mode values):
report-all
Values set to the YANG default value are reported.
trim
Values set to the YANG default value are not reported.
explicit
Values that have been set by a client to the YANG default value will be reported.
The values presented in the table above can also be used by the Client together with the with-defaults query parameter to override the default RESTCONF server behavior. Added to these values, the Client can also use the report-all-tagged value.
The table below lists the additional with-defaults value.
report-all-tagged
Works like report-all, but a default value will include an XML/JSON attribute indicating that the value is in fact a default value.
Referring back to the earlier example (NSO RESTCONF Capabilities), the RESTCONF server returned the default-handling capability:
It tells us that values that have been set by a client to the YANG default value will be reported but default values that have not been set by the Client will not be returned. Again, note that this is the default RESTCONF server behavior which can be overridden by the Client by using the with-defaults query argument.
A set of optional RESTCONF Capability URIs are defined to identify the specific query parameters that are supported by the server. They are defined as:
The table shows query parameter capabilities.
depth
urn:ietf:params:restconf:capability:depth:1.0
fields
urn:ietf:params:restconf:capability:fields:1.0
filter
urn:ietf:params:restconf:capability:filter:1.0
replay
urn:ietf:params:restconf:capability:replay:1.0
with-defaults
urn:ietf:params:restconf:capability:with-defaults:1.0
For a description of the query parameter functionality, see Query Parameters.
Each RESTCONF operation allows zero or more query parameters to be present in the request URI. Query parameters can be given in any order, but each can appear at most once. Supplying query parameters when invoking RPCs and actions is not supported; if supplied, the response will be 400 (Bad Request) and the error-app-tag will be set to invalid-value. However, the query parameters trace-id and unhide are exempted from this rule and supported for RPC and action invocation. The defined query parameters, and in what type of HTTP request they can be used, are shown in the table below (Query parameters).
content
GET,HEAD
Select config and/or non-config data resources.
depth
GET,HEAD
Request limited subtree depth in the reply content.
fields
GET,HEAD
Request a subset of the target resource contents.
exclude
GET,HEAD
Exclude a subset of the target resource contents.
The content query parameter controls whether configuration, non-configuration, or both types of data should be returned.
The allowed values are:
config
Return only configuration descendant data nodes.
nonconfig
Return only non-configuration descendant data nodes.
all
Return all descendant data nodes.
The depth query parameter is used to limit the depth of subtrees returned by the server. Data nodes with a depth greater than the depth parameter are not returned in response to a GET request.
The value of the depth parameter is either an integer between 1 and 65535 or the string unbounded. The default value is: unbounded.
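As an illustrative sketch (not NSO code), the effect of a depth limit on a reply tree can be modeled like this, where nodes deeper than the given depth are dropped:

```python
# Illustrative model of depth-based pruning: keep nodes down to
# `depth` levels; deeper subtrees are returned as empty containers.
def prune(node, depth):
    if not isinstance(node, dict):
        return node            # leaves are returned as-is
    if depth <= 1:
        return {}              # children are beyond the allowed depth
    return {k: prune(v, depth - 1) for k, v in node.items()}

# Hypothetical reply data, loosely modeled on the dhcp example.
reply = {"dhcp": {"default-lease-time": "600", "subnet": {"range": {}}}}
print(prune(reply, 2))   # {'dhcp': {}}
print(prune(reply, 3))   # {'dhcp': {'default-lease-time': '600', 'subnet': {}}}
```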
The fields query parameter is used to optionally identify data nodes within the target resource to be retrieved in a GET method. The client can use this parameter to retrieve a subset of all nodes in a resource.
For a full definition of how the fields value can be constructed, refer to RFC 8040, Section 4.8.3.
Note that the fields query parameter cannot be used together with the exclude query parameter. This will result in an error.
The exclude query parameter is used to optionally exclude data nodes within the target resource from being retrieved with a GET request. The client can use this parameter to exclude a subset of all nodes in a resource. Only nodes below the target resource can be excluded, not the target resource itself.
Note that the exclude query parameter cannot be used together with the fields query parameter. This will result in an error.
The exclude query parameter uses the same syntax and has the same restrictions as the fields query parameter, as defined in RFC 8040, Section 4.8.3.
Selecting multiple nodes to exclude can be done the same way as for the fields query parameter, as described in RFC 8040, Section 4.8.3.
exclude using wildcards (*) will exclude all child nodes of the node. For lists and presence containers, the parent node will be visible in the output but not its children, i.e. it will be displayed as an empty node. For non-presence containers, the parent node will be excluded from the output as well.
exclude can be used together with the depth query parameter to limit the depth of the output. In contrast to fields, where depth is counted from the node selected by fields, for exclude the depth is counted from the target resource, and the nodes are excluded if depth is deep enough to encounter an excluded node.
When exclude is not used:
Using exclude to exclude low and high from range, note that these are absent in the output:
These query parameters are only allowed on an event stream resource and are further described in Streams.
The insert query parameter is used to specify how a resource should be inserted within an ordered-by user list. The allowed values are shown in the table below (insert query parameter values).
first
Insert the new data as the new first entry.
last
Insert the new data as the new last entry. This is the default value.
before
Insert the new data before the insertion point, as specified by the value of the point parameter.
after
Insert the new data after the insertion point, as specified by the value of the point parameter.
This parameter is only valid if the target data represents a YANG list or leaf-list that is ordered-by user. In the example below, we will insert a new router value, first, in the ordered-by user leaf-list of dhcp-options/router values. Remember that the default behavior is for new entries to be inserted last in an ordered-by user leaf-list.
To verify that the router value really ended up first:
The point query parameter is used to specify the insertion point for a data resource that is being created or moved within an ordered-by user list or leaf-list. In the example below, we will insert the new router value: two.acme.org, after the first value: one.acme.org in the ordered-by user leaf-list of dhcp-options/router values.
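The query strings used in these insert/point examples can be built with Python's urllib; the point path format shown here is an assumption based on the data model in this section:

```python
# Building the insert/point query strings; the leaf-list path and
# values are hypothetical, echoing the dhcp-options/router example.
from urllib.parse import urlencode

# Insert the new entry first in the ordered-by user leaf-list.
q1 = urlencode({"insert": "first"})

# Insert the new entry after an existing one, identified by `point`
# (assumed path format for a leaf-list instance).
point = "/dhcp:dhcp/dhcp-options/router=one.acme.org"
q2 = urlencode({"insert": "after", "point": point})

print(q1)   # insert=first
print(q2)   # insert=after&point=%2Fdhcp%3Adhcp%2Fdhcp-options%2Frouter%3Done.acme.org
```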
To verify that the router value really ended up after our insertion point:
There are additional NSO query parameters available for the RESTCONF API. These additional query parameters are described in the table below (Additional Query Parameters).
dry-run
POST
PUT
PATCH
DELETE
Validate and display the configuration changes but do not perform the actual commit. Neither CDB nor the devices are affected. Instead, the effects that would have taken place are shown in the returned output. Possible values are: xml, cli, and native. The value used specifies in what format we want the returned diff to be.
dry-run-reverse
POST
PUT
PATCH
DELETE
Used together with the dry-run=native parameter to display the device commands for getting back to the current running state in the network if the commit is successfully executed. Beware that if any changes are made later to the same data, the reverse device commands returned become invalid.
no-networking
POST
PUT
PATCH
DELETE
Do not send any data to the devices. This is a way to manipulate CDB in NSO without generating any southbound traffic.
no-out-of-sync-check
POST
PUT
PATCH
DELETE
Continue with the transaction even if NSO detects that a device's configuration is out of sync. Can't be used together with no-overwrite.
Two edit collision detection and prevention mechanisms are provided in RESTCONF for the datastore resource: a timestamp and an entity tag. Any change to configuration data resources will update the timestamp and entity tag of the datastore resource. This makes it possible for a client to apply precondition HTTP headers to a request.
The NSO RESTCONF API honors the following HTTP response headers: Etag and Last-Modified, and the following request headers: If-Match, If-None-Match, If-Modified-Since, and If-Unmodified-Since.
Etag: This header will contain an entity tag which is an opaque string representing the latest transaction identifier in the NSO database. This header is only available for the running datastore and hence, only relates to configuration data (non-operational).
Last-Modified: This header contains the timestamp for the last modification made to the NSO database. This timestamp can be used by a RESTCONF client in subsequent requests, within the If-Modified-Since and If-Unmodified-Since header fields. This header is only available for the running datastore and hence, only relates to configuration data (non-operational).
If-None-Match: This header evaluates to true if the supplied value does not match the latest Etag entity-tag value. If it evaluates to false, a response with status 304 (Not Modified) will be sent with no body. This header only carries meaning if the entity tag of the Etag response header has previously been acquired. An example usage is a HEAD operation to find out whether the data has changed since the last retrieval.
If-Modified-Since: This request-header field is used with an HTTP method to make it conditional, i.e., if the requested resource has not been modified since the time specified in this field, the request will not be processed by the RESTCONF server; instead, a 304 (Not Modified) response will be returned without any message-body. A typical usage is a GET operation that retrieves the information only if it has changed since the last retrieval. Thus, this header should use the value of a Last-Modified response header that has previously been acquired.
If-Match: This header evaluates to true if the supplied value matches the latest Etag value. If it evaluates to false, an error response with status 412 (Precondition Failed) will be sent with no body. This header only carries meaning if the entity tag of the Etag response header has previously been acquired. A typical usage is a PUT, where If-Match can be used to prevent the lost update problem: it verifies that the modification the user wants to upload will not override another change made since the original resource was fetched.
If-Unmodified-Since: This header evaluates to true if the supplied value has not been last modified after the given date. If the resource has been modified after the given date, the response will be a 412 (Precondition Failed) error with no body. This header carries only meaning if the Last-Modified response header has previously been acquired. The usage of this can be the case of a POST, where editions are rejected if the stored resource has been modified since the original value was retrieved.
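A sketch of a conditional PUT guarded by If-Match, again using Python's urllib; the entity-tag value and the payload below are hypothetical:

```python
# Conditional PUT: the server rejects the request with 412
# (Precondition Failed) if the entity tag no longer matches.
import urllib.request

etag = '"1504-17-1"'   # hypothetical value from a previous Etag response header
req = urllib.request.Request(
    "http://localhost:8080/restconf/data/dhcp:dhcp",
    data=b"<dhcp xmlns='http://example.org/ns/dhcp'/>",  # hypothetical body
    method="PUT",
    headers={
        "Content-Type": "application/yang-data+xml",
        "If-Match": etag,   # guard against the lost update problem
    },
)
print(req.get_method())            # PUT
print(req.get_header("If-match")) # "1504-17-1"
```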
If rollbacks have been enabled in the configuration, using the rollback-id query parameter causes the fixed ID of the rollback file created during an operation to be returned in the results. The examples below show the creation of a new resource and the removal of that resource using the rollback created in the first step.
Then using the fixed ID returned above as input to the apply-rollback-file action:
The RESTCONF protocol supports YANG-defined event notifications. The solution preserves aspects of NETCONF event notifications [RFC5277] while utilizing the Server-Sent Events, W3C.REC-eventsource-20150203, transport strategy.
RESTCONF event notification streams are described in Sections 6 and 9.2 of RFC 8040, where also notification examples can be found.
RESTCONF event notification is a way for RESTCONF clients to retrieve notifications for different event streams. Event streams configured in NSO can be subscribed to using different channels such as the RESTCONF or the NETCONF channel.
More information on how to define a new notification event using YANG is described in RFC 6020.
How to add and configure notifications support in NSO is described in the ncs.conf(5) man page.
The design of RESTCONF event notification is inspired by how NETCONF event notification is designed. More information on NETCONF event notification can be found in RFC 5277.
For this example, we will define a notification stream, named interface in the ncs.conf configuration file as shown below.
We also enable the built-in replay store which means that NSO automatically stores all notifications on disk, ready to be replayed should a RESTCONF event notification subscriber ask for logged notifications. The replay store uses a set of wrapping log files on a disk (of a certain number and size) to store the notifications.
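A sketch of what such a stream definition with the built-in replay store enabled could look like in ncs.conf (the directory, size, and file-count values are placeholders; verify the element names against the ncs.conf man page for your NSO version):

```xml
<notifications>
  <event-streams>
    <stream>
      <name>interface</name>
      <description>Example notification stream</description>
      <replay-support>true</replay-support>
      <builtin-replay-store>
        <enabled>true</enabled>
        <dir>./</dir>
        <max-size>S1M</max-size>
        <max-files>5</max-files>
      </builtin-replay-store>
    </stream>
  </event-streams>
</notifications>
```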
To view the currently enabled event streams, use the ietf-restconf-monitoring YANG model. The streams are available under the /restconf/data/ietf-restconf-monitoring:restconf-state/streams container.
Note the URL value we get in the location element in the example above. This URL should be used when subscribing to the notification events as is shown in the next example.
RESTCONF clients can determine the URL for the subscription resource (to receive notifications) by sending an HTTP GET request for the location leaf with the stream list entry. The value returned by the server can be used for the actual notification subscription.
The client will send an HTTP GET request for the (location) URL returned by the server with the Accept type text/event-stream as shown in the example below. Note that this request works like a long polling request which means that the request will not return. Instead, server-side notifications will be sent to the client where each line of the notification will be prepended with data:.
Since we have enabled the replay store, we can ask the server to replay any notifications generated since the specific date we specify. After those notifications have been delivered, we will continue waiting for new notifications to be generated.
Errors occurring during streaming of events will be reported as Server-Sent Events (SSE) comments as described in W3C.REC-eventsource-20150203 as shown in the example below.
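A small illustrative parser (not NSO code) for this Server-Sent Events framing, where payload lines are prefixed with data: and comments, such as streamed error reports, start with a colon:

```python
# Minimal SSE framing parser: "data:" lines accumulate into an event,
# a blank line terminates the event, and ":" lines are comments.
def parse_sse(lines):
    events, comments, buf = [], [], []
    for line in lines:
        if line.startswith(":"):
            comments.append(line[1:].strip())
        elif line.startswith("data:"):
            buf.append(line[len("data:"):].strip())
        elif line == "" and buf:          # blank line ends one event
            events.append("\n".join(buf))
            buf = []
    return events, comments

evs, cms = parse_sse([
    "data: <notification>",
    "data: </notification>",
    "",
    ": error: subscription terminated",   # hypothetical SSE error comment
])
print(evs)   # ['<notification>\n</notification>']
print(cms)   # ['error: subscription terminated']
```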
RFC 8040, Section 3.7 describes the retrieval of YANG modules used by the server via the RPC operation get-schema. The YANG source is made available by NSO in two ways: compiled into the fxs file or put in the loadPath. See Monitoring of the NETCONF Server.
The example below shows how to list the available YANG modules. Since we are interested in the dhcp module, we only show that part of the output:
We can now retrieve the dhcp YANG module via the URL we got in the schema leaf of the reply. Note that the actual URL may point anywhere. The URL is configured by the schemaServerUrl setting in the ncs.conf file.
The NSO RESTCONF API also supports the YANG Patch Media Type, as defined in RFC 8072.
A YANG Patch is an ordered list of edits that are applied to the target datastore by the RESTCONF server. A YANG Patch request is sent as an HTTP PATCH request containing a body describing the edit operations to be performed. The format of the body is defined in RFC 8072.
Referring to the example above (DHCP YANG model) in the Getting Started section, we will show how to use YANG Patch to achieve the same result with fewer requests.
To create the resources, we send an HTTP PATCH request where the Content-Type indicates that the body in the request consists of a YANG Patch message. Our YANG Patch request will initiate two edit operations, each creating a new subnet. In contrast, compare this with using plain RESTCONF, where we would have needed two POST requests to achieve the same result.
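A sketch of such a YANG Patch body built in Python; the field names follow the ietf-yang-patch model from RFC 8072, while the subnet key leaf (net) and the patch/edit IDs are assumptions for illustration:

```python
# Building an RFC 8072 YANG Patch body with two "create" edits, one
# per subnet. The subnet key leaf name "net" and all IDs are
# hypothetical; they depend on the actual dhcp.yang model.
import json

patch = {
    "ietf-yang-patch:yang-patch": {
        "patch-id": "add-subnets",
        "edit": [
            {
                "edit-id": "add-subnet-239",
                "operation": "create",
                "target": "/dhcp:dhcp/subnet=10.254.239.0%2F27",
                "value": {"subnet": [{"net": "10.254.239.0/27"}]},
            },
            {
                "edit-id": "add-subnet-244",
                "operation": "create",
                "target": "/dhcp:dhcp/subnet=10.254.244.0%2F27",
                "value": {"subnet": [{"net": "10.254.244.0/27"}]},
            },
        ],
    }
}

# The body would be sent in an HTTP PATCH with
# Content-Type: application/yang-patch+json.
body = json.dumps(patch)
print(len(json.loads(body)["ietf-yang-patch:yang-patch"]["edit"]))  # 2
```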
Let us modify the max-lease-time of one subnet and delete the max-lease-time value of the second subnet. Note that the delete will cause the default value of max-lease-time to take effect, which we will verify using a RESTCONF GET request.
To verify that our modify and delete operations took place we make use of two RESTCONF GET requests as shown below.
Note how, in the last GET request, we make use of the with-defaults query parameter to request that a default value be returned and also be tagged as such.
The Network Management Datastore Architecture (NMDA), defined in RFC 8342 and supported in RESTCONF through the extensions in RFC 8527, extends the RESTCONF protocol. This enables RESTCONF clients to discover which datastores are supported by the RESTCONF server, determine which modules are supported in each datastore, and interact with all the datastores supported by the NMDA.
A RESTCONF client can test if a server supports the NMDA by using either the HEAD or GET methods on /restconf/ds/ietf-datastores:operational, as shown below:
A RESTCONF client can discover which datastores and YANG modules the server supports by reading the YANG library information from the operational state datastore. Note in the example below that, since the result consists of three top nodes, it can't be represented in XML; hence we request the returned content to be in JSON format. See also Collections.
To avoid any potential future conflict with the RESTCONF standard, any extensions made to the NSO implementation of RESTCONF are located under the URL path: /restconf/tailf, or is controlled by means of a vendor-specific media type.
The RESTCONF specification states that a result containing multiple instances (e.g. a number of list entries) is not allowed if XML encoding is used. The reason for this is that an XML document can only have one root node.
This functionality is supported if the http://tail-f.com/ns/restconf/collection/1.0 capability is present. See also How to View the Capabilities of the RESTCONF Server.
To remedy this, an HTTP GET request can make use of the Accept: media type: application/vnd.yang.collection+xml as shown in the following example. The result will then be wrapped within a collection element.
The NSO RESTCONF Query API consists of a number of operations to start a query which may live over several RESTCONF requests, where data can be fetched in suitable chunks. The data to be returned is produced by applying an XPath expression where the data also may be sorted.
The RESTCONF client can check if the NSO RESTCONF server supports this functionality by looking for the http://tail-f.com/ns/restconf/query-api/1.0 capability. See also How to View the Capabilities of the RESTCONF Server.
The tailf-rest-query.yang and the tailf-common-query.yang YANG models describe the structure of the RESTCONF Query API messages. By using the Schema Resource functionality, as described in Schema Resource, you can get hold of them.
The API consists of the following requests:
start-query: Start a query and return a query handle.
fetch-query-result: Use a query handle to repeatedly fetch chunks of the result.
immediate-query: Start a query and return the entire result immediately.
reset-query: (Re)set where the next fetched result will begin from.
stop-query: Stop (and close) the query.
The API consists of the following replies:
start-query-result: Reply to the start-query request.
query-result: Reply to the fetch-query-result and immediate-query requests.
In the following examples, we'll use this data model:
The payload can be represented in either XML or JSON. Note how we indicate the type of content using the Content-Type HTTP header. For XML, it could look like this:
The same request in JSON format would look like:
An informal interpretation of this query is:
For each /x/host where enabled is true, select its name, and address, and return the result sorted by name, in chunks of 100 result items at a time.
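As a sketch of how such a request body could be assembled, the snippet below builds the JSON variant of this query. The field names (foreach, select, expression, result-type, sort-by, limit) follow the structure described in this section and in tailf-rest-query.yang, but the exact JSON shape here should be treated as illustrative rather than normative:

```python
import json

# Sketch: build a start-query payload for the informal query above.
# Field names follow tailf-rest-query.yang; the exact JSON shape is
# illustrative, not normative.
def build_start_query(xpath, selects, sort_by, limit):
    return {
        "start-query": {
            "foreach": xpath,
            "select": [
                {"expression": expr, "result-type": ["string"]}
                for expr in selects
            ],
            "sort-by": [sort_by],
            "limit": limit,
        }
    }

payload = build_start_query(
    xpath="/x/host[enabled='true']",
    selects=["name", "address"],
    sort_by="name",
    limit=100,
)
body = json.dumps(payload)
```

The body would then be sent in a POST to the query resource with Content-Type: application/yang-data+json.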
Let us discuss the various pieces of this request. To start with, when using XML, we need to specify the namespace as shown:
The actual XPath query to run is specified by the foreach element. The example below will search for all /x/host nodes that have the enabled node set to true:
Now we need to define what we want to have returned from the node set by using one or more select sections. What to actually return is defined by the XPath expression.
Choose how the result should be represented. Basically, it can be the actual value or the path leading to the value. This is specified per select chunk. The possible result types are string, path, leaf-value, and inline.
The difference between string and leaf-value is subtle. With string, the result is processed by the XPath string() function (which, if the result is a node-set, concatenates all the values). leaf-value returns the value of the first node in the result. As long as the result is a leaf node, string and leaf-value return the same result. The example above uses string, as shown below. Note that at least one result-type must be specified.
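As a rough Python analogy (not the server's implementation) of the distinction: the string-value of an element node concatenates all text in its subtree, whereas leaf-value takes the value of a single node:

```python
import xml.etree.ElementTree as ET

# Rough analogy of the XPath string() notion of an element's
# string-value: the concatenation of all text in its subtree.
# The <host> document here is made up for illustration.
doc = ET.fromstring("<host><name>h1</name><address>10.0.0.1</address></host>")

string_value = "".join(doc.itertext())  # like string(/host): "h110.0.0.1"
leaf_value = doc.find("name").text      # like leaf-value of /host/name: "h1"
```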
The result-type inline makes it possible to return the full sub-tree of data, either in XML or in JSON format. The data will be enclosed with a tag: data.
It is possible to specify an optional label for a convenient way of labeling the returned data:
The returned result can be sorted. This is expressed as an XPath expression, which in most cases is very simple and refers to the found node-set. In this example, we sort the result by the content of the name node:
With the offset element, we can specify at which node we should start to receive the result. The default is 1, i.e., the first node in the resulting node set.
It is possible to set a custom timeout when starting or resetting a query. Each time a function is called, the timeout timer resets. The default is 600 seconds, i.e. 10 minutes.
The reply to this request would look something like this:
The query handle (in this example '12345') must be used in all subsequent calls. To retrieve the result, we can now send:
Which will result in something like the following:
If we try to get more data with the fetch-query-result, we might get more result entries in return until no more data exists and we get an empty query result back:
Finally, when we are done we stop the query:
If we want to go back into the stream of received data chunks and have them repeated, we can do that with the reset-query request. In the example below, we ask to get results from the 42nd result entry:
If we want to get the entire result sent back to us, using only one request, we can do this by using immediate-query. This function takes similar arguments as start-query and returns the entire result, analogous to the result of a fetch-query-result request. Note that it is not possible to paginate or set an offset start node for the result list; i.e., the limit and offset options are ignored.
This functionality is supported if the http://tail-f.com/ns/restconf/partial-response/1.0 capability is present. See also How to View the Capabilities of the RESTCONF Server.
By default, the server sends back the full representation of a resource after processing a request. For better performance, the server can be instructed to send only the nodes the client really needs in a partial response.
To request a partial response for a set of list entries, use the offset and limit query parameters to specify a limited set of entries to be returned.
In the following example, we retrieve only two entries, skipping the first entry and then returning the next two entries:
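The slicing rule can be illustrated with a small Python sketch; the entry list is made up, and only the offset/limit arithmetic mirrors the query parameters:

```python
# Sketch of partial-response pagination: skip `offset` entries,
# then return at most `limit` entries, mirroring the query parameters.
def partial_response(entries, offset=0, limit=None):
    end = None if limit is None else offset + limit
    return entries[offset:end]

hosts = ["h1", "h2", "h3", "h4", "h5"]          # hypothetical list entries
page = partial_response(hosts, offset=1, limit=2)  # skip 1 entry, take 2
```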
This functionality is supported if the http://tail-f.com/ns/restconf/unhide/1.0 capability is present. See also How to View the Capabilities of the RESTCONF Server.
By default, hidden nodes are not visible in the RESTCONF interface. To unhide hidden nodes for retrieval or editing, clients can use the query parameter unhide, or set the showHidden parameter to true under /ncs-config/restconf in the ncs.conf file. The unhide query parameter is also supported for RPC and action invocation.
The format of the unhide parameter is a comma-separated list of hide group names, where a password-protected group is given as <group>;<password>.
As an example:
This example unhides the unprotected group extra and the password-protected group debug, using the password secret.
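A client could assemble the query string like this; the resource URL is hypothetical, the group names are taken from the example, and unhide_param is an invented helper:

```python
from urllib.parse import quote

# Build an unhide query parameter: a comma-separated list of hide
# groups, where a password-protected group is written as name;password.
def unhide_param(groups):
    items = [g if pw is None else f"{g};{pw}" for g, pw in groups]
    return "unhide=" + quote(",".join(items), safe=",;")

qs = unhide_param([("extra", None), ("debug", "secret")])
url = "/restconf/data/example:config?" + qs   # hypothetical resource
```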
This functionality is supported if the urn:ietf:params:xml:ns:yang:traceparent:1.0 and urn:ietf:params:xml:ns:yang:tracestate:1.0 capabilities are present. See also How to View the Capabilities of the RESTCONF Server.
RESTCONF supports the IETF draft I-D.draft-ietf-netconf-restconf-trace-ctx-headers-00, an adaptation of the W3C Trace Context standard. Trace Context standardizes the format of trace-id, parent-id, and key-value pairs sent between distributed entities. The parent-id becomes the parent-span-id for the next span-id generated in NSO.
Trace Context consists of two HTTP headers, traceparent and tracestate. The traceparent header must be of the format <version>-<trace-id>-<parent-id>-<flags>,
where version = "00" and flags = "01". The supported values of version and flags may change in the future, depending on extensions to the standard or to this functionality.
An example of header traceparent in use is:
Header tracestate is a vendor-specific list of key-value pairs. An example of the header tracestate in use is:
where a value may contain space characters but not end with a space.
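A traceparent header of the required shape (version 00, flags 01, a 16-byte trace-id, an 8-byte parent-id) can be generated and checked like this:

```python
import re
import secrets

# Generate a W3C Trace Context traceparent header:
# version "00", a 16-byte trace-id, an 8-byte parent-id, flags "01".
def make_traceparent():
    trace_id = secrets.token_hex(16)   # 32 lowercase hex chars
    parent_id = secrets.token_hex(8)   # 16 lowercase hex chars
    return f"00-{trace_id}-{parent_id}-01"

TRACEPARENT_RE = re.compile(r"^00-[0-9a-f]{32}-[0-9a-f]{16}-01$")
header = make_traceparent()
```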
NSO implements Trace Context alongside the legacy trace-id mechanism, where the trace-id is supplied as a query parameter. These two mechanisms cannot be used at the same time; if both are used, the request generates an error response. If a request includes neither a trace-id query parameter nor a traceparent header, a traceparent is generated internally in NSO. NSO considers the Trace Context headers in RESTCONF requests only if the trace-id element is enabled in the configuration file. Trace Context is handled by the progress trace functionality; see also Progress Trace in Development.
It is possible to associate metadata with configuration data. In RESTCONF, resources such as containers, lists, leafs, and leaf-lists can carry such metadata. In XML, this metadata is represented as attributes attached to the XML element in question. JSON has no natural way to represent this information, so a special notation based on RFC 7952 has been introduced; see the example below.
The meta-data for an object is represented by another object, named either "@" if the meta-data refers to the parent object, or the sibling object's name prefixed with an "@" sign if it refers to that sibling.
Note that the meta-data node types, e.g., tags and annotations, are prefixed by the module name of the YANG module where the meta-data object is defined. This representation conforms to RFC 7952 Section 5.2. The YANG module name prefixes for meta-data node types are listed below:
origin: ietf-origin
inactive/active: tailf-netconf-inactive
default: tailf-netconf-defaults
All other: tailf_netconf
Compare this to the encoding in NSO versions prior to 6.3, where meta-data for an object was represented by another object constructed of the object name prefixed with either one or two "@" signs: the meta-data object "@x" referred to the sibling object "x", and "@@x" referred to the parent object. No module name prefixes were included for the meta-data object types. For legacy reasons, this did not conform to RFC 7952. See the example below.
To continue using the old meta-data format, set legacy-attribute-format to true in ncs.conf. The default is false, which uses the RFC 7952 format. The legacy-attribute-format setting is deprecated and will be removed in a future release.
It is also possible to set meta-data objects in JSON format, which was previously only possible with XML. Note that the new attribute format must be used, i.e., legacy-attribute-format set to false. The exceptions are the default and insert meta-data types, which cannot be set using JSON.
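As a sketch of the RFC 7952-style notation in JSON; the module name example-mod, the node names, and the annotation names are made up for illustration:

```python
import json

# RFC 7952-style JSON metadata sketch:
# "@" annotates the enclosing object, "@name" annotates the sibling "name".
# Module prefixes and annotation names here are hypothetical.
doc = {
    "example:host": {
        "@": {"example-mod:tags": ["lab"]},              # metadata on the host object
        "name": "h1",
        "@name": {"example-mod:annotation": "primary"},  # metadata on leaf name
    }
}
encoded = json.dumps(doc)
```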
The RESTCONF server maintains an authentication cache. When authenticating an incoming request for a particular User:Password pair, the server first checks whether the User exists in the cache; if so, the request is processed directly. This avoids the potentially time-consuming login procedure that takes place on a cache miss.
Cache entries have a maximum time-to-live (TTL); upon expiry, a cache entry is removed, which causes the next request for that User to go through the normal login procedure. The TTL value is configurable via the auth-cache-ttl parameter, as shown in the example. Note that setting the TTL value to PT0S (zero) effectively turns off the cache.
It is also possible to combine the Client's IP address with the User name as a key into the cache. This behavior is disabled by default. It can be enabled by setting the enable-auth-cache-client-ip parameter to true. With this enabled, only a Client coming from the same IP address may get a hit in the authentication cache.
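The caching behavior described above can be sketched as follows. This is a conceptual model, not the server's implementation; the class and parameter names are invented, and a TTL of zero disables caching, mirroring the PT0S setting:

```python
import time

# Conceptual sketch of a RESTCONF-style authentication cache with a
# TTL and an optional client-IP component in the cache key.
class AuthCache:
    def __init__(self, ttl_seconds, use_client_ip=False):
        self.ttl = ttl_seconds
        self.use_client_ip = use_client_ip
        self.entries = {}

    def _key(self, user, client_ip):
        return (user, client_ip) if self.use_client_ip else user

    def store(self, user, client_ip=None, now=None):
        if self.ttl <= 0:              # a TTL of zero disables the cache
            return
        now = time.monotonic() if now is None else now
        self.entries[self._key(user, client_ip)] = now + self.ttl

    def check(self, user, client_ip=None, now=None):
        now = time.monotonic() if now is None else now
        expiry = self.entries.get(self._key(user, client_ip))
        return expiry is not None and now < expiry

cache = AuthCache(ttl_seconds=10, use_client_ip=True)
cache.store("admin", "10.0.0.1", now=0.0)
```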
It is possible to configure the NSO RESTCONF server to pick up the client IP address via an HTTP header in the request. A list of HTTP headers to look for is configurable via the proxy-headers parameter as shown in the example.
To avoid misuse of this feature, only requests from trusted sources will be searched for such an HTTP header. The list of trusted sources is configured via the allowed-proxy-ip-prefix as shown in the example.
The NSO RESTCONF server can be set up to pass along a token used for authentication and/or validation of the client. Note that this requires external authentication/validation to be set up properly. See External Token Validation and External Authentication for details.
With token authentication, we mean that the client sends a User:Password to the RESTCONF server, which will invoke an external executable that performs the authentication and upon success produces a token that the RESTCONF server will return in the X-Auth-Token HTTP header of the reply.
With token validation, we mean that the RESTCONF server will pass along any token, provided in the X-Auth-Token HTTP header, to an external executable that performs the validation. This external program may produce a new token that the RESTCONF server will return in the X-Auth-Token HTTP header of the reply.
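The token round trip can be sketched as below. validate_token stands in for the configured external executable, and the status codes and rotation policy are invented for illustration:

```python
# Sketch of the X-Auth-Token round trip: the server hands any incoming
# token to an external validator, which may return a replacement token.
def validate_token(token):
    # Stand-in for the configured external executable; here any
    # token ending in "-ok" is accepted and rotated. Purely illustrative.
    if token.endswith("-ok"):
        return token + "-rotated"
    return None

def handle_request(headers):
    token = headers.get("X-Auth-Token")
    if token is None:
        return {"status": 401, "headers": {}}
    new_token = validate_token(token)
    if new_token is None:
        return {"status": 403, "headers": {}}
    # The (possibly new) token is returned in the reply header.
    return {"status": 200, "headers": {"X-Auth-Token": new_token}}

resp = handle_request({"X-Auth-Token": "abc-ok"})
```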
To make this work, the following need to be configured in the ncs.conf file:
It is also possible to have the RESTCONF server return an HTTP cookie containing the token.
An HTTP cookie (web cookie, browser cookie) is a small piece of data that a server sends to the user's web browser. The browser may store it and send it back with the next request to the same server. This can be convenient in certain solutions, where typically, it is used to tell if two requests came from the same browser, keeping a user logged in, for example.
To make this happen, the name of the cookie needs to be configured as well as a directives string which will be sent as part of the cookie.
The RESTCONF server can be configured to reply with particular HTTP headers in the HTTP response. For example, to support Cross-Origin Resource Sharing (CORS, https://www.w3.org/TR/cors/) there is a need to add a couple of headers to the HTTP Response.
We add the extra configuration parameter in ncs.conf.
A number of HTTP headers have been deemed so important for security reasons that they are included in the RESTCONF reply by default, with sensible default values. The values can be changed by configuration in the ncs.conf file. Note that configuring an empty value effectively excludes that particular header from the RESTCONF reply. The headers and their default values are:
xFrameOptions: DENY
The default value indicates that the page cannot be displayed in a frame/iframe/embed/object regardless of the site attempting to do so.
xContentTypeOptions: nosniff
The default value indicates that the MIME types advertised in the Content-Type headers must be followed and not changed. In particular, requests for CSS or JavaScript are blocked if a proper MIME type is not used.
xXssProtection: 1; mode=block
This header is a feature of Internet Explorer, Chrome and Safari that stops pages from loading when they detect reflected cross-site scripting (XSS) attacks. It enables XSS filtering and tells the browser to prevent rendering of the page if an attack is detected.
strictTransportSecurity: max-age=31536000; includeSubDomains
The default value tells browsers that the RESTCONF server should only be accessed using HTTPS, instead of using HTTP. It sets the time that the browser should remember this and states that this rule applies to all of the server's subdomains as well.
contentSecurityPolicy: default-src 'self'; block-all-mixed-content; base-uri 'self'; frame-ancestors 'none';
The default value means that resources such as fonts, scripts, connections, images, and styles will only load from the same origin as the protected resource, that all mixed content is blocked, and that frame ancestors such as iframes and applets are prohibited.
Swagger is a documentation language used to describe RESTful APIs. The resulting specifications are used to both document APIs as well as generating clients in a variety of languages. For more information about the Swagger specification itself and the ecosystem of tools available for it, see swagger.io.
The RESTCONF API in NSO provides an HTTP-based interface for accessing data. The YANG modules loaded into the system define the schema for the data structures that can be manipulated using the RESTCONF protocol. The yanger tool provides options to generate Swagger specifications from YANG files; it currently supports generating specifications according to OpenAPI/Swagger 2.0 using JSON encoding. The tool supports validation of JSON bodies in body parameters and response bodies; XML content validation is not supported.
YANG and Swagger are two different languages serving slightly different purposes. YANG is a data modeling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols such as NETCONF and RESTCONF. Swagger is an API definition language that documents API resource structure as well as HTTP body content validation for applicable HTTP request methods. Translation from YANG to Swagger is not perfect, in the sense that certain constructs and features in YANG cannot be captured completely in Swagger. The translation is designed so that the resulting Swagger definitions are more restrictive than what is expressed in the YANG definitions. This means that there are cases where a client can do more in the RESTCONF API than what the Swagger definition expresses. There is also a set of well-known resources defined in RESTCONF RFC 8040 that are not part of the generated Swagger specification, notably resources related to event streams.
The yanger tool is a YANG parser and validator that provides options to convert YANG modules to a multitude of formats including Swagger. You use the -f swagger option to generate a Swagger definition from one or more YANG files. The following command generates a Swagger file named example.json from the example.yang YANG file:
Generating Swagger from more than one YANG module at a time is not supported. It is, however, possible to augment the module by supplying additional modules. The following command generates a Swagger document from base.yang, which is augmented by base-ext-1.yang and base-ext-2.yang:
Only supplying augmenting modules is not supported.
Use the --help option to the yanger command to see all available options:
The complete list of options related to Swagger generation is:
Using the example-jukebox.yang from the RESTCONF RFC 8040, the following example generates a comprehensive Swagger definition using a variety of Swagger-related options:
container bs {
presence "";
tailf:callpoint bcp;
list b {
key name;
max-elements 64;
leaf name {
type string;
}
container opt {
presence "";
leaf ii {
type int32;
}
}
leaf foo {
type empty;
}
}
}

Socket socket = new Socket("localhost", Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("admin",
InetAddress.getByName("localhost"),
"maapi",
new String[] {"admin"},
MaapiUserSessionFlag.PROTO_TCP);

int th = maapi.startTrans(Conf.DB_RUNNING,
Conf.MODE_READ_WRITE);

public ConfValue getElem(int tid,
String fmt,
Object... arguments)

ConfValue val = maapi.getElem(th,
"/hosts/host{%x}/interfaces{%x}/ip",
new ConfBuf("host1"),
new ConfBuf("eth0"));

ConfIPv4 ipv4addr = (ConfIPv4)val;

maapi.setElem(th,
new ConfUInt16(1500),
"/hosts/host{%x}/interfaces{%x}/ip/mtu",
new ConfBuf("host1"),
new ConfBuf("eth0"));

maapi.applyTrans(th);

int th = maapi.startTrans(Conf.DB_RUNNING, Conf.MODE_READ_WRITE);
try {
maapi.lock(Conf.DB_RUNNING);
/// make modifications to th
maapi.setElem(th, .....);
maapi.applyTrans(th);
maapi.finishTrans(th);
} catch(Exception e) {
maapi.finishTrans(th);
} finally {
maapi.unLock(Conf.DB_RUNNING);
}

Socket socket = new Socket("localhost", Conf.NCS_PORT);
Cdb cdb = new Cdb("MyCdbSock", socket);
CdbSession session = cdb.startSession(CdbDBType.CDB_RUNNING);
/*
* Retrieve the number of children in the list and
* loop over these children
*/
for(int i = 0; i < session.numInstances("/servers/server"); i++) {
ConfBuf name =
(ConfBuf) session.getElem("/servers/server[%d]/hostname", i);
ConfIPv4 ip =
(ConfIPv4) session.getElem("/servers/server[%d]/ip", i);
}

CdbSubscription sub = cdb.newSubscription();
int subid = sub.subscribe(1, new servers(), "/servers/server/");
// tell CDB we are ready for notifications
sub.subscribeDone();
// now do the blocking read
while (true) {
int[] points = sub.read();
// now do something here like diffIterate
.....
}

container servers {
list server {
key name;
leaf name { type string;}
leaf ip { type inet:ip-address; }
leaf port { type inet:port-number; }
}
}

Example keypaths into this model:

/servers/server/port
/servers
/servers/server{www}/ip
/servers/server/ip

CdbSession sess =
cdb.startSession(CdbDBType.CDB_OPERATIONAL,
EnumSet.of(CdbLockType.LOCK_REQUEST));

public class MyTransCb {
@TransCallback(callType=TransCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException {
return;
}

public static class DataCb {
@DataCallback(callPoint="foo", callType=DataCBType.GET_ELEM)
public ConfValue getElem(DpTrans trans, ConfObject[] kp)
throws DpCallbackException {
.....

module work {
namespace "http://example.com/work";
prefix w;
import ietf-yang-types {
prefix yang;
}
import tailf-common {
prefix tailf;
}
description "This model is used as a simple example model
illustrating how to have NCS configuration data
that is stored outside of NCS - i.e not in CDB";
revision 2010-04-26 {
description "Initial revision.";
}
container work {
tailf:callpoint workPoint;
list item {
key key;
leaf key {
type int32;
}
leaf title {
type string;
}
leaf responsible {
type string;
}
leaf comment {
type string;
}
}
}
}

@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.ITERATOR)
public Iterator<Object> iterator(DpTrans trans,
ConfObject[] keyPath)
throws DpCallbackException {
return MyDb.iterator();
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_NEXT)
public ConfKey getKey(DpTrans trans, ConfObject[] keyPath,
Object obj)
throws DpCallbackException {
Item i = (Item) obj;
return new ConfKey( new ConfObject[] { new ConfInt32(i.key) });
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_ELEM)
public ConfValue getElem(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
ConfInt32 kv = (ConfInt32) ((ConfKey) keyPath[1]).elementAt(0);
Item i = MyDb.findItem( kv.intValue() );
if (i == null) return null; // not found
// switch on xml elem tag
ConfTag leaf = (ConfTag) keyPath[0];
switch (leaf.getTagHash()) {
case work._key:
return new ConfInt32(i.key);
case work._title:
return new ConfBuf(i.title);
case work._responsible:
return new ConfBuf(i.responsible);
case work._comment:
return new ConfBuf(i.comment);
default:
throw new DpCallbackException("xml tag not handled");
}
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.SET_ELEM)
public int setElem(DpTrans trans, ConfObject[] keyPath,
ConfValue newval)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.CREATE)
public int create(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.REMOVE)
public int remove(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
return Conf.REPLY_ACCUMULATE;
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.NUM_INSTANCES)
public int numInstances(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
return MyDb.numItems();
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_OBJECT)
public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath)
throws DpCallbackException {
ConfInt32 kv = (ConfInt32) ((ConfKey) keyPath[0]).elementAt(0);
Item i = MyDb.findItem( kv.intValue() );
if (i == null) return null; // not found
return getObject(trans, keyPath, i);
}
@DataCallback(callPoint=work.callpoint_workPoint,
callType=DataCBType.GET_NEXT_OBJECT)
public ConfValue[] getObject(DpTrans trans, ConfObject[] keyPath,
Object obj)
throws DpCallbackException {
Item i = (Item) obj;
return new ConfValue[] {
new ConfInt32(i.key),
new ConfBuf(i.title),
new ConfBuf(i.responsible),
new ConfBuf(i.comment)
};
}

@TransCallback(callType=TransCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException {
return;
}
@TransCallback(callType=TransCBType.TRANS_LOCK)
public void transLock(DpTrans trans) throws DpCallbackException {
MyDb.lock();
}
@TransCallback(callType=TransCBType.TRANS_UNLOCK)
public void transUnlock(DpTrans trans) throws DpCallbackException {
MyDb.unlock();
}
@TransCallback(callType=TransCBType.PREPARE)
public void prepare(DpTrans trans) throws DpCallbackException {
Item i;
ConfInt32 kv;
for (Iterator<DpAccumulate> it = trans.accumulated();
it.hasNext(); ) {
DpAccumulate ack= it.next();
// check op
switch (ack.getOperation()) {
case DpAccumulate.SET_ELEM:
kv = (ConfInt32) ((ConfKey) ack.getKP()[1]).elementAt(0);
if ((i = MyDb.findItem( kv.intValue())) == null)
break;
// check leaf tag
ConfTag leaf = (ConfTag) ack.getKP()[0];
switch (leaf.getTagHash()) {
case work._title:
i.title = ack.getValue().toString();
break;
case work._responsible:
i.responsible = ack.getValue().toString();
break;
case work._comment:
i.comment = ack.getValue().toString();
break;
}
break;
case DpAccumulate.CREATE:
kv = (ConfInt32) ((ConfKey) ack.getKP()[0]).elementAt(0);
MyDb.newItem(new Item(kv.intValue()));
break;
case DpAccumulate.REMOVE:
kv = (ConfInt32) ((ConfKey) ack.getKP()[0]).elementAt(0);
MyDb.removeItem(kv.intValue());
break;
}
}
try {
MyDb.save("running.prep");
} catch (Exception e) {
throw
new DpCallbackException("failed to save file: running.prep",
e);
}
}
@TransCallback(callType=TransCBType.ABORT)
public void abort(DpTrans trans) throws DpCallbackException {
MyDb.restore("running.DB");
MyDb.unlink("running.prep");
}
@TransCallback(callType=TransCBType.COMMIT)
public void commit(DpTrans trans) throws DpCallbackException {
try {
MyDb.rename("running.prep","running.DB");
} catch (DpCallbackException e) {
throw new DpCallbackException("commit failed");
}
}
@TransCallback(callType=TransCBType.FINISH)
public void finish(DpTrans trans) throws DpCallbackException {
;
}
}

uses ncs:service-data;
ncs:servicepoint vlanspnt;

uses ncs:service-data;
ncs:servicepoint vlanspnt;
tailf:action self-test {
tailf:info "Perform self-test of the service";
tailf:actionpoint vlanselftest;
output {
leaf success {
type boolean;
}
leaf message {
type string;
description
"Free format message.";
}
}
}

/**
* Init method for selftest action
*/
@ActionCallback(callPoint="l3vpn-self-test",
callType=ActionCBType.INIT)
public void init(DpActionTrans trans) throws DpCallbackException {
}
/**
* Selftest action implementation for service
*/
@ActionCallback(callPoint="l3vpn-self-test", callType=ActionCBType.ACTION)
public ConfXMLParam[] selftest(DpActionTrans trans, ConfTag name,
ConfObject[] kp, ConfXMLParam[] params)
throws DpCallbackException {
try {
// Refer to the service yang model prefix
String nsPrefix = "l3vpn";
// Get the service instance key
String str = ((ConfKey)kp[0]).toString();
return new ConfXMLParam[] {
new ConfXMLParamValue(nsPrefix, "success", new ConfBool(true)),
new ConfXMLParamValue(nsPrefix, "message", new ConfBuf(str))};
} catch (Exception e) {
throw new DpCallbackException("self-test failed", e);
}
}
}
public class SimpleValidator implements DpTransValidateCallback{
...
@TransValidateCallback(callType=TransValidateCBType.INIT)
public void init(DpTrans trans) throws DpCallbackException{
try {
th = trans.thandle;
maapi.attach(th, new MyNamespace().hash(), trans.uinfo.usid);
..
} catch(Exception e) {
throw new DpCallbackException("failed to attach via maapi: "+
e.getMessage());
}
}
}
module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
}

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
// This will be the base container "/"
NavuContainer base = new NavuContainer(context);
// This will be the ncs root container "/ncs"
NavuContainer root = base.container(new Ncs().hash());
.....
// This method finishes the started read transaction and
// clears the context from this transaction.
context.finishClearTrans();

submodule tailf-ncs-devices {
...
container devices {
.....
list device {
key name;
leaf name {
type string;
}
....
}
}
.......
}
}

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
NavuContainer dev = ncs.container("devices").
list("device").
elem(key);
NavuListEntry devEntry = (NavuListEntry)dev;
.....
context.finishClearTrans();

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
NavuList listOfDevs = ncs.container("devices").
list("device");
for (NavuContainer dev: listOfDevs.elements()) {
.....
}
.....
context.finishClearTrans();

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
for (NavuNode node: ncs.container("devices").select("dev.*/.*")) {
NavuContainer dev = (NavuContainer)node;
.....
}
.....
context.finishClearTrans();

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
for (NavuNode node: ncs.container("devices").xPathSelect("device/*")) {
NavuContainer devs = (NavuContainer)node;
.....
}
.....
context.finishClearTrans();

module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
container ncs {
.....
list service {
key object-id;
leaf object-id {
type string;
}
....
leaf reference {
type string;
}
....
}
}
.......
}
}

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
for (NavuNode node: ncs.select("sm/ser.*/.*")) {
NavuContainer rfs = (NavuContainer)node;
if (rfs.leaf(Ncs._description_).value()==null) {
/*
* Setting dummy value.
*/
rfs.leaf(Ncs._description_).set(new ConfBuf("Dummy value"));
}
}
.....
context.finishClearTrans();

module interfaces {
namespace "http://router.com/interfaces";
prefix i;
.....
list interface {
key name;
max-elements 64;
tailf:action ping-test {
description "ping a machine ";
tailf:exec "/tmp/mpls-ping-test.sh" {
tailf:args "-c $(context) -p $(path)";
}
input {
leaf ttl {
type int8;
}
}
output {
container rcon {
leaf result {
type string;
}
leaf ip {
type inet:ipv4-address;
}
leaf ival {
type int8;
}
}
}
}
.....
}
.....
}

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
/*
* Execute ping on all devices with the interface module.
*/
for (NavuNode node: ncs.container(Ncs._devices_).
select("device/.*/config/interface/.*")) {
NavuContainer intf = (NavuContainer)node;
NavuAction ping = intf.action(interfaces.i_ping_test_);
/*
* Execute action.
*/
ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
new ConfXMLParamValue(new interfaces().hash(),
interfaces._ttl,
new ConfInt64(64))});
//or we could execute it with XML-String
result = ping.call("<if:ttl>64</if:ttl>");
/*
* Output the result of the action.
*/
System.out.println("result_ip: "+
((ConfXMLParamValue)result[1]).getValue().toString());
System.out.println("result_ival:" +
((ConfXMLParamValue)result[2]).getValue().toString());
}
.....
context.finishClearTrans();

.....
NavuContext context = new NavuContext(maapi);
context.startRunningTrans(Conf.MODE_READ);
NavuContainer base = new NavuContainer(context);
NavuContainer ncs = base.container(new Ncs().hash());
/*
* Execute ping on all devices with the interface module.
*/
for (NavuNode node: ncs.container(Ncs._devices_).
xPathSelect("device/config/interface")) {
NavuContainer intf = (NavuContainer)node;
NavuAction ping = intf.action(interfaces.i_ping_test_);
/*
* Execute action.
*/
ConfXMLParamResult[] result = ping.call(new ConfXMLParam[] {
new ConfXMLParamValue(new interfaces().hash(),
interfaces._ttl,
new ConfInt64(64))});
//or we could execute it with XML-String
result = ping.call("<if:ttl>64</if:ttl>");
/*
* Output the result of the action.
*/
System.out.println("result_ip: "+
((ConfXMLParamValue)result[1]).getValue().toString());
System.out.println("result_ival:" +
((ConfXMLParamValue)result[2]).getValue().toString());
}
.....
context.finishClearTrans();

// Set up a CDB socket
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Cdb cdb = new Cdb("my-alarm-source-socket", socket);
// Get and start alarm source - this must only be done once per JVM
AlarmSourceCentral source =
AlarmSourceCentral.getAlarmSource(10000, cdb);
source.start();

AlarmSource mySource = new AlarmSource();
try {
mySource.startListening();
// Get an alarm.
Alarm alarm = mySource.takeAlarm();
while (alarm != null){
System.out.println(alarm);
for (Attribute attr: alarm.getCustomAttributes()){
System.out.println(attr);
}
alarm = mySource.takeAlarm();
}
} catch (Exception e) {
e.printStackTrace();
} finally {
mySource.stopListening();
}

//
// Maapi socket used to write alarms directly.
//
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("system", InetAddress.getByName(host),
"system", new String[] {},
MaapiUserSessionFlag.PROTO_TCP);
AlarmSink sink = new AlarmSink(maapi);

AlarmSink sink = new AlarmSink();

//
// You will need a Maapi socket to write your alarms.
//
Socket socket = new Socket("127.0.0.1",Conf.NCS_PORT);
Maapi maapi = new Maapi(socket);
maapi.startUserSession("system", InetAddress.getByName(host),
"system", new String[] {},
MaapiUserSessionFlag.PROTO_TCP);
AlarmSinkCentral sinkCentral = AlarmSinkCentral.getAlarmSink(1000, maapi);
sinkCentral.start();

ArrayList<AlarmId> idList = new ArrayList<AlarmId>();
ConfIdentityRef alarmType =
new ConfIdentityRef(NcsAlarms.hash,
NcsAlarms._ncs_dev_manager_alarm);
ManagedObject managedObject1 =
new ManagedObject("/ncs:devices/device{device0}/config/root1");
ManagedObject managedObject2 =
new ManagedObject("/ncs:devices/device{device0}/config/root2");
idList.add(new AlarmId(new ManagedDevice("device0"),
alarmType,
managedObject1));
idList.add(new AlarmId(new ManagedDevice("device0"),
alarmType,
managedObject2));
ManagedObject managedObject3 =
new ManagedObject("/ncs:devices/device{device0}/config/root3");
Alarm myAlarm =
new Alarm(new ManagedDevice("device0"),
managedObject3,
alarmType,
PerceivedSeverity.WARNING,
false,
"This is a warning",
null,
idList,
null,
ConfDatetime.getConfDatetime(),
new AlarmAttribute(myAlarm.hash,
myAlarm._custom_alarm_attribute_,
new ConfBuf("An alarm attribute")),
new AlarmAttribute(myAlarm.hash,
myAlarm._custom_status_change_,
new ConfBuf("A status change")));
sink.submitAlarm(myAlarm);

Socket sock = new Socket("localhost", Conf.NCS_PORT);
EnumSet<NotificationType> notifSet = EnumSet.of(NotificationType.NOTIF_COMMIT_SIMPLE,
NotificationType.NOTIF_AUDIT);
Notif notif = new Notif(sock, notifSet);
while (true) {
Notification n = notif.read();
if (n instanceof CommitNotification) {
// handle NOTIF_COMMIT_SIMPLE case
.....
} else if (n instanceof AuditNotification) {
// handle NOTIF_AUDIT case
.....
}
}
....
Socket s0 = new Socket("host1", Conf.NCS_PORT);
Socket s1 = new Socket("host2", Conf.NCS_PORT);
Socket s2 = new Socket("host3", Conf.NCS_PORT);
Ha ha0 = new Ha(s0, "clus0");
Ha ha1 = new Ha(s1, "clus0");
Ha ha2 = new Ha(s2, "clus0");
ConfHaNode primary =
new ConfHaNode(new ConfBuf("node0"),
new ConfIPv4(InetAddress.getByName("localhost")));
ha0.bePrimary(primary.nodeid);
ha1.beSecondary(new ConfBuf("node1"), primary, true);
ha2.beSecondary(new ConfBuf("node2"), primary, true);
HaStatus status0 = ha0.status();
HaStatus status1 = ha1.status();
HaStatus status2 = ha2.status();
....

ConfPath keyPath = new ConfPath(new ConfObject[] {
new ConfTag("ncs","devices"),
new ConfTag("ncs","device"),
new ConfKey(new ConfObject[] {
new ConfBuf("d1")}),
new ConfTag("iosxr","interface"),
new ConfTag("iosxr","Loopback"),
new ConfKey(new ConfObject[] {
new ConfBuf("lo0")})
});

// either this way
ConfPath key1 = new ConfPath("/ncs:devices/device{d1}"+
"/iosxr:interface/Loopback{lo0}");
// or this way
ConfPath key2 = new ConfPath("/ncs:devices/device{%s}"+
"/iosxr:interface/Loopback{%s}",
new ConfBuf("d1"),
new ConfBuf("lo0"));

<servers>
<server>
<name>www</name>
</server>
</servers>

ConfXMLParam[] tree = new ConfXMLParam[] {
new ConfXMLParamStart(ns.hash(),ns._servers),
new ConfXMLParamStart(ns.hash(),ns._server),
new ConfXMLParamValue(ns.hash(),ns._name,new ConfBuf("www")),
new ConfXMLParamStop(ns.hash(),ns._server),
new ConfXMLParamStop(ns.hash(),ns._servers)};

ncsc --java-disable-prefix --java-package \
com.example.app.namespaces \
--emit-java \
java/src/com/example/app/namespaces/foo.java \
foo.fxs

Socket s = new Socket("localhost", Conf.NCS_PORT);
Maapi maapi = new Maapi(s);
maapi.loadSchemas();
ArrayList<ConfNamespace> nsList = maapi.getAutoNsList();

ConfPath key1 = new ConfPath("/ncs:devices/device{d1}/iosxr:interface");

Socket s = new Socket("localhost", Conf.NCS_PORT);
Maapi maapi = new Maapi(s);
int th = maapi.startTrans(Conf.DB_CANDIDATE,
Conf.MODE_READ_WRITE);
// Because we will use keypaths without prefixes
maapi.setNamespace(th, new smp().uri());
ConfValue val = maapi.getElem(th, "/devices/device{d1}/address");

<restconf>
<enabled>true</enabled>
</restconf>
<webui>
<transport>
<tcp>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>8080</port>
</tcp>
</transport>
</webui>

<restconf>
<enabled>true</enabled>
<transport>
<tcp>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>8090</port>
</tcp>
</transport>
</restconf>
<webui>
<enabled>false</enabled>
<transport>
<tcp>
<enabled>true</enabled>
<ip>0.0.0.0</ip>
<port>8080</port>
</tcp>
</transport>
</webui>

# Note that the command is wrapped in several lines in order to fit.
#
# The switch '-i' will include any HTTP reply headers in the output
# and the '-s' will suppress some superfluous output.
#
# The '-u' switch specifies the User:Password for login authentication.
#
# The '-H' switch will add an HTTP header to the request; in this case
# an 'Accept' header is added, requesting the preferred reply format.
#
# Finally, the complete URL to the wanted resource is specified,
# in this case the top of the configuration tree.
#
curl -is -u admin:admin \
-H "Accept: application/yang-data+xml" \
http://localhost:8080/restconf/data

GET /restconf/data
Accept: application/yang-data+xml
# Any reply with relevant headers will be displayed here!
HTTP/1.1 200 OK

GET /restconf
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<restconf xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
<data/>
<operations/>
<yang-library-version>2019-01-04</yang-library-version>
</restconf>

GET /restconf/data?depth=1
Accept: application/yang-data+xml
<data xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf">
<yang-library xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"/>
<modules-state xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"/>
<dhcp xmlns="http://yang-central.org/ns/example/dhcp"/>
<nacm xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-acm"/>
<netconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-netconf-monitoring"/>
<restconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"/>
<aaa xmlns="http://tail-f.com/ns/aaa/1.1"/>
<confd-state xmlns="http://tail-f.com/yang/confd-monitoring"/>
<last-logins xmlns="http://tail-f.com/yang/last-login"/>
</data>

> yanger -f tree examples.confd/restconf/basic/dhcp.yang
module: dhcp
+--rw dhcp
+--rw max-lease-time? uint32
+--rw default-lease-time? uint32
+--rw subnet* [net]
| +--rw net inet:ip-prefix
| +--rw range!
| | +--rw dynamic-bootp? empty
| | +--rw low inet:ip-address
| | +--rw high inet:ip-address
| +--rw dhcp-options
| | +--rw router* inet:host
| | +--rw domain-name? inet:domain-name
| +--rw max-lease-time? uint32

GET /restconf/data/dhcp:dhcp/subnet
HTTP/1.1 204 No Content

POST /restconf/data/dhcp:dhcp
Content-Type: application/yang-data+xml
<subnet xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
<range>
<dynamic-bootp/>
<low>10.254.239.10</low>
<high>10.254.239.20</high>
</range>
<dhcp-options>
<router>rtr-239-0-1.example.org</router>
<router>rtr-239-0-2.example.org</router>
</dhcp-options>
<max-lease-time>1200</max-lease-time>
</subnet>
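List entries are addressed by key in the request URI, and a key such as 10.254.239.0/27 contains a slash, which must be percent-encoded as %2F when it appears in the URI. A minimal sketch of that encoding using Python's standard library (purely illustrative; nothing here is NSO-specific):

```python
from urllib.parse import quote, unquote

# RESTCONF list keys are percent-encoded in the request URI.
# safe="" ensures '/' inside the key is encoded as %2F.
key = "10.254.239.0/27"
encoded = quote(key, safe="")
uri = "/restconf/data/dhcp:dhcp/subnet=" + encoded

print(uri)               # /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
print(unquote(encoded))  # 10.254.239.0/27
```

The same encoded form appears in the Location headers returned by the server.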
# If the resource is created, the server might respond as follows:
HTTP/1.1 201 Created
Location: http://localhost:8080/restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27

PATCH /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
<subnet>
<max-lease-time>3333</max-lease-time>
</subnet>
# If our modification is successful, the server might respond as follows:
HTTP/1.1 204 No Content

PUT /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
Content-Type: application/yang-data+xml
<subnet xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
<!-- ...config left out here... -->
</subnet>
# On success, the server will respond as follows:
HTTP/1.1 204 No Content

DELETE /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
HTTP/1.1 204 No Content

<restconf>
<enabled>true</enabled>
<root-resource>my_own_restconf_root</root-resource>
</restconf>

The client might send the following:
GET /.well-known/host-meta
Accept: application/xrd+xml
The server might respond as follows:
HTTP/1.1 200 OK
<XRD xmlns='http://docs.oasis-open.org/ns/xri/xrd-1.0'>
<Link rel='restconf' href='/restconf'/>
</XRD>

GET /restconf/data/ietf-restconf-monitoring:restconf-state
Host: example.com
Accept: application/yang-data+xml
<restconf-state xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"
xmlns:rcmon="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
<capabilities>
<capability>
urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit
</capability>
<capability>urn:ietf:params:restconf:capability:depth:1.0</capability>
<capability>urn:ietf:params:restconf:capability:fields:1.0</capability>
<capability>urn:ietf:params:restconf:capability:with-defaults:1.0</capability>
<capability>urn:ietf:params:restconf:capability:filter:1.0</capability>
<capability>urn:ietf:params:restconf:capability:replay:1.0</capability>
<capability>http://tail-f.com/ns/restconf/collection/1.0</capability>
<capability>http://tail-f.com/ns/restconf/query-api/1.0</capability>
<capability>http://tail-f.com/ns/restconf/partial-response/1.0</capability>
<capability>http://tail-f.com/ns/restconf/unhide/1.0</capability>
<capability>urn:ietf:params:xml:ns:yang:traceparent:1.0</capability>
<capability>urn:ietf:params:xml:ns:yang:tracestate:1.0</capability>
</capabilities>
</restconf-state>

urn:ietf:params:restconf:capability:defaults:1.0

urn:ietf:params:restconf:capability:defaults:1.0?basic-mode=explicit

GET /restconf/data/dhcp:dhcp?fields=subnet/range(low;high)
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<dhcp xmlns="http://yang-central.org/ns/example/dhcp" \
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<subnet>
<range>
<low>10.254.239.10</low>
<high>10.254.239.20</high>
</range>
</subnet>
<subnet>
<range>
<low>10.254.244.10</low>
<high>10.254.244.20</high>
</range>
</subnet>
</dhcp>

GET /restconf/data/dhcp:dhcp/subnet
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<subnet xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
<range>
<dynamic-bootp/>
<low>10.254.239.10</low>
<high>10.254.239.20</high>
</range>
<dhcp-options>
<router>rtr-239-0-1.example.org</router>
<router>rtr-239-0-2.example.org</router>
</dhcp-options>
<max-lease-time>1200</max-lease-time>
</subnet>

GET /restconf/data/dhcp:dhcp/subnet?exclude=range(low;high)
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<subnet xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
<range>
<dynamic-bootp/>
</range>
<dhcp-options>
<router>rtr-239-0-1.example.org</router>
<router>rtr-239-0-2.example.org</router>
</dhcp-options>
<max-lease-time>1200</max-lease-time>
</subnet>

# Note: we have to split the POST line in order to fit the page
POST /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options?\
insert=first
Content-Type: application/yang-data+xml
<router>one.acme.org</router>
# If the resource is created, the server might respond as follows:
HTTP/1.1 201 Created
Location: /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/\
router=one.acme.org

GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<dhcp-options xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<router>one.acme.org</router>
<router>rtr-239-0-1.example.org</router>
<router>rtr-239-0-2.example.org</router>
</dhcp-options>

# Note: we have to split the POST line in order to fit the page
POST /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options?\
insert=after&\
point=/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/router=one.acme.org
Content-Type: application/yang-data+xml
<router>two.acme.org</router>
# If the resource is created, the server might respond as follows:
HTTP/1.1 201 Created
Location: /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options/\
router=two.acme.org

GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/dhcp-options
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<dhcp-options xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<router>one.acme.org</router>
<router>two.acme.org</router>
<router>rtr-239-0-1.example.org</router>
<router>rtr-239-0-2.example.org</router>
</dhcp-options>

POST /restconf/data/dhcp:dhcp?rollback-id=true
Content-Type: application/yang-data+xml
<subnet xmlns="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
</subnet>
HTTP/1.1 201 Created
Location: http://localhost:8080/restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27
<result xmlns="http://tail-f.com/ns/tailf-restconf">
<rollback>
<id>10002</id>
</rollback>
</result>

POST /restconf/data/tailf-rollback:rollback-files/apply-rollback-file
Content-Type: application/yang-data+xml
<input xmlns="http://tail-f.com/ns/rollback">
<fixed-number>10002</fixed-number>
</input>
HTTP/1.1 204 No Content

<notifications>
<eventStreams>
<stream>
<name>interface</name>
<description>Example notifications</description>
<replaySupport>true</replaySupport>
<builtinReplayStore>
<dir>./</dir>
<maxSize>S1M</maxSize>
<maxFiles>5</maxFiles>
</builtinReplayStore>
</stream>
</eventStreams>
</notifications>

GET /restconf/data/ietf-restconf-monitoring:restconf-state/streams
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<streams xmlns="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring"
xmlns:rcmon="urn:ietf:params:xml:ns:yang:ietf-restconf-monitoring">
...other streams info removed here for brevity...
<stream>
<name>interface</name>
<description>Example notifications</description>
<replay-support>true</replay-support>
<replay-log-creation-time>
2020-05-04T13:45:31.033817+00:00
</replay-log-creation-time>
<access>
<encoding>xml</encoding>
<location>https://localhost:8888/restconf/streams/interface/xml</location>
</access>
<access>
<encoding>json</encoding>
<location>https://localhost:8888/restconf/streams/interface/json</location>
</access>
</stream>
</streams>

GET /restconf/streams/interface/xml
Accept: text/event-stream
...NOTE: we will be waiting here until a notification is generated...
HTTP/1.1 200 OK
Content-Type: text/event-stream
data: <notification xmlns='urn:ietf:params:xml:ns:netconf:notification:1.0'>
data: <eventTime>2020-05-04T13:48:02.291816+00:00</eventTime>
data: <link-up xmlns='http://tail-f.com/ns/test/notif'>
data: <if-index>2</if-index>
data: <link-property>
data: <newly-added/>
data: <flags>42</flags>
data: <extensions>
data: <name>1</name>
data: <value>3</value>
data: </extensions>
data: <extensions>
data: <name>2</name>
data: <value>4668</value>
data: </extensions>
data: </link-property>
data: </link-up>
data: </notification>
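The data: lines above make up one Server-Sent Events frame: a client strips the data: prefix from each line and joins the remainder to recover the notification XML. A minimal sketch in Python (the frame is shortened from the transcript above; nothing here is NSO-specific):

```python
import xml.etree.ElementTree as ET

# One SSE frame as delivered on the stream (shortened).
frame = """data: <notification xmlns='urn:ietf:params:xml:ns:netconf:notification:1.0'>
data: <eventTime>2020-05-04T13:48:02.291816+00:00</eventTime>
data: <link-up xmlns='http://tail-f.com/ns/test/notif'>
data: <if-index>2</if-index>
data: </link-up>
data: </notification>"""

# Strip the "data: " prefix from every line and join to get the XML payload.
xml_text = "\n".join(line[len("data: "):] for line in frame.splitlines())

root = ET.fromstring(xml_text)
ns = "{urn:ietf:params:xml:ns:netconf:notification:1.0}"
print(root.find(ns + "eventTime").text)  # 2020-05-04T13:48:02.291816+00:00
```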
...NOTE: we will still be waiting here for more notifications to come...

GET /restconf/streams/interface/xml?start-time=2007-07-28T15%3A23%3A36Z
Accept: text/event-stream
HTTP/1.1 200 OK
Content-Type: text/event-stream
data: ...any existing notification since given date will be delivered here...
...NOTE: when all notifications are delivered, we will be waiting here for more...

error: notification stream NETCONF temporarily unavailable

GET /restconf/data/ietf-yang-library:modules-state
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<modules-state xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<module-set-id>f4709e88d3250bd84f2378185c2833c2</module-set-id>
<module>
<name>dhcp</name>
<revision>2019-02-14</revision>
<schema>http://localhost:8080/restconf/tailf/modules/dhcp/2019-02-14</schema>
<namespace>http://yang-central.org/ns/example/dhcp</namespace>
<conformance-type>implement</conformance-type>
</module>
...rest of the output removed here...
</modules-state>

GET /restconf/tailf/modules/dhcp/2019-02-14
HTTP/1.1 200 OK
module dhcp {
namespace "http://yang-central.org/ns/example/dhcp";
prefix dhcp;
import ietf-yang-types {
...the rest of the YANG module removed here...

PATCH /restconf/data/dhcp:dhcp
Accept: application/yang-data+xml
Content-Type: application/yang-patch+xml
<yang-patch xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
<patch-id>add-subnets</patch-id>
<edit>
<edit-id>add-subnet-239</edit-id>
<operation>create</operation>
<target>/subnet=10.254.239.0%2F27</target>
<value>
<subnet xmlns="http://yang-central.org/ns/example/dhcp" \
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
...content removed here for brevity...
<max-lease-time>1200</max-lease-time>
</subnet>
</value>
</edit>
<edit>
<edit-id>add-subnet-244</edit-id>
<operation>create</operation>
<target>/subnet=10.254.244.0%2F27</target>
<value>
<subnet xmlns="http://yang-central.org/ns/example/dhcp" \
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.244.0/27</net>
...content removed here for brevity...
<max-lease-time>1200</max-lease-time>
</subnet>
</value>
</edit>
</yang-patch>
# If the YANG Patch request was successful,
# the server might respond as follows:
HTTP/1.1 200 OK
<yang-patch-status xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
<patch-id>add-subnets</patch-id>
<ok/>
</yang-patch-status>

PATCH /restconf/data/dhcp:dhcp
Accept: application/yang-data+xml
Content-Type: application/yang-patch+xml
<yang-patch xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
<patch-id>modify-and-delete</patch-id>
<edit>
<edit-id>modify-max-lease-time-239</edit-id>
<operation>merge</operation>
<target>/dhcp:subnet=10.254.239.0%2F27</target>
<value>
<subnet xmlns="http://yang-central.org/ns/example/dhcp" \
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
<net>10.254.239.0/27</net>
<max-lease-time>1234</max-lease-time>
</subnet>
</value>
</edit>
<edit>
<edit-id>delete-max-lease-time-244</edit-id>
<operation>delete</operation>
<target>/dhcp:subnet=10.254.244.0%2F27/max-lease-time</target>
</edit>
</yang-patch>
# If the YANG Patch request was successful,
# the server might respond as follows:
HTTP/1.1 200 OK
<yang-patch-status xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-patch">
<patch-id>modify-and-delete</patch-id>
<ok/>
</yang-patch-status>

GET /restconf/data/dhcp:dhcp/subnet=10.254.239.0%2F27/max-lease-time
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<max-lease-time xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
1234
</max-lease-time>

GET /restconf/data/dhcp:dhcp/subnet=10.254.244.0%2F27/max-lease-time?\
with-defaults=report-all-tagged
Accept: application/yang-data+xml
HTTP/1.1 200 OK
<max-lease-time wd:default="true"
xmlns:wd="urn:ietf:params:restconf:capability:defaults:1.0"
xmlns="http://yang-central.org/ns/example/dhcp"
xmlns:dhcp="http://yang-central.org/ns/example/dhcp">
7200
</max-lease-time>

HEAD /restconf/ds/ietf-datastores:operational
HTTP/1.1 200 OK

GET /restconf/ds/ietf-datastores:operational/datastore
Accept: application/yang-data+json
HTTP/1.1 200 OK
{
"ietf-yang-library:datastore": [
{
"name": "ietf-datastores:running",
"schema": "common"
},
{
"name": "ietf-datastores:intended",
"schema": "common"
},
{
"name": "ietf-datastores:operational",
"schema": "common"
}
]
}

GET /restconf/ds/ietf-datastores:operational/\
ietf-yang-library:yang-library/datastore
Accept: application/vnd.yang.collection+xml
<collection xmlns="http://tail-f.com/ns/restconf/collection/1.0">
<datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
ds:running
</name>
<schema>common</schema>
</datastore>
<datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
ds:intended
</name>
<schema>common</schema>
</datastore>
<datastore xmlns="urn:ietf:params:xml:ns:yang:ietf-yang-library"
xmlns:yanglib="urn:ietf:params:xml:ns:yang:ietf-yang-library">
<name xmlns:ds="urn:ietf:params:xml:ns:yang:ietf-datastores">
ds:operational
</name>
<schema>common</schema>
</datastore>
</collection>

container x {
list host {
key number;
leaf number {
type int32;
}
leaf enabled {
type boolean;
}
leaf name {
type string;
}
leaf address {
type inet:ip-address;
}
}
}

POST /restconf/tailf/query
Content-Type: application/yang-data+xml
<start-query xmlns="http://tail-f.com/ns/tailf-rest-query">
<foreach>
/x/host[enabled = 'true']
</foreach>
<select>
<label>Host name</label>
<expression>name</expression>
<result-type>string</result-type>
</select>
<select>
<expression>address</expression>
<result-type>string</result-type>
</select>
<sort-by>name</sort-by>
<limit>100</limit>
<offset>1</offset>
<timeout>600</timeout>
</start-query>

POST /restconf/tailf/query
Content-Type: application/yang-data+json
{
"start-query": {
"foreach": "/x/host[enabled = 'true']",
"select": [
{
"label": "Host name",
"expression": "name",
"result-type": ["string"]
},
{
"expression": "address",
"result-type": ["string"]
}
],
"sort-by": ["name"],
"limit": 100,
"offset": 1,
"timeout": 600
}
}

<start-query xmlns="http://tail-f.com/ns/tailf-rest-query">
<foreach>
/x/host[enabled = 'true']
</foreach>
<select>
<label>Host name</label>
<expression>name</expression>
<result-type>string</result-type>
</select>
<select>
<expression>address</expression>
<result-type>string</result-type>
</select>
<sort-by>name</sort-by>
<offset>1</offset>
<timeout>600</timeout>
</start-query>

<start-query-result>
<query-handle>12345</query-handle>
</start-query-result>

<fetch-query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
<query-handle>12345</query-handle>
</fetch-query-result>

<query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
<result>
<select>
<label>Host name</label>
<value>One</value>
</select>
<select>
<value>10.0.0.1</value>
</select>
</result>
<result>
<select>
<label>Host name</label>
<value>Three</value>
</select>
<select>
<value>10.0.0.3</value>
</select>
</result>
</query-result>

<query-result xmlns="http://tail-f.com/ns/tailf-rest-query">
</query-result>

<stop-query xmlns="http://tail-f.com/ns/tailf-rest-query">
<query-handle>12345</query-handle>
</stop-query>

<reset-query xmlns="http://tail-f.com/ns/tailf-rest-query">
<query-handle>12345</query-handle>
<offset>42</offset>
</reset-query>

GET /restconf/data/example-jukebox:jukebox/library/artist?offset=1&limit=2
Accept: application/yang-data+json
...in return we will get the second and third elements of the list...

<groupname>[;<password>]

unhide=extra,debug;secret

traceparent = <version>-<trace-id>-<parent-id>-<flags>

traceparent: 00-100456789abcde10123456789abcde10-001006789abcdef0-01

tracestate: key1=value1,key2=value2

<x xmlns="urn:x" xmlns:x="urn:x">
<id tags=" important ethernet " annotation="hello world">42</id>
<person annotation="This is a person">
<name>Bill</name>
<person annotation="This is another person">grandma</person>
</person>
</x>

{
"x": {
"foo": 42,
"@foo": {"tailf_netconf:tags": ["tags","for","foo"],
"tailf_netconf:annotation": "annotation for foo"},
"y": {
"@": {"tailf_netconf:annotation": "Annotation for parent y"},
"y": 1,
"@y": {"tailf_netconf:annotation": "Annotation for sibling y"}
}
}
}

{
"x": {
"foo": 42,
"@foo": {"tags": ["tags","for","foo"], "annotation": "annotation for foo"},
"y": {
"@@y": {"annotation": "Annotation for parent y"},
"y": 1,
"@y": {"annotation": "Annotation for sibling y"}
}
}
}

...
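These metadata objects can be consumed with any JSON parser: an "@foo" member carries the attributes of the sibling leaf "foo". A minimal sketch, assuming the first (module-prefixed) encoding; the annotation() helper is illustrative, not part of any NSO API:

```python
import json

doc = json.loads("""
{
  "x": {
    "foo": 42,
    "@foo": {"tailf_netconf:annotation": "annotation for foo"}
  }
}
""")

def annotation(container, leaf):
    # Attributes for leaf <leaf> live in the sibling member "@" + leaf.
    meta = container.get("@" + leaf, {})
    return meta.get("tailf_netconf:annotation")

print(annotation(doc["x"], "foo"))  # annotation for foo
```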
<aaa>
...
<restconf>
<!-- Set the TTL to 10 seconds! -->
<auth-cache-ttl>PT10S</auth-cache-ttl>
<!-- Use both "User" and "ClientIP" as key into the AuthCache -->
<enable-auth-cache-client-ip>true</enable-auth-cache-client-ip>
</restconf>
...
</aaa>
...

...
<webui>
...
<use-forwarded-client-ip>
<proxy-headers>X-Forwarded-For</proxy-headers>
<proxy-headers>X-REAL-IP</proxy-headers>
<allowed-proxy-ip-prefix>10.12.34.0/24</allowed-proxy-ip-prefix>
<allowed-proxy-ip-prefix>2001:db8:1234::/48</allowed-proxy-ip-prefix>
</use-forwarded-client-ip>
...
</webui>
...

...
<restconf>
...
<token-response>
<x-auth-token>true</x-auth-token>
</token-response>
...
</restconf>
...

...
<restconf>
...
<token-cookie>
<name>X-JWT-ACCESS-TOKEN</name>
<directives>path=/; Expires=Tue, 19 Jan 2038 03:14:07 GMT;</directives>
</token-cookie>
...
</restconf>
...

<restconf>
<enabled>true</enabled>
<custom-headers>
<header>
<name>Access-Control-Allow-Origin</name>
<value>*</value>
</header>
</custom-headers>
</restconf>

yanger -t expand -f swagger example.yang -o example.json

yanger -t expand -f swagger base.yang base-ext-1.yang base-ext-2.yang -o base.json

yanger --help

Swagger output specific options:
--swagger-host Add host to the Swagger output
--swagger-basepath Add basePath to the Swagger output
--swagger-version Add version url to the Swagger output.
NOTE: this will override any revision
in the yang file
--swagger-tag-mode Set tag mode to group resources. Valid
values are: methods, resources, all
[default: all]
--swagger-terms Add termsOfService to the Swagger
output
--swagger-contact-name Add contact name to the Swagger output
--swagger-contact-url Add contact url to the Swagger output
--swagger-contact-email Add contact email to the Swagger output
--swagger-license-name Add license name to the Swagger output
--swagger-license-url Add license url to the Swagger output
--swagger-top-resource Generate only swagger resources from
this top resource. Valid values are:
root, data, operations, all [default:
all]
--swagger-omit-query-params Omit RESTCONF query parameters
[default: false]
--swagger-omit-body-params Omit RESTCONF body parameters
[default: false]
--swagger-omit-form-params Omit RESTCONF form parameters
[default: false]
--swagger-omit-header-params Omit RESTCONF header parameters
[default: false]
--swagger-omit-path-params Omit RESTCONF path parameters
[default: false]
--swagger-omit-standard-statuses Omit standard HTTP response statuses.
NOTE: at least one successful HTTP
status will still be included
[default: false]
--swagger-methods HTTP methods to include. Example:
--swagger-methods "get, post"
[default: "get, post, put, patch,
delete"]
--swagger-path-filter Filter out paths matching a path filter.
Example: --swagger-path-filter
"/data/example-jukebox/jukebox"

yanger -p . -t expand -f swagger example-jukebox.yang \
--swagger-host 127.0.0.1:8080 \
--swagger-basepath /restconf \
--swagger-version "My swagger version 1.0.0.1" \
--swagger-tag-mode all \
--swagger-terms "http://my-terms.example.com" \
--swagger-contact-name "my contact name" \
--swagger-contact-url "http://my-contact-url.example.com" \
--swagger-contact-email "[email protected]" \
--swagger-license-name "my license name" \
--swagger-license-url "http://my-license-url.example.com" \
--swagger-top-resource all \
--swagger-omit-query-params false \
--swagger-omit-body-params false \
--swagger-omit-form-params false \
--swagger-omit-header-params false \
--swagger-omit-path-params false \
--swagger-omit-standard-statuses false \
--swagger-methods "post, get, patch, put, delete, head, options"

Format String
"Aborting candidate commit, request from user, reverting configuration."
Format String
"ConfD restarted while having a ongoing candidate commit timer, reverting configuration."
Format String
"Candidate commit session terminated, reverting configuration."
Format String
"Candidate commit timer expired, reverting configuration."
Format String
"Fatal error for accept() - ~s"
Format String
"Out of file descriptors for accept() - ~s limit reached"
Format String
"login failed via ~s from ~s with ~s: ~s"
Format String
"logged in via ~s from ~s with ~s using ~s authentication"
Format String
"logged out <~s> user"
Format String
"Bad configuration: ~s:~s: ~s"
Format String
"The dependency node '~s' for node '~s' in module '~s' does not exist"
Format String
"~s"
Format String
"~s"
Format String
"confd_aaa_bridge died - ~s"
Format String
"Candidate commit rollback done"
Format String
"Failed to rollback candidate commit due to: ~s"
Format String
"Bad format found in candidate db file ~s; resetting candidate"
Format String
"Corrupt candidate db file ~s; resetting candidate"
Format String
"CDB boot error: ~s"
Format String
"CDB client (~s) timed out, waiting for ~s"
Format String
"CDB: lost config, deleting DB"
Format String
"CDB: lost DB, deleting old config"
Format String
"fatal error in CDB: ~s"
Format String
"CDB load: processing file: ~s"
Format String
"CDB: Operational DB re-initialized"
Format String
"CDB: Upgrade failed: ~s"
Format String
"CGI: '~s' script with method ~s"
Format String
"libcrypto does not support ~s"
Format String
"CLI aborted"
Format String
"CLI done"
Format String
"CLI '~s'"
Format String
"CLI denied '~s'"
Format String
"commit ~s"
Format String
"Resetting commit queue due do inconsistent or corrupt data."
Format String
"ConfD configuration change: ~s"
Format String
"Configuration transaction limit of type '~s' reached, rejected new transaction request"
Format String
"Consulting daemon configuration file ~s"
Format String
"Daemon ~s died"
Format String
"Daemon ~s timed out"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"~s"
Format String
"The namespace ~s is defined in both module ~s and ~s."
Format String
"The prefix ~s is defined in both ~s and ~s."
Format String
"Changing size of error log (~s) to ~s (was ~s)"
Format String
"Event notification subscriber with bitmask ~s timed out, waiting for ~s"
Format String
"~s"
Format String
"When-expression evaluation error: circular dependency in ~s"
Format String
"external challenge authentication failed via ~s from ~s with ~s: ~s"
Format String
"external challenge sent to ~s from ~s with ~s"
Format String
"external challenge authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"
"External auth program (user=~s) ret bad output: ~s"
"external authentication failed via ~s from ~s with ~s: ~s"
"external authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"
"external token authentication failed via ~s from ~s with ~s: ~s"
"external token authentication succeeded via ~s from ~s with ~s, member of groups: ~s~s"
"~s"
"~s: ~s"
"Loaded file ~s"
"Failed to load file ~s: ~s"
"Loading file ~s"
"Fxs mismatch, secondary is not allowed"
"assigned to groups: ~s"
"Not assigned to any groups - all access is denied"
"Incompatible HA version (~s, expected ~s), secondary is not allowed"
"Nodeid ~s already exists"
"Failed to connect to primary: ~s"
"Secondary ~s killed due to no ticks"
"Internal error: ~s"
"JIT ~s"
"JSON-RPC traffic log: ~s"
"Stopping session due to absolute timeout: ~s"
"Stopping session due to idle timeout: ~s"
"JSON-RPC: '~s' with JSON params ~s"
"JSON-RPC warning: ~s"
"Failed to load kicker schema"
"Got connect from library with insufficient keypath depth/keys support (~s/~s, needs ~s/~s)"
"Got library connect from wrong version (~s, expected ~s)"
"Got library connect with failed access check: ~s"
"~s to listen for ~s on ~s:~s"
"local authentication failed via ~s from ~s with ~s: ~s"
"local authentication failed via ~s from ~s with ~s: ~s"
"local authentication failed via ~s from ~s with ~s: ~s"
"local authentication succeeded via ~s from ~s with ~s, member of groups: ~s"
"Changing destination of ~s log to ~s"
"Daemon logging terminating, reason: ~s"
"Daemon logging started"
"Writing ~s log to ~s"
"~s ~s log"
"~s"
"Logged out from maapi ctx=~s (~s)"
"maapi server failed to write to a socket. Op: ~s Ecode: ~s Error: ~s~s"
"AES256CFB128 keys were not found in confd.conf"
"AESCFB128 keys were not found in confd.conf"
"DES3CBC keys were not found in confd.conf"
"The namespace ~s (referenced by ~s) could not be found in the loadPath."
"The namespace ~s could not be found in the loadPath."
"Failed to setup the shared memory schema"
"Got bad NETCONF TCP header"
"~s"
"~s: ~s"
"logged in from the CLI with aaa disabled"
"no registration found for callpoint ~s of type=~s"
"The identity ~s in namespace ~s refers to a non-existing base identity ~s in namespace ~s"
"No such namespace ~s, used by ~s"
"No such simpleType '~s' in ~s, used by ~s"
"~s"
"Failed to process namespaces: ~s"
"Failed to process namespace ~s: ~s"
"Logging subsystem, opening log file '~s' for ~s"
"PAM authentication failed via ~s from ~s with ~s: phase ~s, ~s"
"pam authentication succeeded via ~s from ~s with ~s"
"ConfD phase0 started"
"ConfD phase1 started"
"Reading state file failed: ~s: ~s (~s)"
"Reloading daemon configuration."
"Logging subsystem, reopening log files"
"rest authentication failed from ~s"
"rest authentication succeeded from ~s , member of groups: ~s"
"RESTCONF: request with ~s: ~s"
"RESTCONF: response with ~s: ~s duration ~s us"
"REST: request with ~s: ~s"
"REST: response with ~s: ~s duration ~s ms"
"Error while creating rollback file: ~s: ~s"
"Failed to delete rollback file ~s: ~s"
"Failed to rename rollback file ~s to ~s: ~s"
"Failed to repair rollback files."
"Found half created rollback0 file - removing and creating new"
"Found half created rollback0 file - repairing"
"created new session via ~s from ~s with ~s"
"Session limit of type '~s' reached, rejected new session request"
"could not create new session via ~s from ~s with ~s due to session limits"
"terminated session (reason: ~s)"
"Skipping file ~s: ~s"
"SNMP authentication failed: ~s"
"Can't load MIB file: ~s"
"Loading MIB: ~s"
"SNMP gateway: Non-trap received from ~s"
"Read state file failed: ~s: ~s"
"Can't start SNMP. CDB is not enabled"
"SNMP gateway: Can't forward trap from ~s; ~s"
"SNMP gateway: Can't forward trap with OID ~s from ~s; There is no notification with this OID in the loaded models."
"SNMP gateway: Can't open trap listening port ~s: ~s"
"SNMP gateway: Not forwarding trap from ~s; the sender is not recognized"
"SNMP gateway: V1 trap received from ~s"
"Write state file failed: ~s: ~s"
"No SSH host keys available"
"ssh protocol subsys - ~s"
"ConfD started vsn: ~s"
"Starting ConfD vsn: ~s"
"ConfD stopping (~s)"
"Token mismatch, secondary is not allowed"
"Upgrade aborted"
"Upgrade committed"
"Upgrade init started"
"Upgrade init succeeded"
"Upgrade performed"
"WebUI action '~s'"
"WebUI cmd '~s'"
"WebUI commit ~s"
"WebUI access log: ~s"
"Writing state file failed: ~s: ~s (~s)"
"XPath evaluation error: ~s for ~s"
"XPath evaluation error: '~s' resulted in ~s for ~s"
"Committed data towards device ~s which is out of sync"
"NCS device-out-of-sync Device '~s' Info '~s'"
"The NCS Java VM ~s"
"Starting the NCS Java VM"
"package authentication using ~s program ret bad output: ~s"
"package authentication using ~s failed via ~s from ~s with ~s: ~s"
"package authentication using ~s succeeded via ~s from ~s with ~s, member of groups: ~s~s"
"Failed to load NCS package: ~s; required package ~s of version ~s is not present (found ~s)"
"Failed to load NCS package: ~s; requires NCS version ~s"
"package authentication challenge sent to ~s from ~s with ~s"
"package authentication challenge using ~s failed via ~s from ~s with ~s: ~s"
"Failed to load NCS package: ~s; circular dependency found"
"Copying NCS package from ~s to ~s"
"Failed to load duplicate NCS package ~s: (~s)"
"Failed to load NCS package: ~s; syntax error in package file"
"NCS package upgrade failed with reason '~s'"
"NCS package upgrade has been aborted due to warnings:\n~s"
"The NCS Python VM ~s"
"Starting the NCS Python VM ~s"
"Starting upgrade of NCS Python package ~s"
"NCS service-out-of-sync Service '~s' Info '~s'"
"NCS Device '~s' failed to set platform data Info '~s'"
"Smart Licensing Entitlement Notification: ~s"
"Smart Licensing evaluation time remaining: ~s"
"The NCS Smart Licensing Java VM ~s"
"Smart Licensing Global Notification: ~s"
"Starting the NCS Smart Licensing Java VM"
"Failed to locate snmp_init.xml in loadpath ~s"
"Starting the NCS SNMP manager component"
"The NCS SNMP manager component has been stopped"
"NCS upgrade failed with reason '~s'"
"Provided bad password"
"Logged in over ~s using externalauth, member of groups: ~s~s"
"failed to login using externalauth: ~s"
"no such local user"
"pam phase ~s failed to login through PAM: ~s"
"failed to login through PAM: ~s"
"logged in over ssh from ~s with authmeth:~s"
"Logged out ssh <~s> user"
"Failed to login over ssh: ~s"
"logged in through Web UI from ~s"
"logged out from Web UI"







filter (GET, HEAD): Boolean notification filter for event stream resources.
insert (POST, PUT): Insertion mode for ordered-by-user data resources.
point (POST, PUT): Insertion point for ordered-by-user data resources.
start-time (GET, HEAD): Replay buffer start time for event stream resources.
stop-time (GET, HEAD): Replay buffer stop time for event stream resources.
with-defaults (GET, HEAD): Control the retrieval of default values.
with-origin (GET): Include the "origin" metadata annotations, as detailed in the NMDA.
no-overwrite (POST, PUT, PATCH, DELETE): NSO will check that the data to be modified has not changed on the device compared to NSO's view of the data. Cannot be used together with no-out-of-sync-check.
no-revision-drop (POST, PUT, PATCH, DELETE): NSO will not run its data model revision algorithm, which requires all participating managed devices to have all parts of the data models for all data contained in this transaction. Thus, this flag forces NSO to never silently drop any data set operations towards a device.
no-deploy (POST, PUT, PATCH, DELETE): Commit without invoking the service create method, i.e., write the service instance data without activating the service(s). The service(s) can later be re-deployed to write the changes of the service(s) to the network.
reconcile (POST, PUT, PATCH, DELETE): Reconcile the service data. All data that existed before the service was created will now be owned by the service. When the service is removed, that data will also be removed. In technical terms, the reference count will be decreased by one for everything that existed prior to the service. If manually configured data exists below in the configuration tree, that data is kept unless the option discard-non-service-config is used.
use-lsa (POST, PUT, PATCH, DELETE): Force handling of the LSA nodes as such. This flag tells NSO to propagate applicable commit flags and actions to the LSA nodes without applying them on the upper NSO node itself. The commit flags affected are dry-run, no-networking, no-out-of-sync-check, no-overwrite, and no-revision-drop.
no-lsa (POST, PUT, PATCH, DELETE): Do not handle any of the LSA nodes as such. These nodes will be handled as any other device.
commit-queue (POST, PUT, PATCH, DELETE): Commit the transaction data to the commit queue. Possible values are async, sync, and bypass. If the async value is set, the operation returns successfully if the transaction data has been successfully placed in the queue. The sync value will cause the operation to not return until the transaction data has been sent to all devices, or a timeout occurs. The bypass value means that if /devices/global-settings/commit-queue/enabled-by-default is true, the data in this transaction will bypass the commit queue; the data will be written directly to the devices.
commit-queue-atomic (POST, PUT, PATCH, DELETE): Sets the atomic behavior of the resulting queue item. Possible values are true and false. If set to false, the devices contained in the resulting queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.
commit-queue-block-others (POST, PUT, PATCH, DELETE): The resulting queue item will block subsequent queue items that use any of the devices in this queue item from being queued.
commit-queue-lock (POST, PUT, PATCH, DELETE): Place a lock on the resulting queue item. The queue item will not be processed until it has been unlocked; see the actions unlock and lock in /devices/commit-queue/queue-item. No following queue items using the same devices will be allowed to execute as long as the lock is in place.
commit-queue-tag (POST, PUT, PATCH, DELETE): The value is a user-defined opaque tag. The tag is present in all notifications and events sent referencing the specific queue item.
commit-queue-timeout (POST, PUT, PATCH, DELETE): Specifies a maximum number of seconds to wait for completion. Possible values are infinity or a positive integer. If the timer expires, the transaction is kept in the commit queue, and the operation returns successfully. If the timeout is not set, the operation waits for completion indefinitely.
commit-queue-error-option (POST, PUT, PATCH, DELETE): The error option to use. Depending on the selected error option, NSO will store the reverse of the original transaction to be able to undo the transaction changes and get back to the previous state. This data is stored in the /devices/commit-queue/completed tree, from where it can be viewed and invoked with the rollback action. When invoked, the data will be removed. Possible values are continue-on-error, rollback-on-error, and stop-on-error. The continue-on-error value means that the commit queue will continue on errors; no rollback data will be created. The rollback-on-error value means that the commit queue item will roll back on errors; the commit queue will place a lock with block-others on the devices and services in the failed queue item, the rollback action will then automatically be invoked when the queue item has finished its execution, and the lock will be removed as part of the rollback. The stop-on-error value means that the commit queue will place a lock with block-others on the devices and services in the failed queue item; the lock must then either be manually released when the error is fixed, or the rollback action under /devices/commit-queue/completed be invoked. Read about error recovery for a more detailed explanation.
trace-id (POST, PUT, PATCH, DELETE): Use the provided trace ID as part of the log messages emitted while processing. If no trace ID is given, NSO will generate and assign a trace ID to the processing. The trace-id query parameter can also be used with RPCs and actions to relay a trace-id from northbound requests. The trace-id will be included in the X-Cisco-NSO-Trace-ID header in the response.
NOTE: trace-id as a query parameter is deprecated from NSO version 6.3. Capabilities within Trace Context will provide support for trace-id, see Trace Context.
limit (GET): Used by the client to specify a limited set of list entries to retrieve. The value of the limit parameter is either an integer greater than or equal to 1, or the string unbounded. The string unbounded is the default value. See Partial Responses for an example.
offset (GET): Used by the client to specify the number of list elements to skip before returning the requested set of list entries. The value of the offset parameter is an integer greater than or equal to 0. The default value is 0. See Partial Responses for an example.
rollback-comment (POST, PUT, PATCH, DELETE): Used to specify a comment to be attached to the rollback file that will be created as a result of the operation. This assumes that rollback file handling is enabled.
rollback-label (POST, PUT, PATCH, DELETE): Used to specify a label to be attached to the rollback file that will be created as a result of the operation. This assumes that rollback file handling is enabled.
rollback-id (POST, PUT, PATCH, DELETE): Return the rollback ID in the response if a rollback file was created during this operation. This requires rollbacks to be enabled in NSO to take effect.
with-service-meta-data (GET): Include FASTMAP attributes such as backpointers and reference counters in the reply. These are typically internal to NSO and thus not shown by default.
Create CLI NEDs.
The CLI NED is a model-driven way to script CLI interactions with Cisco-like devices. Some Java code is necessary to handle the corner cases that a human-to-machine interface presents. The CLI NED southbound of NSO shares a Cisco-style CLI engine with the northbound NSO CLI interface; the CLI engine can thus run in both directions, producing CLI southbound and interpreting CLI data coming from southbound, while presenting a CLI interface northbound. It is helpful to keep this in mind when learning and working with CLI NEDs.
A sequence of Cisco CLI commands can be turned into the equivalent manipulation of the internal XML tree that represents the configuration inside NSO.
A YANG model, annotated appropriately, will produce a Cisco CLI. The user can enter Cisco commands, and NSO will parse them using the annotated YANG model and change the internal XML tree accordingly. This is, in effect, a model-driven CLI parser and interpreter.
The reverse operation is also possible: given two different XML trees, each representing a configuration state, NSO can generate the list of Cisco commands that takes the configuration from one state to the other. In the netsim/ConfD case, the tree represents the configuration of a single device, i.e., the device using ConfD as its management framework; in the NSO case, it represents the entire network configuration.
NSO uses this technology to generate CLI commands southbound when we manage Cisco-like devices.
It will become clear later in the examples how the CLI engine runs in forward and reverse mode. The key point, though, is that the Cisco CLI NED Java programmer doesn't have to understand and parse the structure of the CLI; this is entirely done by the NSO CLI engine.
To implement a CLI NED, the following components are required:
A YANG data model that describes the CLI. An important development tool here is netsim (ConfD), the Tail-f on-device management toolkit. For NSO to manage a CLI device, it needs a YANG file with exactly the right annotations to produce precisely the managed device's CLI. A few examples exist in the NSO NED evaluation collection with annotated YANG models that render different Cisco CLI variants.
See, for example, $NCS_DIR/packages/neds/dell-ftos and $NCS_DIR/packages/neds/cisco-nx. Look for tailf:cli-* extensions in the NED src/yang directory YANG models.
Thus, to create annotated YANG files for a device with a Cisco-like CLI, the work procedure is to run netsim (ConfD) and write a YANG file that renders the correct CLI.
Furthermore, this YANG model must declare an identity with ned:cli-ned-id.
Java CLI NED code must implement the CliNed interface.
NedConnectionBase.java. See $NCS_DIR/java/jar/ncs-src.jar. Use jar xf ncs-src.jar to extract the JAR file. Look for src/com/tailf/ned/NedConnectionBase.java.
NedCliBase.java. See $NCS_DIR/java/jar/ncs-src.jar. Use jar xf ncs-src.jar to extract the JAR file. Look for src/com/tailf/ned/NedCliBase.java.
Thus, the Java NED class has the following responsibilities.
It must implement the identification callbacks, i.e., modules(), type(), and identity().
It must implement the connection-related callback methods newConnection(), isConnection(), and reconnect().
NSO will invoke the newConnection() callback when it needs to establish a connection to a managed device.
The idea is to write a YANG data model and feed that into the NSO CLI engine such that the resulting CLI mimics that of the device to manage. This is fairly straightforward once you have understood how the different constructs in YANG are mapped into CLI commands. The data model usually needs to be annotated with specific Tail-f CLI extensions to tailor exactly how the CLI is rendered.
This section will describe how the general principles work and give a number of cookbook-style examples of how certain CLI constructs are modeled.
The CLI NED is primarily designed to be used with devices that have a CLI similar to the CLIs on a typical Cisco box (e.g., IOS, XR, NX-OS). However, if the CLI follows the same principles but with a slightly different syntax, it may still be possible to use a CLI NED if some of the differences are handled by the Java part of the CLI NED. This section describes how this can be done.
Let's start with the basic data model for CLI mapping. YANG consists of three major elements: containers, lists, and leaves. For example:
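The example model itself is missing from this text; the following is a minimal YANG sketch consistent with the surrounding description (an interface container, an ethernet list, and two hypothetical leaves, mtu and description):

```yang
container interface {
  list ethernet {
    key id;
    leaf id {
      type uint16;
    }
    // hypothetical leaves, rendered as commands inside the submode
    leaf mtu {
      type uint16;
    }
    leaf description {
      type string;
    }
  }
}
```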
The basic rendering of the constructs is as follows. Containers are rendered as command prefixes which can be stacked at any depth. Leaves are rendered as commands that take one parameter. Lists are rendered as submodes, where the key of the list is rendered as a submode parameter. The example above would result in the command:
for entering the interface ethernet submode. interface is a container and is rendered as a command prefix; ethernet is a list and is rendered as a submode. Two additional commands would be available in the submode:
A typical configuration with two interfaces could look like this:
Note that it makes sense to add help texts to the data model since these texts will be visible in the NSO and help the user see the mapping between the J-style CLI in the NSO and the CLI on the target device. The data model above may look like the following with proper help texts.
To save space, the help texts are generally not included in the examples below, but they should be present in a production data model.
The basic rendering suffices in many cases but not in all situations. What follows is a list of ways to annotate the data model in order to make the CLI engine mimic a device.
Sometimes you want a number of instances (a list) but do not want a submode. For example:
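The example is missing here; a sketch of the usual pattern, using the tailf:cli-suppress-mode extension (the list and leaf names are illustrative, and the tailf extensions module is assumed to be imported with prefix tailf):

```yang
list route {
  // render each instance as a plain command instead of a submode
  tailf:cli-suppress-mode;
  key "prefix";
  leaf prefix {
    type string;
  }
  leaf next-hop {
    type string;
  }
}
```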
The above would result in the following commands:
A typical show-config output may look like:
Sometimes you want a submode to be created without having a list instance, for example, a submode called aaa where all AAA configuration is located.
This is done by using the tailf:cli-add-mode extension. For example:
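A minimal sketch of the pattern (the leaf inside the container is illustrative):

```yang
container aaa {
  // entering "aaa" opens a submode even though this is not a list
  tailf:cli-add-mode;
  leaf authentication {
    type string;
  }
}
```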
This would result in the command aaa for entering the container. However, sometimes the CLI requires that a certain set of elements are also set when entering the submode, but without being a list. For example, the police rules inside a policy map in the Cisco 7200.
Here, the leaves with the annotation tailf:cli-hide-in-submode are not present as commands once the submode has been entered, but are instead only available as options to the police command when entering the police submode.
Often a command is defined as taking multiple parameters in a typical Cisco CLI. This is achieved in the data model by using the annotations tailf:cli-sequence-commands, tailf:cli-compact-syntax, tailf:cli-drop-node-name, and possibly tailf:cli-reset-siblings.
For example:
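The example model is missing from this text; the following is a hypothetical reconstruction, assuming a udld-timeout style command with fast/slow variants whose numerical value lands in either a milli or a secs leaf:

```yang
container udld-timeout {
  tailf:cli-sequence-commands {
    tailf:cli-reset-siblings;
  }
  tailf:cli-compact-syntax;
  leaf timeout-type {
    tailf:cli-drop-node-name;
    type enumeration {
      enum fast;
      enum slow;
    }
  }
  leaf milli {
    // only valid for fast timeouts
    when "../timeout-type = 'fast'";
    tailf:cli-drop-node-name;
    type uint16;
  }
  leaf secs {
    // only valid for slow timeouts
    when "../timeout-type = 'slow'";
    tailf:cli-drop-node-name;
    type uint16;
  }
}
```

Rendered this way, the CLI would accept a single-line command such as udld-timeout fast 100, and because of tailf:cli-reset-siblings, setting one leaf clears any siblings left over from a previous invocation.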
This results in the command:
The tailf:cli-sequence-commands annotation tells the CLI engine to process the leaves in sequence. The tailf:cli-reset-siblings tells the CLI to reset all leaves in the container if one is set. This is necessary in order to ensure that no lingering config remains from a previous invocation of the command where more parameters were configured. The tailf:cli-drop-node-name tells the CLI that the leaf name shouldn't be specified. The tailf:cli-compact-syntax annotation tells the CLI that the leaves should be formatted on one line, i.e. as:
As opposed to without the annotation:
The when constructs are used to control whether the numerical value should go into the milli or the secs leaf.
This command could also be written using a choice construct as:
Sometimes the tailf:cli-incomplete-command annotation is used to ensure that all parameters are configured. The cli-incomplete-command annotation only applies to the C- and I-style CLIs. To ensure that prior leaves in a container are also configured when the configuration is written using the J-style CLI or NETCONF, proper must declarations should be used.
Another example is this, where tailf:cli-optional-in-sequence is used:
The tailf:cli-optional-in-sequence annotation means that the parameters should be processed in sequence, but a parameter can be skipped. However, if a parameter is specified, then only parameters later in the container can follow it.
It is also possible to have some parameters in sequence initially in the container, and then the rest in any order. This is indicated by the tailf:cli-break-sequence-commands annotation. For example:
Where it is possible to write:
As well as:
Sometimes a command for entering a submode has parameters that are not really key values, i.e. not part of the instance identifier, but still need to be given when entering the submode. For example
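The example is missing here; a sketch consistent with the description that follows (the key leaf name and leaf types are illustrative):

```yang
list service-group {
  key name;
  leaf name {
    type string;
  }
  leaf tcpudp {
    // given on the command line when entering the submode,
    // but not available as a command inside it
    tailf:cli-hide-in-submode;
    type enumeration {
      enum tcp;
      enum udp;
    }
  }
  // commands available inside the submode
  leaf backup-server-event-log {
    type empty;
  }
  leaf extended-stats {
    type empty;
  }
}
```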
In this case, tcpudp is a non-key leaf that needs to be specified as a parameter when entering the service-group submode. Once in the submode, the commands backup-server-event-log and extended-stats are present. Leaves with the tailf:cli-hide-in-submode attribute are given after the last key, in the sequence they appear in the list.
It is also possible to allow leaf values to be entered in between key elements. For example:
Here we have a list that is not mapped to a submode. It has two keys, read and remote, and an optional oid that can be specified before the remote key. Finally, after the last key, an optional mask parameter can be specified. The use of the tailf:cli-expose-key-name means that the key names should be part of the command, which they are not by default. The above construct results in the commands:
The tailf:cli-reset-container attribute means that all leaves in the container will be reset if any leaf is given.
Some devices require that a setting be removed before it can be changed, for example, the service-group list above. This is indicated with the tailf:cli-remove-before-change annotation. It can be used both on lists and on leaves. A leaf example:
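A sketch of such a leaf, modeled on the source-ip example referred to in the surrounding text (it also carries the tailf:cli-no-value-on-delete annotation discussed next; the ietf-inet-types module is assumed to be imported with prefix inet):

```yang
leaf source-ip {
  // the old value must be deleted before a new one can be set
  tailf:cli-remove-before-change;
  // omit the old value in the generated "no" command
  tailf:cli-no-value-on-delete;
  type inet:ipv4-address;
}
```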
This means that the diff sent to the device will contain first a no source-ip command, followed by a new source-ip command to set the new value.
The data model also uses the tailf:cli-no-value-on-delete annotation, which means that the leaf value should not be present in the no command. With the annotation, a diff to modify the source IP from 1.1.1.1 to 2.2.2.2 would look like:
And, without the annotation as:
By default, a diff for an ordered-by-user list contains information about where a new item should be inserted. This is typically not supported by the device. Instead, the commands (diff) to send the device needs to remove all items following the new item, and then reinsert the items in the proper order. This behavior is controlled using the tailf:cli-long-obu-diff annotation. For example
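A sketch of an ordered-by-user list carrying this annotation (the list and key names are illustrative):

```yang
list rule {
  ordered-by user;
  // generate remove-and-reinsert command sequences instead of
  // insert-before/insert-after directives
  tailf:cli-long-obu-diff;
  tailf:cli-suppress-mode;
  key "name";
  leaf name {
    type string;
  }
}
```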
Suppose we have the access list:
And we want to change this to:
We would generate the diff with the tailf:cli-long-obu-diff:
Without the annotation, the diff would be:
Often, when a leaf is set to its default value, it is not displayed by the show running-config command, but we still need to set it explicitly. Suppose we have the leaf state. By default, the value is active.
Suppose the device state is block and we want to set it to active, i.e., the default value. The default behavior is to send the following to the device:
This will not work. The correct command sequence should be:
The way to achieve this is to do the following:
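The annotation itself is missing from this text; a minimal sketch, assuming the state leaf described above and the tailf:cli-show-with-default extension:

```yang
leaf state {
  // always include this leaf in the emitted config, even at its default
  tailf:cli-show-with-default;
  type enumeration {
    enum active;
    enum block;
  }
  default active;
}
```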
This way, a value for state will always be generated. This may seem unintuitive, but the reason it works comes from how the diff is calculated. When generating the diff, the target configuration and the desired configuration are compared (per line). The target config will be:
And the desired config will be:
This will be interpreted as a leaf value change and the resulting diff will be to set the new value, i.e. active.
However, without the cli-show-with-default option, the desired config will be an empty line, i.e. no value set. When we compare the two lines we get:
(current config)
(desired config)
This will result in the command to remove the configured leaf, i.e.
This does not work.
What you see in the C-style CLI when you do 'show configuration' is the set of commands needed to go from the running config to the configuration you have in your current session. It usually corresponds to the commands you have just issued in your CLI session, but not always.
The output is actually generated by comparing the two configurations, i.e. the running config and your current uncommitted configuration. It is done by running 'show running-config' on both the running config and your uncommitted config, and then comparing the output line by line. Each line is complemented by some meta information which makes it possible to generate a better diff.
For example, suppose you modify a leaf value, say, set the MTU to 1400 where the previous value was 1500. The two configs will then be:
When we compare these configs, the first lines are the same, so no action is needed, but we remember that we have entered the FastEthernet0/0/1 submode. The second line differs in value (the meta-information associated with the lines has the path and the value). When we analyze the two lines, we determine that a value has been set. The default action when a value has changed is to output the command for setting the new value, i.e., mtu 1400. However, we also need to reposition to the current submode. If this is the first line we are outputting in the submode, we first need to issue the command for entering the submode before issuing the mtu 1400 command.
Similarly, suppose a value has been removed, i.e., mtu used to be set but is no longer present:
As before, the first lines are equivalent, but the second line has a ! in the new config and mtu 1400 in the running config. This is analyzed as a delete, and the following commands are generated:
There are tweaks to this behavior. For example, some machines do not accept a no command that includes the old value, but instead want the command:
We can instruct the CLI diff engine to behave in this way by using the YANG annotation tailf:cli-no-value-on-delete:
It is also possible to tell the CLI engine not to include the element name in the delete operation. For example, the command to set a password might be:
But the command to delete the password is:
The data model for this would be:
It is often necessary to make some minor modifications to the Java part of a CLI NED. There are mainly four functions that need to be modified: connect, show, applyConfig, and enter/exit config mode.
The CLI NED code should do a few things when the connect callback is invoked.
Set up a connection to the device (usually SSH).
If necessary send a secondary password to enter exec mode. Typically a Cisco IOS-like CLI requires the user to give the enable command followed by a password.
Verify that it is the right kind of device and respond to NSO with a list of capabilities. This is usually done by running the show version command, or equivalent, and parsing the output.
Some modifications may be needed in this section if the commands for the above differ from the Cisco IOS style.
NSO will invoke the show() callback multiple times, once for each top-level tag in the data model. Some devices have support for displaying just parts of the configuration; others do not.
For a device that cannot display only parts of a config, the recommended strategy is to wait for a show() invocation with a well-known top tag and send the entire config at that point. For example, if you know that the data model has a top tag called interface, then you can use code like:
From NSO's point of view, it is perfectly OK to send the entire config as a response to one of the requested top tags and to send an empty response otherwise.
Often some filtering is required of the output from the device. For example, perhaps part of the configuration should not be sent to NSO, or some keywords replaced with others. Here are some examples:
Some devices start the output from show running-config with a short header, and some add a footer. Common headers are Current configuration: and a footer may be end or return. In the example below we strip out a header and remove a footer.
Also, you may choose to only model part of a device configuration, in which case you can strip out the parts that you have not modeled. For example, stripping out the SNMP configuration:
Sometimes a device generates non-parsable commands in the output from show running-config. For example, some A10 devices add a keyword cpu-process at the end of the ip route command, i.e.:
However, it does not accept this keyword when a route is configured. The solution is to simply strip the keyword before sending the config to NSO and to not include the keyword in the data model for the device. The code to do this may look like this:
Sometimes a device uses a name other than the standard no command for deletion, as found in a typical Cisco CLI. NSO will only generate no commands when, for example, an element does not exist (i.e., no shutdown for an interface), but the device may need undo instead. This can be dealt with as a simple transformation of the configuration before sending it to NSO. For example:
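The original example code is not reproduced here; the following is a minimal, self-contained sketch of such a transformation (the class and method names are illustrative, not part of the NED API). It rewrites each undo line from the device into the no form before the show() code hands the output to NSO:

```java
// Illustrative helper: rewrite the device's "undo <cmd>" syntax into the
// Cisco-style "no <cmd>" syntax that the NSO data model expects.
public class UndoToNo {
    public static String toNsoSyntax(String config) {
        String[] lines = config.split("\n", -1);
        for (int i = 0; i < lines.length; i++) {
            String trimmed = lines[i].trim();
            if (trimmed.startsWith("undo ")) {
                // preserve the original indentation, replace only the keyword
                int indent = lines[i].indexOf("undo");
                lines[i] = lines[i].substring(0, indent) + "no " + trimmed.substring(5);
            }
        }
        return String.join("\n", lines);
    }

    public static void main(String[] args) {
        System.out.println(toNsoSyntax("interface eth0\n undo shutdown"));
    }
}
```

The same line-by-line approach extends to any other keyword substitution the device requires.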
Another example is the following situation. A device has a configuration for port trunk permit vlan 1-3 and may at the same time have disallowed some VLANs using the command no port trunk permit vlan 4-6. Since we cannot use a no container in the config, we instead add a disallow container, and then rely on the Java code to do some processing, e.g.:
And, in the Java show() code:
A similar transformation needs to take place when the NSO sends a configuration change to the device. A more detailed discussion about apply config modifications follows later but the corresponding code would in this case be:
If the way a device quotes strings differ from the way it can be modeled in NSO, it can be handled in the Java code. For example, one device does not quote encrypted password strings which may contain odd characters like the command character !. Java code to deal with this may look like:
And similarly de-quoting when applying a configuration.
NSO will send the configuration to the device in three different callbacks: prepare(), abort(), and revert(). The Java code should issue these commands to the device but some processing of the commands may be necessary. Also, the ongoing CLI session needs to enter configure mode, issue the commands, and then exit configure mode. Some processing may be needed if the device has different keywords, or different quoting, as described under the "Displaying the configuration of a device" section above.
For example, if a device uses undo in place of no then the code may look like this, where data is the string of commands received from NSO:
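A minimal sketch (class name illustrative):

```java
public class NoToUndo {
    // NSO hands over the commands with no indentation, so a simple
    // line-anchored substitution turns every "no <cmd>" into the
    // device's "undo <cmd>" before the commands are sent.
    public static String rewrite(String data) {
        return data.replaceAll("(?m)^no ", "undo ");
    }
}
```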
This relies on the fact that NSO will not have any indentation in the commands sent to the device (as opposed to the indentation usually present in the output from show running-config).
The typical Cisco CLI has two major modes, operational mode and configure mode. In addition, the configure mode has submodes. For example, interfaces are configured in a submode that is entered by giving the command interface <InterfaceType> <Number>. Exiting a submode, i.e. giving the exit command, leaves you in the parent mode. Submodes can also be embedded in other submodes.
In a typical Cisco CLI, you do not necessarily have to exit a submode to execute a command in a parent mode. In fact, the output of the command show running-config hardly contains any exit commands. Instead, there is an exclamation mark, !, to indicate that a submode is done, but it is only a comment. The config is formatted to rely on the fact that if a command isn't found in the current submode, the CLI engine searches for the command in its parent mode.
Another interesting mapping problem is how to interpret the no command when multiple leaves are given on a command line. Consider the model:
It corresponds to the command syntax foo [a <word> [b <word> [c <word>]]], i.e. the following commands are valid:
Now what does it mean to write no foo a <word> b <word> c <word>? It could mean that only the c leaf should be removed, or that all leaves should be removed, or even that the foo container should be removed.
There is no clear principle here and no one right solution. The annotations are therefore necessary to help the diff engine figure out what to actually send to the device.
The full set of annotations can be found in the tailf_yang_cli_extensions manual page. Not all annotation YANG extensions are applicable in an NSO context, but most are. The most commonly used annotations are (in alphabetical order):
It is important to note that a NED only needs to cover certain aspects of the device. To have NSO manage a device with a Cisco-like CLI you do not have to model the entire device, only the commands intended to be used need to be covered. When the show() callback issues its show running-config [toptag] command and the device replies with data that is fed to NSO, NSO will ignore all command dump output that the loaded YANG models do not cover.
Thus, for whichever Cisco-like device we wish to manage, we must first have YANG models that cover all the aspects of the device we want to use. Once we have a YANG model, we load it into NSO and modify the example CLI NED class to return the NedCapability list of the device.
The NED code gets to see all data from and to the device. If it's impossible or too hard to get the YANG model exactly right for all commands, a last resort is to let the NED code modify the data inline.
The next thing required is a Java class that implements the NED. This is typically not a lot of code, and the existing example NED Java classes are easily extended and modified to fit other needs. The most important point of the Java NED class code is that the code can be oblivious to the CLI commands sent and received.
When NSO sets up a connection to a device, the NED's newConnection() method is invoked, and the NED reports the capabilities of the device as a list of NedCapability objects. This is very much in line with how a NETCONF connect works and how the NETCONF client and server exchange hello messages.
Finally, the NED code must implement a series of data methods. For example, the method void prepare(NedWorker w, String data) get a String object which is the set of Cisco CLI commands it shall send to the device.
In the other direction, when NSO wants to collect data from the device, it will invoke void show(NedWorker w, String toptag) for each tag found at the top of the data model(s) loaded for that device. For example, if the NED gets invoked with show(w, "interface"), its responsibility is to invoke the relevant show configuration command for "interface", i.e. show running-config interface, over the connection to the device, and then dumbly reply with all the data the device replies with. NSO will parse the output data and feed it into its internal XML trees.
NSO can order the showPartial() to collect part of the data if the NED announces the capability http://tail-f.com/ns/ncs-ned/show-partial?path-format=FORMAT, where FORMAT is one of the following:
key-path: support regular instance keypath format.
top-tag: support top tags under the /devices/device/config tree.
cmd-path-full: support Cisco's CLI edit path with instances.
path-modes-only: support Cisco CLI mode path.
Configure the CLI session on the device not to use pagination. This is normally done by setting the screen length to 0 (or infinity, or disabling it). Optionally, it may also adjust the idle time.
In the above example, the keepalive leaf is set to true when the command keepalive is given, and to false when no keepalive is given. The well-known shutdown command, on the other hand, is modeled as a type empty leaf with the tailf:cli-show-no annotation:
...
Two other annotations are often used in combination with tailf:cli-sequence-commands: tailf:cli-reset-all-siblings and tailf:cli-compact-syntax. The first tells the parser that all leaves should be reset when any leaf is entered, i.e. if the user first gives the command:
This would result in the leaves address, mask, as-set, and summary-only being set in the configuration. However, if the user then entered:
The assumed result of this is that summary-only is no longer configured, i.e. all leaves in the container are zeroed out when the command is entered again. The tailf:cli-compact-syntax annotation tells the CLI engine to render all the leaves on a single command line instead of each on a separate line.
The above will be rendered on one line (compact syntax) as:
tailf:cli-diff-dependency "/ios:ip/ios:vrf" tells the engine that if the ip vrf part of the configuration is deleted, then first display any changes to this part. This can be used when the device requires a certain ordering of the commands. If the tailf:cli-trigger-on-all substatement is used, the target will always be displayed before the current node. Normally the order in the YANG file is used, but that may not be sufficient, and it might not even be possible if the nodes are embedded in a container.
The tailf:cli-trigger-on-set annotation tells the engine that the ordering should be taken into account when this leaf is set and some other leaf is deleted. The other leaf should then be deleted before this one is set. Suppose you have this data model:
Then the tailf:cli-diff-dependency "/b[id=current()/../id]" annotation tells the CLI that before a b list instance is deleted, the c instance with the same name needs to be changed.
This annotation, on the other hand, says that before this instance is created any changes to the a instance with the same name needs to be displayed.
Suppose you have the configuration:
Then, if c foo is created and a foo deleted, it should be displayed as:
If you then deleted c foo and created a foo, it should be rendered as:
That is, in the reverse order.
The annotation tailf:cli-sequence-commands tells the CLI that the user has to enter the leaves inside the container in the specified order. Without this annotation, it would not be possible to drop the names of the leaves and still have a deterministic parser. With the annotation, the parser knows that for the command foo inbound 1 2, leaf a should be assigned the value 1 and leaf b the value 2.
Another example:
The above model results in the command htest param a <uint16> b <uint16> for entering the submode. Once the submode has been entered, the command mtu <uint16> is available. Without the tailf:cli-flatten-container annotation it wouldn't be possible to use the tailf:cli-hide-in-submode annotation to attach the leaves to the command for entering the submode.
The tailf:cli-incomplete-command annotation tells the CLI that the command is not complete if the user stops at that point. In other words, the command is incomplete after entering just foo, and also after entering foo a <word>, but not after foo a <word> b <word> or foo a <word> b <word> c <word>.
In many cases, this may be the better choice. Notice how the tailf:cli-suppress-mode annotation is used to prevent the list from being rendered as a submode.
A live example of this from the Cisco-ios data model is:
Now, if the command foo a 3 is executed, it will set the value of leaf a to 3, but will leave leaves b and c as they were before. This is probably not the way the device works. In most cases, it expects the leaves b and c to be unset. The annotation tailf:cli-reset-siblings tells the CLI engine that all siblings covered by the tailf:cli-sequence-commands should be reset.
Another similar case is when you have some leaves covered by the command sequencing, and some not. For example:
The above model will allow the user to enter the b and c leaves in any order, as long as leaf a is entered first. The annotation tailf:cli-reset-siblings will reset the leaves up to the tailf:cli-break-sequence-commands. The tailf:cli-reset-all-siblings tells the CLI engine to reset all siblings, also those outside the command sequencing.
The problem with the above is that when a new interface is created, say a VLAN interface, the shutdown leaf would not be set to anything and you would not send anything to the device. With the cli-show-no definition, you would send no shutdown since the shutdown leaf would not be defined when a new interface VLAN instance is created.
The boolean version can be tweaked to behave in a similar way using the default annotation and tailf:cli-show-with-default, i.e.:
The problem with this is that if you explicitly configure the leaf to false in NSO, you will send no shutdown to the device (which is fine), but if you then read the config from the device it will not display no shutdown since it now has its default setting. This will lead to an out-of-sync situation in NSO. NSO thinks the value should be set to false (which is different from the leaf not being set), whereas the device reports the value as being unset.
The whole situation comes from the fact that NSO and the device treat default values differently. NSO considers a leaf as either being set or not set. If a leaf is set to its default value, it is still considered as set. A leaf must be explicitly deleted for it to become unset. Whereas a typical Cisco device considers a leaf unset if you set it to its default value.
Learn the concepts of NSO device management.
The NSO device manager is the center of NSO. The device manager maintains a flat list of all managed devices. NSO keeps the primary copy of the configuration for each managed device in CDB. Whenever a configuration change is done to the list of device configuration primary copies, the device manager will partition this network configuration change into the corresponding changes for the managed devices. The device manager passes on the required changes to the NEDs (Network Element Drivers). A NED needs to be installed for every type of device OS, like Cisco IOS NED, Cisco XR NED, Juniper JUNOS NED, etc. The NEDs communicate through the native device protocol southbound.
The NEDs fall into the following categories:
NETCONF-capable device: The Device Manager will produce NETCONF edit-config RPC operations for each participating device.
SNMP device: The Device Manager translates the changes made to the configuration into the corresponding SNMP SET PDUs.
Device with Cisco CLI: The device has a CLI with the same structure as Cisco IOS or XR routers. The Device Manager and a CLI NED are used to produce the correct sequence of CLI commands which reflects the changes made to the configuration.
Other devices: For devices that do not fit into any of the above-mentioned categories, a corresponding Generic NED is invoked. Generic NEDs are used for proprietary protocols like REST and for CLI flavors that do not resemble IOS or XR. The Device Manager will inform the Generic NED about the made changes and the NED will translate these to the appropriate operations toward the device.
NSO orchestrates an atomic transaction with the very desirable characteristic that either the transaction as a whole is applied to all participating devices and to the NSO primary copy, or the whole transaction is aborted and all changes are automatically rolled back.
The architecture of the NETCONF protocol is the enabling technology that makes it possible to push out configuration changes to managed devices and then, in the case of errors, roll the changes back. Devices that do not support NETCONF, i.e. devices that do not have transactional capabilities, can also participate; however, depending on the device, error recovery may not be as good as it is for a proper NETCONF-enabled device.
To understand the main idea behind the NSO device manager it is necessary to understand the NSO data model and how NSO incorporates the YANG data models from the different managed devices.
The NEDs publish YANG data models even for non-NETCONF devices. In the case of SNMP, the YANG models are generated from the MIBs. For JunOS devices, the JunOS NED generates a YANG model from the JunOS XML schema. For schema-less devices, like CLI devices, the NED developer writes YANG models corresponding to the CLI structure. The result is that the device manager and the NSO CDB have YANG data models for all devices, independent of the underlying protocol.
Throughout this section, we will use the examples.ncs/service-provider/mpls-vpn example. The example network consists of Cisco ASR 9k and Juniper core routers (P and PE) and Cisco IOS-based CE routers.
The central part of the NSO YANG model, in the file tailf-ncs-devices.yang, has the following structure:
Each managed device is uniquely identified by its name, which is a free-form text string. This is typically the DNS name of the managed device but could equally well be the string format of the IP address of the managed device or anything else. Furthermore, each managed device has a mandatory address/port pair that together with the authgroup leaf provides information to NSO on how to connect and authenticate over SSH/NETCONF to the device. Each device also has a mandatory parameter device-type that specifies which southbound protocol to use for communication with the device.
The following device types are available:
NETCONF
CLI: A corresponding CLI NED is needed to communicate with the device. This requires YANG models with the appropriate annotations for the device CLI.
SNMP: The device speaks SNMP, preferably in read-write mode.
Generic NED: A corresponding Generic NED is needed to communicate with the device. This requires YANG models and Java code.
The NSO CLI command below lists the NED types for the devices in the example network.
The empty container /ncs:devices/device/config is used as a mount point for the YANG models from the different managed devices.
As previously mentioned, NSO needs the following information to manage a device:
The IP/Port of the device and authentication information.
Some or all of the YANG data models for the device.
In the example setup, the address and authentication information are provided in the NSO database (CDB) initialization file. There are many different ways to add new managed devices. All of the NSO northbound interfaces can be used to manipulate the set of managed devices. This will be further described later.
Once NSO has started you can inspect the meta information for the managed devices through the NSO CLI. This is an example session:
Alternatively, this information could be retrieved from the NSO northbound NETCONF interface by running the simple Python-based netconf-console program towards the NSO NETCONF server.
All devices in the above two examples (Show Device Configuration in NSO CLI and Show Device Configuration in NETCONF) have /devices/device/state/admin-state set to unlocked; this will be described later in this section.
To communicate with a managed device, a NED for that device type needs to be loaded by NSO. A NED contains the YANG model for the device and corresponding driver code to talk CLI, REST, SNMP, etc. NEDs are distributed as packages.
The CLI command in the above example (Installed Packages) shows all the loaded packages. NSO loads packages at startup and can reload packages at run-time. By default, the packages reside in the packages directory in the NSO run-time directory.
Once you have access to the network information for a managed device, its IP address and authentication information, as well as the data models of the device, you can actually manage the device from NSO.
You start the ncs daemon in a terminal like:
This is the same as the following invocation, where NSO loads its configuration from an ncs.conf file:
During development, it is sometimes convenient to run ncs in the foreground as:
Once the daemon is running, you can issue the command:
To get more information about options to ncs do:
The ncs --status command produces a lengthy list describing for example which YANG modules are loaded in the system. This is a valuable debug tool.
The same information is also available in the NSO CLI (and thus through all available northbound interfaces, including Maapi for Java programmers).
When the NSO daemon is running and has been initialized with IP/Port and authentication information as well as imported all modules you can start to manage devices through NSO.
NSO provides the ability to synchronize the configuration to or from the device. If you know that the device has the correct configuration you can choose to synchronize from a managed device whereas if you know NSO has the correct device configuration and the device is incorrect, you can choose to synchronize from NSO to the device.
In the normal case, the configuration on the device and the copy of the configuration inside NSO should be identical.
In a cold start situation like in the mpls-vpn example, where NSO is empty and there are network devices to talk to, it makes sense to synchronize from the devices. You can choose to synchronize from one device at a time or from all devices at once. Here is a CLI session to illustrate this.
The command devices sync-from in the example (Synchronize from Devices) is an action defined in the NSO data model. It is important to understand the model-driven nature of NSO. All devices are modeled in YANG, network services like MPLS VPN are also modeled in YANG, and the same is true for NSO itself. Anything that can be performed over the NSO CLI or any northbound interface is defined in the YANG files. The NSO YANG files are located here:
All packages come with YANG files as well. For example, the directory packages/cisco-ios/src/yang/ contains the YANG definition of an IOS device.
The tailf-ncs.yang file is the main part of the NSO YANG data model; it includes all parts of the model from different files.
The actions sync-from and sync-to are modeled in the file tailf-ncs-devices.yang. The sync action(s) are defined as:
Synchronizing from NSO to the device is common when a device has been configured out-of-band. NSO has no means to enforce that devices are not directly reconfigured behind the scenes of NSO; however, once an out-of-band configuration has been performed, NSO can detect the fact. When this happens it may (or may not, depending on the situation at hand) make sense to synchronize from NSO to the device, i.e. undo the rogue reconfigurations.
The command to do that is:
A dry-run option is available for the action sync-to.
This makes it possible to investigate the changes before they are transmitted to the devices.
It is possible to synchronize a part of the configuration (a certain subtree) from the device using the partial-sync-from action located under /devices. While it is primarily intended to be used by service developers, it is also possible to use it directly from the NSO CLI (or any other northbound interface). The example below (Example of Running partial-sync-from Action via CLI) illustrates using this action via the CLI, using a router device from examples.ncs/getting-started/developing-with-ncs/0-router-network.
It is now possible to configure several devices through NSO inside the same network transaction. To illustrate this, start the NSO CLI from a terminal application.
The example above (Configure Devices) illustrates a multi-host transaction. In the same transaction, three hosts were re-configured. Had one of them failed, or been non-operational, the transaction as a whole would have failed.
As seen from the output of the command commit dry-run outformat native, NSO generates the native CLI and NETCONF commands which will be sent to each device when the transaction is committed.
Since the /devices/device/config path contains different models depending on the augmented device model, NSO uses the data model prefix in the CLI names: ios, cisco-ios-xr, and junos. Different data models might use the same name for elements, and the prefix avoids name clashes.
NSO uses different underlying techniques to implement the atomic transactional behavior in case of any error. NETCONF devices are straightforward using confirmed commit. For CLI devices like IOS NSO calculates the reverse diff to restore the configuration to the state before the transaction was applied.
Each managed device needs to be configured with the IP address and the port where the CLI, NETCONF server, etc. of the managed device listens for incoming requests.
Connections are established on demand as they are needed. It is possible to explicitly establish connections, but that functionality is mostly there for troubleshooting connection establishment. We can, for example, do:
We were able to connect to all managed devices. It is also possible to explicitly attempt to test connections to individual managed devices:
Established connections are typically not closed right away when not needed, but rather pooled according to the rules described later in this section. This applies to NETCONF sessions as well as sessions established by CLI or generic NEDs via a connection-oriented protocol. In addition to session pooling, underlying SSH connections for NETCONF devices are also reused. Note that a single NETCONF session occupies one SSH channel inside an SSH connection, so multiple NETCONF sessions can co-exist in a single connection. When an SSH connection has been idle (no SSH channels open) for 2 minutes, the SSH connection is closed. If a new connection is needed later, it is established on demand.
Three configuration parameters can be used to control the connection establishment: connect-timeout, read-timeout, and write-timeout. In the NSO data model file tailf-ncs-devices.yang, these timeouts are modeled as:
Thus, to change these parameters (globally for all managed devices) you do:
Or, to use a profile:
When NSO connects to a managed device, it requires authentication information for that device. The authgroups are modeled in the NSO data model:
Each managed device must refer to a named authgroup. The purpose of an authentication group is to map local users to remote users together with the relevant SSH authentication information.
Southbound authentication can be done in two ways. One is to configure the stored user and credential components as shown in the example below (Configured authgroup) and the next example (authgroup default-map). The other way is to configure a callback to retrieve user and credentials on demand as shown in the example below (authgroup-callback).
In the example above (Configured authgroup), in the auth group named default, the two local users oper and admin use the remote user names oper and admin, respectively, with identical passwords.
Inside an authgroup, all local users need to be enumerated. Each local user name must have credentials configured which should be used for the remote host. In centralized AAA environments, this is usually a bad strategy. You may also choose to instantiate a default-map. If you do that, it probably only makes sense to specify that the same user name/password pair should be used remotely as the pair that was used to log in to NSO.
In the example (Configured authgroup), only two users admin and oper were configured. If the default-map in the example (authgroup default-map) is configured, all local users not found in the umap list will end up in the default-map. For example, suppose the user rocky logs in to NSO with the password secret. Since NSO has a built-in SSH server and also a built-in HTTPS server, NSO is able to pick up the clear-text password and can then reuse it when NSO attempts to establish southbound SSH connections. The user rocky will end up in the default-map, and when NSO attempts to propagate rocky's changes towards the managed devices, NSO will use the remote user name rocky with whatever password rocky used when logging in to NSO.
Authenticating southbound using stored configuration has two main components: the remote user and the remote credentials, both defined by the authgroup. As for the southbound user, there are two options: the same user that logged in to NSO, or another user, as specified in the authgroup. As for the credentials, there are three options.
Regular password.
Public key. This means that a private key, either from a file in the user's SSH key directory or one that is configured in the /ssh/private-key list in the NSO configuration, is used for authentication.
Finally, an interesting option is to use the 'same-pass' option. Since NSO runs its own SSH server and its own SSL server, NSO can pick up the password of a user in clear text. Hence, if the 'same-pass' option is chosen for an authgroup, NSO will reuse the same password when attempting to connect southbound to a managed device.
NSO can connect to a device that is using multi-factor authentication. For this, the authgroup must be configured with an executable for handling the keyboard-interactive part, and optionally some opaque data that is passed to the executable. i.e., the /devices/authgroups/group/umap/mfa/executable and /devices/authgroups/group/umap/mfa/opaque (or under default-map for users that are not in umap) must be configured.
The prompts from the SSH server (including the password prompt and any additional challenge prompts) are passed to the stdin of the executable along with some other relevant data. The executable must write a single line to its stdout as the reply to the prompt. This is the reply that NSO sends to the SSH server.
For example, with the above configured for the authgroup, if the user admin is trying to log in to the device dev0 with password admin, this is the line that is sent to the stdin of the handle_mfa.py script:
The input to the script is the device, username, password, opaque data, as well as the name, instruction, and prompt from the SSH server. All these fields are base64 encoded, and separated by a semi-colon (;). So, the above line in effect encodes the following:
A small Python program can be used to implement the keyboard-interactive authentication towards a device, such as:
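A minimal sketch of such a handler, assuming the input line format described above (seven semicolon-separated, base64-encoded fields: device, user, password, opaque data, name, instruction, prompt) and a trivial reply policy that is illustrative, not part of NSO:

```python
import base64


def handle(line: str) -> str:
    """Decode one prompt line from NSO and return the reply to send."""
    fields = [base64.b64decode(f).decode() for f in line.strip().split(";")]
    device, user, password, opaque, name, instruction, prompt = fields
    if "password" in prompt.lower():
        # Answer the password prompt with the user's password; a real
        # handler would typically compute or fetch an OTP instead.
        return password
    # Assumption: the opaque data configured in the authgroup holds
    # whatever is needed to answer the additional challenge.
    return opaque
```

In a real deployment, the script would read one line per prompt from stdin and print the corresponding reply line to stdout, since that is how NSO communicates with the executable.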
This script will then be invoked with the above fields for every prompt from the server, and the corresponding output from the script will be sent as the reply to the server.
In the case of authenticating southbound using a callback, remote user and remote credentials are obtained by an action invocation. The action is defined by the callback-node and action-name as in the example below (authgroup-callback) and supported credentials are remote password and optionally a secondary password for the provided local user, authgroup, and device.
With remote passwords, you may encounter issues if you use special characters, such as quotes (") and backslash (\), in your password.
In the example above (authgroup-callback), the configuration for the umap entry of the oper user is changed to use a callback to retrieve southbound authentication credentials. Thus, NSO is going to invoke the action auth-cb defined in the callback-node callback. The callback node is of type instance-identifier and refers to the container called callback defined in the example (authgroup-callback.yang), which includes an action defined by action-name auth-cb and uses the groupings authgroup-callback-input-params and authgroup-callback-output-params for input and output parameters, respectively.
Authentication groups and the functionality they bring come with some limitations on where and how it is used.
The callback option that enables authgroup-callback feature is not applicable for members of snmp-group list.
Generic devices that implement their own authentication scheme do not use any mapping or callback functionality provided by Authgroups.
Cluster nodes use their own authgroups and mapping model; thus, the functionality differs, e.g. the callback option is not applicable.
Opening a session towards a managed device is potentially time and resource-consuming. Also, the probability that a recently accessed device is still subject to further requests is reasonably high. These are motives for having a managed devices session pool in NSO.
The NSO device session pool is by default active and normally needs no maintenance. However, under certain circumstances, it might be of interest to modify its behavior. Examples can be when some device type has characteristics that make session pooling undesired, or when connections to a specific device are very costly, and therefore the time that open sessions can stay in the pool should increase.
NSO presents operational data that represent the current state of the session pool. To visualize this, we use the CLI to connect to NSO and force connection to all known devices:
We can now list all open sessions in the session-pool. But note that this is a live pool. Sessions will only remain open for a certain amount of time, the idle time.
In addition to the idle time for sessions, we can also see the type of device, current number of pooled sessions, and maximum number of pooled sessions.
We can close pooled sessions for specific devices.
And we can close all pooled sessions in the session pool.
The session pool configuration is found in the tailf-ncs-devices.yang submodule. The following part of the YANG device-profile-parameters grouping controls how the session pool is configured:
This grouping can be found in the NSO model under /ncs:devices/global-settings/session-pool, /ncs:devices/profiles/profile/session-pool and /ncs:devices/device/session-pool to be able to control session pooling for all devices, a group of devices, and a specific device respectively.
In addition, under /ncs:devices/global-settings/session-pool/default, it is possible to control the global max size of the session pool, as defined by the following YANG snippet:
Let's illustrate the possibilities with an example configuration of the session pool:
In the above configuration, the default idle time is set to 100 seconds for all devices. A device profile called small is defined, which contains a max-sessions value of 3; this profile is set on all ce* devices. The device pe0 has max-sessions set to 0, which implies that sessions to this device cannot be pooled. Let's connect all devices and see what happens in the session pool:
Now, we set an upper limit to the maximum number of sessions in the pool. Setting the value to 4 is too small for a real situation but serves the purpose of illustration:
The number of open sessions in the pool will be adjusted accordingly:
Some devices only allow a small number of concurrent sessions; in the extreme case, only one (for example, through a terminal server). For this reason, NSO can limit the number of concurrent sessions to a device and make operations wait if the maximum number of sessions has been reached.
In other situations, we need to limit the number of concurrent connect attempts made by NSO. For example, the devices managed by NSO talk to the same server for authentication which can only handle a limited number of connections at a time.
The configuration for session limits is found in the tailf-ncs-devices.yang submodule. The following part of the YANG device-profile-parameters grouping controls how the session limits are configured:
This grouping can be found in the NSO model under /ncs:devices/global-settings/session-limits, /ncs:devices/profiles/profile/session-limits, and /ncs:devices/device/session-limits, making it possible to control session limits for all devices, a group of devices, or a specific device, respectively.
In addition, under /ncs:devices/global-settings/session-limits, it is possible to control the number of concurrent connect attempts allowed and the maximum time to wait for a device to be available to connect.
It is possible to turn on and off NED traffic tracing. This is often a good way to troubleshoot problems. To understand the trace output, a basic prerequisite is a good understanding of the native device interface. For NETCONF devices, an understanding of NETCONF RPC is a prerequisite. Similarly for CLI NEDs, a good understanding of the CLI capabilities of the managed devices is required.
To turn on southbound traffic tracing, we need to enable the feature and we must also configure a directory where we want the trace output to be written. It is possible to have the trace output in two different formats, pretty and raw. The format of the data depends on the type of the managed device. For NETCONF devices, the pretty mode indents all the XML data for enhanced readability and the raw mode does not. Sometimes when the XML is broken, raw mode is required to see all the data received. Tracing in raw mode will also signal to the corresponding NED to log more verbose tracing information.
To enable tracing, do:
The trace setting only affects new NED connections, so to ensure that we get any tracing data, we can do:
The above command terminates all existing connections.
At this point, you can execute a transaction towards one or several devices and then view the trace data.
It is possible to clear all existing trace files through the command:
Finally, it is worth mentioning that the trace functionality does not come for free: it is fairly costly to have tracing turned on. Also, there is no trace log wrapping functionality.
When managing large networks with NSO, a good strategy is to consider the NSO copy of the network configuration to be the primary copy. All device configuration changes must go through NSO, and all other device re-configurations are considered rogue.
NSO does not contain any functionality that disallows rogue re-configurations of managed devices. However, it does contain a mechanism that makes it a very cheap operation to discover whether one or several devices have been configured out-of-band.
The underlying mechanism for this cheap check-sync is to compare time stamps, transaction IDs, hash sums, etc., depending on what the device supports, so that the full configuration does not have to be read just to check whether the NSO copy is in sync.
The transaction IDs are stored in CDB and can be viewed as:
Some of the devices do not have a transaction ID; this is the case when the NED has not implemented the cheap check-sync mechanism. Although it is called transaction-id, the underlying value in the device can be anything that detects a config change, for example a time stamp.
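The comparison at the heart of cheap check-sync can be sketched in a few lines of Python. This is an illustration of the idea, not NSO code: compare the value stored in CDB against the value the device currently reports, and report devices whose NED provides no such value as unsupported.

```python
def check_sync(stored_ids, device_ids):
    """Compare per-device transaction IDs (or time stamps/hashes)
    stored in CDB against the ones the devices currently report."""
    result = {}
    for device, stored in stored_ids.items():
        current = device_ids.get(device)
        if current is None:
            result[device] = "unsupported"   # NED has no cheap check-sync
        elif current == stored:
            result[device] = "in-sync"
        else:
            result[device] = "out-of-sync"
    return result
```

Only a small identifier per device is exchanged, which is why this check is cheap compared to reading back full configurations.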
To check for consistency, we execute:
Alternatively for all (or a subset) managed devices:
The following YANG grouping is used for the return value from the check-sync command:
In the previous section, we described how to easily check whether a managed device is in sync. If the device is not in sync, we are interested in knowing what the difference is. The CLI sequence below shows how to modify ce0 out-of-band using the ncs-netsim tool and, finally, how to do an explicit configuration comparison.
The diff in the above output should be interpreted as: what needs to be done in NSO to become in sync with the device.
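The direction of the diff can be illustrated with Python's difflib. This is a rough analogy, not NSO's diff engine: treating the NSO copy as the "from" side means the diff shows exactly what would have to change in NSO to match the device.

```python
import difflib

def compare_config(nso_config, device_config):
    """Diff two configs (as lists of lines), NSO copy as the baseline.

    Lines prefixed '-' exist only in NSO; lines prefixed '+' exist
    only on the device - i.e., what NSO must change to be in sync."""
    return list(difflib.unified_diff(
        nso_config, device_config,
        fromfile="NSO", tofile="device", lineterm=""))
```

If the device's SNMP community was changed out-of-band, the diff shows the NSO value as removed and the device value as added.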
Previously in the example (Synchronize from Devices), NSO was brought in sync with the devices by fetching configuration from the devices. In this case, where the device has a rogue re-configuration, NSO has the correct configuration. In such cases, you want to reset the device configuration to what is stored inside NSO.
When you decide to reset the configuration to the copy kept in NSO, use the dry-run option in conjunction with sync-to and inspect what will be sent to the device:
As this is the desired data to send to the device a sync-to can now safely be performed.
The device configuration should now be in sync with the copy in NSO and compare-config ought to yield an empty output:
There exist several ways to initialize new devices. The two common ways are to initialize a device from another existing device or to use device templates.
For example, another CE router has been added to our example network. You want to base the configuration of that host on the configuration of the managed device ce0 which has a valid configuration:
If the configuration is accurate you can create a new managed device based on that configuration as:
In the example above (Instantiate Device from Other) the commands first create the new managed device, ce9 and then populates the configuration of the new device based on the configuration of ce0.
This new configuration might not be entirely correct; you can modify any configuration before committing it.
The above concludes the instantiation of a new managed device. The new device configuration is committed and NSO returned OK without the device existing in the network (netsim). Try to force a sync to the device:
The device is southbound locked. In this mode, you can reconfigure the device, but any changes done to it are never sent to the managed device. This is thoroughly described in the next section. Devices are by default created southbound locked. Default values are not shown unless explicitly requested:
Another alternative to instantiating a device from the actual working configuration of another device is to have a number of named device templates that manipulate the configuration.
The template tree looks like this:
The tree for device templates is generated from all device YANG models. All constraints are removed and the data type of all leafs is changed to string.
A device template is created by setting the desired data in the configuration. The created device template is stored in NSO CDB.
The device template created in the example above (Create ce-initialize template) can now be used to initialize single devices or device groups.
In the following CLI session, a new device ce10 is created:
Initialize the newly created device ce10 with the device template ce-initialize:
When initializing devices, NSO does not have any knowledge about the capabilities of the device, since no connect has been done. This can be overridden by the option accept-empty-capabilities.
Inspect the changes made by the template ce-initialize:
This section shows how device templates can be used to create and change device configurations. See Templates for other ways of using templates.
Device templates are part of the NSO configuration. Device templates are created and changed in the tree /devices/template/config the same way as any other configuration data and are affected by rollbacks and upgrades. Device templates can only manipulate configuration data in the /devices/device/config tree i.e., only device data.
The $NCS_DIR/examples.ncs/service-provider/mpls-vpn example comes with a pre-populated template for SNMP settings.
Templates can be created like any configuration data and use the CLI tab completion to navigate. Variables can be used instead of hard-coded values. In the template above the community string is a variable. The template can cover several device types/NEDs, by making use of the namespace information. This will make sure that only devices modeled with this particular namespace will be affected by this part of the template. Hence, it is possible for one template to handle a multitude of devices from various manufacturers.
A template can be applied to a device, a device group, or a range of devices. It can also be used, as shown in the previous section, to create the day-zero config for a newly created device.
Applying the snmp1 template, providing a value for the COMMUNITY template variable:
The result of applying the template:
The default operation for templates is to merge the configuration. Tags can be added to templates to have the template merge, replace, delete, create or nocreate configuration. A tag is inherited to its sub-nodes until a new tag is introduced.
merge: Merge with a node if it exists, otherwise create the node. This is the default operation if no operation is explicitly set.
replace: Replace a node if it exists, otherwise create the node.
create: Creates a node. The node cannot already exist.
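The merge/replace/create semantics, including tag inheritance to sub-nodes, can be sketched on nested dictionaries. This is a simplified model for illustration only; NSO operates on the YANG-modeled configuration tree, and the delete and nocreate tags are omitted here.

```python
def apply_template(config, template, tag="merge"):
    """Apply a template (nested dicts) to a config (nested dicts).

    A template node is either a plain value/dict (inherits the current
    tag) or a (tag, subtree) tuple that introduces a new tag."""
    for key, value in template.items():
        node_tag, subtree = value if isinstance(value, tuple) else (tag, value)
        if isinstance(subtree, dict):
            if node_tag == "replace":
                config[key] = {}                     # overwrite wholesale
                apply_template(config[key], subtree, "merge")
            elif node_tag == "create":
                if key in config:
                    raise ValueError(f"{key} already exists")
                config[key] = {}
                apply_template(config[key], subtree, "merge")
            else:  # merge: combine with the node, creating it if absent
                config.setdefault(key, {})
                apply_template(config[key], subtree, node_tag)
        else:
            config[key] = subtree                    # leaf: set the value
    return config
```

Merging adds to existing data, replacing discards whatever was under the node, and creating fails if the node is already present.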
Example of how to set a tag:
Displaying tag information:
By adding the CLI pipe flag debug template when applying a template, the CLI will output detailed information on what is happening when the template is being applied:
The usual way to rename an instance in a list is to delete it and create a new instance. Aside from having to explicitly create all its children, an obvious problem with this method is the dependencies - if there is a leafref that refers to this instance, this method of deleting and recreating will fail unless the leafref is also explicitly reset to the value of the new instance.
The /devices/device/rename action renames an existing device and fixes the node/data dependencies in CDB. When renaming a device, the action fixes the following dependencies:
Leafrefs and instance-identifiers (both config true and config false).
Monitor and kick-node of kickers, if they refer to this device.
Diff-sets and forward-diff-sets of services that touch this device (This includes nano-services and also zombies).
NSO maintains a history of past renames at /devices/device/rename-history.
The rename action takes a device lock to prevent modifications to the device while renaming it. Depending on the input parameters, the action will either fail immediately if it cannot get the device lock, or wait a specified number of seconds before timing out.
The parameter no-wait-for-lock makes the action fail immediately if the device lock is unavailable, while a timeout of infinity can be used to make it wait indefinitely for the lock.
If a nano-service has components whose names are derived from the device name, and that device is renamed, the corresponding service components in its plan are not automatically renamed.
For example, let's say the nano-service has components with names matching device names.
If this device is renamed, the corresponding nano-service component is not renamed.
To handle this, the component with the old name must be force-back-tracked and the service re-deployed.
When a device is renamed, all components that derive their name from that device's name in all the service instances must be force-back-tracked.
Provisioning new devices in NSO requires the user to be familiar with the concept of Network Element Drivers and the unique ned-id they use to distinguish their schema. For an end user interacting with a northbound client of NSO, the concept of a ned-id might feel too abstract. It could be challenging to know what device type and ned-id to select when configuring a device for the first time in NSO. After initial configuration, there are also additional steps required before the device can be operated from NSO.
NSO can auto-configure devices during initial provisioning. Under /devices/device/auto-configure, a user can specify either the ned-id explicitly or a combination of the device vendor and product-family or operating-system. These are meta-data specified in the package-meta-data.xml file in the NED package. Based on the combination of this meta-data or using the ned-id explicitly configured, a ned-id from a matching NED package is selected from the currently loaded packages. If multiple packages match the given combination, the package with the latest version is selected. In the same transaction, NSO also fetches the host keys if required, and synchronizes the configuration from the device, making it ready to operate in a single step.
NSO will auto-configure a new device in a transaction if either /devices/device/auto-configure/vendor or /devices/device/auto-configure/ned-id is set in that transaction.
One can configure either vendor and product-family, vendor and operating-system, or just the ned-id explicitly.
The admin-state for the device, if configured, will be honored. I.e., while auto-configuring a new device, if the admin-state is set to be southbound-locked, NSO will only pick the ned-id automatically. NSO will not fetch host keys and synchronize config from the device.
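The NED selection step of auto-configure can be sketched as follows. This is an assumed model for illustration: packages carry vendor, product-family/operating-system, and version meta-data (as in package-meta-data.xml), matching packages are filtered, and the highest version wins on a tie. The package dictionaries and ned-id strings below are hypothetical.

```python
def select_ned_id(packages, vendor, product_family=None, operating_system=None):
    """Pick a ned-id from loaded NED packages by meta-data match.

    Matches on vendor plus product-family or operating-system; when
    several packages match, the one with the latest version is used."""
    def matches(p):
        if p["vendor"] != vendor:
            return False
        if product_family is not None:
            return product_family in p.get("product-family", [])
        if operating_system is not None:
            return operating_system in p.get("operating-system", [])
        return True

    candidates = [p for p in packages if matches(p)]
    if not candidates:
        return None
    best = max(candidates,
               key=lambda p: tuple(int(x) for x in p["version"].split(".")))
    return best["ned-id"]
```

Note the numeric version comparison: splitting on dots and comparing integer tuples correctly ranks 6.100 above 6.77, which a plain string comparison would not.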
Many NEDs require additional custom configuration to be operational. This applies in particular to generic NEDs. Information about such additional configuration can be found in the files README.md and README-ned-settings.md bundled with the NED package.
oper-state and admin-state
NSO differentiates between oper-state and admin-state for a managed device. oper-state is the actual state of the device. We have chosen to implement a very simple oper-state model: a managed device's oper-state is either enabled or disabled. oper-state can be mapped to an alarm for the device. If the device is disabled, we may have additional error information. For example, the ce9 device created from another device and the ce10 device created with a device template in the previous section are disabled, and no connection has been established with them, so their state is completely unknown:
Or, a slightly more interesting CLI usage:
If you manually stop a managed device, for example ce0, NSO doesn't immediately indicate that. NSO may have an active SSH connection to the device, but the device may voluntarily choose to close its end of that (idle) SSH connection. Thus, the fact that a socket from the device to NSO is closed by the managed device doesn't indicate anything. The only certain way for NSO to decide that a managed device is non-operational - from the point of view of NSO - is that NSO cannot establish an SSH connection to it. If you manually stop the managed device ce0, you still have:
NSO cannot draw any conclusions from the fact that a managed device closed its end of the SSH connection. It may have done so because it decided to time out an idle SSH connection. Whereas if NSO tried to initiate any operations towards the dead device, the device would be marked as oper-state disabled:
Now that NSO has failed to connect to it, NSO knows that ce0 is dead:
This concludes the oper-state discussion. The next state to be illustrated is the admin-state. The admin-state is what the operator configures, this is the desired state of the managed device.
In tailf-ncs.yang we have the following configuration definition for admin-state:
In the example above (tailf-ncs-devices.yang - admin-state), you can see the four different admin states for a managed device as defined in the YANG model.
locked - This means that all changes to the device are forbidden. Any transaction which attempts to manipulate the configuration of the device will fail. It is still possible to read the configuration of the device.
unlocked - This is the state a device is set into when the device is operational. All changes to the device are attempted to be sent southbound.
southbound-locked - This is the default value. It means that it is possible to manipulate the configuration of the device, but changes made to the device configuration are never pushed to the device. This mode is useful, e.g., during pre-provisioning or when instantiating new devices.
config-locked - This means that all changes to the device configuration are forbidden, while it is still possible to read the configuration and to invoke actions.
NSO manages a set of devices that are given to NSO through any means, like the CLI, inventory system integration through the XML APIs, or configuration files at startup. In an overall integrated network management solution, the list of devices to manage is shared between different tools, and it is therefore important to keep an authoritative database of it and share it between those tools, including NSO. The purpose of this part is to identify the source of the population of managed devices. The source attribute should indicate the source of the managed device, like "inventory", "manual", or "EMS".
These attributes should be automatically set by the integration towards the inventory source, rather than manipulated manually.
added-by-user: Identifies the user who loaded the managed device.
context: The context in which the device was loaded.
when: When the device was added to NSO.
The NETCONF protocol mandates that the first thing both the server and the client do is send their list of NETCONF capabilities in the <hello> message. A capability indicates what the peer can do. For example, validate:1.0 indicates that the server can validate a proposed configuration change, whereas the capability http://acme.com/if indicates that the device implements the http://acme.com proprietary capability.
The NEDs report the capabilities of the devices at connection time. The NEDs also load the YANG modules for NSO. For a NETCONF/YANG device, all this is straightforward, for non-NETCONF devices the NEDs do the translation.
The capabilities announced by a device also contain the YANG version 1 modules supported. In addition to this, YANG version 1.1 modules are advertised in the YANG library module on the device. NSO checks both the capabilities and the YANG library to find out which YANG modules a device supports.
The capabilities and modules detected by NSO are available in two different lists, /devices/device/capability and devices/device/module. The capability list contains all capabilities announced and all YANG modules in the YANG library. The module list contains all YANG modules announced that are also supported by the NED in NSO.
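The split between the capability list and the module list can be sketched by parsing capability URIs, which carry the advertised YANG module name in a module= query parameter. This is an illustrative simplification (it ignores the YANG 1.1 library mechanism), and the example URIs below are invented.

```python
from urllib.parse import urlparse, parse_qs

def split_capabilities(announced, ned_supported):
    """Build the capability list and the module list from a hello.

    Every announced capability URI goes in the capability list; the
    module list keeps only YANG modules the NED in NSO also supports."""
    capabilities, modules = [], []
    for uri in announced:
        capabilities.append(uri)
        params = parse_qs(urlparse(uri).query)
        for name in params.get("module", []):
            if name in ned_supported:
                modules.append(name)
    return capabilities, modules
```

A device may thus announce more modules than end up in the module list: anything the loaded NED does not support is visible as a capability but not handled as a module.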
NSO can be used to handle all or some of the YANG configuration modules for a device. A device may announce several modules through its capability list which NSO ignores. NSO will only handle the YANG modules for a device that are loaded (and compiled, through ncsc --ncs-compile-bundle or ncsc --ncs-compile-module); all other modules for the device are ignored. If you require a situation where NSO is entirely responsible for a device, so that complete device backups/configurations are stored in NSO, you must ensure that NSO indeed has support for all modules for the device. It is not possible to automate this process since a capability URI doesn't necessarily indicate actual configuration.
When a device is added to NSO its NED ID must be set. For a NETCONF device, it is possible to configure the generic NETCONF NED id netconf (defined in the YANG module tailf-ncs-ned). If this NED ID is configured, we can then ask NSO to connect to the device and then check the capability list to see which modules this device implements.
We can also check which modules the loaded NEDs support. Then we can pick the most suitable NED and configure the device with this NED ID.
NSO works best if the managed devices support the NETCONF candidate configuration datastore. However, NSO reads the capabilities of each managed device and executes different sequences of NETCONF commands towards different types of devices.
For implementations of the NETCONF protocol that do not support the candidate datastore, and in particular, devices that do not support NETCONF commit with a timeout, NSO tries to do the best of the situation.
NSO divides devices into the following groups.
start_trans_running: This mode is used for devices that support the Tail-f proprietary transaction extension defined by http://tail-f.com/ns/netconf/transactions/1.0. Read more on this in the Tail-f ConfD user guide. In principle it's a means to - over the NETCONF interface - control transaction processing towards the running data store. This may be more efficient than going through the candidate data store. The downside is that it is Tail-f proprietary non-standardized technology.
lock_candidate: This mode is used for devices that support the candidate data store but disallow direct writes to the running data store.
Which category NSO chooses for a managed device depends on which NETCONF capabilities the device sends to NSO in its NETCONF hello message. You can see in the CLI what NSO has decided for a device as in:
Here, NSO is talking to a ConfD device running in its standard configuration, thus lock-reset-candidate.
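A simplified sketch of how a commit mode could be derived from a device's hello capabilities is shown below. The decision logic and the fallback name are assumptions for illustration; NSO's real selection handles more cases than these.

```python
def select_commit_mode(capabilities):
    """Pick a commit strategy from a set of capability URIs (sketch)."""
    # Tail-f proprietary transaction extension: drive running directly.
    if "http://tail-f.com/ns/netconf/transactions/1.0" in capabilities:
        return "start_trans_running"
    if any(":candidate:" in c for c in capabilities):
        if any(":writable-running:" in c for c in capabilities):
            # Candidate available and running is writable.
            return "lock-reset-candidate"
        # Candidate only: direct writes to running are disallowed.
        return "lock_candidate"
    return "writable-running-only"  # assumed fallback name
```

The point is simply that the hello message alone determines the category, which is why the chosen mode is visible per device in the CLI.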
Another important discriminator between managed devices is whether they support the confirmed commit with a timeout capability, i.e., the confirmed-commit:1.0 standard NETCONF capability. If a device supports this capability, NSO utilizes it. This is the case with for example Juniper routers.
If a managed device does not support this capability, NSO attempts to do the best it can.
This is how NSO handles common failure scenarios:
The operator aborts the transaction, or NSO loses the SSH connection to another managed device which is also participating in the same network transaction. If the device supports the confirmed-commit capability, NSO aborts the outstanding yet-uncommitted transaction simply by closing the SSH connection. When the device does not support the confirmed-commit capability, NSO has the reverse diff and simply sends the precise undo information to the device instead.
The device rejects the transaction in the first place, i.e., when NSO attempts to modify its running data store. This is an easy case since NSO then simply aborts the transaction as a whole in the initial commit confirmed [time] attempt.
Thus, even if not all participating devices have first-class NETCONF server implementations, NSO will attempt to fake the confirmed-commit capability.
When a managed device defines top-level NETCONF RPCs, or alternatively defines tailf:action points inside its YANG model, these RPCs and actions are also imported into the data model that resides in NSO.
For example, the Juniper NED comes with a set of JunOS RPCs defined in: $NCS_DIR/packages/neds/juniper-junos/src/yang/junos-rpc.yang
Thus, since all RPCs and actions from the devices are accessible through the NSO data model, these actions are also accessible through all NSO northbound APIs, REST, JAVA MAAPI, etc. Hence it is possible to - from user scripts/code - invoke actions and RPCs on all managed devices. The RPCs are augmented below an RPC container:
In the simulated environment of the mpls-vpn example, these RPCs might not have been implemented.
The NSO device manager has a concept of groups of devices. A group is nothing more than a named group of devices. What makes this interesting is that we can invoke several different actions in the group, thus implicitly invoking the action on all members in the group. This is especially interesting for the apply-template action.
The definition of device groups resides at the same layer in the NSO data model as the device list, thus we have:
The MPLS VPN example comes with a couple of pre-defined device-groups:
Device groups are created like below:
Device groups can reference other device groups. There is an operational attribute that flattens all members in the group. The CLI sequence below adds the PE group to my-group. Then it shows the configuration of that group followed by the status of this group. The status for the group contains a members attribute that lists all device members.
Once you have a group, you can sync and check-sync the entire group.
However, what makes device groups really interesting is the ability to apply a template to a group. You can use the pre-populated templates to apply SNMP settings to device groups.
Policies allow you to specify network-wide constraints that must always be true. If someone tries to apply a configuration change over any northbound interface that would evaluate to false, the configuration change is rejected by NSO. Policies can be of type warning, meaning it is possible to override them, or error, which cannot be overridden.
Assume you would like to enforce all CE routers to have a Gigabit interface 0/1.
As seen in the example above (Policies), a policy rule has an (optional) foreach statement and a mandatory expression and error message. The foreach statement evaluates to a node set, and the expression is then evaluated on each node. So in this example, the expression would be evaluated for every device in NSO whose name begins with ce. The name variable in the warning message refers to a leaf available from the foreach node set.
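The foreach/expression mechanics can be sketched in Python over a dictionary of devices. This is a loose analogy (NSO evaluates the expression as XPath over the data tree); the device data layout and the interface name check here are invented for illustration.

```python
import fnmatch

def validate_policy(devices, foreach="ce*",
                    message="{name} must have a GigabitEthernet 0/1 interface"):
    """Evaluate a policy rule: select a node set, test each node.

    devices maps device names to their config; the foreach pattern
    selects the node set, and the expression (here: an interface must
    exist) is evaluated per node. Failures yield the rule's message."""
    failures = []
    for name, config in devices.items():
        if not fnmatch.fnmatch(name, foreach):
            continue  # not in the foreach node set
        if "GigabitEthernet0/1" not in config.get("interfaces", []):
            failures.append(message.format(name=name))
    return failures
```

Only devices matched by the foreach pattern are tested, and the name variable in the message is filled in from the failing node.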
Validation is always performed at commit but can also be requested interactively.
Note that any configuration can be activated or deactivated. This means that to temporarily turn off a certain policy, you can deactivate it. Note also that if the device configuration was changed by means other than NSO, such as by local tools like the device CLI, a devices sync-from operation might fail if the device configuration violates the policy.
One of the strengths of NSO is the concept of network-wide transactions. When you commit data to NSO that spans multiple devices in the /ncs:devices/device tree, NSO will - within the NSO transaction - commit the data on all devices or none, keeping the network consistent with CDB. The NSO transaction doesn't return until all participants have acknowledged the proposed configuration change. The downside of this is that the slowest device in each transaction limits the overall transactional throughput in NSO. Such things as out-of-sync checks, network latency, calculation of changes sent southbound, or device deficiencies all affect the throughput.
Typically, when automation software north of NSO generates network change requests, it may very well be the case that more requests arrive than can be handled. In NSO deployment scenarios where you wish to have higher transactional throughput than what is possible using network-wide transactions, you can use the commit queue instead. The goal of the commit queue is to increase the transactional throughput of NSO while keeping an eventual-consistency view of the database. With the commit queue, NSO will compute the configuration change for each participating device, put it in an outbound queue item, and immediately return. The queue is then run independently.
Another use case where you can use the commit queue is when you wish to push a configuration change to a set of devices and don't care about whether all devices accept the change or not. You do not want the default behavior for transactions which is to reject the transaction as a whole if one or more participating devices fail to process its part of the transaction.
An example of the above could be if you wish to set a new NTP server on all managed devices in the entire network: if one or more devices are currently non-operational, you still want to push out the change. You also want the change automatically pushed to the non-operational devices once they come alive again.
The big upside of this scheme is that the transactional throughput through NSO is considerably higher. Also, transient devices are handled better. The downsides are:
If a device rejects the proposed change, NSO and the device are now out of sync until any error recovery is performed. Whenever this happens, an NSO alarm (called commit-through-queue-failed) is generated.
While a transaction remains in the queue, i.e., it has been accepted for delivery by NSO but is not yet delivered, the view of the network in NSO is not (yet) correct. Eventually, though, the queued item will be delivered, thus achieving eventual consistency.
To facilitate the two use cases of the commit queue the outbound queue item can be either in an atomic or non-atomic mode.
In atomic mode the outbound queue item will push all configuration changes concurrently once there are no intersecting devices ahead in the queue. If any device rejects the proposed change, all device configuration changes in the queue item will be rejected as a whole, leaving the network in a consistent state. The atomic mode also allows for automatic error recovery to be performed by NSO.
In the non-atomic mode, the outbound queue item will push configuration changes for a device whenever all occurrences of it are completed or it doesn't exist ahead in the queue. The drawback to this mode is that there is no automatic error recovery that can be performed by NSO.
In the following sequences, the simulated device ce0 is stopped to illustrate the commit queue. This can be achieved by the following sequence including returning to the NSO CLI config mode:
By default, the commit queue is turned off. You can configure NSO to run a transaction, device, or device group through the commit queue in a number of different ways, either by providing a flag to the commit command as:
Or, by configuring NSO to always run all transactions through the commit queue as in:
Or, by configuring a number of devices to run through the commit queue as default:
When enabling the commit queue as default on a per-device/device-group basis, an NSO transaction will compute the configuration change for each participating device, put the devices enabled for the commit queue in the outbound queue, and then proceed with the normal transaction behavior for those devices that are not commit queue enabled. The transaction will still be successfully committed even if some of the devices added to the outbound queue fail. If the transaction fails in the validation phase, the entire transaction will be aborted, including the configuration change for those devices added to the commit queue. If the transaction fails after the validation phase, the configuration change for the devices in the commit queue will still be delivered.
Do some changes and commit through the commit queue:
In the example above (Commit through Commit Queue), the commit affected three devices: ce0, ce1, and ce2. If you had immediately launched yet another transaction, as in the second one (see example below), manipulating an interface of ce2, that transaction would have been queued instead of immediately launched. The idea here is to queue entire transactions that touch any device that has anything queued ahead in the queue.
Each transaction committed through the queues becomes a queue item. A queue item has an ID number; a bigger number means that it is scheduled later. Each queue item waits for something to happen. A queue item is in one of three states.
waiting: The queue item is waiting for other queue items to finish. This is because the waiting queue item has participating devices that are part of other queue items, ahead in the queue. It is waiting for a set of devices, to not occur ahead of itself in the queue.
executing: The queue item is currently being processed. Multiple queue items can run concurrently as long as they don't share any managed devices. Transient errors might be present. These errors occur when NSO fails to communicate with some of the devices. The errors are shown in the leaf-list transient-errors. Retries will take place at intervals specified in /ncs:devices/global-settings/commit-queue/retry-timeout. Examples of transient errors are connection failures and changes being rejected due to the device being locked. Transient errors are potentially bad since the queue might grow if new items are added, waiting for the same device.
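The waiting/executing scheduling rule can be sketched as a single pass over the queue in order. This is a simplified model, not NSO code: an item executes only when no item ahead of it (whatever its own state) shares a device with it.

```python
def queue_states(queue):
    """Classify queue items as executing or waiting.

    queue is an ordered list of (item_id, devices), oldest first.
    An item executes when no earlier item touches any of its devices;
    otherwise it waits, since devices ahead of it are still claimed."""
    states = {}
    claimed = set()
    for item_id, devices in queue:
        if claimed & set(devices):
            states[item_id] = "waiting"
        else:
            states[item_id] = "executing"
        # Even a waiting item claims its devices for later items.
        claimed |= set(devices)
    return states
```

Two items with disjoint device sets run concurrently, while a later item that shares even one device with anything ahead of it must wait.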
You can view the queue in the CLI. There are three different view modes, summary, normal, and detailed. Depending on the output, both the summary and the normal look good:
The age field indicates how many seconds a queue item has been in the queue.
You can also view the queue items in detailed mode:
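Queue items are operational data under /devices/commit-queue, so a specific item can be inspected directly; for example (the item ID is illustrative):

```
admin@ncs# show devices commit-queue queue-item 9577950918
```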
The queue items are stored persistently, thus if NSO is stopped and restarted, the queue remains the same. Similarly, if NSO runs in HA (High Availability) mode, the queue items are replicated, ensuring the queue is processed even in case of failover.
A number of useful actions are available to manipulate the queue:
devices commit-queue add-lock device [ ... ]. This adds a fictive (placeholder) queue item to the commit queue. Any queue item affecting the same devices that enters the commit queue must wait for this lock item to be unlocked or deleted. If no devices are specified, all devices in NSO are locked.
devices commit-queue clear. This action clears the entire queue. All devices present in the commit queue will, after this action, be out of sync. The clear action is a rather blunt tool and is not recommended in normal use cases.
devices commit-queue prune device [ ... ]
A typical use scenario is where one or more devices are not operational. In the example above (Viewing Queue Items), there are two queue items, waiting for the device ce0 to come alive. ce0 is listed as a transient error, and this is blocking the entire queue. Whenever a queue item is blocked because another item ahead of it cannot connect to a specific managed device, an alarm is generated:
Block other affecting device ce0 from entering the commit queue:
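Using the add-lock action (syntax as listed above):

```
admin@ncs# devices commit-queue add-lock device [ ce0 ]
```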
Now queue item 9577950918 is blocking other items using ce0 from entering the queue.
Prune the usage of the device ce0 from all queue items in the commit queue:
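Using the prune action (syntax as listed above):

```
admin@ncs# devices commit-queue prune device [ ce0 ]
```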
The lock will be in the queue until it has been deleted or unlocked. Queue items affecting other devices are still allowed to enter the queue.
In an LSA cluster, each remote NSO has its own commit queue. When committing through the commit queue on the upper node, NSO will automatically create queue items on the lower nodes where the devices in the transaction reside. The progress of the lower node queue items is monitored through a queue item on the upper node. The remote NSO is treated as a device in the queue item, and the remote queue items and devices are opaque to the user of the upper node.
Generally, it is not recommended to interfere with lower-node queue items that have been created by an upper NSO. Doing so can cause the upper queue item to fail to synchronize correctly with the lower ones.
To be able to track the commit queue on the lower cluster nodes, NSO uses the built-in stream ncs-events that generates northbound notifications for internal events. This stream is required if running the commit queue in a clustered scenario. It is enabled in ncs.conf:
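A minimal ncs.conf fragment enabling the built-in stream might look as follows; the replay-store settings shown are illustrative, so verify the exact element layout against the ncs.conf(5) man page for your NSO version:

```xml
<notifications>
  <event-streams>
    <stream>
      <name>ncs-events</name>
      <description>NCS events according to tailf-ncs-devices.yang</description>
      <replay-support>true</replay-support>
      <builtin-replay-store>
        <enabled>true</enabled>
        <dir>./state</dir>
        <max-size>S10M</max-size>
        <max-files>50</max-files>
      </builtin-replay-store>
    </stream>
  </event-streams>
</notifications>
```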
In addition, the commit queue needs to be enabled in the cluster configuration.
For more detailed information on how to set up clustering, see .
The goal of the commit queue is to increase the transactional throughput of NSO while keeping an eventually consistent view of the database. This means that whether changes committed through the commit queue originate as pure device changes or as the effect of service manipulations, the effects on the network should eventually be the same as if performed without a commit queue, whether they succeed or not. This applies to a single NSO node as well as to NSO nodes in an LSA cluster.
Depending on the selected error-option, NSO will store the reverse of the original transaction so that it can undo the transaction changes and return to the previous state. This data is stored in the /ncs:devices/commit-queue/completed tree, from where it can be viewed and invoked with the rollback action. When invoked, the data is removed.
The error option can be configured under /ncs:devices/global-settings/commit-queue/error-option. Possible values are: continue-on-error, rollback-on-error, and stop-on-error. The continue-on-error value means that the commit queue will continue on errors; no rollback data is created. The rollback-on-error value means that the commit queue item will roll back on errors. The commit queue will place a lock on the failed queue item, thus blocking other queue items with overlapping devices from being executed; the rollback action is then automatically invoked when the queue item has finished its execution, and the lock is removed as part of the rollback. The stop-on-error value means that the commit queue will place a lock on the failed queue item, thus blocking other queue items with overlapping devices from being executed. The lock must then either be released manually when the error is fixed, or the rollback action under /devices/commit-queue/completed must be invoked.
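For example, to make failed queue items roll back automatically, using the path given above:

```
admin@ncs(config)# devices global-settings commit-queue error-option rollback-on-error
admin@ncs(config)# commit
```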
The error option can also be given as a commit parameter.
In a clustered environment, different parts of the resulting configuration change set will end up on different lower nodes. This means the queue item could succeed on some nodes and fail on others.
The error option in a cluster environment will originate on the upper node. The reverse of the original transaction will be committed on this node and propagated through the cluster down to the lower nodes. The net effect of this is the state of the network will be the same as before the original change.
When NSO is recovering from a failed commit, the rollback data of the failed queue items in the cluster is applied and committed through the commit queue. In the rollback, the no-networking flag will be set on the commits towards the failed lower nodes or devices to get CDB consistent with the network. Towards the successful nodes or devices, the commit is done as before. This is what the rollback action in /ncs:devices/commit-queue/completed/queue-item does.
TR1; service s1 creates ce0:a and ce1:b. The nodes a and b are created in CDB. In the changes of the queue item, CQ1, a and b are created.
TR2; service s2
The reverse of TR1, the rollback of CQ1, TR3, is committed.
TR3; service s1 is applied with the old parameters. Thus the effect of TR1 is reverted. Nothing needs to be pushed towards the network, so no queue item is created.
NSO1:TR1; service s1 dispatches the service to NSO2 and NSO3 through the queue item NSO1:CQ1. In the changes of NSO1:CQ1, NSO2:s1 and NSO3:s1 are created.
The reverse of TR1, rollback of CQ1, TR3, is committed on all nodes part of TR1 that failed.
NSO2:TR3; service s1 is applied with the old parameters. Thus the effect of NSO2:TR1
If for some reason the rollback transaction fails there are, depending on the failure, different techniques to reconcile the services involved:
Make sure that the commit queue is blocked so that it does not interfere with the error recovery procedure. Do a sync-from on the non-completed device(s), then re-deploy the failed service(s) with the reconcile option to reconcile the original data, i.e., take control of that data. This option acknowledges other services controlling the same data; the reference count indicates how many services control the data. Release any queue lock that was created.
Make sure that the commit queue is blocked so that it does not interfere with the error recovery procedure. Use un-deploy with the no-networking option on the service, then do a sync-from on the non-completed device(s). Make sure the error is fixed, then re-deploy the failed service(s) with the reconcile option. Release any queue lock that was created.
As the goal of the commit queue is to increase the transactional throughput of NSO it means that we need to calculate the configuration change towards the device(s) outside of the transaction lock. To calculate a configuration change NSO needs a pre-commit running and a running view of the database. The key enabler to support this in the commit queue is to allow different views of the database to live beyond the commit. In NSO, this is implemented by keeping a snapshot database of the configuration tree for devices and storing configuration changes towards this snapshot database on a per-device basis. The snapshot database is updated when a device in the queue has been processed. This snapshot database is stored on disk for persistence (the S.cdb file in the ncs-cdb directory).
The snapshot database could be populated in two ways. This is controlled by the /ncs-config/cdb/snapshot/pre-populate setting in the ncs.conf file. The parameter controls whether the snapshot database should be pre-populated during the upgrade or not. Switching this on or off implies different trade-offs.
If set to false, NSO is optimized for the default transaction behavior. The snapshot database is populated in a lazy manner (when a device is committed through the commit queue for the first time after an upgrade). The drawback is that this commit will suffer performance-wise, which is especially true for devices with large configurations. Subsequent commits on the same device will not have the same penalty.
If true, NSO is optimized for systems using the commit queue extensively. This will lead to better performance when committing using the commit queue with no additional penalty for first-time commits. The drawbacks are that the time to do upgrades will increase and also an almost twofold increase in NSO memory consumption.
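The setting lives in ncs.conf; the element layout below is inferred directly from the stated path /ncs-config/cdb/snapshot/pre-populate:

```xml
<cdb>
  <snapshot>
    <pre-populate>true</pre-populate>
  </snapshot>
</cdb>
```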
The NSO device manager has built-in support for the NETCONF Call Home client protocol operations over SSH as defined in .
With NETCONF SSH Call Home, the NETCONF client listens for TCP connection requests from NETCONF servers. The SSH client protocol is started when the connection is accepted. The SSH client validates the server's presented host key with credentials stored in NSO. If no matching host key is found the TCP connection is closed immediately. Otherwise, the SSH connection is established, and NSO is enabled to communicate with the device. The SSH connection is kept open until the device itself terminates the connection, an NSO user disconnects the device, or the idle connection timeout is triggered (configurable in the ncs.conf file).
NSO will generate an asynchronous notification event whenever there is a connection request. An application can subscribe to these events and, for example, add an unknown device to the device tree with the information provided, or invoke actions on the device if it is known.
If an SSH connection is established, any outstanding configuration in the commit queue for the device will be pushed. Any notification stream for the device will also be reconnected.
NETCONF Call Home is enabled and configured under /ncs-config/netconf-call-home in the ncs.conf file. By default NETCONF Call Home is disabled.
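A sketch of the relevant ncs.conf section; the TCP transport element layout and the IANA-assigned NETCONF call-home port 4334 are assumptions here, so verify against the ncs.conf(5) man page for your version:

```xml
<netconf-call-home>
  <enabled>true</enabled>
  <transport>
    <tcp>
      <ip>0.0.0.0</ip>
      <port>4334</port>
    </tcp>
  </transport>
</netconf-call-home>
```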
A device can be connected through the NETCONF Call Home client only if /devices/device/state/admin-state is set to call-home. This state prevents any southbound communication to the device unless the connection has already been established through the NETCONF Call Home client protocol.
The NSO device manager has built-in support for device notifications. Notifications are a means for the managed devices to send structured data asynchronously to the manager. NSO has native support for NETCONF event notifications (see RFC 5277) but could also receive notifications from other protocols implemented by the Network Element Drivers.
Notifications can be utilized in various use-case scenarios. It can be used to populate alarms in the Alarm manager, collect certain types of errors over time, build a network-wide audit log, react to configuration changes, etc.
The basic mode of operation is that the manager subscribes to one or more named notification channels announced by the managed device. The manager keeps an open SSH channel towards the managed device, and the managed device may then asynchronously send structured XML data on that channel.
The notification support in NSO is usable as-is, without any further programming. However, NSO cannot understand the semantics contained inside the received XML messages; for example, a notification with the content "Clear Alarm 456" cannot be processed by NSO without additional programming.
When you add programs to interpret and act upon notifications, make sure that the resulting operations are idempotent: they must be safe to call any number of times while guaranteeing that side effects occur only once. The reason is that, for example, replaying notifications can cause your program to handle the same notification multiple times.
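The notification-handling code itself is application code outside NSO. As a generic illustration of the idempotency requirement (this is a sketch, not an NSO API), a handler can record which notification event IDs it has already acted upon, so that a replayed notification becomes a no-op:

```java
import java.util.HashSet;
import java.util.Set;

// Generic sketch: deduplicate notifications by a stable event ID so that
// side effects run at most once, even if the same notification is replayed.
public class NotificationDeduper {
    private final Set<String> seen = new HashSet<>();

    // Runs sideEffect and returns true if this eventId has not been seen
    // before; returns false (skipping the side effect) on a replay.
    public boolean handleOnce(String eventId, Runnable sideEffect) {
        if (!seen.add(eventId)) {
            return false; // already processed; replay is a no-op
        }
        sideEffect.run();
        return true;
    }
}
```

A real subscriber would derive the event ID from something stable in the notification, such as its eventTime combined with the stream name.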
In the tailf-ncs.yang data model, you find a YANG data model that can be used to:
Set up subscriptions. A subscription is configuration data from the point of view of NSO; thus, if NSO is restarted, all configured subscriptions are automatically resumed.
Inspect which named streams a managed device publishes.
View all received notifications.
In this section, we will use the examples.ncs/web-server-farm/basic example.
Let's dive into an example session with the NSO CLI. In the NSO example collection, the web servers publish two NETCONF notification streams, announcing what they intend to send to any interested listeners. They all use the same YANG module:
Follow the instructions in the README file if you want to run the example: build the example, start netsim, and start NCS.
The above shows how we can inspect - as status data - which named streams the managed device publishes. Each stream also has some associated data. The data model for that looks like this:
Let's set up a subscription for the stream called interface. The subscriptions are NSO configuration data, thus to create a subscription we need to enter configuration mode:
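A session could look like the following sketch; the subscription name mysub is illustrative, and the configuration path follows the tailf-ncs model described above:

```
admin@ncs(config)# devices device www0 netconf-notifications subscription mysub stream interface
admin@ncs(config)# devices device www1 netconf-notifications subscription mysub stream interface
admin@ncs(config)# devices device www2 netconf-notifications subscription mysub stream interface
admin@ncs(config)# commit
```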
The above example created subscriptions for the interface stream on all web servers, i.e., the managed devices www0, www1, and www2. Each subscription must have an associated stream; the stream is, however, not the key of an NSO subscription. The key is a free-form text string, because we can have multiple subscriptions to the same stream. More on this later when we describe the filter that can be associated with a subscription. Once the notifications start to arrive, they are read by NSO and stored in stable storage as CDB operational data. They are stored under each managed device, and we can view them as:
Each received notification has some associated metadata, such as the time the event was received by NSO, which subscription and which stream is associated with the notification, and also which user created the subscription.
It is fairly instructive to inspect the XML that goes on the wire when we create a subscription and then also receive the first notification. We can do:
Thus, once the subscription has been configured, NSO continuously receives the notifications sent from the managed device and stores them in CDB persistent operational storage. The notifications are stored in a circular buffer; to set the size of the buffer, we can do:
The default value is 200. Once the size of the circular buffer is exceeded, the oldest notification is removed.
A running subscription can be in one of three states. The YANG model has:
If a subscription is in the failed state, an optional failure-reason field indicates the reason for the failure. If a subscription fails because NSO could not connect to the managed device, or because the managed device closed its end of the SSH socket, NSO will attempt to reconnect automatically. The reconnect attempt interval is configurable.
SNMP notifications (v1, v2c, v3) can be received by NSO and acted upon. The SNMP receiver is a stand-alone process, and by default all notifications are ignored. IP addresses must be opted in, and a handler must be defined to act on specific notifications. This can be used, for example, to listen for configuration change notifications and trigger a log action or a resync.
These actions are programmed in Java, see the for how to do this.
NSO can configure inactive parameters on the devices that support inactive configuration. Currently, these devices include Juniper devices and devices that announce http://tail-f.com/ns/netconf/inactive/1.0 capability. NSO itself implements http://tail-f.com/ns/netconf/inactive/1.0 capability which is formally defined in tailf-netconf-inactive YANG module.
To recap, a node that is marked as inactive exists in the data store but is not used by the server. The nodes announced as inactive by the device will also be inactive in the device's configuration in NSO, and activating/deactivating a node in NSO will push the corresponding change to the device. This also means that for NSO to be able to manage inactive configuration, both /ncs-config/enable-inactive and /ncs-config/netconf-north-bound/capabilities/inactive need to be enabled in ncs.conf.
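Based on the two paths named above, the ncs.conf fragment might look like the following sketch; the exact element nesting is an assumption and may differ between NSO versions, so check the ncs.conf(5) man page:

```xml
<enable-inactive>true</enable-inactive>

<netconf-north-bound>
  <capabilities>
    <inactive>true</inactive>
  </capabilities>
</netconf-north-bound>
```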
If the inactive feature is disabled in ncs.conf, NSO will still be able to manage devices that have inactive configuration in their datastore, but the inactive attribute will be ignored, so the data will appear as active in NSO and it would not be possible for NSO to activate/deactivate such nodes in the device.
container interface {
list ethernet {
key id;
leaf id {
type uint16 {
range "0..66";
}
}
leaf description {
type string {
length "1..80";
}
}
leaf mtu {
type uint16 {
range "64..18000";
}
}
}
}

interface ethernet ID
description WORD
mtu INTEGER<64-18000>

interface ethernet 0
description "customer a"
mtu 1400
!
interface ethernet 1
description "customer b"
mtu 1500
!

container interface {
tailf:info "Configure interfaces";
list ethernet {
tailf:info "FastEthernet IEEE 802.3";
key id;
leaf id {
type uint16 {
range "0..66";
tailf:info "<0-66>;;FastEthernet interface number";
}
}
leaf description {
type string {
length "1..80";
tailf:info "LINE;;Up to 80 characters describing this interface";
}
}
leaf mtu {
type uint16 {
range "64..18000";
tailf:info "<64-18000>;;MTU size in bytes";
}
}
}
}

container dns {
leaf domain {
type string;
}
list server {
ordered-by user;
tailf:cli-suppress-mode;
key ip;
leaf ip {
type inet:ipv4-address;
}
}
}

dns domain WORD
dns server IPAddress

dns domain tail-f.com
dns server 192.168.1.42
dns server 8.8.8.8

container aaa {
tailf:info "AAA view";
tailf:cli-add-mode;
tailf:cli-full-command;
...
}

container police {
// To cover also the syntax where cir, bc and be
// doesn't have to be explicitly specified
tailf:info "Police";
tailf:cli-add-mode;
tailf:cli-mode-name "config-pmap-c-police";
tailf:cli-incomplete-command;
tailf:cli-compact-syntax;
tailf:cli-sequence-commands {
tailf:cli-reset-siblings;
}
leaf cir {
tailf:info "Committed information rate";
tailf:cli-hide-in-submode;
type uint32 {
range "8000..2000000000";
tailf:info "<8000-2000000000>;;Bits per second";
}
}
leaf bc {
tailf:info "Conform burst";
tailf:cli-hide-in-submode;
type uint32 {
range "1000..512000000";
tailf:info "<1000-512000000>;;Burst bytes";
}
}
leaf be {
tailf:info "Excess burst";
tailf:cli-hide-in-submode;
type uint32 {
range "1000..512000000";
tailf:info "<1000-512000000>;;Burst bytes";
}
}
leaf conform-action {
tailf:cli-break-sequence-commands;
tailf:info "action when rate is less than conform burst";
type police-action-type;
}
leaf exceed-action {
tailf:info "action when rate is within conform and "+
"conform + exceed burst";
type police-action-type;
}
leaf violate-action {
tailf:info "action when rate is greater than conform + "+
"exceed burst";
type police-action-type;
}
}

container udld-timeout {
tailf:info "LACP unidirectional-detection timer";
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
tailf:cli-compact-syntax;
leaf "timeout-type" {
tailf:cli-drop-node-name;
type enumeration {
enum fast {
tailf:info "in unit of milli-seconds";
}
enum slow {
tailf:info "in unit of seconds";
}
}
}
leaf "milli" {
tailf:cli-drop-node-name;
when "../timeout-type = 'fast'" {
tailf:dependency "../timeout-type";
}
type uint16 {
range "100..1000";
tailf:info "<100-1000>;;timeout in unit of "
+"milli-seconds";
}
}
leaf "secs" {
tailf:cli-drop-node-name;
when "../timeout-type = 'slow'" {
tailf:dependency "../timeout-type";
}
type uint16 {
range "1..60";
tailf:info "<1-60>;;timeout in unit of seconds";
}
}
}

udld-timeout [fast <millisecs> | slow <secs> ]

udld-timeout fast 1000

udld-timeout fast
udld-timeout 1000

container udld-timeout {
tailf:cli-sequence-commands;
choice udld-timeout-choice {
case fast-case {
leaf fast {
tailf:info "in unit of milli-seconds";
type empty;
}
leaf milli {
tailf:cli-drop-node-name;
must "../fast" { tailf:dependency "../fast"; }
type uint16 {
range "100..1000";
tailf:info "<100-1000>;;timeout in unit of "
+"milli-seconds";
}
mandatory true;
}
}
case slow-case {
leaf slow {
tailf:info "in unit of seconds";
type empty;
}
leaf "secs" {
must "../slow" { tailf:dependency "../slow"; }
tailf:cli-drop-node-name;
type uint16 {
range "1..60";
tailf:info "<1-60>;;timeout in unit of seconds";
}
mandatory true;
}
}
}
}

list pool {
tailf:cli-remove-before-change;
tailf:cli-suppress-mode;
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
tailf:cli-compact-syntax;
tailf:cli-incomplete-command;
key name;
leaf name {
type string {
length "1..31";
tailf:info "WORD<length:1-31> Pool Name or Pool Group";
}
}
leaf ipstart {
mandatory true;
tailf:cli-incomplete-command;
tailf:cli-drop-node-name;
type inet:ipv4-address {
tailf:info "A.B.C.D;;Start IP Address of NAT pool";
}
}
leaf ipend {
mandatory true;
tailf:cli-incomplete-command;
tailf:cli-drop-node-name;
type inet:ipv4-address {
tailf:info "A.B.C.D;;End IP Address of NAT pool";
}
}
leaf netmask {
mandatory true;
tailf:info "Configure Mask for Pool";
type string {
tailf:info "/nn or A.B.C.D;;Configure Mask for Pool";
}
}
leaf gateway {
tailf:info "Gateway IP";
tailf:cli-optional-in-sequence;
type inet:ipv4-address {
tailf:info "A.B.C.D;;Gateway IP";
}
}
leaf ha-group-ip {
tailf:info "HA Group ID";
tailf:cli-optional-in-sequence;
type uint16 {
range "1..31";
tailf:info "<1-31>;;HA Group ID 1 to 31";
}
}
leaf ha-use-all-ports {
tailf:info "Specify this if services using this NAT pool "
+"are transaction based (immediate aging)";
tailf:cli-optional-in-sequence;
type empty;
when "../ha-group-ip" {
tailf:dependency "../ha-group-ip";
}
}
leaf vrid {
tailf:info "VRRP vrid";
tailf:cli-optional-in-sequence;
when "not(../ha-group-ip)" {
tailf:dependency "../ha-group-ip";
}
type uint16 {
range "1..31";
tailf:info "<1-31>;;VRRP vrid 1 to 31";
}
}
leaf ip-rr {
tailf:info "Use IP address round-robin behavior";
type empty;
}
}

list address {
key ip;
tailf:cli-suppress-mode;
tailf:info "Set the IP address of an interface";
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
tailf:cli-compact-syntax;
leaf ip {
tailf:cli-drop-node-name;
type inet:ipv6-prefix;
}
leaf link-local {
type empty;
tailf:info "Configure an IPv6 link local address";
tailf:cli-break-sequence-commands;
}
leaf anycast {
type empty;
tailf:info "Configure an IPv6 anycast address";
tailf:cli-break-sequence-commands;
}
}

ip 1.1.1.1 link-local anycast

ip 1.1.1.1 anycast link-local

list service-group {
tailf:info "Service Group";
tailf:cli-remove-before-change;
key "name";
leaf name {
type string {
length "1..63";
tailf:info "NAME<length:1-63>;;SLB Service Name";
}
}
leaf tcpudp {
mandatory true;
tailf:cli-drop-node-name;
tailf:cli-hide-in-submode;
type enumeration {
enum tcp { tailf:info "TCP LB service"; }
enum udp { tailf:info "UDP LB service"; }
}
}
leaf backup-server-event-log {
tailf:info "Send log info on back up server events";
tailf:cli-full-command;
type empty;
}
leaf extended-stats {
tailf:info "Send log info on back up server events";
tailf:cli-full-command;
type empty;
}
...
}

list community {
tailf:info "Define a community who can access the SNMP engine";
key "read remote";
tailf:cli-suppress-mode;
tailf:cli-compact-syntax;
tailf:cli-reset-container;
leaf read {
tailf:cli-expose-key-name;
tailf:info "read only community";
type string {
length "1..31";
tailf:info "WORD<length:1-31>;;SNMPv1/v2c community string";
}
}
leaf remote {
tailf:cli-expose-key-name;
tailf:info "Specify a remote SNMP entity to which the user belongs";
type string {
length "1..31";
tailf:info "Hostname or A.B.C.D;;IP address of remote SNMP "
+"entity(length: 1-31)";
}
}
leaf oid {
tailf:info "specific the oid"; // SIC
tailf:cli-prefix-key {
tailf:cli-before-key 2;
}
type string {
length "1..31";
tailf:info "WORD<length:1-31>;;The oid qvalue";
}
}
leaf mask {
tailf:cli-drop-node-name;
type string {
tailf:info "/nn or A.B.C.D;;The mask";
}
}
}

community read WORD [oid WORD] remote HOSTNAME [/nn or A.B.C.D]

leaf source-ip {
tailf:cli-remove-before-change;
tailf:cli-no-value-on-delete;
tailf:cli-full-command;
type inet:ipv6-address {
tailf:info "X:X::X:X;;Source IPv6 address used by DNS";
}
}

no source-ip
source-ip 2.2.2.2

no source-ip 1.1.1.1
source-ip 2.2.2.2

list access-list {
tailf:info "Configure Access List";
tailf:cli-suppress-mode;
key id;
leaf id {
type uint16 {
range "1..199";
}
}
list rules {
ordered-by user;
tailf:cli-suppress-mode;
tailf:cli-drop-node-name;
tailf:cli-show-long-obu-diffs;
key "txt";
leaf txt {
tailf:cli-multi-word-key;
type string;
}
}
}

access-list 90 permit host 10.34.97.124
access-list 90 permit host 172.16.4.224

access-list 90 permit host 10.34.97.124
access-list 90 permit host 10.34.94.109
access-list 90 permit host 172.16.4.224

no access-list 90 permit host 172.16.4.224
access-list 90 permit host 10.34.94.109
access-list 90 permit host 172.16.4.224

# after permit host 10.34.97.124
access-list 90 permit host 10.34.94.109

leaf state {
tailf:info "Activate/Block the user(s)";
type enumeration {
enum active {
tailf:info "Activate/Block the user(s)";
}
enum block {
tailf:info "Activate/Block the user(s)";
}
}
default "active";
}

no state block

state active

leaf state {
tailf:info "Activate/Block the user(s)";
type enumeration {
enum active {
tailf:info "Activate/Block the user(s)";
}
enum block {
tailf:info "Activate/Block the user(s)";
}
}
default "active";
tailf:cli-trim-default;
tailf:cli-show-with-default;
}

state block

state active

state block

<empty>

state block

interface FastEthernet0/0/1   interface FastEthernet0/0/1
 mtu 1500                      mtu 1400
!                             !

interface FastEthernet0/0/1

interface FastEthernet0/0/1   interface FastEthernet0/0/1
!                              mtu 1400
                              !

interface FastEthernet0/0/1
no mtu 1400

no mtu

leaf mtu {
tailf:cli-no-value-on-delete;
type uint16;
}

aaa local-user password cipher "C>9=UF*^V/'Q=^Q`MAF4<1!!"

no aaa local-user password

// aaa local-user
container password {
tailf:info "Set password";
tailf:cli-flatten-container;
leaf cipher {
tailf:cli-no-value-on-delete;
tailf:cli-no-name-on-delete;
type string {
tailf:info "STRING<1-16>/<24>;;The UNENCRYPTED/"
+"ENCRYPTED password string";
}
}
}

public void show(NedWorker worker, String toptag)
throws NedException, IOException {
session.setTracer(worker);
try {
int i;
if (toptag.equals("interface")) {
session.print("show running-config | exclude able-management\n");
...
} else {
worker.showCliResponse("");
}
} catch (...) { ... }
}

if (toptag.equals("interface")) {
session.print("show running-config | exclude able-management\n");
session.expect("show running-config | exclude able-management");
String res = session.expect(".*#");
i = res.indexOf("Current configuration :");
if (i >= 0) {
int n = res.indexOf("\n", i);
res = res.substring(n+1);
}
i = res.lastIndexOf("\nend");
if (i >= 0) {
res = res.substring(0,i);
}
worker.showCliResponse(res);
} else {
// only respond to first toptag since the A10
// cannot show different parts of the config.
worker.showCliResponse("");
}

if (toptag.equals("context")) {
session.print("show configuration\n");
session.expect("show configuration");
String res = session.expect(".*\\[.*\\]#");
snmp = res.indexOf("\nsnmp");
home = res.indexOf("\nsession-home");
port = res.indexOf("\nport");
tunnel = res.indexOf("\ntunnel");
if (snmp >= 0) {
res = res.substring(0,snmp)+res.substring(home,port)+
res.substring(tunnel);
} else if (port >= 0) {
res = res.substring(0,port)+res.substring(tunnel);
}
worker.showCliResponse(res);
} else {
// only respond to first toptag since the STOKEOS
// cannot show different parts of the config.
worker.showCliResponse("");
}

ip route 10.40.0.0 /14 10.16.156.65 cpu-process

if (toptag.equals("interface")) {
session.print("show running-config | exclude able-management\n");
session.expect("show running-config | exclude able-management");
String res = session.expect(".*#");
// look for the string cpu-process and remove it
i = res.indexOf(" cpu-process");
while (i >= 0) {
res = res.substring(0,i)+res.substring(i+12);
i = res.indexOf(" cpu-process");
}
worker.showCliResponse(res);
} else {
// only respond to first toptag since the A10
// cannot show different parts of the config.
worker.showCliResponse("");
}

if (toptag.equals("aaa")) {
session.print("display current-config\n");
session.expect("display current-config");
String res = session.expect("return");
session.expect(".*>");
// split into lines, and process each line
lines = res.split("\n");
for(i=0 ; i < lines.length ; i++) {
int c;
// delete the version information, not really config
if (lines[i].indexOf("version ") == 1) {
lines[i] = "";
}
else if (lines[i].indexOf("undo ") >= 0) {
lines[i] = lines[i].replaceAll("undo ", "no ");
}
}
worker.showCliResponse(join(lines, "\n"));
} else {
// only respond to first toptag since the H3C
// cannot show different parts of the config.
// (well almost)
worker.showCliResponse("");
}

container disallow {
container port {
tailf:info "The port of mux-vlan";
container trunk {
tailf:info "Specify current Trunk port's "
+"characteristics";
container permit {
tailf:info "allowed VLANs";
leaf-list vlan {
tailf:info "allowed VLAN";
tailf:cli-range-list-syntax;
type uint16 {
range "1..4094";
}
}
}
}
}
}

if (toptag.equals("aaa")) {
session.print("display current-config\n");
session.expect("display current-config");
String res = session.expect("return");
session.expect(".*>");
// process each line
lines = res.split("\n");
for(i=0 ; i < lines.length ; i++) {
int c;
if (lines[i].indexOf("no port") >= 0) {
lines[i] = lines[i].replaceAll("no ", "disallow ");
}
}
worker.showCliResponse(join(lines, "\n"));
} else {
// only respond to first toptag since the H3C
// cannot show different parts of the config.
// (well almost)
worker.showCliResponse("");
}

lines = data.split("\n");
for (i=0 ; i < lines.length ; i++) {
if (lines[i].indexOf("disallow port ") == 0) {
lines[i] = lines[i].replace("disallow ", "undo ");
}
}

if (toptag.equals("aaa")) {
session.print("display current-config\n");
session.expect("display current-config");
String res = session.expect("return");
session.expect(".*>");
// process each line
lines = res.split("\n");
for(i=0 ; i < lines.length ; i++) {
if ((c=lines[i].indexOf("cipher ")) >= 0) {
String line = lines[i];
String pass = line.substring(c+7);
String rest;
int s = pass.indexOf(" ");
if (s >= 0) {
rest = pass.substring(s);
pass = pass.substring(0,s);
} else {
s = pass.indexOf("\r");
if (s >= 0) {
rest = pass.substring(s);
pass = pass.substring(0,s);
}
else {
rest = "";
}
}
// find cipher string and quote it
lines[i] = line.substring(0,c+7)+quote(pass)+rest;
}
}
worker.showCliResponse(join(lines, "\n"));
} else {
worker.showCliResponse("");
}

lines = data.split("\n");
for (i=0 ; i < lines.length ; i++) {
if ((c=lines[i].indexOf("cipher ")) >= 0) {
String line = lines[i];
String pass = line.substring(c+7);
String rest;
int s = pass.indexOf(" ");
if (s >= 0) {
rest = pass.substring(s);
pass = pass.substring(0,s);
} else {
s = pass.indexOf("\r");
if (s >= 0) {
rest = pass.substring(s);
pass = pass.substring(0,s);
}
else {
rest = "";
}
}
// find cipher string and quote it
lines[i] = line.substring(0,c+7)+dequote(pass)+rest;
}
}

lines = data.split("\n");
for (i=0 ; i < lines.length ; i++) {
if (lines[i].indexOf("no ") == 0) {
lines[i] = lines[i].replace("no ", "undo ");
}
}

container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
presence true;
leaf a {
type string;
}
leaf b {
type string;
}
leaf c {
type string;
}
}

foo
foo a <word>
foo a <word> b <word>
foo a <word> b <word> c <word>

container system {
tailf:info "For system events.";
container "default" {
tailf:cli-add-mode;
tailf:cli-mode-name "cfg-acct-mlist";
tailf:cli-delete-when-empty;
presence true;
container start-stop {
tailf:info "Record start and stop without waiting";
leaf group {
tailf:info "Use Server-group";
type aaa-group-type;
}
}
}
}
list FastEthernet {
tailf:info "FastEthernet IEEE 802.3";
tailf:cli-allow-join-with-key {
tailf:cli-display-joined;
}
tailf:cli-mode-name "config-if";
key name;
leaf name {
type string {
pattern "[0-9]+.*";
tailf:info "<0-66>/<0-128>;;FastEthernet interface number";
}
}
leaf FastEthernet {
tailf:info "FastEthernet IEEE 802.3";
tailf:cli-allow-join-with-value {
tailf:cli-display-joined;
}
type string;
tailf:non-strict-leafref {
path "/ios:interface/ios:FastEthernet/ios:name";
}
}
list route-map {
tailf:info "Route map tag";
tailf:cli-mode-name "config-route-map";
tailf:cli-compact-syntax;
tailf:cli-full-command;
key "name sequence";
leaf name {
type string {
tailf:info "WORD;;Route map tag";
}
}
// route-map * #
leaf sequence {
tailf:cli-drop-node-name;
type uint16 {
tailf:info "<0-65535>;;Sequence to insert to/delete from "
+"existing route-map entry";
range "0..65535";
}
}
// route-map * permit
// route-map * deny
leaf operation {
tailf:cli-drop-node-name;
tailf:cli-prefix-key {
tailf:cli-before-key 2;
}
type enumeration {
enum deny {
tailf:code-name "op_deny";
tailf:info "Route map denies set operations";
}
enum permit {
tailf:code-name "op_internet";
tailf:info "Route map permits set operations";
}
}
default permit;
}
}
// router bgp * / aggregate-address
container aggregate-address {
tailf:info "Configure BGP aggregate entries";
tailf:cli-compact-syntax;
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
leaf address {
tailf:cli-drop-node-name;
type inet:ipv4-address {
tailf:info "A.B.C.D;;Aggregate address";
}
}
leaf mask {
tailf:cli-drop-node-name;
type inet:ipv4-address {
tailf:info "A.B.C.D;;Aggregate mask";
}
}
leaf advertise-map {
tailf:cli-break-sequence-commands;
tailf:info "Set condition to advertise attribute";
type string {
tailf:info "WORD;;Route map to control attribute "
+"advertisement";
}
}
leaf as-set {
tailf:info "Generate AS set path information";
type empty;
}
leaf attribute-map {
type string {
tailf:info "WORD;;Route map for parameter control";
}
}
leaf as-override {
tailf:info "Override matching AS-number while sending update";
type empty;
}
leaf route-map {
type string {
tailf:info "WORD;;Route map for parameter control";
}
}
leaf summary-only {
tailf:info "Filter more specific routes from updates";
type empty;
}
leaf suppress-map {
tailf:info "Conditionally filter more specific routes from "
+"updates";
type string {
tailf:info "WORD;;Route map for suppression";
}
}
}
leaf dhcp {
tailf:info "Default Gateway obtained from DHCP";
tailf:cli-case-insensitive;
type empty;
}
aggregate-address 1.1.1.1
aggregate-address 255.255.255.0
aggregate-address as-set
aggregate-address summary-only
aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
container dampening {
tailf:info "Enable event dampening";
presence "true";
leaf dampening-time {
tailf:cli-drop-node-name;
tailf:cli-delete-container-on-delete;
tailf:info "<1-30>;;Half-life time for penalty";
type uint16 {
range 1..30;
}
}
}
container access-class {
tailf:info "Filter connections based on an IP access list";
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-reset-container;
tailf:cli-flatten-container;
list access-list {
tailf:cli-drop-node-name;
tailf:cli-compact-syntax;
tailf:cli-reset-container;
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
key direction;
leaf direction {
type enumeration {
enum "in" {
tailf:info "Filter incoming connections";
}
enum "out" {
tailf:info "Filter outgoing connections";
}
}
}
leaf access-list {
tailf:cli-drop-node-name;
tailf:cli-prefix-key;
type exp-ip-acl-type;
mandatory true;
}
leaf vrf-also {
tailf:info "Same access list is applied for all VRFs";
type empty;
}
}
}
// router bgp * / redistribute ospf *
list ospf {
tailf:info "Open Shortest Path First (OSPF)";
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
tailf:cli-compact-syntax;
key id;
leaf id {
type uint16 {
tailf:info "<1-65535>;;Process ID";
range "1..65535";
}
}
list vrf {
tailf:info "VPN Routing/Forwarding Instance";
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
tailf:cli-compact-syntax;
tailf:cli-diff-dependency "/ios:ip/ios:vrf";
tailf:cli-diff-dependency "/ios:vrf/ios:definition";
key name;
leaf name {
type string {
tailf:info "WORD;;VPN Routing/Forwarding Instance (VRF) name";
}
}
}
}
container authentication {
tailf:info "Authentication";
choice auth {
leaf word {
tailf:cli-drop-node-name;
tailf:cli-disallow-value "md5|text";
type string {
tailf:info "WORD;;Plain text authentication string "
+"(8 chars max)";
}
}
container md5 {
tailf:info "Use MD5 authentication";
leaf key-chain {
tailf:info "Set key chain";
type string {
tailf:info "WORD;;Name of key-chain";
}
}
}
}
}
container ntp {
tailf:info "Configure NTP";
// interface * / ntp broadcast
container broadcast {
tailf:info "Configure NTP broadcast service";
//tailf:cli-display-separated;
presence true;
container client {
tailf:info "Listen to NTP broadcasts";
tailf:cli-full-command;
presence true;
}
}
}
ntp broadcast
ntp broadcast client
ntp broadcast client
container exec-timeout {
tailf:info "Set the EXEC timeout";
tailf:cli-sequence-commands;
tailf:cli-compact-syntax;
leaf minutes {
tailf:info "<0-35791>;;Timeout in minutes";
tailf:cli-drop-node-name;
type uint32;
}
leaf seconds {
tailf:info "<0-2147483>;;Timeout in seconds";
tailf:cli-drop-node-name;
type uint32;
}
}
// interface * / vrf forwarding
// interface * / ip vrf forwarding
choice vrf-choice {
container ip-vrf {
tailf:cli-no-keyword;
tailf:cli-drop-node-name;
container ip {
container vrf {
leaf forwarding {
tailf:info "Configure forwarding table";
type string {
tailf:info "WORD;;VRF name";
}
tailf:non-strict-leafref {
path "/ios:ip/ios:vrf/ios:name";
}
}
}
}
}
container vrf {
tailf:info "VPN Routing/Forwarding parameters on the interface";
// interface * / vrf forwarding
leaf forwarding {
tailf:info "Configure forwarding table";
type string {
tailf:info "WORD;;VRF name";
}
tailf:non-strict-leafref {
path "/ios:vrf/ios:definition/ios:name";
}
}
}
// interface * / ip
container ip {
tailf:info "Interface Internet Protocol config commands";
}
container address-family {
tailf:info "Enter Address Family command mode";
container ipv6 {
tailf:info "Address family";
container unicast {
tailf:cli-add-mode;
tailf:cli-mode-name "config-router-af";
tailf:info "Address Family Modifier";
tailf:cli-full-command;
tailf:cli-exit-command "exit-address-family" {
tailf:info "Exit from Address Family configuration "
+"mode";
}
}
}
}
container interface {
tailf:info "Configure interfaces";
tailf:cli-diff-dependency "/ios:vrf";
tailf:cli-explicit-exit;
// interface Loopback
list Loopback {
tailf:info "Loopback interface";
tailf:cli-allow-join-with-key {
tailf:cli-display-joined;
}
tailf:cli-mode-name "config-if";
tailf:cli-suppress-key-abbreviation;
// tailf:cli-full-command;
key name;
leaf name {
type string {
pattern "([0-9\.])+";
tailf:info "<0-2147483647>;;Loopback interface number";
}
}
uses interface-common-grouping;
}
}
// ip explicit-path name *
list explicit-path {
tailf:info "Configure explicit-path";
tailf:cli-mode-name "cfg-ip-expl-path";
key name;
leaf name {
tailf:info "Specify explicit path by name";
tailf:cli-expose-key-name;
type string {
tailf:info "WORD;;Enter name";
}
}
}
// class-map * / match cos
leaf-list cos {
tailf:info "IEEE 802.1Q/ISL class of service/user priority values";
tailf:cli-flat-list-syntax;
type uint16 {
range "0..7";
tailf:info "<0-7>;;Enter up to 4 class-of-service values"+
" separated by white-spaces";
}
}
container foo {
tailf:cli-compact-syntax;
container inbound {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-flatten-container;
leaf a {
tailf:cli-drop-node-name;
type uint16;
}
leaf b {
tailf:cli-drop-node-name;
type uint16;
}
}
container outbound {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-flatten-container;
leaf a {
tailf:cli-drop-node-name;
type uint16;
}
leaf b {
tailf:cli-drop-node-name;
type uint16;
}
}
leaf mtu {
type uint16;
}
}
container transceiver {
tailf:info "Select from transceiver configuration commands";
container "type" {
tailf:info "type keyword";
// transceiver type all
container all {
tailf:cli-add-mode;
tailf:cli-mode-name "config-xcvr-type";
tailf:cli-full-command;
// transceiver type all / monitoring
container monitoring {
tailf:info "Enable/disable monitoring";
presence true;
leaf interval {
tailf:info "Set interval for monitoring";
type uint16 {
tailf:info "<300-3600>;;Time interval for monitoring "+
"transceiver in seconds";
range "300..3600";
}
}
}
}
}
}
// event manager applet * / action * info
container info {
tailf:info "Obtain system specific information";
// event manager applet * / action info type
container "type" {
tailf:info "Type of information to obtain";
tailf:cli-full-no;
container snmp {
tailf:info "SNMP information";
// event manager applet * / action info type snmp var
container var {
tailf:info "Trap variable";
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-reset-container;
leaf variable-name {
tailf:cli-drop-node-name;
tailf:cli-incomplete-command;
type string {
tailf:info "WORD;;Trap variable name";
}
}
}
}
}
}
// event manager applet *
list applet {
tailf:info "Register an Event Manager applet";
tailf:cli-mode-name "config-applet";
tailf:cli-exit-command "exit" {
tailf:info "Exit from Event Manager applet configuration submode";
}
key name;
leaf name {
type string {
tailf:info "WORD;;Name of the Event Manager applet";
}
}
// event manager applet * authorization
leaf authorization {
tailf:info "Specify an authorization type for the applet";
tailf:cli-hide-in-submode;
type enumeration {
enum bypass {
tailf:info "EEM aaa authorization type bypass";
}
}
}
// event manager applet * class
leaf class {
tailf:info "Specify a class for the applet";
tailf:cli-hide-in-submode;
type string {
tailf:info "Class A-Z | default - default class";
pattern "[A-Z]|default";
}
}
// event manager applet * trap
leaf trap {
tailf:info "Generate an SNMP trap when applet is triggered.";
tailf:cli-hide-in-submode;
type empty;
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
presence true;
leaf a {
type string;
}
leaf b {
type string;
}
leaf c {
type string;
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-incomplete-command;
presence true;
leaf a {
tailf:cli-incomplete-command;
type string;
}
leaf b {
type string;
}
leaf c {
type string;
}
}
no foo
no foo a <word>
no foo a <word> b <word>
no foo a <word> b <word> c <word>
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-incomplete-command;
tailf:cli-incomplete-no;
presence true;
leaf a {
tailf:cli-incomplete-command;
tailf:cli-incomplete-no;
type string;
}
leaf b {
tailf:cli-incomplete-no;
type string;
}
leaf c {
type string;
}
}
// class-map * / source-address
container source-address {
tailf:info "Source address";
leaf-list mac {
tailf:info "MAC address";
type string {
tailf:info "H.H.H;;MAC address";
}
}
}
source-address {
mac [ 1410.9fd8.8999 a110.9fd8.8999 bb10.9fd8.8999 ]
}
source-address mac 1410.9fd8.8999
source-address mac a110.9fd8.8999
source-address mac bb10.9fd8.8999
// class-map * / source-address
container source-address {
tailf:info "Source address";
leaf-list mac {
tailf:info "MAC address";
tailf:cli-list-syntax;
type string {
tailf:info "H.H.H;;MAC address";
}
}
}
container transceiver {
tailf:info "Select from transceiver configuration commands";
container "type" {
tailf:info "type keyword";
// transceiver type all
container all {
tailf:cli-add-mode;
tailf:cli-mode-name "config-xcvr-type";
tailf:cli-full-command;
// transceiver type all / monitoring
container monitoring {
tailf:info "Enable/disable monitoring";
presence true;
leaf interval {
tailf:info "Set interval for monitoring";
type uint16 {
tailf:info "<300-3600>;;Time interval for monitoring "+
"transceiver in seconds";
range "300..3600";
}
}
}
}
}
}
// event manager applet * / description
leaf "description" {
tailf:info "Add or modify an applet description";
tailf:cli-full-command;
tailf:cli-multi-value;
type string {
tailf:info "LINE;;description";
}
}
container permit {
tailf:info "Specify community to accept";
presence "Specify community to accept";
list permit-list {
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
tailf:cli-drop-node-name;
key expr;
leaf expr {
tailf:cli-multi-word-key {
tailf:cli-max-words 10;
}
type string {
tailf:info "LINE;;An ordered list as a regular-expression";
}
}
}
}
container ospf {
tailf:info "OSPF routes Administrative distance";
leaf external {
tailf:info "External routes";
type uint32 {
range "1.. 255";
tailf:info "<1-255>;;Distance for external routes";
}
tailf:cli-suppress-no;
tailf:cli-no-value-on-delete;
tailf:cli-no-name-on-delete;
}
leaf inter-area {
tailf:info "Inter-area routes";
type uint32 {
range "1.. 255";
tailf:info "<1-255>;;Distance for inter-area routes";
}
tailf:cli-suppress-no;
tailf:cli-no-name-on-delete;
tailf:cli-no-value-on-delete;
}
leaf intra-area {
tailf:info "Intra-area routes";
type uint32 {
range "1.. 255";
tailf:info "<1-255>;;Distance for intra-area routes";
}
tailf:cli-suppress-no;
tailf:cli-no-name-on-delete;
tailf:cli-no-value-on-delete;
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
presence true;
leaf a {
tailf:cli-incomplete-command;
type string;
}
leaf b {
tailf:cli-incomplete-command;
type string;
}
leaf c {
type string;
}
}
container radius {
tailf:info "RADIUS server configuration command";
// radius filter *
list filter {
tailf:info "Packet filter configuration";
key id;
leaf id {
type string {
tailf:info "WORD;;Name of the filter (max 31 characters, longer will "
+"be rejected";
}
}
leaf match {
tailf:cli-drop-node-name;
tailf:cli-prefix-key;
type enumeration {
enum match-all {
tailf:info "Filter if all of the attributes matches";
}
enum match-any {
tailf:info "Filter if any of the attributes matches";
}
}
}
}
list route-map {
tailf:info "Route map tag";
tailf:cli-mode-name "config-route-map";
tailf:cli-compact-syntax;
tailf:cli-full-command;
key "name sequence";
leaf name {
type string {
tailf:info "WORD;;Route map tag";
}
}
// route-map * #
leaf sequence {
tailf:cli-drop-node-name;
type uint16 {
tailf:info "<0-65535>;;Sequence to insert to/delete from "
+"existing route-map entry";
range "0..65535";
}
}
// route-map * permit
// route-map * deny
leaf operation {
tailf:cli-drop-node-name;
tailf:cli-prefix-key {
tailf:cli-before-key 2;
}
type enumeration {
enum deny {
tailf:code-name "op_deny";
tailf:info "Route map denies set operations";
}
enum permit {
tailf:code-name "op_internet";
tailf:info "Route map permits set operations";
}
}
default permit;
}
// route-map * / description
leaf "description" {
tailf:info "Route-map comment";
tailf:cli-multi-value;
type string {
tailf:info "LINE;;Comment up to 100 characters";
length "0..100";
}
}
}
// spanning-tree vlans-root
container vlans-root {
tailf:cli-drop-node-name;
list vlan {
tailf:info "VLAN Switch Spanning Tree";
tailf:cli-range-list-syntax;
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
key id;
leaf id {
type uint16 {
tailf:info "WORD;;vlan range, example: 1,3-5,7,9-11";
range "1..4096";
}
}
}
}
vlan 1
vlan 2
vlan 3
vlan 5
vlan 7
...
leaf-list vlan {
tailf:info "Range of vlans to add to the instance mapping";
tailf:cli-range-list-syntax;
type uint16 {
tailf:info "LINE;;vlan range ex: 1-65, 72, 300 -200";
}
}
// ip vrf * / rd
leaf rd {
tailf:info "Specify Route Distinguisher";
tailf:cli-full-command;
tailf:cli-remove-before-change;
type rd-type;
}
// controller * / channel-group
list channel-group {
tailf:info "Specify the timeslots to channel-group "+
"mapping for an interface";
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
key number;
leaf number {
type uint8 {
range "0..30";
}
}
leaf-list timeslots {
tailf:cli-replace-all;
tailf:cli-range-list-syntax;
type uint16;
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands {
tailf:cli-reset-siblings;
}
presence true;
leaf a {
type string;
}
leaf b {
type string;
}
leaf c {
type string;
}
}
foo
foo a <word>
foo a <word> b <word>
foo a <word> b <word> c <word>
// license udi
container udi {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-reset-container;
leaf pid {
type string;
}
leaf sn {
type string;
}
}
container ietf {
tailf:info "IETF graceful restart";
container helper {
tailf:info "helper support";
presence "helper support";
leaf disable {
tailf:cli-reset-container;
tailf:cli-delete-container-on-delete;
tailf:info "disable helper support";
type empty;
}
leaf strict-lsa-checking {
tailf:info "enable helper strict LSA checking";
type empty;
}
}
list foo {
ordered-by user;
tailf:cli-show-long-obu-diffs;
tailf:cli-suppress-mode;
key id;
leaf id {
type string;
}
}
foo a
foo b
foo c
foo d
foo a
foo b
foo e
foo c
foo d
no foo c
no foo d
foo e
foo c
foo d
// ipv6 cef
container cef {
tailf:info "Cisco Express Forwarding";
tailf:cli-display-separated;
tailf:cli-show-no;
presence true;
}
// interface * / shutdown
leaf shutdown {
// Note: default to "no shutdown" in order to be able to bring if up.
tailf:info "Shutdown the selected interface";
tailf:cli-full-command;
tailf:cli-show-no;
type empty;
}leaf "input" {
tailf:cli-boolean-no;
tailf:cli-show-with-default;
tailf:cli-full-command;
type boolean;
default true;
}
list class-map {
tailf:info "Configure QoS Class Map";
tailf:cli-mode-name "config-cmap";
tailf:cli-suppress-list-no;
tailf:cli-delete-when-empty;
tailf:cli-no-key-completion;
tailf:cli-sequence-commands;
tailf:cli-full-command;
// class-map *
key name;
leaf name {
tailf:cli-disallow-value "type|match-any|match-all";
type string {
tailf:info "WORD;;class-map name";
}
}
}
list foo {
key id;
leaf id {
type string;
}
leaf mtu {
type uint16;
}
}
foo a {
mtu 1400;
}
foo b {
mtu 1500;
}
foo a
mtu 1400
!
foo b
mtu 1500
!
list foo {
tailf:cli-suppress-mode;
key id;
leaf id {
type string;
}
leaf mtu {
type uint16;
}
}
foo a mtu 1400
foo b mtu 1500
list interface {
tailf:cli-key-format "$(1)/$(2)/$(3):$(4)";
key "chassis slot subslot number";
leaf chassis {
type uint8 {
range "1 .. 4";
}
}
leaf slot {
type uint8 {
range "1 .. 16";
}
}
leaf subslot {
type uint8 {
range "1 .. 48";
}
}
leaf number {
type uint8 {
range "1 .. 255";
}
}
}
interface 1/2/3:4
list foo {
tailf:cli-recursive-delete;
key "id"";
leaf id {
type string;
}
leaf a {
type uint8;
}
leaf b {
type uint8;
}
leaf c {
type uint8;
}
}
# show full
foo bar
a 1
b 2
c 3
!
# ex
# no foo bar
# show configuration
foo bar
no a 1
no b 2
no c 3
!
no foo bar
#
list foo {
tailf:cli-recursive-delete;
key "id"";
leaf id {
type string;
}
leaf a {
type uint8;
}
leaf b {
tailf:cli-suppress-no;
type uint8;
}
leaf c {
type uint8;
}
}
(config-foo-bar)# no ?
Possible completions:
a
c
---
(config-foo-bar)# no ?
Possible completions:
a
c
---
service Modify use of network based services
(config-foo-bar)# ex
(config)# no foo bar
(config)# show config
foo bar
no a 1
no b 2
no c 3
!
no foo bar
(config)#
list foo {
key "id"";
leaf id {
type string;
}
leaf a {
type uint8;
default 1;
}
leaf b {
tailf:cli-trim-default;
type uint8;
default 2;
}
}
(config)# foo bar
(config-foo-bar)# a ?
Possible completions:
<unsignedByte>[1]
(config-foo-bar)# a 2 b ?
Possible completions:
<unsignedByte>[2]
(config-foo-bar)# a 2 b 3
(config-foo-bar)# commit
Commit complete.
(config-foo-bar)# show full
foo bar
a 2
b 3
!
(config-foo-bar)# a 1 b 2
(config-foo-bar)# commit
Commit complete.
(config-foo-bar)# show full
foo bar
a 1
!
list foo {
key "id";
leaf id {
type string;
}
leaf a {
type uint8;
}
container x {
leaf b {
type uint8;
tailf:cli-embed-no-on-delete;
}
}
}
(config-foo-bar)# show full
foo bar
a 1
x b 3
!
(config-foo-bar)# no x
(config-foo-bar)# show conf
foo bar
x no b 3
!
list interface {
key name;
leaf name {
type string;
tailf:cli-allow-range;
}
leaf number {
type uint32;
}
}
(config)# interface eth0-100 number 90
Error: no matching instances found
(config)# interface
Possible completions:
<name:string> eth0 eth1 eth2 eth3 eth4 eth5 range
(config)# interface eth0-3 number 100
(config-interface-eth0-3)# ex
(config)# interface eth4-5 number 200
(config-interface-eth4-5)# commit
Commit complete.
(config-interface-eth4-5)# ex
(config)# do show running-config interface
interface eth0
number 100
!
interface eth1
number 100
!
interface eth2
number 100
!
interface eth3
number 100
!
interface eth4
number 200
!
interface eth5
number 200
!
list foo {
tailf:cli-case-sensitive;
key "id";
leaf id {
type string;
}
leaf a {
type string;
}
}
(config)# foo bar a test
(config-foo-bar)# ex
(config)# commit
Commit complete.
(config)# do show running-config foo
foo bar
a test
!
(config)# foo bar a Test
(config-foo-bar)# ex
(config)# foo Bar a TEST
(config-foo-Bar)# commit
Commit complete.
(config-foo-Bar)# ex
(config)# do show running-config foo
foo Bar
a TEST
!
foo bar
a Test
!
list foo {
tailf:cli-expose-ns-prefix;
key "id"";
leaf id {
type string;
}
leaf a {
type uint8;
}
leaf b {
type uint8;
}
leaf c {
type uint8;
}
}
(config)# foo bar
(config-foo-bar)# ?
Possible completions:
example:a
example:b
example:c
---
container policy {
list policy-list {
tailf:cli-drop-node-name;
tailf:cli-show-obu-comments;
ordered-by user;
key policyid;
leaf policyid {
type uint32 {
tailf:info "policyid;;Policy ID.";
}
}
leaf-list srcintf {
tailf:cli-flat-list-syntax {
tailf:cli-replace-all;
}
type string;
}
leaf-list srcaddr {
tailf:cli-flat-list-syntax {
tailf:cli-replace-all;
}
type string;
}
leaf-list dstaddr {
tailf:cli-flat-list-syntax {
tailf:cli-replace-all;
}
type string;
}
leaf action {
type enumeration {
enum accept {
tailf:info "Action accept.";
}
enum deny {
tailf:info "Action deny.";
}
}
admin@ncs(config-policy-4)# commit dry-run outformat cli
...
policy {
policy-list 1 {
- action accept;
+ action deny;
}
+ # after policy-list 3
+ policy-list 4 {
+ srcintf aaa;
+ srcaddr bbb;
+ dstaddr ccc;
+ }
}
}
}
}
}
leaf message {
tailf:cli-multi-line-prompt;
type string;
}
(config)# message aaa
(config)# message
(<string>) (aaa):
[Multiline mode, exit with ctrl-D.]
> Lorem ipsum dolor sit amet, consectetuer adipiscing elit.
> Aenean commodo ligula eget dolor. Aenean massa.
> Cum sociis natoque penatibus et magnis dis parturient montes,
> nascetur ridiculus mus. Donec quam felis, ultricies nec,
> pellentesque eu, pretium quis, sem.
>
(config)# commit
Commit complete.
ubuntu(config)# do show running-config message
message "Lorem ipsum dolor sit amet, consectetuer adipiscing elit. \nAenean
commodo ligula eget dolor. Aenean massa. \nCum sociis natoque penatibus et
magnis dis parturient montes, \nnascetur ridiculus mus. Donec quam felis,
ultricies nec,\n pellentesque eu, pretium quis, sem. \n"
(config)#
container foo {
list bar {
key id;
leaf id {
type uint32;
}
leaf a {
type uint32;
}
leaf b {
tailf:link "/example:foo/example:bar[id=current()/../id]/example:a";
type uint32;
}
}
}
(config)# foo bar 1
ubuntu(config-bar-1)# ?
Possible completions:
a
b
---
commit Commit current set of changes
describe Display transparent command information
exit Exit from current mode
help Provide help information
no Negate a command or set its defaults
pwd Display current mode path
top Exit to top level and optionally run command
(config-bar-1)# b 100
(config-bar-1)# show config
foo bar 1
b 100
!
(config-bar-1)# commit
Commit complete.
(config-bar-1)# show full
foo bar 1
a 100
b 100
!
(config-bar-1)# a 20
(config-bar-1)# commit
Commit complete.
(config-bar-1)# show full
foo bar 1
a 20
b 20
!
public class NedCapability {
public String str;
public String uri;
public String module;
public String features;
public String revision;
public String deviations;
....
leaf keepalive {
tailf:info "Enable keepalive";
tailf:cli-boolean-no;
type boolean;
}
leaf shutdown {
// Note: default to "no shutdown" in order to be able to bring if up.
tailf:info "Shutdown the selected interface";
tailf:cli-full-command;
tailf:cli-show-no;
type empty;
}
aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
aggregate-address 1.1.1.1 255.255.255.0 as-set
aggregate-address 1.1.1.1
aggregate-address 255.255.255.0
aggregate-address as-set
aggregate-address summary-only
aggregate-address 1.1.1.1 255.255.255.0 as-set summary-only
list b {
key "id";
leaf id {
type string;
}
leaf name {
type string;
}
leaf y {
type string;
}
}
list a {
key id;
leaf id {
tailf:cli-diff-dependency "/c[id=current()/../id]" {
tailf:cli-trigger-on-set;
}
tailf:cli-diff-dependency "/b[id=current()/../id]";
type string;
}
}
list c {
key id;
leaf id {
tailf:cli-diff-dependency "/a[id=current()/../id]" {
tailf:cli-trigger-on-set;
}
tailf:cli-diff-dependency "/b[id=current()/../id]";
type string;
}
}
tailf:cli-diff-dependency "/a[id=current()/../id]" {
tailf:cli-trigger-on-set;
}
b foo
!
a foo
!
no a foo
c foo
no c foo
a foo
foo inbound 1
foo inbound 2
foo outbound 3
foo outbound 4
foo mtu 1500
container htest {
tailf:cli-add-mode;
container param {
tailf:cli-hide-in-submode;
tailf:cli-flatten-container;
tailf:cli-compact-syntax;
leaf a {
type uint16;
}
leaf b {
type uint16;
}
}
leaf mtu {
type uint16;
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
tailf:cli-incomplete-command;
presence true;
leaf a {
tailf:cli-incomplete-command;
type string;
}
leaf b {
type string;
}
leaf c {
type string;
}
}
// class-map * / source-address
container source-address {
tailf:info "Source address";
list mac {
tailf:info "MAC address";
tailf:cli-suppress-mode;
key address;
leaf address {
type string {
tailf:info "H.H.H;;MAC address";
}
}
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands;
presence true;
leaf a {
tailf:cli-incomplete-command;
type string;
}
leaf b {
tailf:cli-incomplete-command;
tailf:cli-optional-in-sequence;
type string;
}
leaf c {
type string;
}
}
// voice translation-rule * / rule *
list rule {
tailf:info "Translation rule";
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
tailf:cli-incomplete-command;
tailf:cli-compact-syntax;
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
ordered-by "user";
key tag;
leaf tag {
type uint8 {
tailf:info "<1-15>;;Translation rule tag";
range "1..15";
}
}
leaf reject {
tailf:info "Call block rule";
tailf:cli-optional-in-sequence;
type empty;
}
leaf "pattern" {
tailf:cli-drop-node-name;
tailf:cli-full-command;
tailf:cli-multi-value;
type string {
tailf:info "WORD;;Matching pattern";
}
}
}
container foo {
tailf:cli-compact-syntax;
tailf:cli-sequence-commands {
tailf:cli-reset-all-siblings;
}
presence true;
leaf a {
type string;
}
leaf b {
tailf:cli-break-sequence-commands;
type string;
}
leaf c {
type string;
}
}
// ip access-list extended *
container extended {
tailf:info "Extended Access List";
tailf:cli-incomplete-command;
list ext-named-acl {
tailf:cli-drop-node-name;
tailf:cli-full-command;
tailf:cli-mode-name "config-ext-nacl";
key name;
leaf name {
type ext-acl-type;
}
list ext-access-list-rule {
tailf:cli-suppress-mode;
tailf:cli-delete-when-empty;
tailf:cli-drop-node-name;
tailf:cli-compact-syntax;
tailf:cli-show-long-obu-diffs;
ordered-by user;
key rule;
leaf rule {
tailf:cli-drop-node-name;
tailf:cli-multi-word-key;
type string {
tailf:info "deny;;Specify packets to reject\n"+
"permit;;Specify packets to forwards\n"+
"remark;;Access list entry comment";
pattern "(permit.*)|(deny.*)|(no.*)|(remark.*)|([0-9]+.*)";
}
}
}
}
}
// interface * / shutdown
leaf shutdown {
tailf:cli-boolean-no;
type boolean;
}
// interface * / shutdown
leaf shutdown {
tailf:cli-show-with-default;
tailf:cli-boolean-no;
type boolean;
default "false";
}
cmd-path-modes-only-existing: Same as path-mode-only, but NSO only supplies the path mode of existing nodes.
nocreate: Merge with a node if it exists. If it does not exist, it will not be created.
config-locked - This means that any transaction which attempts to manipulate the configuration of the device will fail. It is still possible to read the configuration of the device and send live-status commands or RPCs.
from-ip: From which IP the load activity was run.
source: Identify the source of the managed device such as the inventory system name or the name of the source file.
lock_reset_candidate
startup: This mode is used for devices that have a writable running data store and no candidate, but do support the startup data store. This is the typical mode for Cisco-like devices.
running-only: This mode is used for devices that only support writable running.
NED: The transaction is controlled by a Network Element Driver. The exact transaction mode depends on the type of the NED.
locked: This queue item is locked and will not be processed until it has been unlocked, see the action /ncs:devices/commit-queue/queue-item/unlock. A locked queue item will block all subsequent queue items that are using any device in the locked queue item.
devices commit-queue set-atomic-behaviour atomic [ true,false ]. This action sets the atomic behavior of all queue items. If these are set to false, the devices contained in these queue items can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of these queue items is preserved.
devices commit-queue wait-until-empty. This action waits until the commit queue is empty. The default is to wait indefinitely. A timeout can be specified to wait for a number of seconds. The result is empty if the queue is empty or timeout if there are still items in the queue to be processed.
devices commit-queue queue-item [ id ] lock. This action puts a lock on an existing queue item. A locked queue item will not start executing until it has been unlocked.
devices commit-queue queue-item [ id ] unlock. This action unlocks a locked queue item. Unlocking a queue item that is not locked is silently ignored.
devices commit-queue queue-item [ id ] delete. This action deletes a queue item from the queue. If other queue items are waiting for this (deleted) item, they will all automatically start to run. The devices of the deleted queue item will, after the action has been executed, be out of sync if they haven't started executing. Any error option set for the queue item will also be disregarded. The force option will brutally kill an ongoing commit. This could leave the device in a bad state. It is not recommended in any normal use case.
devices commit-queue queue-item [ id ] prune device [ ... ]. This action prunes the specified devices from the queue item. Devices that are currently being committed to will not be pruned unless the force option is used. Atomic queue items will not be affected, unless all devices in it are pruned. The force option will brutally kill an ongoing commit. This could leave the device in a bad state. It is not recommended in any normal use case.
devices commit-queue queue-item [ id ] set-atomic-behaviour atomic [ true,false ]. This action sets the atomic behavior of this queue item. If this is set to false, the devices contained in this queue item can start executing if the same devices in other non-atomic queue items ahead of it in the queue are completed. If set to true, the atomic integrity of the queue item is preserved.
devices commit-queue queue-item [ id ] wait-until-completed. This action waits until the queue item is completed. The default is to wait indefinitely. A timeout can be specified to wait for a number of seconds. The result is completed if the queue item is completed or timeout if the timer expired before the queue item was completed.
devices commit-queue queue-item [ id ] retry. This action retries devices with transient errors instead of waiting for the automatic retry attempt. The device option will let you specify the devices to retry.
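The queue-item actions above combine into a typical manual-intervention flow: lock the item so nothing executes, repair the failing device out-of-band, then unlock and retry. The transcript below is a sketch; the queue item id 1234 and device name ce0 are hypothetical values, not taken from this text:

```cli
ncs# devices commit-queue queue-item 1234 lock
! ... fix the failing device outside NSO ...
ncs# devices commit-queue queue-item 1234 unlock
ncs# devices commit-queue queue-item 1234 retry device [ ce0 ]
```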
Fix the problem with the device ce0, remove the lock item and sync from the device:
The queue item from TR1, CQ1, starts to execute. The node a cannot be created on the device. The node b was created on the device, but that change is reverted as a failed to be created.
TR2; as the queue item from TR2, CQ2, is not the same service instance and has no overlapping data on the ce1 device, this queue item executes as normal.

NSO1:TR2; service s2 dispatches the service to NSO2 through the queue item NSO1:CQ2. In the changes of NSO1:CQ2, NSO2:s2 is created.
The queue item from NSO2:TR1, NSO2:CQ1, starts to execute. The node a cannot be created on the device. The node b was created on the device, but that change is reverted as a failed to be created.
The queue item from NSO3:TR1, NSO3:CQ1, starts to execute. The changes in the queue item are committed successfully to the network.
NSO1:TR3; service s1 is applied with the old parameters. Thus the effect of NSO1:TR1 is reverted. A queue item is created to push the transaction changes to the lower nodes that didn't fail.
NSO3:TR3; service s1 is applied with the old parameters. Thus the effect of NSO3:TR1 is reverted. Since the changes in the queue item NSO3:CQ1 were successfully committed to the network, a new queue item NSO3:CQ3 is created to revert those changes.
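The two outcomes above illustrate a general rule: what a revert (re-applying the service with its old parameters) must push depends on how far each node's queue item got. The sketch below is purely illustrative; the function name and status labels are hypothetical, not an NSO API.

```python
# Illustrative only (NOT an NSO API): decide how a service revert is
# propagated, given how far each node's queue item got.
def revert_plan(item_status):
    """item_status: node name -> 'committed' | 'failed' | 'not-run'"""
    plan = {}
    for node, status in item_status.items():
        if status == "committed":
            # Changes reached the network; a new queue item is needed
            # to revert them (like NSO3:CQ3 in the example above).
            plan[node] = "create-revert-queue-item"
        elif status == "failed":
            # The device rolled the change back when the commit failed,
            # so there is nothing to revert on it.
            plan[node] = "nothing-to-push"
        else:
            # The item never ran; push the reverted transaction changes.
            plan[node] = "push-old-config"
    return plan

print(revert_plan({"NSO2": "failed", "NSO3": "committed"}))
```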

ncs(config)# devices commit-queue queue-item 9577950918 delete
ncs(config)# devices device ce0 sync-from
result true

submodule tailf-ncs-devices {
belongs-to tailf-ncs {
prefix ncs;
}
...
container devices {
......
list device {
key name;
description
"This list contains all devices managed by NCS.";
leaf name {
type string;
description
"A string uniquely identifying the managed device.";
}
leaf address {
type inet:host;
mandatory true;
description
"IP address or host name for the management interface on
the device.";
}
leaf port {
type inet:port-number;
description
"Port for the management interface on the device. If this leaf
is not configured, NCS will use a default value based on the
type of device. For example, a NETCONF device uses port 830,
a CLI device over SSH uses port 22, and a SNMP device uses
port 161.";
}
....
leaf authgroup {
....
}
container device-type {
.......
container config {
...
}
}
}

ncs(config)# show full-configuration devices device device-type
devices device ce0
device-type cli ned-id cisco-ios-cli-3.8
!
...
devices device p0
device-type cli ned-id cisco-iosxr-cli-3.5
!
devices device p1
device-type cli ned-id cisco-iosxr-cli-3.5
!
...
devices device pe2
device-type netconf ned-id juniper-junos-nc-3.0
!

ncs(config)# show full-configuration devices device
devices device ce0
address 127.0.0.1
port 10022
ssh host-key ssh-dss
...
authgroup default
device-type cli ned-id cisco-ios-cli-3.8
state admin-state unlocked
config
...
!
!
devices device ce1
address 127.0.0.1
port 10023
ssh host-key ssh-dss
...
!
authgroup default
device-type cli ned-id cisco-ios-cli-3.8
state admin-state unlocked
config
...
!
!

$ netconf-console --get-config -x "/devices/device[name='ce0']"
<?xml version="1.0" encoding="UTF-8"?>
<rpc-reply xmlns="urn:ietf:params:xml:ns:netconf:base:1.0" message-id="1">
<data>
<devices xmlns="http://tail-f.com/ns/ncs">
<device>
<name>ce0</name>
<address>127.0.0.1</address>
<port>10022</port>
<ssh>
<host-key>
<algorithm>ssh-dss</algorithm>
...
<authgroup>default</authgroup>
<device-type>
<cli>
<ned-id xmlns:cisco-ios-cli-3.8="http://tail-f.com/ns/ned-id/cisco-ios-cli-3.8">
cisco-ios-cli-3.8:cisco-ios-cli-3.8
</ned-id>
</cli>
</device-type>
<state>
<admin-state>unlocked</admin-state>
</state>
<config>
...
</config>
</device>
</devices>
</data>
</rpc-reply>

ncs# show packages
packages package cisco-ios-cli-3.8
package-version 3.8.0.1
description "NED package for Cisco IOS"
ncs-min-version [ 3.2.2 3.3 3.4 ]
directory ./state/packages-in-use/1/cisco-ios-cli-3.8
component IOSDp2
callback java-class-name [ com.tailf.packages.ned.ios.IOSDp2 ]
component IOSDp
callback java-class-name [ com.tailf.packages.ned.ios.IOSDp ]
component cisco-ios
ned cli ned-id cisco-ios-cli-3.8
ned cli java-class-name com.tailf.packages.ned.ios.IOSNedCli
ned device vendor Cisco
...
oper-status up
packages package cisco-iosxr-cli-3.5
package-version 3.5.0.7
description "NED package for Cisco IOS XR"
ncs-min-version [ 3.2.2 3.3 ]
directory ./state/packages-in-use/1/cisco-iosxr-cli-3.5
component cisco-ios-xr
ned cli ned-id cisco-iosxr-cli-3.5
ned cli java-class-name com.tailf.packages.ned.iosxr.IosxrNedCli
ned device vendor Cisco
...
oper-status up
packages package juniper-junos-nc-3.0
package-version 3.0.14.2
description "NED package for all JunOS based Juniper routers"
ncs-min-version [ 3.0.0.1 3.1 3.2 3.3 3.4 ]
directory ./state/packages-in-use/1/juniper-junos-nc-3.0
component junos
ned netconf ned-id juniper-junos-nc-3.0
ned device vendor Juniper
oper-status up
...

$ ls -l $NCS_DIR/examples.ncs/service-provider/mpls-vpn
total 160
...
drwxr-xr-x 8 stefan staff 272 Oct 1 16:57 packages
...
$ ls -l $NCS_DIR/examples.ncs/service-provider/mpls-vpn/packages
total 24
cisco-ios
cisco-iosxr
juniper-junos
...

% ncs

% ncs -c ./ncs.conf

% ncs -c ./ncs.conf --foreground --verbose

% ncs --status
vsn: 7.1
SMP support: yes, using 8 threads
Using epoll: yes
available modules: backplane,netconf,cdb,cli,snmp,webui
...
... lots of output

% ncs --help

ncs# show ncs-state
ncs-state version 7.1
ncs-state smp number-of-threads 8
ncs-state epoll true
ncs-state daemon-status started
...

ncs(config)# devices sync-from
sync-result {
device ce0
result true
}
sync-result {
device ce1
result true
}
sync-result {
device ce2
result true
...
ncs(config)# show full-configuration devices device ce0
devices device ce0
...
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/10
exit
ios:interface GigabitEthernet0/11
exit
...
[ok][2010-04-13 16:29:15]

$ ls $NCS_DIR/src/ncs/yang/

grouping sync-from-output {
list sync-result {
key device;
leaf device {
type leafref {
path "/devices/device/name";
}
}
uses sync-result;
}
}
grouping sync-result {
description
"Common result data from a 'sync' action.";
choice outformat {
leaf result {
type boolean;
}
anyxml result-xml;
leaf cli {
tailf:cli-preformatted;
type string;
}
}
leaf info {
type string;
description
"If present, contains additional information about the result.";
}
}
...
container devices {
...
tailf:action sync-from {
description
"Synchronize the configuration by pulling from all unlocked
devices.";
tailf:info "Synchronize the config by pulling from the devices";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
leaf suppress-positive-result {
type empty;
description
"Use this additional parameter to only return
devices that failed to sync.";
}
container dry-run {
presence "";
leaf outformat {
type outformat2;
description
"Report what would be done towards CDB, without
actually doing anything.";
}
}
}
output {
uses sync-from-output;
}
}
...
tailf:action sync-to {
...
}
...
list device {
description
"This list contains all devices managed by NCS.";
key name;
leaf name {
description "A string uniquely identifying the managed device";
type string;
}
...
tailf:action sync-from {
description
"Synchronize the configuration by pulling from the device.";
tailf:info "Synchronize the config by pulling from the device";
tailf:actionpoint ncsinternal {
tailf:internal;
}
input {
container dry-run {
presence "";
leaf outformat {
type outformat2;
description
"Report what would be done towards CDB, without
actually doing anything.";
}
}
}
output {
uses sync-result;
}
}
tailf:action sync-to {
...

ncs# devices device ce0 sync-to
result true

ncs# devices device ce0 sync-to dry-run
data {
...
}

$ ncs_cli -C -u admin
ncs# devices partial-sync-from path [ \
/devices/device[name='ex0']/config/r:sys/interfaces/interface[name='eth0'] \
/devices/device[name='ex1']/config/r:sys/dns/server ]
sync-result {
device ex0
result true
}
sync-result {
device ex1
result true
}
ncs# show running-config devices device ex0..1 config
devices device ex0
config
r:sys interfaces interface eth0
unit 0
enabled
!
unit 1
enabled
!
unit 2
enabled
description "My Vlan"
vlan-id 18
!
!
!
!
devices device ex1
config
r:sys dns server 10.2.3.4
!
!
!

$ ncs_cli -C -u admin
ncs# config
Entering configuration mode terminal
ncs(config)# devices device pe1 config cisco-ios-xr:snmp-server \
community public RO
ncs(config-config)# top
ncs(config)# devices device ce0 config ios:snmp-server community public RO
ncs(config-config)# devices device pe2 config junos:configuration \
snmp community public view RO
ncs(config-community-public)# top
ncs(config)# show configuration
devices device ce0
config
ios:snmp-server community public RO
!
!
devices device pe1
config
cisco-ios-xr:snmp-server community public RO
!
!
devices device pe2
config
! first
junos:configuration snmp community public
view RO
!
!
!
ncs(config)# commit dry-run outformat native
native {
device {
name ce0
data snmp-server community public RO
}
device {
name pe1
data snmp-server community public RO
}
device {
name pe2
data <rpc xmlns="urn:ietf:params:xml:ns:netconf:base:1.0"
message-id="1">
<edit-config xmlns:nc="urn:ietf:params:xml:ns:netconf:base:1.0">
<target>
<candidate/>
</target>
<test-option>test-then-set</test-option>
<error-option>rollback-on-error</error-option>
<config>
<configuration xmlns="http://xml.juniper.net/xnm/1.1/xnm">
<snmp>
<community>
<name>public</name>
<view>RO</view>
</community>
</snmp>
</configuration>
</config>
</edit-config>
</rpc>
}
}
ncs(config)# commit

ncs# devices connect
connect-result {
device ce0
result true
info (admin) Connected to ce0 - 127.0.0.1:10022
}
connect-result {
device ce1
result true
info (admin) Connected to ce1 - 127.0.0.1:10023
}
...

ncs# devices device ce0 connect
result true
info (admin) Connected to ce0 - 127.0.0.1:10022

submodule tailf-ncs-devices {
...
container devices {
...
grouping timeouts {
description
"Timeouts used when communicating with a managed device.";
leaf connect-timeout {
type uint32;
units "seconds";
description
"The timeout in seconds for new connections to managed
devices.";
}
leaf read-timeout {
type uint32;
units "seconds";
description
"The timeout in seconds used when reading data from a
managed device.";
}
leaf write-timeout {
type uint32;
units "seconds";
description
"The timeout in seconds used when writing data to a
managed device.";
}
}
...
container global-settings {
...
uses timeouts {
description
"These timeouts can be overridden per device.";
refine connect-timeout {
default 20;
}
refine read-timeout {
default 20;
}
refine write-timeout {
default 20;
}
}
....

ncs(config)# devices global-settings connect-timeout 30
ncs(config)# devices global-settings read-timeout 30
ncs(config)# commit

ncs(config)# devices profiles profile slow-devices connect-timeout 60
ncs(config-profile-slow-devices)# read-timeout 60
ncs(config-profile-slow-devices)# write-timeout 60
ncs(config-profile-slow-devices)# commit
ncs(config)# devices device ce3 device-profile slow-devices
ncs(config-device-ce3)# commit

submodule tailf-ncs-devices {
...
container devices {
...
container authgroups {
description
"Named authgroups are used to decide how to map a local NCS user to
remote authentication credentials on a managed device.
The list 'group' is used for NETCONF and CLI managed devices.
The list 'snmp-group' is used for SNMP managed devices.";
list group {
key name;
description
"When NCS connects to a managed device, it locates the
authgroup configured for that device. Then NCS looks up
the local NCS user name in the 'umap' list. If an entry is
found, the credentials configured is used when
authenticating to the managed device.
If no entry is found in the 'umap' list, the credentials
configured in 'default-map' are used.
If no 'default-map' has been configured, and the local NCS
user name is not found in the 'umap' list, the connection
to the managed device fails.";
grouping remote-user-remote-auth {
description
"Remote authentication credentials.";
choice login-credentials {
mandatory true;
case stored {
choice remote-user {
mandatory true;
leaf same-user {
type empty;
description
"If this leaf exists, the name of the local NCS user is used
as the remote user name.";
}
leaf remote-name {
type string;
description
"Remote user name.";
}
}
choice remote-auth {
mandatory true;
leaf same-pass {
type empty;
description
"If this leaf exists, the password used by the local user
when logging in to NCS is used as the remote password.";
}
leaf remote-password {
type tailf:aes-256-cfb-128-encrypted-string;
description
"Remote password.";
}
case public-key {
uses public-key-auth;
}
}
leaf remote-secondary-password {
type tailf:aes-256-cfb-128-encrypted-string;
description
"Some CLI based devices require a second
additional password to enter config mode";
}
}
case callback {
leaf callback-node {
description
"Invoke a standalone action to retrieve login credentials for
managed devices on the 'callback-node' instance.
The 'action-name' action is invoked on the callback node that
is specified by an instance identifer.";
mandatory true;
type instance-identifier;
}
leaf action-name {
description
"The action to call when a notification is received.
The action must use 'authgroup-callback-input-params'
grouping for input and 'authgroup-callback-output-params'
grouping for output from tailf-ncs-devices.yang.";
type yang:yang-identifier;
mandatory true;
tailf:validate ncs {
tailf:internal;
tailf:dependency "../callback-node";
}
}
}
}
}
grouping mfa-grouping {
container mfa {
presence "MFA";
description
"Settings for handling multi-factor authentication towards
the device";
leaf executable {
description "Path to the external executable handling MFA";
type string;
mandatory true;
}
leaf opaque {
description
"Opaque data for the external MFA executable.
This string will be base64 encoded and passed to the MFA
executable along with other parameters";
type string;
}
}
}
leaf name {
type string;
description
"The name of the authgroup.";
}
container default-map {
presence "Map unknown users";
description
"If an authgroup has a default-map, it is used if a local
NCS user is not found in the umap list.";
tailf:info "Remote authentication parameters for users not in umap";
uses remote-user-remote-auth;
uses mfa-grouping;
}
list umap {
key local-user;
description
"The umap is a list with the local NCS user name as key.
It maps the local NCS user name to remote authentication
credentials.";
tailf:info "Map NCS users to remote authentication parameters";
leaf local-user {
type string;
description
"The local NCS user name.";
}
uses remote-user-remote-auth;
uses mfa-grouping;
}
}

ncs(config)# show full-configuration devices authgroups
devices authgroups group default
umap admin
remote-name admin
remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
umap oper
remote-name oper
remote-password $4$zp4zerM68FRwhYYI0d4IDw==
!
!
devices authgroups snmp-group default
default-map community-name public
umap admin
usm remote-name admin
usm security-level auth-priv
usm auth md5 remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
usm priv des remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
!

ncs(config)# devices authgroups group default default-map same-user same-pass
ncs(config-group-default)# commit
Commit complete.
ncs(config-group-default)# top
ncs(config)# show full-configuration devices authgroups
devices authgroups group default
default-map same-user
default-map same-pass
umap admin
remote-name admin
remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
umap oper
remote-name oper
remote-password $4$zp4zerM68FRwhYYI0d4IDw==
!
!
devices authgroups snmp-group default
default-map community-name public
umap admin
usm remote-name admin
usm security-level auth-priv
usm auth md5 remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
usm priv des remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
!

admin@ncs(config)# devices authgroups group mfa umap admin
admin@ncs(config-umap-admin)# remote-name admin remote-password
(<AES encrypted string>): *********
admin@ncs(config-umap-admin)# mfa executable ./handle_mfa.py opaque foobar
admin@ncs(config-umap-admin)# commit
Commit complete.

[ZGV2MA==;YWRtaW4=;YWRtaW4=;Zm9vYmFy;;;YWRtaW5AbG9jYWxob3N0J3MgcGFzc3dvcmQ6IA==;]

[dev0;admin;admin;foobar;;;admin@localhost's password:;]

#!/usr/bin/env python3
import base64
line = input()
(device, user, passwd, opaque, name, instr, prompt, _) = map(
    lambda x: base64.b64decode(x).decode('utf-8'),
    line.strip('[]').split(';'))
if prompt == "admin@localhost's password: ":
    print(passwd)
elif prompt == "Enter SMS passcode:":
    print("secretSMScode")
else:
    print("2")

ncs(config)# devices authgroups group default umap oper
ncs(config-umap-oper)# callback-node /callback action-name auth-cb
ncs(config-group-oper)# commit
Commit complete.
ncs(config-group-oper)# top
ncs(config)# show full-configuration devices authgroups
devices authgroups group default
default-map same-user
default-map same-pass
umap admin
remote-name admin
remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
umap oper
callback-node /callback
action-name auth-cb
!
!
devices authgroups snmp-group default
default-map community-name public
umap admin
usm remote-name admin
usm security-level auth-priv
usm auth md5 remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
usm priv des remote-password $4$wIo7Yd068FRwhYYI0d4IDw==
!
!

module authgroup-callback {
namespace "http://com/example/authgroup-callback";
prefix authgroup-callback;
import tailf-common {
prefix tailf;
}
import tailf-ncs {
prefix ncs;
}
container callback {
description
"Example callback that defines an action to retrieve
remote authentication credentials";
tailf:action auth-cb {
tailf:actionpoint auth-cb-point;
input {
uses ncs:authgroup-callback-input-params;
}
output {
uses ncs:authgroup-callback-output-params;
}
}
}
}

$ ncs_cli -C -u admin
admin connected from 127.0.0.1 using console on ncs
ncs# devices connect suppress-positive-result

ncs# show devices session-pool
DEVICE MAX IDLE
DEVICE TYPE SESSIONS SESSIONS TIME
-------------------------------------------
ce0 cli 1 unlimited 30
ce1 cli 1 unlimited 30
ce2 cli 1 unlimited 30
ce3 cli 1 unlimited 30
ce4 cli 1 unlimited 30
ce5 cli 1 unlimited 30
pe0 cli 1 unlimited 30
pe1 cli 1 unlimited 30
pe2 cli 1 unlimited 30

ncs# devices session-pool pooled-device pe0 close
ncs# devices session-pool pooled-device pe1 close
ncs# devices session-pool pooled-device pe2 close
ncs# show devices session-pool
DEVICE MAX IDLE
DEVICE TYPE SESSIONS SESSIONS TIME
-------------------------------------------
ce0 cli 1 unlimited 30
ce1 cli 1 unlimited 30
ce2 cli 1 unlimited 30
ce3 cli 1 unlimited 30
ce4 cli 1 unlimited 30
ce5 cli 1 unlimited 30

ncs# devices session-pool close
ncs# show devices session-pool
% No entries found.

grouping device-profile-parameters {
...
container session-pool {
tailf:info "Control how sessions to related devices can be pooled.";
description
"NCS uses NED sessions when performing transactions, actions
etc towards a device. When such a task is completed the NED
session can either be closed or pooled.
Pooling a NED session means that the session to the
device is kept open for a configurable amount of
time. During this time the session can be re-used for a new
task. Thus the pooling concept exists to reduce the number
of new connections needed towards a device that is often
used.
By default NCS uses pooling for all device types except
SNMP. Normally there is no need to change the default
values.";
leaf max-sessions {
type union {
type enumeration {
enum unlimited;
}
type uint32;
}
description
"Controls the maximum number of open sessions in the pool for
a specific device. When this threshold is exceeded the oldest
session in the pool will be closed.
A Zero value will imply that pooling is disabled for
this specific device. The label 'unlimited' implies that no
upper limit exists for this specific device";
}
leaf idle-time {
tailf:info
"The maximum time that a session is kept open in the pool";
type uint32 {
range "1 .. max";
}
units "seconds";
description
"The maximum time that a session is kept open in the pool.
If the session is not requested and used before the
idle-time has expired, the session is closed.
If no idle-time is set the default is 30 seconds.";
}
}
}
}

container global-settings {
tailf:info "Global settings for all managed devices.";
description
"Global settings for all managed devices. Some of these
settings can be overridden per managed device.";
uses device-profile-parameters {
...
augment session-pool {
leaf pool-max-sessions {
type union {
type enumeration {
enum unlimited;
}
type uint32;
}
description
"Controls the grand total session count in the pool.
Independently on how different devices are pooled the grand
total session count can never exceed this value.
A Zero value will imply that pooling is disabled for all devices.
The label 'unlimited' implies that no upper limit exists for
the number open sessions in the pool";
}
}
}
}

ncs# configure
ncs(config)# devices global-settings session-pool idle-time 100
ncs(config)# devices profiles profile small session-pool max-sessions 3
ncs(config-profile-small)# top
ncs(config)# devices device ce* device-profile small
ncs(config-device-ce*)# top
ncs(config)# devices device pe0 session-pool max-sessions 0
ncs(config-device-pe0)# top
ncs(config)# commit
Commit complete.
ncs(config)# exit

ncs# devices connect suppress-positive-result
ncs# show devices session-pool
DEVICE MAX IDLE
DEVICE TYPE SESSIONS SESSIONS TIME
-------------------------------------------
ce0 cli 1 3 100
ce1 cli 1 3 100
ce2 cli 1 3 100
ce3 cli 1 3 100
ce4 cli 1 3 100
ce5 cli 1 3 100
pe1 cli 1 unlimited 100
pe2 cli 1 unlimited 100

ncs# configure
ncs(config)# devices global-settings session-pool pool-max-sessions 4
ncs(config)# commit
Commit complete.
ncs(config)# exit

ncs# show devices session-pool
DEVICE MAX IDLE
DEVICE TYPE SESSIONS SESSIONS TIME
-------------------------------------------
ce4 cli 1 3 100
ce5 cli 1 3 100
pe1 cli 1 unlimited 100
pe2 cli 1 unlimited 100

grouping device-profile-parameters {
...
container session-limits {
tailf:info "Parameters for limiting concurrent access to the device.";
leaf max-sessions {
type union {
type enumeration {
enum unlimited;
}
type uint32 {
range "1..max";
}
}
default unlimited;
description
"Puts a limit to the total number of concurrent sessions
allowed for the device. The label 'unlimited' implies that no
upper limit exists for this device.";
}
}
...
}

container global-settings {
tailf:info "Global settings for all managed devices.";
description
"Global settings for all managed devices. Some of these
settings can be overridden per managed device.";
uses device-profile-parameters {
...
augment session-limits {
description
"Parameters for limiting concurrent access to devices.";
container connect-rate {
leaf burst {
type union {
type enumeration {
enum unlimited;
}
type uint32 {
range "1..max";
}
}
default unlimited;
description
"The number of concurrent connect attempts allowed.
For example, the devices managed by NSO talk to the same
server for authentication which can only handle a limited
number of connections at a time. Then we can limit
the concurrency of connect attempts with this setting.";
}
}
leaf max-wait-time {
tailf:info
"Max time in seconds to wait for device to be available.";
type union {
type enumeration {
enum unlimited;
}
type uint32 {
range "0..max";
}
}
units "seconds";
default 10;
description
"Max time in seconds to wait for a device being available
to connect. When the maximum time is reached an error
is returned. Setting this to 0 means that the error is
returned immediately.";
}
}
...
}

ncs(config)# devices global-settings trace raw trace-dir .logs
ncs(config)# commit

ncs(config)# devices disconnect

ncs(config)# do file show logs/ned-cisco-ios-ce0.trace
>> 8-Oct-2014::18:23:18.512 CLI CONNECT to ce0-127.0.0.1:10022 as admin (Trace=true)
*** output 8-Oct-2014::18:23:18.514 ***
-- SSH connecting to host: 127.0.0.1:10022 --
-- SSH initializing session --
*** input 8-Oct-2014::18:23:18.547 ***
admin connected from 127.0.0.1 using ssh on ncs
...
ce0(config)#
*** output 8-Oct-2014::18:23:19.428 ***
snmp-server community topsecret RW

ncs(config)# devices clear-trace

ncs# show devices device state last-transaction-id
NAME LAST TRANSACTION ID
----------------------------------------
ce0 ef3bbd344ef94b3fecec5cb93ac7458c
ce1 48e91db163e294bf5c3978d154922c9
ce2 48e91db163e294bf5c3978d154922c9
ce3 48e91db163e294bf5c3978d154922c9
ce4 48e91db163e294bf5c3978d154922c9
ce5 48e91db163e294bf5c3978d154922c9
ce6 48e91db163e294bf5c3978d154922c9
ce7 48e91db163e294bf5c3978d154922c9
ce8 48e91db163e294bf5c3978d154922c9
p0 -
p1 -
p2 -
p3 -
pe0 -
pe1 -
pe2 1412-581909-661436
pe3 -

ncs# devices check-sync
sync-result {
device ce0
result in-sync
}
...
sync-result {
device p1
result unsupported
}
...

ncs# devices device ce0..3 check-sync
devices device ce0 check-sync
result in-sync
devices device ce1 check-sync
result in-sync
devices device ce2 check-sync
result in-sync
devices device ce3 check-sync
result in-sync

grouping check-sync-result {
description
"Common result data from a 'check-sync' action.";
leaf result {
type enumeration {
enum unknown {
description
"NCS have no record, probably because no
sync actions have been executed towards the device.
This is the initial state for a device.";
}
enum locked {
tailf:code-name 'sync_locked';
description
"The device is administratively locked, meaning that NCS
cannot talk to it.";
}
enum in-sync {
tailf:code-name 'in-sync-result';
description
"The configuration on the device is in sync with NCS.";
}
enum out-of-sync {
description
"The device configuration is known to be out of sync, i.e.,
it has been reconfigured out of band.";
}
enum unsupported {
description
"The device doesn't support the tailf-netconf-monitoring
module.";
}
enum error {
description
"An error occurred when NCS tried to check the sync status.
The leaf 'info' contains additional information.";
}
}
}
}

$ ncs-netsim cli-i ce0
admin connected from 127.0.0.1 using console on ncs
ce0> enable
ce0# configure
Enter configuration commands, one per line. End with CNTL/Z.
ce0(config)# snmp-server community foobar RW
ce0(config)# exit
ce0# exit
$ ncs_cli -C -u admin
admin connected from 127.0.0.1 using console on ncs
ncs# devices device ce0 check-sync
result out-of-sync
info got: 290fa2b49608df9975c9912e4306110 expected: ef3bbd344ef94b3fecec5cb93ac7458c
ncs# devices device ce0 compare-config
diff
devices {
device ce0 {
config {
ios:snmp-server {
+ community foobar {
+ RW;
+ }
}
}
}
}

ncs# devices device ce0 sync-to dry-run
data
no snmp-server community foobar RW
ncs#

ncs# devices device ce0 sync-to
result true
ncs#

ncs# devices device ce0 compare-config
ncs#

ncs(config)# show full-configuration devices device ce0
devices device ce0
address 127.0.0.1
port 10022
ssh host-key ssh-dss
key-data "AAAAB3NzaC1kc3MAAACBAO9tkTdZgAqJMz8m...
!
authgroup default
device-type cli ned-id cisco-ios-cli-3.8
state admin-state unlocked
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/10
exit
ios:interface GigabitEthernet0/11
exit
ios:interface GigabitEthernet0/12
exit
ios:interface GigabitEthernet0/13
exit
ios:interface GigabitEthernet0/14
exit
....

ncs(config)# devices device ce9 address 127.0.0.1 port 10031
ncs(config-device-ce9)# device-type cli ned-id cisco-ios-cli-3.8
ncs(config-device-ce9)# authgroup default
ncs(config-device-ce9)# instantiate-from-other-device device-name ce0
ncs(config-device-ce9)# top
ncs(config)# show configuration
devices device ce9
address 127.0.0.1
port 10031
authgroup default
device-type cli ned-id cisco-ios-cli-3.8
config
no ios:service pad
no ios:ip domain-lookup
no ios:ip http secure-server
ios:ip source-route
ios:interface GigabitEthernet0/1
exit
....
ncs(config)# commit
Commit complete.

ncs(config)# devices device ce9 sync-to
result false
info Device ce9 is southbound locked

ncs(config)# show full-configuration devices device ce9 state | details
devices device ce9
state admin-state southbound-locked
!

submodule tailf-ncs-devices {
namespace "http://tail-f.com/ns/ncs";
...
container devices {
........
list template {
description
"This list is used to define named template configurations that
can be used to either instantiate the configuration for new
devices, or to apply snippets of configurations to existing
devices.
...
";
key name;
leaf name {
description "The name of a specific template configuration";
type string;
}
list ned-id {
key id;
leaf id {
type identityref {
base ned:ned-id;
}
}
container config {
tailf:mount-point ncs-template-config;
tailf:cli-add-mode;
tailf:cli-expose-ns-prefix;
description
"This container is augmented with data models from the devices.";
}
}
}

ncs(config)# devices template ce-initialize ned-id cisco-ios-cli-3.8 config
ncs(config-config)# no ios:service pad
ncs(config-config)# no ios:ip domain-lookup
ncs(config-config)# ios:ip dns server
ncs(config-config)# no ios:ip http server
ncs(config-config)# no ios:ip http secure-server
ncs(config-config)# ios:ip source-route true
ncs(config-config)# ios:interface GigabitEthernet 0/1
ncs(config- GigabitEthernet-0/1)# exit
ncs(config-config)# ios:interface GigabitEthernet 0/2
ncs(config- GigabitEthernet-0/2)# exit
ncs(config-config)# ios:interface GigabitEthernet 0/3
ncs(config- GigabitEthernet-0/3)# exit
ncs(config-config)# ios:interface Loopback 0
ncs(config-Loopback-0)# exit
ncs(config-config)# ios:snmp-server community public RO
ncs(config-community-public)# exit
ncs(config-config)# ios:snmp-server trap-source GigabitEthernet 0/2
ncs(config-config)# top
ncs(config)# commit

ncs(config)# devices device ce10 address 127.0.0.1 port 10032
ncs(config-device-ce10)# device-type cli ned-id cisco-ios-cli-3.8
ncs(config-device-ce10)# authgroup default
ncs(config-device-ce10)# top
ncs(config)# commit

ncs(config)# devices device ce10 apply-template template-name ce-initialize
apply-template-result {
device ce10
result no-capabilities
info No capabilities found for device: ce10. Has a sync-from the device
been performed?
}

ncs(config)# devices device ce10 \
apply-template template-name ce-initialize accept-empty-capabilities
apply-template-result {
device ce10
result ok
}

ncs(config)# show configuration
devices device ce10
config
ios:ip dns server
ios:interface GigabitEthernet0/1
exit
ios:interface GigabitEthernet0/2
exit
ios:interface GigabitEthernet0/3
exit
ios:interface Loopback0
exit
ios:snmp-server community public RO
ios:snmp-server trap-source GigabitEthernet0/2
!
!

ncs(config)# show full-configuration devices template
devices template snmp1
ned-id cisco-ios-cli-3.8
config
ios:snmp-server community {$COMMUNITY}
RO
!
!
!
ned-id cisco-iosxr-cli-3.5
config
cisco-ios-xr:snmp-server community {$COMMUNITY}
RO
!
!
!
ned-id juniper-junos-nc-3.0
config
junos:configuration snmp community {$COMMUNITY}
authorization read-only
!
!
!
!

ncs(config)# devices device ce2 apply-template template-name \
snmp1 variable { name COMMUNITY value 'FUZBAR' }
ncs(config)# show configuration
devices device ce2
config
ios:snmp-server community FUZBAR RO
!
!
ncs(config)# commit dry-run outformat native
native {
device {
name ce2
data snmp-server community FUZBAR RO
}
}
ncs(config)# commit
Commit complete.

ncs(config)# show full-configuration devices device ce2 config\
ios:snmp-server
devices device ce2
config
ios:snmp-server community FUZBAR RO
!
!

ncs(config)# tag add devices template snmp1 ned-id cisco-ios-cli-3.8 config\
ios:snmp-server community {$COMMUNITY} replace

ncs(config)# show configuration
devices template snmp1
ned-id cisco-ios-cli-3.8
config
! Tags: replace
ios:snmp-server community {$COMMUNITY}
!
!
!
!

ncs(config)# devices device ce2 apply-template template-name \
snmp1 variable { name COMMUNITY value 'FUZBAR' } | debug template
Operation 'merge' on existing node: /devices/device[name='ce2']
The device /devices/device[name='ce2'] does not support
namespace 'http://tail-f.com/ned/cisco-ios-xr' for node "'snmp-server'"
Skipping...
The device /devices/device[name='ce2'] does not support
namespace 'http://xml.juniper.net/xnm/1.1/xnm' for node "configuration"
Skipping...
Variable $COMMUNITY is set to "FUZBAR"
Operation 'merge' on non-existing node:
/devices/device[name='ce2']/config/ios:snmp-server/community[name='FUZBAR']
Operation 'merge' on non-existing node:
/devices/device[name='ce2']/config/ios:snmp-server/community[name='FUZBAR']/RO

admin@ncs> request devices device ex0 rename new-name foo
result true
[ok][2024-04-16 20:51:51]
admin@ncs> show devices device foo rename-history | tab
FROM TO WHEN USER
----------------------------------------------------
ex0 foo 2024-04-16T18:51:51.578439+00:00 admin
[ok][2024-04-16 20:52:07]
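The rename action moves the device entry to the new name and records the old name under `rename-history`, as the table above shows. A minimal model of that bookkeeping in plain Python (the dict layout and field names are illustrative only):

```python
from datetime import datetime, timezone

def rename_device(devices, history, old, new, user):
    """Move a device entry to a new key and record the rename."""
    if old not in devices or new in devices:
        return False
    devices[new] = devices.pop(old)
    history.setdefault(new, []).append({
        "from": old,
        "to": new,
        "user": user,
        "when": datetime.now(timezone.utc).isoformat(),
    })
    return True

devices = {"ex0": {"address": "127.0.0.1", "port": 12022}}
history = {}
rename_device(devices, history, "ex0", "foo", "admin")
# "ex0" no longer resolves; the entry now lives under "foo"
```

After the move, lookups against the old name fail, which is exactly the `element does not exist` error seen below.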
admin@ncs> show configuration devices device ex0
---------------------------------------------^
syntax error: element does not exist
[error][2024-04-16 20:52:09]
admin@ncs> show configuration devices device foo
address 127.0.0.1;
port 12022;
...

admin@ncs> request devices commit-queue add-lock device [ ex1 ]
commit-queue-id 1713297244546
[ok][2024-04-16 21:54:04]
admin@ncs> request devices device ex1 rename new-name foo wait-for-lock { timeout 5 }
result false
info ex1: A timeout occurred when trying to add device lock to the commit queue
[ok][2024-04-16 21:54:26]

admin@ncs% run show vlan-state test plan | tab
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
---------------------------------------------------------------------------------
self self false - init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan ex1 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan ex2 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
[ok][2024-04-16 21:38:44]

admin@ncs% request devices device ex1 rename new-name newex1
result true
[ok][2024-04-16 21:39:21]
[edit]
admin@ncs% run show vlan-state test plan | tab
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
---------------------------------------------------------------------------------
self self false - init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan ex1 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan ex2 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
[ok][2024-04-16 21:39:24]

admin@ncs% request vlan-state test plan component vlan ex1 force-back-track
result true
[ok][2024-04-16 21:39:51]
[edit]
admin@ncs% run show vlan-state test plan | tab
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
---------------------------------------------------------------------------------
self self false - init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan ex2 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
[ok][2024-04-16 21:39:54]
[edit]
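Force-back-track removes the stale plan component for the old device name, and a subsequent re-deploy recreates the component under the new name. A rough sketch of those two steps (the plan is modeled as a plain dict; function names are illustrative, not the NSO API):

```python
def force_back_track(plan, component_key):
    """Remove a plan component, undoing its part of the service."""
    return plan.pop(component_key, None) is not None

def re_deploy(plan, service_devices, timestamp):
    """Recreate plan components for the service's current device list."""
    for dev in service_devices:
        # setdefault keeps timestamps of components that already exist
        plan.setdefault(("vlan", dev), {"state": "ready", "when": timestamp})

# The plan still references the old name after the device rename...
plan = {("self", "self"): {"state": "ready", "when": "T0"},
        ("vlan", "ex1"): {"state": "ready", "when": "T0"},
        ("vlan", "ex2"): {"state": "ready", "when": "T0"}}
force_back_track(plan, ("vlan", "ex1"))   # drop the stale component
re_deploy(plan, ["newex1", "ex2"], "T1")  # recreate under the new name
```

Note how the untouched `ex2` component keeps its original timestamp while the recreated component gets a new one, matching the plan output below.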
admin@ncs% request vlan test re-deploy
[ok][2024-04-16 21:40:02]
[edit]
admin@ncs% run show vlan-state test plan | tab
POST
BACK ACTION
TYPE NAME TRACK GOAL STATE STATUS WHEN ref STATUS
-----------------------------------------------------------------------------------
self self false - init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:40:02 - -
vlan ex2 false - init reached 2024-04-16T21:38:34 - -
router-init reached 2024-04-16T21:38:34 - -
ready reached 2024-04-16T21:38:34 - -
vlan newex1 false - init reached 2024-04-16T21:40:02 - -
router-init reached 2024-04-16T21:40:02 - -
ready reached 2024-04-16T21:40:02 - -
[ok][2024-04-17 08:40:05]

admin@ncs% show packages package component ned device
packages package router-nc-1.0
component router
ned device vendor "Acme Inc."
ned device product-family [ "Acme Netconf router 1.0" ]
ned device operating-system [ AcmeOS "AcmeOS 2.0" ]
[ok][2024-04-16 19:53:20]
admin@ncs% set devices device mydev address 127.0.0.1 port 12022 authgroup default
[ok][2024-04-16 19:53:34]
[edit]
admin@ncs% set devices device mydev auto-configure vendor "Acme Inc." operating-system AcmeOs
[ok][2024-04-16 19:53:36]
[edit]
admin@ncs% commit | details
...
2024-04-16T19:53:37.655 device mydev: auto-configuring...
2024-04-16T19:53:37.659 device mydev: configuring admin state... ok (0.000 s)
2024-04-16T19:53:37.659 device mydev: fetching ssh host keys... ok (0.011 s)
2024-04-16T19:53:37.671 device mydev: copying configuration from device... ok (0.054 s)
2024-04-16T19:53:37.726 device mydev: auto-configuring: ok (0.070 s)
...

admin@ncs% set devices device d1 auto-configure vendor "Acme Inc." product-family "Acme router"
admin@ncs% set devices device d2 auto-configure vendor "Acme Inc." operating-system AcmeOS
admin@ncs% set devices device d3 auto-configure ned-id router-nc-1.0

admin@ncs% set devices device mydev2 auto-configure vendor "Acme Inc." operating-system AcmeOS
[ok][2024-04-16 20:03:05]
[edit]
admin@ncs% set devices device mydev2 state admin-state southbound-locked
[ok][2024-04-16 20:03:05]
[edit]
admin@ncs% commit | details
...
2024-04-16T20:03:08.604 device mydev2: auto-configuring...
2024-04-16T20:03:08.606 device mydev2: configuring admin state... ok (0.000 s)
2024-04-16T20:03:08.606 device mydev2: fetching ssh host keys... skipped - 'southbound-locked' configured (0.001 s)
2024-04-16T20:03:08.608 device mydev2: auto-configuring: ok (0.003 s)
...

ncs# show devices device ce9 state oper-state
state oper-state disabled

ncs# show devices device state oper-state
OPER
NAME STATE
----------------
ce0 enabled
ce1 enabled
ce10 disabled
ce2 enabled
ce3 enabled
ce4 enabled
ce5 enabled
ce6 enabled
ce7 enabled
ce8 enabled
ce9 disabled
p0 enabled
p1 enabled
p2 enabled
p3 enabled
pe0 enabled
pe1 enabled
pe2 enabled
pe3 enabled
ncs# show devices device ce0..9 state oper-state
OPER
NAME STATE
----------------
ce0 enabled
ce1 enabled
ce2 enabled
ce3 enabled
ce4 enabled
ce5 enabled
ce6 enabled
ce7 enabled
ce8 enabled
ce9 disabled

$ ncs-netsim stop ce0
DEVICE ce0 STOPPED
$ ncs_cli -C -u admin
ncs# show devices device ce0 state oper-state
state oper-state enabled

ncs(config)# devices device ce0 config ios:snmp-server contact [email protected]
ncs(config-config)# commit
Aborted: Failed to connect to device ce0: connection refused: Connection refused
ncs(config-config)# *** ALARM connection-failure: Failed to
connect to device ce0: connection refused: Connection refused

ncs# show devices device ce0 state oper-state
state oper-state disabled

submodule tailf-ncs-devices {
....
typedef admin-state {
type enumeration {
enum locked {
description
"When a device is administratively locked, it is not possible
to modify its configuration, and no changes are ever
pushed to the device.";
}
enum unlocked {
description
"Device is assumed to be operational.
All changes are attempted to be sent southbound.";
}
enum southbound-locked {
description
"It is possible to configure the device, but
no changes are sent to the device. Useful admin mode
when pre provisioning devices. This is the default
when a new device is created.";
}
enum config-locked {
description
"It is possible to send live-status commands or RPCs
but it is not possible to modify the configuration
of the device.";
}
}
}
....
container devices {
....
container state {
....
leaf admin-state {
type admin-state;
default southbound-locked;
}
leaf admin-state-description {
type string;
description
"Reason for the admin state.";
}

submodule tailf-ncs-devices {
...
container source {
tailf:info "How the device was added to NCS";
leaf added-by-user {
type string;
}
leaf context {
type string;
}
leaf when {
type yang:date-and-time;
}
leaf from-ip {
type inet:ip-address;
}
leaf source {
type string;
reference "TMF518 NRB Network Resource Basics";
}
}

ncs# show devices device ce0 capability
capability urn:ietf:params:netconf:capability:with-defaults:1.0?basic-mode=trim
capability urn:ios
revision 2015-03-16
module tailf-ned-cisco-ios
capability urn:ios-stats
revision 2015-03-16
module tailf-ned-cisco-ios-stats
ncs# show devices device ce0 capability module
NAME REVISION FEATURE DEVIATION
-----------------------------------------------------------
tailf-ned-cisco-ios 2015-03-16 - -
tailf-ned-cisco-ios-stats 2015-03-16 - -

ncs(config)# devices device foo address 127.0.0.1 port 12033 authgroup default
ncs(config-device-foo)# device-type netconf ned-id netconf
ncs(config-device-foo)# state admin-state unlocked
ncs(config-device-foo)# commit
Commit complete.
ncs(config-device-foo)# exit
ncs(config)# exit
ncs# devices fetch-ssh-host-keys device foo
fetch-result {
device foo
result updated
fingerprint {
algorithm ssh-rsa
value 14:3c:79:87:69:8e:e2:f0:6d:43:07:8c:89:41:fd:7f
}
}
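The `ssh-rsa` fingerprint reported by `fetch-ssh-host-keys` is in the classic colon-separated MD5 form, i.e. the MD5 digest of the raw host-key blob printed two hex digits at a time. Computing that presentation from key bytes is mechanical (a sketch; the key material here is dummy data, not a real host key):

```python
import hashlib

def md5_fingerprint(key_blob):
    """Return the legacy colon-separated MD5 fingerprint of a key blob."""
    digest = hashlib.md5(key_blob).hexdigest()
    return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

print(md5_fingerprint(b"dummy-host-key-data"))
```

The result is always sixteen colon-separated byte values, matching the `14:3c:79:...` shape in the output above.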
ncs# devices device foo connect
result true
info (admin) Connected to foo - 127.0.0.1:12033
ncs# show devices device foo capability
capability :candidate:1.0
capability :confirmed-commit:1.0
...
capability http://xml.juniper.net/xnm/1.1/xnm
module junos
capability urn:ietf:params:xml:ns:yang:ietf-yang-types
revision 2013-07-15
module ietf-yang-types
capability urn:juniper-rpc
module junos-rpc
...

ncs# show devices ned-ids
ID NAME REVISION
--------------------------------------------------------------
cisco-ios-xr-v2 tailf-ned-cisco-ios-xr -
tailf-ned-cisco-ios-xr-stats -
lsa-netconf
netconf
snmp
alu-sr-cli-3.4 tailf-ned-alu-sr -
tailf-ned-alu-sr-stats -
cisco-ios-cli-3.8 tailf-ned-cisco-ios -
tailf-ned-cisco-ios-stats -
cisco-iosxr-cli-3.5 tailf-ned-cisco-ios-xr -
tailf-ned-cisco-ios-xr-stats -
juniper-junos-nc-3.0 junos -
junos-rpc -
ncs# config
Entering configuration mode terminal
ncs(config)# devices device foo device-type netconf ned-id juniper-junos-nc-3.0
ncs(config-device-foo)# commit
Commit complete.

ncs# show devices device ce0 state transaction-mode
state transaction-mode ned
ncs# show devices device pe2 state transaction-mode
state transaction-mode lock-candidate

module junos-rpc {
...
rpc request-package-add {
...
rpc request-reboot {
...
rpc get-software-information {
...
rpc ping {

ncs(config)# devices device pe2 rpc rpc-
Possible completions:
rpc-get-software-information rpc-idle-timeout rpc-ping \
rpc-request-package-add rpc-request-reboot
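As the completions suggest, each `rpc` statement in the NED's YANG module appears under the device's `rpc` container with an `rpc-` prefix on the statement name. The mapping is mechanical; a small sketch (rpc names taken from the junos-rpc module above, helper name illustrative):

```python
def cli_rpc_name(yang_rpc):
    """Map a YANG rpc statement name to its prefixed CLI node name."""
    return "rpc-" + yang_rpc

yang_rpcs = ["request-package-add", "request-reboot",
             "get-software-information", "ping"]
print(sorted(cli_rpc_name(r) for r in yang_rpcs))
```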
ncs(config)# devices device pe2 rpc \
rpc-get-software-information get-software-information brief

submodule tailf-ncs-devices {
namespace "http://tail-f.com/ns/ncs";
...
container devices {
.....
list device {
...
}
list device-group {
key name;
leaf name {
type string;
}
description
"A named group of devices, some actions can be
applied to an entire group of devices, for example
apply-template, and the sync actions.";
leaf-list device-name {
type leafref {
path "/devices/device/name";
}
}
leaf-list device-group {
type leafref {
path "/devices/device-group/name";
}
description
"A list of device groups contained in this device group.
Recursive definitions are not valid.";
}
leaf-list member {
type leafref {
path "/devices/device/name";
}
config false;
description
"The current members of the device-group. This is a flat list
of all the devices in the group.";
}
uses connect-grouping ;
uses sync-grouping;
uses check-sync-grouping;
uses apply-template-grouping;
}
}
}

ncs(config)# show full-configuration devices device-group
devices device-group C
device-name [ ce0 ce1 ce3 ce4 ce5 ce6 ce7 ce8 ]
!
devices device-group P
device-name [ p0 p1 p2 p3 ]
!
devices device-group PE
device-name [ pe0 pe1 pe2 pe3 ]
!

ncs(config)# devices device-group my-group device-name ce0
ncs(config-device-group-my-group)# device-name pe
Possible completions:
pe0 pe1 pe2 pe3
ncs(config-device-group-my-group)# device-name pe0
ncs(config-device-group-my-group)# device-name p0
ncs(config-device-group-my-group)# commit

ncs(config-device-group-my-group)# device-group PE
ncs(config-device-group-my-group)# commit
ncs(config)# show full-configuration devices device-group my-group
devices device-group my-group
device-name [ ce0 p0 pe0 ]
device-group [ PE ]
!
ncs(config)# exit
ncs# show devices device-group my-group
NAME MEMBER INDETERMINATES CRITICALS MAJORS MINORS WARNINGS
-------------------------------------------------------------------------------------------
my-group [ ce0 p0 pe0 pe1 pe2 pe3 ] 0 0 1 0 0

ncs# devices device-group C sync-to

ncs(config)# devices device-group C apply-template \
template-name snmp1 variable { name COMMUNITY value 'cinderella' }
ncs(config)# show configuration
devices device ce0
config
ios:snmp-server community cinderella RO
!
!
devices device ce1
config
ios:snmp-server community cinderella RO
!
!
...
ncs(config)# commit

ncs(config)# policy rule gb-one-zero
ncs(config-rule-gb-one-zero)# foreach /ncs:devices/device[starts-with(name,'ce')]/config
ncs(config-rule-gb-one-zero)# expr ios:interface/ios:GigabitEthernet[ios:name='0/1']
ncs(config-rule-gb-one-zero)# warning-message "{../name} should have 0/1 interface"
ncs(config-rule-gb-one-zero)# commit
ncs(config-rule-gb-one-zero)# top
ncs(config)# !
ncs(config)# show full-configuration policy
policy rule gb-one-zero
foreach /ncs:devices/device[starts-with(name,'ce')]/config
expr ios:interface/ios:GigabitEthernet[ios:name='0/1']
warning-message "{../name} should have 0/1 interface"
!
ncs(config)# no devices device ce0 config ios:interface GigabitEthernet 0/1
ncs(config)# validate
Validation completed with warnings:
ce0 should have 0/1 interface
ncs(config)# no devices device ce1 config ios:interface GigabitEthernet 0/1
ncs(config)# validate
Validation completed with warnings:
ce1 should have 0/1 interface
ce0 should have 0/1 interface
ncs(config)# commit
The following warnings were generated:
ce1 should have 0/1 interface
ce0 should have 0/1 interface
Proceed? [yes,no] yes
Commit complete.

$ ncs-netsim stop ce0
DEVICE ce0 STOPPED
$ ncs_cli -C -u admin
admin connected from 127.0.0.1 using console on ncs
ncs# config

ncs(config)# commit commit-queue
Possible completions:
async Commit through commit queue and return immediately
bypass Bypass commit-queue when queue is enabled by default
sync Commit through commit queue and wait for reply
ncs(config)# commit commit-queue async

ncs(config)# devices global-settings commit-queue enabled-by-default
[false,true] (false): true
ncs(config)# commit

ncs(config)# devices device ce0..2 commit-queue enabled-by-default
[false,true] (false): true
ncs(config)# commit

ncs(config)# devices device ce0..2 config ios:snmp-server \
trap-source GigabitEthernet 0/1
ncs(config-config)# commit
commit-queue-id 9494446997
Commit complete.
ncs(config-config)# *** ALARM connection-failure: Failed to
connect to device ce0: connection refused: Connection refused

ncs(config)# devices device ce0 config ios:interface GigabitEthernet 0/25
ncs(config-if)# commit
commit-queue-id 9494530158
Commit complete.
ncs(config-if)# *** ALARM commit-through-queue-blocked:
Commit Queue item 9494530158 is blocked because qitem 9494446997
cannot connect to ce0

ncs# show devices commit-queue | notab
devices commit-queue queue-item 9494446997
age 144
status executing
devices [ ce0 ce1 ce2 ]
transient ce0
reason "Failed to connect to device ce0: connection refused"
is-atomic true
devices commit-queue queue-item 9494530158
age 61
status blocked
devices [ ce0 ]
waiting-for [ ce0 ]
is-atomic true

ncs# show devices commit-queue queue-item 9494530158 details | notab
devices commit-queue queue-item 9494530158
age 278
status blocked
devices [ ce0 ]
waiting-for [ ce0 ]
is-atomic true
modification ce0
data <interface xmlns="urn:ios">
<GigabitEthernet>
<name>0/25</name>
</GigabitEthernet>
</interface>
local-user admin

ncs# show alarms alarm-list alarm ce0 commit-through-queue-blocked
alarms alarm-list alarm ce0 commit-through-queue-blocked /devices/device[name='ce0'] 9494530158
is-cleared false
last-status-change 2015-02-09T16:48:17.915+00:00
last-perceived-severity warning
last-alarm-text "Commit queue item 9494530158 is blocked because item 9494446997 cannot connect to ce0"
status-change 2015-02-09T16:48:17.915+00:00
received-time 2015-02-09T16:48:17.915+00:00
perceived-severity warning
alarm-text "Commit queue item 9494530158 is blocked because item 9494446997 cannot connect to ce0"

ncs(config)# devices commit-queue add-lock device [ ce0 ] block-others
commit-queue-id 9577950918
ncs# show devices commit-queue | notab
devices commit-queue queue-item 9494446997
age 1444
status executing
devices [ ce0 ce1 ce2 ]
transient ce0
reason "Failed to connect to device ce0: connection refused"
is-atomic true
devices commit-queue queue-item 9494530158
age 1361
status blocked
devices [ ce0 ]
waiting-for [ ce0 ]
is-atomic true
devices commit-queue queue-item 9577950918
age 55
status locked
devices [ ce0 ]
waiting-for [ ce0 ]
is-atomic true

ncs(config)# devices commit-queue set-atomic-behaviour atomic false
ncs(config)# devices commit-queue prune device [ ce0 ]
num-affected-queue-items 2
num-deleted-queue-items 1
ncs(config)# show devices commit-queue | notab
devices commit-queue queue-item 9577950918
age 102
status locked
kilo-bytes-size 1
devices [ ce0 ]
is-atomic true

ncs(config)# show configuration
vpn l3vpn volvo
as-number 65101
endpoint branch-office1
ce-device ce1
ce-interface GigabitEthernet0/11
ip-network 10.7.7.0/24
bandwidth 6000000
!
endpoint main-office
ce-device ce0
ce-interface GigabitEthernet0/11
ip-network 10.10.1.0/24
bandwidth 12000000
!
!
ncs(config-if)# commit commit-queue async
commit-queue-id 9494530158
ncs# show devices commit-queue | notab
devices commit-queue queue-item 9494446997
age 60
status executing
devices [ lsa-nso2 lsa-nso3 ]
is-atomic true
ncs# show devices commit-queue | notab
devices commit-queue queue-item 9494446997
age 66
status executing
devices [ lsa-nso2 ]
completed [ lsa-nso3 ]
is-atomic true
ncs# show devices commit-queue
% No entries found.

<stream>
<name>ncs-events</name>
<description>NCS event according to tailf-ncs-devices.yang</description>
<replay-support>true</replay-support>
<builtin-replay-store>
<enabled>true</enabled>
<dir>./state</dir>
<max-size>S10M</max-size>
<max-files>50</max-files>
</builtin-replay-store>
</stream>

ncs(config)# cluster commit-queue enabled
ncs(config)# commit

ncs# show devices commit-queue completed | notab
devices commit-queue completed queue-item 9494446997
when 2015-02-09T16:48:17.915+00:00
succeeded false
devices [ ce0 ce1 ce2 ]
failed ce0
reason "Failed to connect to device ce0: closed"
devices commit-queue completed queue-item 9494530158
when 2015-02-09T16:48:17.915+00:00
succeeded false
devices [ ce0 ]
failed ce0
reason "Deleted by user"

ncs(config)# devices commit-queue completed queue-item 9494446997 rollback

module notif {
namespace "http://router.com/notif";
prefix notif;
import ietf-inet-types {
prefix inet;
}
notification startUp {
leaf node-id {
type string;
}
}
notification linkUp {
leaf ifName {
type string;
mandatory true;
}
leaf extraId {
type string;
}
list linkProperty {
max-elements 64;
leaf newlyAdded {
type empty;
}
leaf flags {
type uint32;
default 0;
}
list extensions {
max-elements 64;
leaf name {
type uint32;
mandatory true;
}
leaf value {
type uint32;
mandatory true;
}
}
}
list address {
key ip;
leaf ip {
type inet:ipv4-address;
}
leaf mask {
type inet:ipv4-address;
}
}
leaf-list iface-flags {
type enumeration {
enum UP;
enum DOWN;
enum BROADCAST;
enum RUNNING;
enum MULTICAST;
enum LOOPBACK;
}
}
}
notification linkDown {
leaf ifName {
type string;
mandatory true;
}
}
}

admin@ncs# show devices device pe2 notifications stream | notab
notifications stream NETCONF
description "default NETCONF event stream"
replay-support false
notifications stream tailf-audit
description "Tailf Commit Audit events"
replay-support true
notifications stream interface
description "Example notifications"
replay-support true
replay-log-creation-time 2014-10-14T11:21:12+00:00
replay-log-aged-time 2014-10-14T11:53:19.649207+00:00

module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
container devices {
list device {
....
container notifications {
....
list stream {
description "A list of the notification streams
provided by the device. NCS reads this list in
real time";
config false;
key name;
leaf name {
description "The name of the stream";
type string;
}
leaf description {
description "A textual description of the stream";
type string;
}
leaf replay-support {
description "An indication of whether or not event replay
is available on this stream.";
type boolean;
}
leaf replay-log-creation-time {
description "The timestamp of the creation of the log
used to support the replay function on
this stream.
Note that this might be earlier than
the earliest available
notification in the log. This object
is updated if the log resets
for some reason.";
type yang:date-and-time;
}
leaf replay-log-aged-time {
description "The timestamp of the last notification
aged out of the log";
type yang:date-and-time;
}
}

admin@ncs(config)# devices device www0..2 notifications \
subscription mysub stream interface
admin@ncs(config-subscription-mysub)# commit

admin@ncs# show devices device notifications | notab
devices device www0
notifications subscription mysub
local-user admin
status running
notifications stream NETCONF
description "default NETCONF event stream"
replay-support false
notifications stream tailf-audit
description "Tailf Commit Audit events"
replay-support true
notifications stream interface
description "Example notifications"
replay-support true
replay-log-creation-time 2014-10-14T11:21:12+00:00
replay-log-aged-time 2014-10-14T11:56:45.755964+00:00
notifications notification-name startUp
uri http://router.com/notif
notifications notification-name linkUp
uri http://router.com/notif
notifications notification-name linkDown
uri http://router.com/notif
notifications received-notifications notification 2014-10-14T11:54:43.692371+00:00 0
user admin
subscription mysub
stream interface
received-time 2014-10-14T11:54:43.695191+00:00
data linkUp ifName eth2
data linkUp linkProperty
newlyAdded
flags 42
extensions
name 1
value 3
extensions
name 2
value 4668
data linkUp address 192.168.128.55
mask 255.255.255.0

ncs(config)# devices global-settings trace pretty trace-dir ./logs
ncs(config)# commit
ncs(config)# devices disconnect
ncs(config)# devices device pe2 notifications \
subscription foo stream interface
ncs(config-subscription-foo)# top
ncs(config)# exit
ncs# file show ./logs/netconf-pe2.trace
<<<<in 14-Oct-2014::13:59:52.295 device=pe2 session-id=14
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
<eventTime>2014-10-14T11:58:51.816077+00:00</eventTime>
<linkUp xmlns="http://router.com/notif">
<ifName>eth2</ifName>
<linkProperty>
<newlyAdded/>
<flags>42</flags>
<extensions>
<name>1</name>
<value>3</value>
</extensions>
<extensions>
<name>2</name>
<value>4668</value>
</extensions>
</linkProperty>
<address>
<ip>192.168.128.55</ip>
<mask>255.255.255.0</mask>
</address>
</linkUp>
</notification>
.........

ncs(config)# devices device www0 notifications \
received-notifications max-size 100
admin@ncs(config-device-www0)# commit

module tailf-ncs {
namespace "http://tail-f.com/ns/ncs";
...
container devices {
list device {
....
container notifications {
....
list subscription {
.....
leaf status {
description "Is this subscription currently running";
config false;
type enumeration {
enum running {
description "The subscription is established and we should
be receiving notifications";
}
enum connecting {
description "Attempting to establish the subscription";
}
enum failed {
description
"The subscription has failed, unless the failure is
in the connection establishing, i.e. connect() failed
there will be no automatic re-connect";
}
}
}

ncs# show devices device notifications subscription
LOCAL FAILURE ERROR
NAME NAME USER STATUS REASON INFO
---------------------------------------------
www0 foo admin running - -
mysub admin running - -
www1 mysub admin running - -
www2 mysub admin running - -
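Notifications received over a subscription, such as the `linkUp` event shown in the NETCONF trace earlier, are plain XML and can be inspected with any standard XML library. A minimal sketch using Python's stdlib parser (the payload below is an abridged copy of the traced notification):

```python
import xml.etree.ElementTree as ET

NETCONF_NS = "urn:ietf:params:xml:ns:netconf:notification:1.0"
NOTIF_NS = "http://router.com/notif"

xml_text = """
<notification xmlns="urn:ietf:params:xml:ns:netconf:notification:1.0">
  <eventTime>2014-10-14T11:58:51.816077+00:00</eventTime>
  <linkUp xmlns="http://router.com/notif">
    <ifName>eth2</ifName>
    <address><ip>192.168.128.55</ip><mask>255.255.255.0</mask></address>
  </linkUp>
</notification>
"""

root = ET.fromstring(xml_text)
# Elements are namespace-qualified, so lookups must carry the namespace
event_time = root.findtext("{%s}eventTime" % NETCONF_NS)
if_name = root.findtext("{%s}linkUp/{%s}ifName" % (NOTIF_NS, NOTIF_NS))
print(event_time, if_name)
```

Note that the event payload uses the NED module's namespace (`http://router.com/notif`), while the `notification` wrapper and `eventTime` come from the standard NETCONF notification namespace.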