
Easily collect custom metrics using Perl and visualize in OBM


Do you want to store custom metrics in the Operations agent performance data store?  Have you been doing this with Data Source Integration (DSI) and are you looking for a better approach?

One of the hidden gems of Operations agent 12 is the ability to log custom metrics to the performance data store using Perl.  It offers several advantages over using Data Source Integration (DSI):

  • Support for multi-instance* data
  • Support for 64-bit data types
  • Less effort, since there is no need to pre-create and compile a class specification file

* Multi-instance data refers to multiple data points submitted for a given metric at a given time.  For example, logging JVM heap usage for 3 different JVMs running on the same system.
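
To make the multi-instance case concrete, below is a minimal sketch of logging heap usage for three JVMs in a single observation.  It assumes the same oaperlapi calls used in the full example later in this article (AddGauge takes a "datasource:class:metric" key, an instance name, a description and a value); the data source, metric and instance names here are illustrative only.

use strict;
use warnings;
use oaperlapi;

my $access = oaperlapi::OAAccess->new();
my $molist = oaperlapi::MetricObservationList->new();
my $mo = oaperlapi::MetricObservation->new(time); # timestamp for this observation

# Same metric, three instances - one per JVM (values are illustrative)
$mo->AddGauge("MyJVM:Heap:HeapUsedMB","jvm_payments","Heap Used (MB)",512);
$mo->AddGauge("MyJVM:Heap:HeapUsedMB","jvm_billing","Heap Used (MB)",768);
$mo->AddGauge("MyJVM:Heap:HeapUsedMB","jvm_web","Heap Used (MB)",256);

$molist->AddObservation($mo);
$access->SubmitObservations($molist);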

This article describes how to use Operations agent 12’s simplified custom data logging within an OBM policy.  The example shows how to log the number of events received by OBM to the performance data store on the Data Processing Server (DPS).  While the example embeds the Perl into a policy, you can use the same Perl as a standalone script.  In either case, both the agent APIs and the OBM APIs facilitate monitoring as code.
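
Because the same submission boilerplate is needed whether the Perl runs inside a policy or as a standalone script, it can help to factor it into a small helper subroutine.  The sketch below is one possible way to do that; the subroutine name and the example metric values are illustrative, and it assumes the oaperlapi calls shown in the full example further down.

sub submit_gauges {
  # Each gauge is an array reference: [ "datasource:class:metric", instance, description, value ]
  my ($timestamp, @gauges) = @_;
  my $access = oaperlapi::OAAccess->new();
  my $molist = oaperlapi::MetricObservationList->new();
  my $mo = oaperlapi::MetricObservation->new($timestamp);
  $mo->AddGauge(@$_) for @gauges;
  $molist->AddObservation($mo);
  $access->SubmitObservations($molist);
}

# For example:
# submit_gauges(time, [ "OMi:Event:EventCount", "OMi", "Event Count", 42 ]);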

The end result enables you to graph the event count and event rate over time in OBM’s Performance Dashboard.

 

 

To capture this data over time for historical graphing and reporting, you can create a policy that leverages the Operations agent’s Perl API capability of logging custom data.

Details on how to use the Perl API are in the Operations Agent documentation.  The Perl script shown below is one example of how to use the API.

Example: Create an OBM policy to log event count statistics

Create a Scheduled Task policy that runs every 5 minutes.  Select the Task Type “Perl Script” and enter the Perl script shown below.  Note the use of the Operations agent 12 submittal API (oaperlapi) near the end of the script.

use strict;
use warnings;
use Time::Local;
use oaperlapi;

# Environment
my $omihome;
my $cmd;

if (defined $ENV{"TOPAZ_HOME"}) {
  $omihome = $ENV{"TOPAZ_HOME"};
} else {
  if ( $^O =~ /MSWin32/ ) {
    $omihome = "C:/HPBSM";
  } else {
    $omihome = "/opt/HP/BSM";
  }
}

if ( $^O =~ /MSWin32/ ) {
  $cmd = "$omihome/opr/support/opr-jmxClient.bat -u \"admin\" -p \"\" -s localhost:29622 -b opr.backend:name=EventStatisticsMBean -m showSummedEventStatistics -a ";
} else {
  $cmd = "$omihome/opr/support/opr-jmxClient.sh -u \"admin\" -p \"\" -s localhost:29622 -b opr.backend:name=EventStatisticsMBean -m showSummedEventStatistics -a ";
}

my $timeRange = 300; # Get event count for the previous 5 minute (300 sec) window which is the minimum window size
my $currentEpochTime = time;
my $startEpochTime = $currentEpochTime - (2 * $timeRange); # 10 minutes ago
my $endEpochTime = $currentEpochTime - $timeRange; # 5 minutes ago

my ($sec, $min, $hour, $day, $month, $year);
($sec, $min, $hour, $day, $month, $year) = (localtime($startEpochTime))[0,1,2,3,4,5];
my $startTime = ($year + 1900)."-".($month + 1)."-".$day." ".$hour.":".$min; # e.g. 2016-2-26 10:38

($sec, $min, $hour, $day, $month, $year) = (localtime($endEpochTime))[0,1,2,3,4,5];
my $endTime = ($year + 1900)."-".($month + 1)."-".$day." ".$hour.":".$min; # e.g. 2016-2-26 10:38

# get event pipeline stats
my $eventStats = qx{$cmd "$startTime" "$endTime"};

# process interesting records
my $eventCount = 0;
my $eventRate = 0;
($eventCount) = $eventStats =~ /RECEIVED (\S+)/;
$eventCount = 0 unless defined $eventCount; # no match => treat as zero events
$eventRate = sprintf("%.2f", $eventCount / $timeRange);


# Submit to the OA Perf Data Store via Perl APIs

my $access = oaperlapi::OAAccess->new();
my $molist = oaperlapi::MetricObservationList->new();
my $interval = $endEpochTime; # observation timestamp (epoch seconds) = end of the measured window
my $mo = oaperlapi::MetricObservation->new($interval);

$mo->AddGauge("OMi:Event:EventCount","OMi","Event Count",$eventCount+0);
$mo->AddGauge("OMi:Event:EventRate","OMi","Events Per Second",$eventRate+0);
$molist->AddObservation($mo);
$access->SubmitObservations($molist);

 

After saving the policy, all you need to do is assign and deploy it to the DPS node.  After 5 to 10 minutes, you should see metrics being logged, which you can verify by dumping the OMi data source on the agent:

ovcodautil -dumpds OMi
===
10/29/18 10:36:00 AM|Event_Id                      |OMi       |
10/29/18 10:36:00 AM|EventCount                    |     53.00|
10/29/18 10:36:00 AM|EventRate                     |      0.18|

Create the performance dashboard

In OBM’s Performance Perspective, select the appropriate view and CI.  In this example, the metrics are applicable only to the OBM DPS rather than all nodes.  Therefore, select the OMi Deployment view and then the OMi Processing Server CI in that view.  Create a new dashboard containing a graph with the new metrics.

 

This has been just one example of logging OBM metrics to the Operations agent 12 performance data store for visualization in the OBM Performance Dashboard.  This approach has been used in several policies of the OBM Server self-monitoring content pack.
