NNMi SNMP Context Polling

NNMi version 23.4

We are an ISP using Cisco ASR9Ks that have many VRFs and Bridge Domains configured. Currently we have not defined SNMP context names or mapped them to community strings for the VRFs and bridge domains, because doing so requires a lot of extra configuration and the VRFs/Bridge Domains are dynamic due to customer turnover.

NNMi detects that the non-default contexts exist and tries to poll them, creating continuous SNMP authentication errors.

Sample error message in device:

"SNMP-SNMP-3-AUTH_FAIL : Received snmp request on unknown community from ..."

Sample from a configuration poll detecting the SNMP contexts:

5/31/24 11:03:40 AM  Get SnmpContext. Found entries:88

Is there any way to configure NNMi so that it does not try polling the VRFs?

Thanks in advance.

  • 0

Did you ever find a solution here? I think I'm fighting with a rather similar issue on our Nexus9k.

  • 0 in reply to 

    Not yet, I'll post resolution if I find one.

  • Suggested Answer

    0  

    Hello Dan,

NNMi discovers VRF context names from MIBs in the default context; as you mentioned, it seems NNMi was able to discover 88 names.

Since names were found, NNMi will try to discover the contents of those contexts (IP addresses, for example). Since the mapping of communities to VRF contexts was not done, you see authentication failures on the devices. I am not aware of any solution on the NNMi side for such an environment.

We do not have a configuration property to disable discovery of VRF contexts; the VRF discovery is triggered automatically as soon as context names are known. You can submit an idea to implement such a property, but it could only be delivered in a new NNMi release.

I can only suggest a couple of solutions on the device side. For example, you could use the "exclude" option when configuring SNMP OID views to exclude the OIDs carrying context names. After that, NNMi will not be able to discover the context names. If you post the device sysOID here (nnmsnmpget.ovpl device-name sysObjectID.0), I could try to provide the OIDs to be blocked. Another approach would be to properly configure the VRF contexts. For details of SNMP and VRF configuration please check the links below.


    www.cisco.com/.../b-system-management-cg-asr9000-75x.pdf
    www.cisco.com/.../snmp-server-commands.html

    Thank you.

    Best regards,

    Sergey Pankratov

  • 0 in reply to   

    The sysOID for these is .1.3.6.1.4.1.9.1.1709

    Thanks for your assistance.

  • 0   in reply to 

The only source of context names for this device that I can see is

SNMP-VIEW-BASED-ACM-MIB

vacmContextName .1.3.6.1.6.3.16.1.1.1.1

To verify:

nnmsnmpwalk.ovpl  device-name  .1.3.6.1.6.3.16.1.1.1.1

If it returns context names, you can block the OID .1.3.6.1.6.3.16.1.1.1.1.
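If the walk returns a lot of rows, a small stand-alone Python sketch can pull just the context names out of the output. The line format in SAMPLE_WALK below is an assumption for illustration, not actual nnmsnmpwalk.ovpl output; adjust the pattern to match what your tool prints.

```python
import re

# Hypothetical walk output for vacmContextName (.1.3.6.1.6.3.16.1.1.1.1);
# the exact nnmsnmpwalk.ovpl output format is an assumption here.
SAMPLE_WALK = """\
.1.3.6.1.6.3.16.1.1.1.1.4.118.114.102.49 : OCTET STRING: vrf1
.1.3.6.1.6.3.16.1.1.1.1.4.118.114.102.50 : OCTET STRING: vrf2
"""

def context_names(walk_output):
    """Return the context-name values found in SNMP walk output lines."""
    names = []
    for line in walk_output.splitlines():
        match = re.search(r"OCTET STRING:\s*(\S+)\s*$", line)
        if match:
            names.append(match.group(1))
    return names

print(context_names(SAMPLE_WALK))  # e.g. ['vrf1', 'vrf2'] for the sample above
```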

  • 0 in reply to   

    The SNMP-VIEW-BASED-ACM-MIB (.1.3.6.1.6.3.16) did not contain any information on our devices:
    nnmsnmpwalk.ovpl myrouter .1.3.6.1.6.3.16
    No MIB objects contained under subtree.

    I've been trying some snmpwalks of the CISCO-CONTEXT-MAPPING-MIB (1.3.6.1.4.1.9.9.468) and have found that our devices do respond with useful information here:

    1.3.6.1.4.1.9.9.468.1.1.1.2 # cContextMappingVrfName
    1.3.6.1.4.1.9.9.468.1.3.1.1 # cContextMappingBridgeInstName

    My next step will be to create an snmp view to block access to the CISCO-CONTEXT-MAPPING-MIB (1.3.6.1.4.1.9.9.468) and see if that helps. I will post the results when testing is completed.
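For anyone trying the same thing, a rough classic-IOS-style sketch of such a view follows. The community and view names are placeholders, and IOS-XR syntax differs somewhat, so verify against the Cisco configuration guides linked earlier in the thread.

```
! Hypothetical sketch, classic-IOS style. "NNMI-VIEW" and "public" are
! placeholder names; adapt to your platform and existing SNMP config.
snmp-server view NNMI-VIEW iso included
snmp-server view NNMI-VIEW 1.3.6.1.4.1.9.9.468 excluded
snmp-server community public view NNMI-VIEW ro
```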

  • 0 in reply to 

After creating an SNMP view to exclude CISCO-CONTEXT-MAPPING-MIB (1.3.6.1.4.1.9.9.468), I found that NNMi still detected other context items and tried to poll them with names like "vpls_xxxx", for example "vpls_3500". I was not able to determine exactly where the "3500" is detected from ("routed interface", etc.) or which other SNMP branch would have to be excluded from the view to prevent NNMi from discovering these. My concern was that if I exclude too many SNMP branches from the view, important information may inadvertently get blocked as well, so I am not pursuing this further.

  • Verified Answer

    +1 in reply to 

    Thanks for that update, I was looking forward to the result of your tests.

    As mentioned, I think I have a similar issue, but discovered on N9K: currently they can't be automatically discovered anymore; when seeding them I just get a "Failed". I currently believe this is also due to the "wrong" contexts (not sure yet whether the device is providing wrong information or NNM is mishandling it). At least a tcpdump shows NNM trying to query lots of SNMP contexts that don't exist (that is, the VRFs exist, but not as SNMP contexts).

    It seems IOS devices are still auto-discovered, but I see a lot of messages in the logs similar to yours. Since those devices reply with an authorizationError, NNM seems to handle them better, just generating lots of messages. On NX, queries to a non-existent context get no reply at all and simply time out.

    Unfortunately I don't have any other information yet; the case is still pending with OpenText as well as Cisco. I'll obviously let you know if I have an update.

    By the way, if somebody else has a similar issue with N9K, I know of two workarounds:
    Workaround 1:
    - set specific communication settings for that device to ICMP only
    - seed the device
    - remove the specific communication setting
    - do two configuration polls (the first one will fail)
    Workaround 2:
    - for each VRF that you have, configure an snmp-context on the device
  • 0 in reply to 

    Thanks for your response.

  • 0 in reply to 

    Just as an update: after a few sessions with support I finally got a patch, "TB-NNMI-23.4P2-DISCOVERY-20240926", that at least allows you to exclude snmpContextDiscovery based on sysOID. It seems to be working in my environment (at least I can regularly seed the Nexus 9k again).

    In my book that still doesn't fix the root cause of discovering the wrong SNMP contexts, but at least it's a clean workaround.