
Configuration of Splunk for Citrix NetScaler App with AppFlow

jmkosky
New Member

We are running NetScaler 10.1, and I have installed version 5.0 of 'Splunk for Citrix NetScaler' and the 'Splunk Add-on for IPFIX', but so far I cannot see any information in either the NetScaler Overview or AppFlow Overview areas. I am running Splunk on a Linux box, and I can see that data is arriving from the NetScaler. I would welcome any tips or instructions on what extra steps I need to take with indexes or data inputs.

1 Solution

jconger
Splunk Employee

The first thing we need to do is verify that data is coming into your Splunk environment from your NetScaler environment. To do this, launch the NetScaler app on your Splunk instance, then navigate from the menu to "Splunk for Citrix NetScaler" -> "Search NetScaler Data". This opens a dashboard titled "New Search" with a default search of "eventtype=netscaler". Change the time range from "All Time" to "Last 24 hours" to make sure you have data coming in (screenshot: http://grab.by/AGwo ).

If you do have data, change the search to "eventtype=netscaler_appflow" to make sure AppFlow data is coming in as well. Let us know the results of these two searches and we can determine next steps.
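As an aside, an eventtype in Splunk is just a named search expression (the app ships its definitions in eventtypes.conf), so if your data is being indexed with a different index or sourcetype than the app expects, these eventtype searches will come back empty even though data is flowing.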


tkropp
Path Finder

We are experimenting with the SplunkForNetscaler application.

It is important that you look at the Python scripts that come with the app in its /bin directory. These scripts define the source, sourcetype, and index that the app expects. If you do NOT want these defaults, you will need to change the script.

/opt/splunk/etc/deployment-apps/SplunkforCitrixNetScaler/bin/scripted_inputs

import os

# Excerpt: this function writes the app's local/inputs.conf with
# hard-coded sourcetype and index values.
def create_inputs(appdir, disabled):
    localdir = os.path.join(appdir, 'local')
    if not os.path.exists(localdir):
        os.makedirs(localdir)
    inputs_file = os.path.join(localdir, 'inputs.conf')
    fo = open(inputs_file, 'w')
    fo.write("[udp://8514]\n")
    fo.write("#connection_host = dns\n")
    fo.write("sourcetype = ns_log\n")
    fo.write("index = netscaler\n")
    fo.write("disabled = %d\n" % disabled)
    # ... (function continues in the shipped script)

(PS: I wish Splunk had better support for Markdown, like GitHub.)


millern4
Communicator

Hello everyone,

I'm having a very similar issue, although for a period of time the AppFlow dashboards within the app were populating successfully; they have since stopped.

I'm seeing a very similar message in my logs and was wondering whether any resolution ever came out of this thread.

10-07-2014 13:44:17.092 -0400 ERROR ExecProcessor - message from "python /splunk/etc/apps/Splunk_TA_ipfix/bin/ipfix.py" CRITICAL:ipfix:Traceback (most recent call last):
  File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/script.py", line 74, in run_script
    self.stream_events(self._input_definition, event_writer)
  File "/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 117, in stream_events
    self.handle_message(data, address, stanza, writer)
  File "/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 77, in handle_message
    source=":".join([str(v) for v in source])))
  File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/event_writer.py", line 104, in write_event
    event.write_to(self._out)
  File "/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/event.py", line 106, in write_to
    stream.write(ET.tostring(event))
  File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1126, in tostring
    ElementTree(element).write(file, encoding, method=method)
  File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 820, in write
    serialize(write, self._root, encoding, qnames, namespaces)
  File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 939, in _serialize_xml
    _serialize_xml(write, e, encoding, qnames, None)
  File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 937, in _serialize_xml
    write(_escape_cdata(text, encoding))
  File "/splunk/lib/python2.7/xml/etree/ElementTree.py", line 1073, in _escape_cdata
    return text.encode(encoding, "xmlcharrefreplace")
UnicodeDecodeError: 'utf8' codec can't decode byte 0xfa in position 206: invalid start byte
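If I'm reading the traceback right, ElementTree ends up calling str.encode() on raw event text that contains a byte (0xfa) which isn't valid UTF-8. In Python 2, which Splunk embeds, encoding a byte string implicitly decodes it first, and that implicit decode is what raises the UnicodeDecodeError. A minimal sketch of the failure mode (the exact codec named in the message depends on the interpreter's default encoding):

raw = "appflow record \xfa"  # byte string with a non-UTF-8 byte, like the log above

try:
    # Effectively what ElementTree's _escape_cdata() does: calling
    # str.encode() on a byte string first *decodes* it with the default
    # codec, and that decode is what blows up on 0xfa.
    raw.encode("utf-8", "xmlcharrefreplace")
except UnicodeDecodeError as e:
    print("implicit decode failed: %s" % e)

# Decoding explicitly with a lossy error handler sidesteps the crash:
cleaned = raw.decode("utf-8", "replace").encode("utf-8")

So it looks like the add-on chokes on AppFlow fields that aren't clean UTF-8, rather than on anything in the inputs.conf itself.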

My inputs.conf for AppFlow is as follows:

[ipfix://NetScaler_AppFlow]
sourcetype = appflow
index = netscaler
address = 0.0.0.0
port = 4739
buffer = 1048576
disabled = false

Thank you in advance - I'm going to continue to troubleshoot this from my end.


jconger
Splunk Employee

Try this search and let me know what you see:

index=* *netscaler* | stats count by index
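That searches every index (non-default indexes like netscaler aren't searched unless you name them or use index=*) and counts matching events per index, so it will tell us whether the NetScaler data is landing somewhere unexpected.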

jmkosky
New Member

Thanks, jconger.
I did what you asked and the search came back empty. I believe data is coming from the NetScaler, as it was working prior to upgrading Splunk for Citrix NetScaler. I can see that data is arriving on the Splunk box, and the NetScaler is the only device Splunk is monitoring at the moment.


jmkosky
New Member

Hi Jconger
I had a breakthrough earlier and now have Splunk for NetScaler working. I still cannot see any AppFlow data, so I am now trying to sort that out.


jconger
Splunk Employee

Nice. What index is the NetScaler data in?


jmkosky
New Member

Hi Jbennett_splunk
Thanks for the reply. The NetScaler configuration should be OK; it was working up until the point where I updated Splunk for Citrix NetScaler to the latest version. As far as I can tell, the settings are exactly the same as in the example above. I am not sure what you mean by 'match where you are sending the data', so if you could elaborate, that would be great.

If we are talking about the [ipfix://...] section of the inputs.conf file, then I believe that is OK. I am a real newbie at this, so I am unsure how exactly one does a manual search.

To me it looks like Splunk is receiving data from the NetScaler but not doing anything meaningful with it.

I am seeing the following error messages.

09-25-2014 16:12:23.590 +1000 ERROR SearchScheduler - Error in 'SearchOperator:copyresults': Cannot find results for search_id 'scheduler__nobody__SplunkforCitrixNetScaler__RMD5e6c1124fdfffb39d_at_1411625511_0'., search='copyresults dest="appid_lookup" sid="scheduler__nobody__SplunkforCitrixNetScaler__RMD5e6c1124fdfffb39d_at_1411625511_0"' 

09-25-2014 16:11:57.614 +1000 ERROR ExecProcessor - message from "python /opt/splunk/etc/apps/Splunk_TA_ipfix/bin/ipfix.py" CRITICAL:ipfix:Traceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_ipfix/bin/splunklib/modularinput/script.py", line 74, in run_script
    self.stream_events(self._input_definition, event_writer)
  File "/opt/splunk/etc/apps/Splunk_TA_ipfix/bin/IPFIX/ModInput.py", line 105, in stream_events
    s.bind((bind_host, bind_port))
  File "/opt/splunk/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 98] Address already in use

jbennett_splunk
Splunk Employee

The first thing I would say is double-check that the address and port in your [ipfix://...] stanza are correct (0.0.0.0 means it's listening on all local IP addresses, which is usually fine), and match where you're sending the data.
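One quick way to check the port side of that from the Splunk host is to try binding it yourself; just a rough sketch, assuming the add-on's default UDP port 4739:

import socket

# If something else (a second copy of ipfix.py, another UDP input, etc.)
# already owns the port, bind() raises "[Errno 98] Address already in use".
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.bind(("0.0.0.0", 4739))
    print("port 4739/udp is free, so nothing is listening there")
except socket.error as e:
    print("bind failed: %s" % e)
finally:
    s.close()

(Keep in mind that if Splunk's IPFIX input is healthy, it should be the one holding the port, so "free" here would actually mean the input isn't running.)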

Also, of course, make sure you're forwarding both syslog and NetFlow (aka IPFIX) from the NetScaler 😉

The second thing would be to double-check the index and sourcetype settings in your [ipfix://...] configuration, and do a manual search to see if there's any data in that index/sourcetype.

MarioM
Motivator

The app puts the data in index=netscaler, which is not searched by default, so you need index=netscaler in your search.

Also, to collect the data you need to copy the Splunk_TA_Citrix-NetScaler folder from SplunkforCitrixNetScaler/appserver/addons/ to SPLUNK_HOME\etc\apps\ on your Splunk data collection instance, and put it on your search head too.

You might need to modify either your NetScaler config or the ports in SPLUNK_HOME\etc\apps\Splunk_TA_Citrix-NetScaler\default\inputs.conf:

[udp://8514]
#connection_host = dns
sourcetype = ns_log
index = netscaler
disabled = true

# A separate IPFIX addon is needed in order for the following stanza to work.  http://apps.splunk.com/app/1801/
[ipfix://NetScaler_AppFlow]
sourcetype = appflow
index = netscaler
address = 0.0.0.0
port = 4739
buffer = 1048576
disabled = true
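
Note that both stanzas above ship with disabled = true, and Splunk will not run a disabled input. Copy the stanzas into Splunk_TA_Citrix-NetScaler/local/inputs.conf and set disabled = false there (local settings override default and survive app upgrades).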

jmkosky
New Member

Hi Mario
Thanks for replying. I have the above in my SPLUNK_HOME\etc\apps\Splunk_TA_Citrix-NetScaler\default\inputs.conf file already but it has made no difference. Could I please ask you to explain a little more about the 'index=netscaler' portion of your answer? I am unsure where to perform this action. Apologies for being a Splunk newbie. 🙂


MarioM
Motivator

To see if you have data, check whether the netscaler index has been created in /manager/launcher/data/indexes, then run the search /app/search/search?q=search index%3Dnetscaler | head 1000 to see if there is data.


jmkosky
New Member

Hi Mario
As far as I can tell, I do not have that directory. I am running Splunk on Ubuntu Linux and have searched the box for that directory, but found nothing.


MarioM
Motivator

Which directory do you mean? Do you have an index named netscaler in http:///en-US/manager/search/data/indexes ?
