All Topics

Hi all, I have Splunk ESS version 7.1.3. After updating the GeoLite2-City.mmdb database (last updated 17/3/20), I noticed that the location in my query is still wrong. Example:

index=network eventtype=cisco_vpn_start "AnyConnect parent session started." | where src_ip="37.161.xx.xxx" | iplocation src_ip | table src_ip, Country

src_ip          Country
37.161.xx.xxx   France

However, when I run the following command:

| makeresults | eval ip="37.161.xx.xxx" | iplocation ip | table Country, ip

Country   ip
Italy     37.161.xx.xxx

Why is the correct value (Italy) returned in the second case, while France appears in the first? This behavior seems very anomalous to me. Do you have any suggestions or a fix for the first query? Thanks a lot, Saverio
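One possible cause, offered as a guess rather than a confirmed diagnosis: iplocation is a distributable streaming command, so in the first search it may execute on the indexers (which could still hold the old GeoLite2-City.mmdb), while makeresults runs only on the search head. A quick way to test this theory is to force the lookup onto the search head with localop:

```
index=network eventtype=cisco_vpn_start "AnyConnect parent session started."
| where src_ip="37.161.xx.xxx"
| localop
| iplocation src_ip
| table src_ip, Country
```

If this variant returns Italy, the indexers' copy of the .mmdb likely needs updating as well.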
Hello all! I'm trying to build dropdowns in a dashboard for fields I've built via 'rex field' and eval statements, seen in the search below. I am having trouble tying these fields into $token$ values. I've tried placing them into the search in a couple of different places, but the search just fails. Here is the search as it is currently built. Thanks for any direction you can provide.

index=pcf_* cf_org_name="Network Software Development and Automation" cf_space_name="Development" cf_app_name=*privatecloud-dev* msg=*VALUES* *user_logs* user="$fields,0$"
| rex field=msg "VALUES (?<valuees>.*)"
| eval fields=split(valuees,"'")
| eval user=mvindex(fields,0)
| eval user=mvindex(fields,1)
| eval method=mvindex(fields,3)
| eval page=mvindex(fields,5)
| eval params=mvindex(fields,7)
| eval datetime=mvindex(fields,9)
| search user=$"fields,0"$
| stats count by datetime user method page params
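One sketch of how this is often wired up, under the assumption that the dashboard dropdown sets a token named user_tok (a name invented here for illustration): token names can't contain commas or quotes the way $fields,0$ and $"fields,0"$ are written, and a field created mid-search can only be filtered after the eval that creates it, so the token reference would come late in the pipeline:

```
index=pcf_* cf_org_name="Network Software Development and Automation" cf_space_name="Development" cf_app_name=*privatecloud-dev* msg=*VALUES* *user_logs*
| rex field=msg "VALUES (?<valuees>.*)"
| eval fields=split(valuees,"'")
| eval user=mvindex(fields,1)
| eval method=mvindex(fields,3)
| eval page=mvindex(fields,5)
| eval params=mvindex(fields,7)
| eval datetime=mvindex(fields,9)
| search user="$user_tok$"
| stats count by datetime user method page params
```

Note that the dropdown's own option list would also need to be populated by a separate search that performs the same rex/eval extraction, since these fields don't exist at index time.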
Hi there, I'm trying to create time series data using the streamstats function. I've got it figured out, but is there any way to avoid the rename and have the new columns produced by streamstats replace the old values? I read the streamstats documentation but didn't see any optional fields that apply to my use case (or I may not have understood it).

splunk_server=indexer* index=wsi sourcetype=fdpwsiperf partnerId!=*test* partnerId=* error_msg_service=* tax_year=2019 ofx_appid=* capability=*
| eval error_msg_service = case(error_msg_service="OK", "Success", match(error_msg_service, "Provider/Host Request Error"), "HTTP Request Error", match(error_msg_service, "Provider/Host Response Error"), "HTTP Response Error", match(error_msg_service, "Provider/Host Not Available"), "Server Exception", 1==1, "Import Failure")
| timechart span=10m dc(intuit_tid) by error_msg_service
| streamstats sum
| rename "sum(Success)" as "Success", "sum(HTTP Request Error)" as "HTTP Request Error", "sum(HTTP Response Error)" as "HTTP Response Error", "sum(Import Failure)" as "Import Failure", "sum(Server Exception)" as "Server Exception"

Thanks!
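One approach worth trying (I believe the stats family of commands, streamstats included, supports wildcard field matching, but verify on your version): write the sums back to the original column names with a wildcard rename built into the aggregation itself, which removes the rename stage entirely:

```
| timechart span=10m dc(intuit_tid) by error_msg_service
| streamstats sum(*) as *
```

Here sum(*) computes a running sum over every column produced by timechart, and "as *" reuses each original column name instead of emitting sum(...) names.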
Hello, I would like to be able to format the values shown in my heatmap; for example, for a given cell with a value of 500,000 I would like to show 500k, and so forth. I am willing to write some JavaScript, and I have even seen in the app's source code that there is a format property I could use. How could I accomplish this customization? I appreciate any help given.
Our organization is using the SolarWinds MSP N-central Security Manager to handle antivirus. Does anyone have any experience pulling in the AV data from this? I cannot find anything on getting SolarWinds Security Manager and Splunk to talk to each other. I am guessing this is because SolarWinds is trying to compete against Splunk for a SIEM...
Event data has multiple time values in the Epoch time format. I am able to convert the one used for the event timestamp without issue, but I'm having trouble with the eval statements in props.conf to convert the additional fields to a human-readable time for indexing.

Example of times in the event (referenced as time.event, time.receive, and time.report):

Example of EVAL statements:
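A sketch of what such eval conversions might look like in props.conf, assuming the extracted fields are literally named time.receive and time.report (the sourcetype name below is invented). Field names containing dots must be single-quoted inside eval expressions:

```
[your:sourcetype]
# Assumed field names; adjust the strftime format string to taste.
EVAL-time_receive = strftime('time.receive', "%Y-%m-%d %H:%M:%S")
EVAL-time_report  = strftime('time.report', "%Y-%m-%d %H:%M:%S")
```

One caveat: EVAL- statements in props.conf are applied at search time, not index time. If the converted values truly need to be written at indexing, INGEST_EVAL in transforms.conf (available in recent Splunk versions) would be the mechanism to look into instead.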
In the Splunk Add-on Builder, I configured a modular input using a REST API to pull data from FortiOS/FortiGate. I am trying to pass global account values (setup parameters), but the recommended variables from the following link do not work when I test the connection: https://docs.splunk.com/Documentation/AddonBuilder/3.0.1/UserGuide/ConfigureDataCollection#Pass_values_from_setup_parameters

Here is the error message:

[ERROR] - [test] HTTPError reason=HTTP Error Unable to find the server at %7b%7bglobal_account.username%7d%7d when sending request to url=https://{{global_account.username}}/api/v2/cmdb/firewall/address?with_meta=true&vdom=*&access_token={{global_account.password}} method=GET

I'm guessing the variable syntax is incorrect, or am I wrong to try to pass the global account values into the URL?
Hi, I am looking for some information on Splunk Universal Forwarder upgrades. We have 3000+ forwarders that need a mass upgrade. What do I need to consider when upgrading, such as backups? I am looking for scripts for Linux systems to remotely upgrade the forwarders. Please let me know if anyone has scripts available. I am looking for the same info for Windows as well.
In the Splunk Add-on Builder, I configured a modular input using a REST API to pull data from FortiOS/FortiGate. I tested my REST URL and received the following error message:

[ERROR] - [test] HTTPError reason=HTTP Error Content purported to be compressed with deflate but failed to decompress. when sending request to url=https://x.x.x.x/api/v2/cmdb/firewall/address?with_meta=true&vdom=*&access_token=xxxxxxxxxxxxx method=GET

When I tested the URL in a browser, the JSON came back fine. Authentication takes place in the URL via the access_token parameter. Any ideas on how to get around this compression error? Here is the full log:

2020-03-19 19:42:56,347 - test_fortigate_configuration_firewall_addresses - [ERROR] - [test] HTTPError reason=HTTP Error Content purported to be compressed with deflate but failed to decompress. when sending request to url=https://x.x.x.x/api/v2/cmdb/firewall/address?with_meta=true&vdom=*&access_token=xxxxxxxxxxxxx method=GET
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/httplib2/__init__.py", line 455, in _decompressContent
    content = zlib.decompress(content, -zlib.MAX_WBITS)
zlib.error: Error -3 while decompressing data: invalid stored block lengths

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/cloudconnectlib/core/http.py", line 224, in _retry_send_request_if_needed
    uri=uri, body=body, method=method, headers=headers
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/cloudconnectlib/core/http.py", line 195, in _send_internal
    uri, body=body, method=method, headers=headers
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/httplib2/__init__.py", line 1957, in request
    cachekey,
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/httplib2/__init__.py", line 1622, in _request
    conn, request_uri, method, body, headers
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/httplib2/__init__.py", line 1592, in _conn_request
    content = _decompressContent(response, content)
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/httplib2/__init__.py", line 466, in _decompressContent
    content,
httplib2.FailedToDecompressContent: Content purported to be compressed with deflate but failed to decompress.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/cloudconnectlib/core/engine.py", line 291, in _send_request
    response = self._client.send(request)
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/cloudconnectlib/core/http.py", line 275, in send
    url, request.method, request.headers, request.body
  File "/opt/splunk/etc/apps/sherlock_fortigate-config_ta/bin/sherlock_fortigate_config_ta/aob_py3/cloudconnectlib/core/http.py", line 229, in _retry_send_request_if_needed
    raise HTTPError('HTTP Error %s' % str(err))
cloudconnectlib.core.exceptions.HTTPError: HTTP Error Content purported to be compressed with deflate but failed to decompress.
2020-03-19 19:42:56,347 - test_fortigate_configuration_firewall_addresses - [INFO] - [test] This job need to be terminated.
2020-03-19 19:42:56,347 - test_fortigate_configuration_firewall_addresses - [INFO] - [test] Job processing finished
2020-03-19 19:42:56,347 - test_fortigate_configuration_firewall_addresses - [INFO] - [test] 1 job(s) process finished
2020-03-19 19:42:56,347 - test_fortigate_configuration_firewall_addresses - [INFO] - [test] Engine executing finished
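One workaround worth trying, offered as an educated guess rather than a confirmed fix: the server appears to be advertising deflate-compressed content that httplib2 cannot actually decode, so asking for an uncompressed response may sidestep the problem. If the Add-on Builder input lets you add request headers (check the data collection settings; not every version exposes this), the relevant header would be:

```
Accept-Encoding: identity
```

This tells the server not to compress the response body at all, which the browser test suggests the device handles fine.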
Dear community, I am lost creating a regexp that will ease my data input creation. I have a file share being monitored by Splunk with the following structure:

/data/reports/ApplicationA/LocationA/very_interesting.log
/data/reports/ApplicationA/LocationB/very_interesting.log
/data/reports/ApplicationB/LocationB/very_interesting.log

To scale with ease, I would like to define a single data input for ApplicationA that extracts the host from two segments of the path, i.e.:

ApplicationA_LocationA
ApplicationA_LocationB

Do you have any idea how I could transform the / between ApplicationA and the location subfolders into a _, and then do the pattern matching to extract the host? Thanks in advance!
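One approach, sketched under the assumption that overriding the host at parse time is acceptable: a transform that captures both path segments from the source and joins them with an underscore. The stanza and transform names below are invented for illustration:

```
# props.conf
[source::/data/reports/...]
TRANSFORMS-sethost = set_host_from_path

# transforms.conf
[set_host_from_path]
SOURCE_KEY = MetaData:Source
REGEX = /data/reports/([^/]+)/([^/]+)/
FORMAT = host::$1_$2
DEST_KEY = MetaData:Host
```

The regex captures the application and location directories as $1 and $2, and FORMAT writes them back as a single host value such as ApplicationA_LocationA. Being an index-time transform, this would need to live on the indexers (or heavy forwarders) and only affects newly indexed data.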
Hi everyone! I have researched this issue and found a few solutions, though not complete ones. I followed this link: https://answers.splunk.com/answers/557838/create-an-alert-based-on-cpu-being-at-95-for-a-spa.html and wanted to know if I can use "%_Processor_Time" instead of CPUPct, as I am not able to extract the "CPUPct" field. I also followed this link: https://answers.splunk.com/answers/693250/how-do-i-alert-if-cpu-is-greater-than-97-for-more.html and wanted to understand what "instance=Total" means. Also, which of the accepted answers is better to use? The queries I used are as follows:

SPL 1:

index="perfmoncpu"
| bin _time span=1m
| stats max(%_Processor_Time) as PercentProcessorTime by host _time
| eval PercentProcessorTime = round(PercentProcessorTime, 2)
| eval overload = if(PercentProcessorTime >= 90, 1, 0)
| streamstats current=f last(overload) as prevload by host
| eval newgroup=case(isnull(prevload),1, prevload!=overload,1, true(),0)
| streamstats sum(newgroup) as groupno by host
| eventstats count as LoadDuration by host groupno
| where overload = 1 and LoadDuration >= 10
| table host _time PercentProcessorTime LoadDuration

SPL 2:

index="perfmoncpu" source="PerfmonMk:CPU" instance=_Total
| sort 0 _time
| streamstats time_window=15min avg(cpu_load_percent) as last15min_load count by host
| eval last15min_load = if(count < 90, null, round(last15min_load, 2))
| where last15min_load >= 90
| table host, cpu_load_percent, last15min_load

I used count<90 because the SPL above generates a count of 90 minutes throughout. Please let me know if you have any further questions. Thank you! PS: I am a newbie trying to learn Splunk!
I am troubleshooting an issue with an app, and the searchoperatorKV error I am receiving is:

Invalid key-value parser, ignoring it, transform_name='my_transform'.

The transform in question in transforms.conf looks like this:

# transforms.conf
[my_transform]
MV_ADD = True

My question is: are the boolean values true and false case sensitive within the .conf files?
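For what it's worth, and without claiming this is the cause of your error: a transform used as a search-time key-value parser generally needs a REGEX (or DELIMS) alongside MV_ADD, so a stanza containing only MV_ADD may trigger the "invalid key-value parser" message regardless of how the boolean is cased. A minimal sketch, with an invented capture pattern:

```
# transforms.conf
[my_transform]
REGEX = (\w+)=(\w+)
FORMAT = $1::$2
MV_ADD = true
```

Here REGEX defines what a key-value pair looks like, FORMAT maps the captures to field::value, and MV_ADD makes repeated keys multivalued instead of being discarded.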
My Seattle-based company just onboarded Zoom for our rapid remote-access expansion in response to COVID-19. We are looking to perform analytics on the logs immediately. I see there is an app pushing from Splunk to Zoom, but I cannot find any documentation on how to ingest Zoom data. I do see that Zoom has an API. Any assistance here?
Hi all, for a search similar to the following:

index=myindex "Search Term" NOT field=value source="mylog.log"
| eval totalx=aCount+bCount
| stats sum(totalx) by y
| sort -sum(totalx)

Splunk returns exactly the data I am looking for: a table of sum(totalx) by y (of which there are around 5 different values of y). I have a request to combine the sum(totalx) values for 2 of the 5 values and treat them as one value, but leave the rest unchanged. What would be the best way to accomplish this? For instance, right now my search returns a table similar to this:

y    sum(totalx)
1    10
2    20
3    30
4    40
5    50

I am essentially trying to create an additional field, let's call it 45, which represents the sum of 4 and 5 at all times. So instead, the data being visualized is:

y    sum(totalx)
1    10
2    20
3    30
45   90

Thanks!
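One common pattern for this: remap the two y values to a shared label before the stats, so they aggregate into one bucket. A sketch (the label "45" is just the combined name from the question, and the comparison assumes y is numeric; adjust if y is a string):

```
index=myindex "Search Term" NOT field=value source="mylog.log"
| eval totalx=aCount+bCount
| eval y=if(y==4 OR y==5, "45", y)
| stats sum(totalx) by y
| sort -sum(totalx)
```

Because the eval runs before stats, rows for 4 and 5 both carry y="45" and their totalx values sum together, while the other rows are untouched.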
How do I convert 25 days to seconds, take the difference between that point in time and the current time in seconds, and display the difference?
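A sketch of one interpretation of this in SPL: 25 days is 25 * 86400 = 2,160,000 seconds, and now() returns the current epoch time in seconds, so a timestamp 25 days back and its distance from now can be computed with simple arithmetic:

```
| makeresults
| eval seconds_in_25_days = 25 * 86400
| eval time_25_days_ago = now() - seconds_in_25_days
| eval diff_from_now = now() - time_25_days_ago
| table seconds_in_25_days time_25_days_ago diff_from_now
```

If the intent is instead the difference between some event field and now, substitute that epoch-valued field for time_25_days_ago.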
We are ingesting firewall data from Panorama and GP cloud service logs from Cortex into the same index, pan_logs, with sourcetype=pan:log. The logs from Panorama are getting parsed properly; however, the data from the Cortex Data Lake for the GlobalProtect cloud service is not getting parsed. Does the Palo Alto Networks for Splunk add-on support data coming from Cortex? Any suggestions to make this work?
I am trying to set up my forwarders to use SSL without having to use the built-in client certs on version 8.0.2.1. It looks like the useSSL option in outputs.conf doesn't do what the documentation says. https://docs.splunk.com/Documentation/Splunk/8.0.2/Admin/Outputsconf says:

* Whether or not the forwarder uses SSL to connect to the receiver, or relies on the 'clientCert' setting to be active for SSL connections.
* If set to "true", then the forwarder uses SSL to connect to the receiver.
* Default: legacy

Here is my outputs.conf file:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunkserver:9997
useSSL = true

Here is the inputs.conf file on the server:

[splunktcp-ssl://9997]
connection_host = ip

[SSL]
requireClientCert = false
serverCert = $SPLUNK_HOME/etc/auth/servercert.pem
#Use sslPassword = password
sslPassword = password

This outputs.conf file does work:

[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
server = splunkserver:9997
useSSL = true
clientCert = $SPLUNK_HOME/etc/auth/server.pem
sslPassword = password
Hi, I installed the PAVO Network Traffic App for Splunk on Splunk Enterprise 8.0 (60-day trial), but I do not see any data on the dashboard. I just started with Splunk 2 weeks ago. I already installed the Splunk Common Information Model, and I am already monitoring syslog from Windows 10 and Linux Mint VMs (using the respective forwarders). Please, could you help? What is the correct SPL to check the data? In which index does the PAVO Network Traffic App expect data? Many thanks in advance for your help. Samir
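As a generic first diagnostic that doesn't assume anything about which index or sourcetype the PAVO app reads from, a tstats search can quickly show what data has actually been indexed and where:

```
| tstats count where index=* by index, sourcetype
```

If the index or sourcetype the app's dashboards search against doesn't appear in that list, the gap is on the ingestion side rather than in the app itself.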
I would like to understand what the EMEA address space used for SaaS Controllers is. I am looking at this document: https://docs.appdynamics.com/display/PAA/SaaS+Domains+and+IP+Ranges but I am not sure whether the address ranges mentioned under "AppD Data Center" are valid for all regions of the world, because when I open the "EMEA" tab there are no IP addresses in it. It says: "AppDynamics will provide a unique set of IP addresses at the time of provisioning the Controller within AWS." What does this mean? Are these addresses provided at provisioning time from the same address block mentioned before, or not? I need to know this because our customers want to allow outbound connectivity only to certain IP addresses; they don't want to open port 443 to the whole internet.
I recently activated my 7-day trial sandbox for Splunk Enterprise Security, as I want to evaluate the functionality of the Use Case Library. However, the sandbox is running version 7.1.10 of Splunk and version 5.1.1 of the ES application. As I understand it, the Use Case Library was added in version 5.2 of the ES app, so I cannot access the feature. Is there another way to evaluate the Use Case Library feature?