All Topics


How do I convert the query below when the summarization status is unknown?

index="netsec_firewall" sourcetype="pan:traffic" action="allowed" app:technology="client-server"
| stats first(start_time) AS start_time count by app user src_ip src_host dest_ip dest_host dest_port duration
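If the goal is to convert this into an accelerated data model search, a minimal sketch using tstats might look like the one below. The Network_Traffic data model name and the All_Traffic field mappings are assumptions, not something from the original post; summariesonly=false lets the search fall back to raw events while the summarization status is unknown, and fields such as src_host, dest_host and start_time are left out because they may not map onto CIM fields:

| tstats summariesonly=false count from datamodel=Network_Traffic where All_Traffic.action="allowed" by All_Traffic.app All_Traffic.user All_Traffic.src_ip All_Traffic.dest_ip All_Traffic.dest_port
| rename "All_Traffic.*" AS *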
We have upgraded our NIPS and the management tool has a different IP address than the old one. The NIPS is sending data to our syslog server and putting it in our unassigned folder. I edited the syslog filters to put any messages from the new IP address in the NIPS folder for monitoring. I have restarted the syslog-ng service and the Splunk service. I have confirmed the logs are being written to the NIPS folder on the syslog server now, but Splunk is still importing them to the unassigned index instead of the NIPS index and showing the source as the old folder. I have also confirmed that no new entries have been put in the old folder since the change. The inputs.conf looks like this. Where else can I look that might be causing the data to not go into the NIPS index?

[monitor:///app01/syslog/data/siem01/nips/.../*messages]
_rcvbuf = 1572864
disabled = false
host = siem01
host_segment = 6
index = nips
sourcetype = mcafee:ips

[monitor:///app01/syslog/data/siem01/unassigned/.../*messages]
_rcvbuf = 1572864
disabled = false
host = siem01
host_segment = 6
index = unassigned
sourcetype = syslog
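A couple of checks that sometimes help with this kind of symptom (these are generic Splunk CLI commands run on the forwarder, not something confirmed from the post; the paths simply reuse the stanzas above):

$SPLUNK_HOME/bin/splunk btool inputs list monitor --debug
    (shows the effective monitor stanzas after all config layering and which file each setting comes from)
$SPLUNK_HOME/bin/splunk list inputstatus
    (shows which files the tailing processor is actually reading and how far it has read)

If the NIPS folder was seeded by copying files over from the unassigned folder, the forwarder may also consider them already indexed; that is only a guess from the description, not something the post states.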
I'm having issues with downloading. After I press download, it takes me to the Splunk Software License Agreement page without downloading anything, and there is no checkbox or any button on the page to continue the process.
If I want to use a field (alarm_time) from the main search as a search criterion for a sub-search, what code should I write? In the following code, I want to find the work window during which the alarm occurred.

Condition: work_start < alarm_time < work_end
Expected result: the row where work_name=work_b

| makeresults
| eval _raw="alarm_time,host,message
2022/03/26 18:05,test_node,test_down"
| multikv forceheader=1
| eval alarm_time_strp = strptime(alarm_time,"%Y/%m/%d %H:%M")
| join type=left host
    [| makeresults
    | eval _raw="host,work_start,work_end,work_name
test_node,2022/03/26 17:00,2022/03/26 18:00,work_a
test_node,2022/03/26 18:00,2022/03/26 19:00,work_b
test_node,2022/03/26 19:00,2022/03/26 20:00,work_c"
    | multikv forceheader=1
    | eval work_start_strp = strptime(work_start,"%Y/%m/%d %H:%M")
    | eval work_end_strp = strptime(work_end,"%Y/%m/%d %H:%M") ]
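A hedged sketch of one way to finish this, reusing the search above: max=0 keeps every matching work row from the sub-search (join otherwise keeps only the first match per host), and the where clause applies the work_start < alarm_time < work_end condition to the parsed times. Only the max=0 option and the last two lines are additions:

| join type=left max=0 host
    [| makeresults
    ... same sub-search as above ...
    | eval work_end_strp = strptime(work_end,"%Y/%m/%d %H:%M") ]
| where alarm_time_strp > work_start_strp AND alarm_time_strp < work_end_strp
| table alarm_time host message work_name work_start work_end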
I have a data set where I am trying to apply a group-by on multiple columns. I tried stats with list and ended up with this output:

country   state      time            #travel
India     Bangalore  20220326023652  1
                     20220326023652  1
                     20220327023321  1
                     20220327023321  1
                     20220327023321  1

Whereas I am looking for something like this:

country   state      time            #travel
India     Bangalore  20220326023652  2
                     20220327023321  3

Any suggestions on the right query please!
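A hedged sketch, assuming the fields are literally named country, state and time and that each raw event represents one travel record, so counting rows per group gives the collapsed output above:

| stats count AS "#travel" by country state time
| sort 0 country state time

(The blank country/state cells in the desired output are a display detail; stats itself repeats the values on every row.)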
Hello, I am trying to set up a search where we look for single source IPs hitting multiple destination IPs on our firewall.
1. When I do a search, I get TONS of results for destinations, but I want to limit the destination results to only show a small sample set.
2. I also have results showing up which only show one destination IP, which we do not want.

The search I am using as an example is:

index=pan_logs eventtype=pan_traffic dvc="FD0*.*" action=allow OR action=allowed OR action=alert app=sip OR dest_port=5060 OR dest_port=5061 AND src_ip!=10.0.0.0/8 AND src_ip!=172.16.0.0/12 AND src_ip!=192.168.0.0/16
| stats values(rule) values(dest_ip) values(dest_port) count by src_ip vendor_action app dvc vsys
| sort bt count desc limit=10
| sort dest_ip
| where count > 500
| fields src_ip dvc vsys values(rule) app values(dest_ip) values(dest_port) vendor_action count
| rename src_ip AS "Source IP", vendor_action AS "Action", values(rule) AS "Firewall Rule", values(dest_ip) AS "Target IP", values(dest_port) AS "Destination Port", count AS "Total Count", dvc AS "Device", app AS "Application"
| head 20

Example search result
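A hedged sketch of one way to address both points (field names reused from the search above; the thresholds are placeholders): count distinct destinations per source, keep only sources with more than one, and trim the values() list to a small sample with mvindex.

... | stats dc(dest_ip) AS dest_count values(dest_ip) AS sample_dest_ips values(dest_port) AS dest_ports values(rule) AS rules count by src_ip vendor_action app dvc vsys
| where dest_count > 1 AND count > 500
| eval sample_dest_ips=mvindex(sample_dest_ips, 0, 4)
| sort - count
| head 20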
Using the Splunk Add-on for AWS to collect EC2 instance metadata, I get an array called tags with key/value pairs such as below. What I want to do is extract the cluster name as a distinct variable so that I can search on it, or even better, aggregate on it. Thoughts?

{
   Key: hostname
   Value: elasticsearch001
}
{
   Key: cluster
   Value: systemlogs
}
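A hedged sketch, assuming the add-on presents the array as tags{}.Key / tags{}.Value in the event JSON (worth confirming against an actual event before relying on it): spath pulls the keys and values as multivalue fields, mvfind locates the position of the "cluster" key, and mvindex reads the value at that position.

| spath path=tags{}.Key output=tag_key
| spath path=tags{}.Value output=tag_value
| eval cluster_name=mvindex(tag_value, mvfind(tag_key, "^cluster$"))
| stats count by cluster_name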
Hi there, hoping this is a quick question: I've got a search which polls for several event log types, and I want to put them into a table by event type using the number of hosts in each event type, rather than just the total of events per type. Right now it looks something like:

(searchForA=A) (searchForB=B) (searchForC=C) (searchForD=D)
| eval EventType=case(
    match(searchForA, "A"), "Results of A",
    match(searchForB, "B"), "Results of B",
    match(searchForC, "C"), "Results of C",
    match(searchForD, "D"), "Results of D")
| stats count by EventType

This shows me the counts of each event type no problem, and it works to show really big numbers, but what I'd like to show is a count of hosts per event type... so something like | stats count by host PER EventType. Any help would be great!
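A hedged sketch, keeping the EventType eval above as-is: dc(host) gives the number of distinct hosts per event type, and count keeps the raw event total alongside it.

... | stats dc(host) AS host_count count AS event_count by EventType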
Hi, I have a simple stats:

| stats values(Field1) sum(TB) by Field2 date_month

This gives me one row for each month:

Field1  10  Field2  Jan
Field1  15  Field2  Feb

I want to see it like below, so each month is on the same row grouped by the fields:

Field1  Field2  Jan 10  Feb 15

Tried transpose and some other suggestions. I just keep missing. Thanks, Chris
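If the aim is one column per month on a single row per Field1/Field2 combination, a hedged sketch (reading the desired layout loosely, and assuming Field1 and Field2 both exist on each event rather than Field1 being an aggregate):

| eval group=Field1." / ".Field2
| chart sum(TB) over group by date_month

chart pivots date_month into columns, so Jan and Feb end up as separate columns on the same row. If a different shape is wanted, xyseries or transpose after the original stats are the other usual tools.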
Hi! I have an unstructured log in the following format, and I can't seem to figure out how I can count the number of occurrences for each key in "keys".

log: I, [2022-03-25T18:29:43.325002 #55] INFO -- : {:entry=>[{:op=>"operation1", :keys=>["key:my_key5, size:6309"]}]}
log: I, [2022-03-25T18:29:43.324043 #56] INFO -- : {:entry=>[{:op=>"operation2", :keys=>["key:my_key6, size:159", "key:my_key5, size:6309", "key:my_key7, size:151", "key:my_key8, size:132"]}]}
log: I, [2022-03-25T18:29:43.322759 #57] INFO -- : {:entry=>[{:op=>"operation3", :keys=>["key:smy_key9, size:4"]}]}
log: I, [2022-03-25T18:29:43.317421 #58] INFO -- : {:entry=>[{:op=>"operation3", :keys=>["key:my_key6, size:159"]}]}
log: I, [2022-03-25T18:29:43.311789 #55] INFO -- : {:entry=>[{:op=>"operation1", :keys=>["key:7, size:151"]}]}

What I'm trying to get is the count of each key in "keys[]". For example, the above would yield the following result:

my_key5 2
my_key6 2
my_key7 1
my_key8 1
my_key9 1

Ideally I can display the "size" of each key as well, like a table or something. But that might be too complicated.

What I have so far is only a query that can count the number of occurrences for each operation:

| rex field=log "op=>\"(?<operation>\w*)\""
| stats count by operation

but I'm not sure how I can count the unique keys inside the array.
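A hedged sketch building on the rex above: max_match=0 pulls every key/size pair out of each event, mvzip keeps the pairs together through mvexpand, and the regex is only a guess based on the sample lines, so it may need adjusting for real data.

| rex field=log max_match=0 "key:(?<key_name>[^,\"]+),\s*size:(?<key_size>\d+)"
| eval pair=mvzip(key_name, key_size, "|")
| mvexpand pair
| eval key_name=mvindex(split(pair, "|"), 0), key_size=mvindex(split(pair, "|"), 1)
| stats count values(key_size) AS sizes by key_name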
Is it possible to create a custom search command in a language other than Python? That is, a custom script that can take in the search's results, do something, and then return the new results to Splunk.
I'm really overthinking this, but I am lost. I need to show when new correlation searches are introduced into the environment. I have a lookup with the current correlation searches, along with relevant data and the last time updated. How do I compare current REST call results with the lookup and show the most recently created searches? Any help would be appreciated.
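A hedged sketch of the comparison, assuming Enterprise Security-style correlation searches and a lookup named correlation_search_inventory.csv with a title column (both names are placeholders for whatever the real lookup uses): anything returned by the REST call that has no match in the lookup is treated as new.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| table title updated
| lookup correlation_search_inventory.csv title OUTPUTNEW title AS known_title
| where isnull(known_title)
| sort - updated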
I am working to upgrade the Splunk version from 8.0.1 to 8.2.2.1 (Solaris 11.3 OS). After the upgrade I see the below output, which is fine.

pkg info -l splunkforwarder
Name: application/splunkforwarder
Summary: Splunk Universal Forwarder
Description: Splunk> The platform for machine data.
Category: Applications/Internet
State: Installed
Publisher: splunk
Version: 8.2.2.1
Build Release: 191762265523787
Branch: None
Packaging Date: Wed Sep 15 09:07:09 2021
Last Install Time: Fri Mar 25 17:43:36 2022
Size: 58.43 MB
FMRI: pkg://splunk/application/splunkforwarder@8.2.2.1,191762265523787:20210915T090709Z

When I try to start Splunk I see the below error:

./splunk start
ld.so.1: splunk: fatal: relocation error: file splunk: symbol in6addr_any: referenced symbol not found
Killed

Please, any suggestion on this specific error? Thanks. RB
Does anyone have suggestions on integrating an SNMP-enabled device into Splunk Enterprise? I'm very new to Splunk and have been asked to integrate an SNMP-enabled device into our Splunk Enterprise. I think I need to somehow link a Forwarder to the device and have the Forwarder act as a receiver of the device's information. Once that data is in the Forwarder, I think it should be processed by an associated Indexer and then it should be available within Splunk. Is that correct or do I misunderstand?
Hi all, I have a platform sending me events every 30 seconds, and I batch the events based on a distinct variable "tomatoes" and send them to the relevant team every 10 minutes as an alert. I wrote the search below to show management the total number of raw events vs the number of alerts being sent, based on historical data.

I have now been asked to report on what the numbers would be if I throttled the alerts so that a distinct tomato would not create a new alert for 1 hour, and I have no idea how to do this. I don't need help with writing the alert; I need help with creating a report. The throttled alerts have not been created yet, so I need to figure out how to remove a distinct tomato from the results for 1 hour and then put it back in.

index=*
| bin _time span=10m
| eval time=strftime(_time, "%m/%d/%Y %H:%M")
| stats dc(tomatoes), count by time
| rename dc(tomatoes) AS distinct_tomatoes, count AS total_tomatoes
| table time, distinct_tomatoes, total_tomatoes
| appendpipe [stats sum(distinct_tomatoes) AS distinct_tomatoes sum(total_tomatoes) AS total_tomatoes | eval time="Total"]
| appendpipe [where time!="Total" | stats avg(distinct_tomatoes) AS distinct_tomatoes avg(total_tomatoes) AS total_tomatoes | eval distinct_tomatoes=round(distinct_tomatoes,1), total_tomatoes=round(total_tomatoes,1) | eval time="Average"]

time              distinct_tomatoes  total_tomatoes
03/24/2022 19:00  1                  4
03/24/2022 19:10  1                  2
03/24/2022 19:20  2                  5
03/24/2022 19:30  1                  4
03/24/2022 19:40  1                  5
03/24/2022 19:50  3                  5
Total             9                  25
Average           1.5                4.2
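To estimate the effect of a 1-hour throttle from historical data, one hedged approach is to allow each distinct tomato at most one alert per clock hour. This is an approximation rather than an exact replay of alert throttling, since real throttling measures the hour from the moment each alert fires rather than from hour boundaries:

index=*
| bin _time span=1h
| stats dc(tomatoes) AS throttled_alerts count AS total_events by _time
| eval time=strftime(_time, "%m/%d/%Y %H:%M")
| table time throttled_alerts total_events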
I am looking to search one index for a specific field name and then use a second field from that index to search a second index for that value. For example, IndexA has field names Project and IRNumber; IndexB has a field named InternalRequest. IRNumber in IndexA and InternalRequest in IndexB hold the same values.

I would like to search IndexA by Project, then use the associated IRNumber from IndexA to search IndexB for the InternalRequest with the same value, and then table various values from IndexB associated with that InternalRequest value. Is there some way to use a sub-search to do this?
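A hedged sketch of the sub-search approach. The index names, the Project value, and the field list at the end are placeholders; renaming IRNumber to InternalRequest inside the sub-search makes the returned values filter on the matching field in IndexB:

index=indexb
    [ search index=indexa Project="some_project"
      | fields IRNumber
      | rename IRNumber AS InternalRequest ]
| table InternalRequest field1 field2 field3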
I've got a lot of LDAP users who no longer exist (500+ folders in /etc/users). What's the proper way to clean this? If I just delete the folder won't the search head cluster just resync it from the captain's running configuration? Also, someone pushed these using the Deployer when we migrated to 8.x. If I remove all user folders from the /etc/shcluster/users folder of the Deployer will anything bad happen next time I push? I have no desire to manage user settings with the Deployer; just push shcluster apps.
In a large environment with search heads in multiple locations, I would like to create a single dashboard that switches search query values based on the search head URL. This would allow us to have a single agnostic dashboard for multiple locations and/or environments where the index values differ from one to the other. Such as:

https://splunk.NorthAmerica.Mycompany.com
https://splunk.EU.Mycompany.com
etc.

token value = $URL$

where I can set the locale token as:

Locale=case($URL$ LIKE "%NorthAmerica%", "NorthAmerica", $URL$ LIKE "%EU%", "Europe")

Is there any built-in Splunk value that reads the requesting URL? If not, how do I construct a JS that leverages the URLSearchParams() method to set the token?
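One way to sketch this in a Simple XML dashboard's JavaScript extension: the subdomain lives in the hostname rather than in the query string, so window.location.hostname is probably more useful here than URLSearchParams(). The token name Locale and the hostname patterns are assumptions carried over from the example above:

require([
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function (mvc) {
    // Hostname of the search head serving the dashboard, e.g. splunk.EU.Mycompany.com
    var host = window.location.hostname;

    // Map the hostname onto a locale value (patterns assumed from the example URLs)
    var locale = 'Unknown';
    if (/northamerica/i.test(host)) {
        locale = 'NorthAmerica';
    } else if (/\.eu\./i.test(host)) {
        locale = 'Europe';
    }

    // Set the token in both the default and submitted token models so searches pick it up
    mvc.Components.getInstance('default').set('Locale', locale);
    var submitted = mvc.Components.getInstance('submitted');
    if (submitted) {
        submitted.set('Locale', locale);
    }
});

Searches in the dashboard can then reference $Locale$ the same way the case() example above intends.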
Greetings, I am writing a REST API call ingestion that requires authentication by first obtaining an auth token before the actual input, so I'm making use of the custom Python code data-ingestion builder for the first time. I'm having issues with the initial setting of a checkpoint for the start date of the API call specifically. I figure I need to check whether the checkpoint exists, and if it doesn't, set the checkpoint to the date provided in the Data Input settings.

When I try this with the code snippet below, I just get an error from the first line:

AttributeError: 'NoneType' object has no attribute 'encode'
ERROR 'NoneType' object has no attribute 'encode'

I'm not sure if there is a better way to set the initial checkpoint somewhere that I'm not finding (this is in the collect_events function definition, so probably), or if there is a better way within Python to handle the "if this doesn't exist, do this thing".

start_date = helper.get_check_point(input_name)
if start_date is None:
    helper.save_check_point(input_name, helper.get_arg("data_ingestion_start_date"))
    start_date = helper.get_check_point(input_name)
else:
    start_date = helper.get_check_point(input_name)
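One hedged guess, since the error appears on the very first line: 'NoneType' object has no attribute 'encode' can mean the checkpoint key itself (input_name) is None rather than the stored value being missing. A sketch of a more defensive version, assuming an Add-on Builder style helper; get_input_stanza_names() is the helper call I would expect to provide the stanza name, but it is worth verifying against the generated code before relying on it:

# Derive a non-None checkpoint key from the input stanza name
# (assumption: the Add-on Builder helper exposes get_input_stanza_names()).
stanza_names = helper.get_input_stanza_names()
input_name = stanza_names[0] if isinstance(stanza_names, list) else stanza_names

# If no checkpoint exists yet, seed it from the start date configured on the input,
# then keep using that value for this run.
start_date = helper.get_check_point(input_name)
if start_date is None:
    start_date = helper.get_arg("data_ingestion_start_date")
    helper.save_check_point(input_name, start_date)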
Can someone help with Splunk Placeholder? What is a Placeholder? How do I create one? How does it work in a lookup? How do I make changes to an existing Placeholder?