All Topics


I have Perforce server logs that show "lapse" values in seconds, mixed in with several other kinds of log entries. Sample events:

2020/11/12 12:37:17 pid 282689 compute end .028s 23+5us 0+32io 0+0net 16472k 0pf Perforce server info: Server network estimates: files added/updated/deleted=0/0/0, bytes added/updated=0/0
2020/11/12 08:53:57 pid 249917 xyz@admin-client-for-stag 127.0.0.1 [p4/2018.1/LINUX26X86_64/1738923] 'user-sizes //ddc/...' --- lapse 98.5s --- rpc msgs/size in+out 0+1814189/0mb+509mb himarks 795800/318788 snd/rcv 92.8s/.000s --- db.revhx --- locks read/write 1/0 rows get+pos+scan put+del 0+1+1814190 0+0
2020/11/12 08:21:39 pid 245315 xyz@Admin-client-for-stag 127.0.0.1 [p4/2018.1/LINUX26X86_64/1738923] 'user-sizes -s -a -b 512 //mapgrp/...' --- lapse 106s --- usage 51584+13969us 75284368+0io 0+0net 8832k 0pf --- db.rev --- pages in+out+cached 4704508+0+96
2020/11/12 08:14:10 pid 243592 yyyz@admin-client-for-stag 127.0.0.1 [p4/2018.1/LINUX26X86_64/1738923] 'user-sizes -s -a -b 512 //projects/...' --- lapse 80.4s --- usage 38774+9874us 49562128+0io 0+0net 8832k 0pf --- db.rev --- pages in+out+cached 3374543+0+96

I want to find events whose pattern contains "lapse", keep only those where the lapse is greater than 100s, and then print the user who ran the command (e.g. "xyz@admin-client-for-stag-21"), or ideally extract just the "xyz" user part. Any help? Thanks,
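A search along these lines might work (a sketch: the index and sourcetype names are placeholders, and the rex patterns assume the event layout shown in the samples above):

```
index=yourindex sourcetype=perforce "lapse"
| rex "pid \d+ (?<p4user>[^@\s]+)@(?<p4client>\S+)"
| rex "---\s+lapse\s+(?<lapse>[\d.]+)s"
| eval lapse=tonumber(lapse)
| where lapse > 100
| table _time p4user p4client lapse
```

The tonumber() guards against the extracted lapse being compared as a string; drop p4client from the table if you only need the "xyz" part.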
I registered for an ES sandbox trial but there is no pre-populated data. Plus, there is a message stating: "Health Check: Splunk server "si-i-0fa8cd1f1571aa99c.prd-p-ek6ui.splunkcloud.com" does not meet the recommended minimum system requirements." What should I do to make it right? Thanks in advance for your help.
Hi All, We have recently updated our "Splunk Add-on for ServiceNow" to the latest available release (6.2.0) from a 4.# release on our Cloud IDM, and we have been experiencing dropped events ever since. In particular, with our high-traffic tables (sc_tasks, incident), which are frequently updated, we've noticed that the TA will pick up the initial entry when a record is created in SNOW but will miss any subsequent updates (i.e. a ticket will get closed in SNOW but this will not get reflected/updated in the data in Splunk). There was no issue before, so we may need to downgrade the TA if necessary. We currently have cases open with both ServiceNow and Splunk, but I was wondering if anyone else has experienced something similar? Our instance is all cloud (Splunk Cloud, ServiceNow) and, as I said, it was functioning before the update, which was done as part of 8.1 readiness.
Hi, Is it possible to get Splunk to search for a query and not include the last X number of events? Thanks,
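If "last" means most recent, one sketch (with placeholder index/sourcetype names, excluding the 10 newest events) is to number the events and filter:

```
index=yourindex sourcetype=yoursourcetype
| sort 0 -_time
| streamstats count AS event_order
| where event_order > 10
```

After the descending sort, streamstats numbers events 1..N from newest to oldest, so `where event_order > 10` drops the 10 most recent.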
Hello, I'm wondering if there is an efficient way to query results using pagination and show results 1-100, 101-200, etc., so it will be easy to use them with a UI? Maybe there is an offset mode? Thanks
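There is no offset in SPL itself, but the REST results endpoint supports `count` and `offset` parameters, which is the usual way to paginate for a UI. A sketch (host, credentials, and the SID are placeholders):

```
curl -k -u user:pwd "https://splunk.example.com:8089/services/search/jobs/<SID>/results" \
     --get -d output_mode=json -d count=100 -d offset=0
curl -k -u user:pwd "https://splunk.example.com:8089/services/search/jobs/<SID>/results" \
     --get -d output_mode=json -d count=100 -d offset=100
```

The first call returns results 1-100, the second 101-200; keep incrementing offset by the page size.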
Hello all, I've been searching this community up and down and also came across @niketn's references about it (https://community.splunk.com/t5/Dashboards-Visualizations/Submit-Button-not-working/m-p/387165#M25366). They all assume that the dashboard/form is "entered" empty. All the sample code I found fails when the form already receives values via the URL, e.g. https://community.splunk.com/t5/Dashboards-Visualizations/searchWhenChanged-false-not-honored/td-p/430993

my_form?form.tokDDLogLevel=INFO

<input type="dropdown" token="tokDDLogLevel" searchWhenChanged="false">
  <label>Select Log Level</label>
  <choice value="INFO">INFO</choice>
  <choice value="ERROR">ERROR</choice>
  <choice value="WARN">WARN</choice>
  <initialValue>$form.tokDDLogLevel$</initialValue>
</input>

I have 3 dropdowns and 1 textfield, and expect 1 to all 4 values to be modified. When they get prefilled like this, it triggers the immediate execution of my search:

<search> | makeresults | eval field1=$tokDDLogLevel$ field2=$...$ ... </search>

The <initialValue> as well as the <default> seem to trigger the change event immediately, without the Submit button.
Another example by @dmarling: https://community.splunk.com/t5/Dashboards-Visualizations/In-a-Simple-XML-dashboard-is-it-possible-to-set-a-token-when-a/td-p/428410

<dashboard>
  <init>
    <set token="query">| makeresults count=1 | eval x= "Yes" | table x</set>
  </init>
  <label>demodash</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="Input1">
      <label>Input1</label>
      <choice value="A">A</choice>
      <choice value="B">B</choice>
    </input>
    <input type="dropdown" token="Input2">
      <label>Input2</label>
      <choice value="C">C</choice>
      <choice value="D">D</choice>
    </input>
  </fieldset>
  <search>
    <query>| makeresults count=1 | eval search=if("A"=$Input1|s$ AND "C"=$Input2|s$, "| makeresults | eval x=\"No\" | table x", "| makeresults | eval x=\"Yes\" | table x") | table search</query>
    <done>
      <condition match="$job.doneProgress$=1">
        <set token="query">$result.search$</set>
      </condition>
    </done>
  </search>
  <row>
    <panel>
      <table>
        <search>
          <query>$query$</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>

Now call this dashboard via e.g. "test?form.Input1=A&form.Input2=D": the query gets built and executed without pressing the Submit button.

BTW, I have the same issue for empty forms where some dropdowns are set dynamically via searchOnChange. As soon as all fields have values, the query is executed automatically. This seems to be a particular problem when using forms in combination with makeresults + eval, causing autoRun=false to be ignored.

Is there some JavaScript workaround to prevent the form submission until Submit was actually pressed? (I'm desperate enough to consider turning a checkbox into a submit button and then using "makeresults `comment $checkbox_tok$`" as the submit trigger, but then clicking the checkbox would once again execute makeresults + eval rather than the Submit button. Any idea how to do this nasty hack?)

(Note: using Splunk 7.3.6, which I can't update at the moment.) Thanks guys.
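One JavaScript angle worth trying (a sketch, not verified on 7.3.x): clear the URL-prefilled tokens out of the "submitted" token model on page load, so no search runs until Submit is actually pressed. It assumes a dashboard script is wired up via the dashboard's script attribute, and the token names are taken from the example above:

```
// dashboard.js - hypothetical sketch
require(['splunkjs/mvc', 'splunkjs/mvc/simplexml/ready!'], function (mvc) {
    // With submitButton="true", searches read tokens from the 'submitted'
    // model; URL-prefilled form tokens land there too, which is the problem.
    var submitted = mvc.Components.get('submitted');
    ['Input1', 'Input2'].forEach(function (tok) {
        submitted.unset(tok);
    });
    // Pressing Submit re-copies the 'default' model into 'submitted',
    // so the form still behaves normally afterwards.
});
```

The inputs themselves stay prefilled (they read from the default/form models), only the search-visible copies are withheld.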
Been fighting with this for way too much time already.  
I want to create two pie charts, each based on the value of the index I am choosing, using the two queries below:

1. index=index1 host=.......| ...
2. index=index2 host=....|

Then I want to include both of these pie charts in the same report so that I can send them as an alert in the same mail. How can I do that? I tried append and multisearch but they didn't help.
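One sketch (field, host, and category names are placeholders): combine both searches with append, tag each result set, and split a single visualization by that tag, e.g. with Trellis layout so each index renders as its own pie:

```
index=index1 host=yourhost
| stats count BY category
| eval which="index1"
| append
    [ search index=index2 host=yourhost
      | stats count BY category
      | eval which="index2" ]
```

With Trellis enabled and split by `which`, one scheduled report/alert email can carry both pies; `category` stands in for whatever field you slice the pie by.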
There's already a question asking what I want to ask: https://community.splunk.com/t5/Dashboards-Visualizations/How-to-make-label-and-dropdown-in-the-same-line-on-a-dashboard/m-p/448680 The solution there, using CSS with display:flex and padding, works for a single input, but I want to keep multiple inputs: time, multiselect, text and checkbox, ALL *in a single line*. When I use the answer by niketilay, it places the labels next to the inputs but moves each input onto its own line. Can anyone help me with sample XML/CSS code to work it out?
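A CSS sketch along these lines might help (the class names come from Splunk's dashboard markup and can vary between versions, so treat them as assumptions to verify in your browser's dev tools):

```
<row depends="$alwaysHideCSS$">
  <panel>
    <html>
      <style>
        /* keep all inputs (time, multiselect, text, checkbox) on one line */
        .fieldset .input {
          display: inline-block !important;
          float: none !important;
        }
      </style>
    </html>
  </panel>
</row>
```

The hidden-row trick keeps the style block out of the rendered dashboard; $alwaysHideCSS$ is just a token that is never set.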
This is what my XML events look like:

<AttackCoords>-80.33100097073213,25.10742916222947</AttackCoords>
<Outcome>Interdiction</Outcome>
<Infiltrators>23</Infiltrators>
<Enforcer>Ironwood</Enforcer>
<ActionDate>2013-04-24</ActionDate>
<ActionTime>00:07:00</ActionTime>
<RecordNotes></RecordNotes>
<NumEscaped>0</NumEscaped>
<LaunchCoords>-80.23429525620114,24.08680387475695</LaunchCoords>
<AttackVessel>Rustic</AttackVessel>

I didn't find a good explanation of how to do this. My pain point is that I don't know how to glue the values from ActionDate and ActionTime together so I can generate a _time field out of them.

So, what I have:

<ActionDate>2013-04-24</ActionDate>
<ActionTime>00:07:00</ActionTime>

What I want:

_time = 2013-04-24 00:07:00

I hope someone can help.
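At search time, assuming the two fields are already extracted (e.g. via KV_MODE = xml or spath), strptime can glue them together; a sketch:

```
| eval _time = strptime(ActionDate." ".ActionTime, "%Y-%m-%d %H:%M:%S")
```

For index-time timestamping you would instead need the date and time recognizable in the raw event (TIME_PREFIX/TIME_FORMAT in props.conf); since they sit in two separate XML elements here, the search-time eval is the simpler route.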
I have my Spark logs in Splunk. I have 2 Spark streaming jobs running, and they produce logs at different levels (INFO, WARN, ERROR, etc.). I want to create a dashboard for the error count by hour (or any better way, please suggest):

index=myindex (sourcetype=sparkjob1 OR sourcetype=sparkjob2) | stats count as total_logs count(eval(level="ERROR")) as total_errors

Please also advise if you have any better suggestions for a useful dashboard.
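For an hourly error count per job, timechart is a natural fit; a sketch (it assumes a `level` field is extracted from the events):

```
index=myindex (sourcetype=sparkjob1 OR sourcetype=sparkjob2)
| timechart span=1h count(eval(level="ERROR")) AS error_count BY sourcetype
```

Rendered as a column or line chart, this gives one error-count series per streaming job per hour.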
Splunk 8.0.4.1:

11-12-2020 06:29:03.713 INFO ChunkedExternProcessor - Running process: /opt/splunk/bin/python3.7 /opt/splunk/etc/apps/decrypt/bin/decrypt.py
11-12-2020 06:29:03.770 ERROR ChunkedExternProcessor - stderr: Traceback (most recent call last):
11-12-2020 06:29:03.770 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/decrypt/bin/decrypt.py", line 12, in <module>
11-12-2020 06:29:03.771 ERROR ChunkedExternProcessor - stderr: import decryptlib
11-12-2020 06:29:03.771 ERROR ChunkedExternProcessor - stderr: File "/opt/splunk/etc/apps/decrypt/bin/decryptlib.py", line 1, in <module>
11-12-2020 06:29:03.771 ERROR ChunkedExternProcessor - stderr: import StringIO
11-12-2020 06:29:03.771 ERROR ChunkedExternProcessor - stderr: ModuleNotFoundError: No module named 'StringIO'
11-12-2020 06:29:03.777 ERROR ChunkedExternProcessor - EOF while attempting to read transport header read_size=0
11-12-2020 06:29:03.777 ERROR ChunkedExternProcessor - Error in 'decrypt' command: External search command exited unexpectedly with non-zero error code 1.

The StringIO module exists in Python 2 but not in the local Python 3 install in 8.0.x (which is confusing). Setting the app back to python2 appears to work...
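The StringIO module was removed in Python 3; its class moved into the io module. If the app's decryptlib.py can be edited, an import along these lines keeps it working under both interpreters (a sketch, not the app author's official fix):

```python
try:
    # Python 3 (and Python 2.6+): StringIO lives in the io module
    from io import StringIO
except ImportError:
    # very old Python 2 fallback: the standalone StringIO module
    from StringIO import StringIO

buf = StringIO()
buf.write("hello")
print(buf.getvalue())
```

Under Python 3 the first branch succeeds, so no other code in the app needs to change.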
Hi All, Maybe a silly question, but when I run the search below it does not return the tag values as expected:

|tstats summariesonly=false count from datamodel= Malware where sourcetype=mysourcetype by index sourcetype Malware_Attacks.tag

But when I run the query below, it does show the result:

index=myindex sourcetype=mysourcetype tag=malware tag=attack

Any solution would be much appreciated: how can I get the tag values using a data model / tstats query?

Thanks, Bhaskar
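With tstats against a data model, fields generally need their dataset prefix, and the dataset itself is selected via nodename; a sketch to try (the dataset and field names are assumed from the Malware data model, so verify them against your model definition), noting also that the stray space in "datamodel= Malware" can itself break the search:

```
| tstats summariesonly=false count from datamodel=Malware
    where nodename=Malware_Attacks Malware_Attacks.sourcetype=mysourcetype
    by index sourcetype Malware_Attacks.tag
```

If the model is accelerated, summariesonly=true against the summaries will also be much faster.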
We receive medium- and high-severity data with timestamps in Splunk; normal-severity data is not sent to Splunk. If there is no data for the next 3 minutes, we need to treat the severity as normal, with a start time and an end time (end_time = start time of the next high or medium severity). We need to show the severity start time and end time whenever the severity changes from normal to medium, medium to high, or high to normal.

Sample data:

12/11/2020 09:24:46:00 severity: high
12/11/2020 10:24:46:00 severity: medium
12/11/2020 12:34:46:00 severity: medium
12/11/2020 14:44:46:00 severity: high

Output:

severity=high start_time=12/11/2020 09:24:46:00 end_time=12/11/2020 09:27:46:00
severity=normal start_time=12/11/2020 09:27:46:00 end_time=12/11/2020 10:24:46:00
severity=medium start_time=12/11/2020 10:24:46:00 end_time=12/11/2020 10:27:46:00
severity=normal start_time=12/11/2020 10:27:46:00 end_time=12/11/2020 12:34:46:00
severity=medium start_time=12/11/2020 12:34:46:00 end_time=12/11/2020 12:37:46:00
severity=normal start_time=12/11/2020 12:37:46:00 end_time=12/11/2020 14:44:46:00
severity=high start_time=12/11/2020 14:44:46:00 end_time=12/11/2020 14:47:46:00
severity=normal start_time=12/11/2020 14:47:46:00 end_time=12/11/2020 23:59:59:00

I tried adding 3 minutes to _time using "eval time=_time+180", which gives me the time 3 minutes after each event, and I can get the end time with "| streamstats last(_time) as end_time". But how do I add the "normal" severity rows covering those gaps?
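One way to sketch this: give every event its own 3-minute interval plus a trailing "normal" interval that runs until the next event (or end of day), then expand both into separate rows. The index and field names are placeholders and this is untested:

```
index=yourindex severity=*
| sort 0 -_time
| streamstats current=f window=1 first(_time) AS next_start
| sort 0 _time
| eval end_time = _time + 180
| eval normal_end = coalesce(next_start, relative_time(_time, "+1d@d") - 1)
| eval rows = severity . "," . _time . "," . end_time . ";normal," . end_time . "," . normal_end
| eval rows = split(rows, ";")
| mvexpand rows
| eval parts = split(rows, ","),
       severity = mvindex(parts, 0),
       start_time = tonumber(mvindex(parts, 1)),
       stop_time = tonumber(mvindex(parts, 2))
| where stop_time > start_time
| fieldformat start_time = strftime(start_time, "%d/%m/%Y %H:%M:%S")
| fieldformat stop_time = strftime(stop_time, "%d/%m/%Y %H:%M:%S")
| table severity start_time stop_time
```

The final where clause drops "normal" intervals that would be zero-length or negative, i.e. when the next event arrives within the 3-minute window.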
Hi Team, We have created a dashboard and pass the device number from the form input token to the dashboard panel. It works properly when passing a device number, but when we pass the (*) wildcard character we get the error message "Error in 'rename' command: Wildcard mismatch". When we use the rename command like "rename abc* as mcd*", it shows columns for every device number; however, we want to show only 5 columns.

For example, we get correct data when we pass the device number:

Time | Device_Number | Router_Model | Router_Name
10.30 | abc123 | ABC1234 | Cisco123
11.30 | abc123 | DEF1234 | Cisco1234
12.30 | abc123 | EFG1234 | Cisco567
1.30 | abc123 | CDE1234 | Cisco789

But when we pass the * wildcard character in the query, the device-number columns multiply:

Time | abc123_Device_Number | def123_Device_Number | ghi123_Device_Number | abc123_Router_Model | def123_Router_Model | ghi123_Router_Model | abc123_Router_Name | def123_Router_Name | ghi_Router_Name
10.30 | abc123 | def123 | ghi123 | ABC1234 | NCFMM_Cisco | AGBDVDMM_Cisco | Cisco | Netgear | Netgear
11.30 | abc123 | def123 | ghi123 | CDE3434 | NCFMM_Cisco | AGACJJBCJM_Cisco | Netgear | Netgear | Netgear
12.30 | abc123 | def123 | ghi123 | GHI1233 | NCFNNM_Cisco | AGAGDNM_Cisco | Cisco | Netgear | Netgear
1.30 | abc123 | def123 | ghi123 | LEF1232 | NCDDMM_Cisco | AGABDBDMM_Cisco | Cisco | Cisco | Cisco
2.30 | abc123 | def123 | ghi123 | HDDF112 | NCDDM_Cisco | AGAGBBM_Cisco | Netgear | Netgear | Netgear

When the wildcard is passed from the token and used in the rename, the column count increases; when a token is passed normally without a wildcard, it is fine. Can we handle the wildcard in such a way that the column count doesn't increase?

Thanks in advance for the help!
I have created a new cluster with a master, a search head, indexers, and forwarders, similar to the architecture here: https://docs.splunk.com/Documentation/Splunk/8.1.0/Indexer/Clusterdeploymentoverview. I am unable to search for anything on the search head, although the indexers are receiving messages from the forwarders. I see the following message on the search head:

unable to distribute to peer named index- ...

I have verified that the indexer can accept connections from the search head, and I can see that both are up on the master (status up / indexes displayed / buckets displayed). However, when I look at search peers on the master and the search head, I see these messages:

REST interface to peer is not responding. Check var/log/splunk/splunkd_access.log on the peer.

In the access logs on the indexer I can see warnings for both the master and the search head:

GET /services/admin/bundles/search- ... 404
GET /services/admin/bundles/master- ... 404

Thanks.
Hello SMEs: I need some assistance extracting everything between the 5th and 6th "/" from URLs like the one below.  Note: URLs may change.  https://notify.webproxy.com/https://www.elitetools.ca/en/ Your assistance would be appreciated!
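A rex sketch: skip five runs of "anything up to a slash", then capture up to the next slash. The makeresults wrapper is only there to demonstrate; in practice apply the rex to whatever field holds your URL:

```
| makeresults
| eval url="https://notify.webproxy.com/https://www.elitetools.ca/en/"
| rex field=url "^(?:[^/]*/){5}(?<between_5th_and_6th>[^/]+)"
| table url between_5th_and_6th
```

For this sample, between_5th_and_6th comes out as www.elitetools.ca; the empty segments inside "//" count as slash-delimited runs, which is what makes the 5th/6th arithmetic line up.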
Currently we manually monitor Splunk dashboards during our deploys, and we would like to automate this with an Ansible playbook containing the Splunk queries, run during deployment. I can successfully make a connection to Splunk, but I am not able to get the search query working:

####
# type: task
#
# vars:
#   5xxcheck_output(str,command): raw output from command
#   5xxcheck_response(str,command): raw output to json
#
# desc:
#   uses splunk to get 5xxcheck
---
- name: Tasks to query splunk
  hosts: localhost
  connection: local
  tasks:
    - name: get search_id for 5xx check from splunk
      uri:
        url: https://<splunk_instance>/services/search/jobs
        follow_redirects: all
        method: POST
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body: "search host=tc1* ResponseCode=500 earliest=-15m"
        body_format: raw
        validate_certs: no
        status_code: 201
        return_content: true
      register: search_id
    - debug: msg="{{ search_id.status }}"
    - name: use the search_id to get the 5xx check results
      uri:
        url: https://<splunk_instance>/services/search/jobs/{{ search_id }}/results/
        method: GET
        user: xxxxxx
        password: xxxxxxx
        force_basic_auth: yes
        body_format: raw
        return_content: true
      register: 5xxcheck_output
      until: 5xxcheck_output.status > 0 and 5xxcheck_output.status != 500
    - name: Put results into 5xxcheck_response
      set_fact:
        5xxcheck_response: "{{ 5xxcheck_output.json }}"
    - name: Print 5xxcheck_response if -v
      debug:
        var: 5xxcheck_response
        verbosity: 1

I would like to use the uri module to parameterize the Splunk search.
I am able to execute the following 2 steps from a terminal to get the response.

Step 1: Get the SID (search ID)

curl -u user:pwd -k https://<splunk-instance>/services/search/jobs -d search="search host=t1* ResponseCode=200 earliest=-15m"
<?xml version="1.0" encoding="UTF-8"?>
<response>
  <sid>1604947864.xxxxxx</sid>
</response>

Step 2: Use the SID to get the response

curl -u user:pwd -k https://<splunk-instance>/services/search/jobs/<SID>/results/ --get -d output_mode=raw
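Two likely issues in the playbook above: registered variables whose names start with a digit (like 5xxcheck_output) are not valid Ansible variable names, and the second task interpolates the whole registered uri result into the URL rather than the parsed <sid>. One way to sidestep both, sketched below, is a oneshot search, which returns results in a single call with no SID round-trip (the host and credential variables are placeholders):

```yaml
- name: run the 5xx check as a oneshot search (no SID polling needed)
  uri:
    url: "https://{{ splunk_host }}/services/search/jobs"
    method: POST
    user: "{{ splunk_user }}"
    password: "{{ splunk_pass }}"
    force_basic_auth: yes
    validate_certs: no
    body_format: form-urlencoded
    body:
      search: "search host=tc1* ResponseCode=500 earliest=-15m"
      exec_mode: oneshot
      output_mode: json
    return_content: true
  register: check_5xx

- name: Print the results if -v
  debug:
    var: check_5xx.json
    verbosity: 1
```

The search string itself can be templated with a variable (e.g. "search host={{ host_pattern }} ResponseCode=500 earliest=-15m") to parameterize the query per deploy.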
Hello team, I want to take an inventory of my indexers, forwarders, and hosts. To get started, what do I need to do?
1. What SPL searches would I need to use?
2. Should I be doing this off-hours, or is the middle of the day fine as well?
Thank you
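For a first pass, searches over _internal metadata are cheap enough to run midday; only heavy searches across all event indexes are better off-hours. A sketch for forwarder inventory (the field names are those commonly seen in metrics.log tcpin_connections events, so verify them in your environment):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS splunk_version latest(_time) AS last_seen BY hostname
| convert ctime(last_seen)
```

For a host inventory by index, `| tstats count where index=* by host, index` is inexpensive because it reads only index metadata; and `| rest /services/server/info`, which fans out to the search peers, lists each indexer instance.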
Good evening. I have an ASCII event message that looks like the following. The timestamp is in GMT, and when Splunk converts the timestamp the result is off by 5 hours: for this event message the resulting timestamp is "11/11/20 5:46:39.969 PM" but should really be "11/11/20 12:46:39.969 PM". I have the server's local time zone set to "UTC -5 Eastern Time". I already created a props.conf file containing "TZ=Etc/GMT0", but it did not change the Splunk timestamp.

INFO Stol 20-314-17:46:39.969: !!!!!!!!!INST Telemetry Started !!!!!!

Thanks for your assistance.
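If the raw timestamp (20-314-17:46:39.969, i.e. two-digit year, day-of-year, time) is GMT, a props.conf stanza along these lines on the instance that parses the data should pin it down; the sourcetype name is a placeholder, and already-indexed events keep their old timestamps:

```
[your_sourcetype]
TIME_PREFIX = INFO\s+Stol\s+
TIME_FORMAT = %y-%j-%H:%M:%S.%3N
TZ = GMT
MAX_TIMESTAMP_LOOKAHEAD = 32
```

A common gotcha is placing props.conf on a universal forwarder, where timestamp parsing does not happen (except for INDEXED_EXTRACTIONS data); it needs to be on the indexer or heavy forwarder tier, followed by a restart of that instance.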
All, I have about 200 machines with the UF installed. I want to monitor bash_history and a few other items under /home on Linux. The challenge is that on about half the machines the home directory is an NFS mount, and on the other half it is a local file system. Monitoring the NFS mount from every endpoint is IO-prohibitive and double-indexes the same data. Is there a way in Splunk to handle this programmatically? That is, I only need to gather the files/logs from one host if /home is mounted from NFS, but if it is a local file system I need to run the input on each machine. Any recommendations? I was thinking of writing a scripted input in a Splunk app that creates and manages an app in the UF app folder, but that seems very clunky.
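One programmatic angle: a small check on each endpoint that detects whether /home is NFS-mounted, which a scripted input (or a deployment-server serverclass built from its output) can branch on. A sketch using GNU stat on Linux; the helper name and the policy of indexing NFS homes only from one designated host are assumptions:

```shell
#!/bin/sh
# Print "nfs" if the given directory sits on an NFS mount, else "local".
# Uses GNU stat's file-system type output (%T), e.g. "nfs"/"nfs4" vs "ext2/ext3".
fs_type() {
    t=$(stat -f -c %T "$1" 2>/dev/null)
    case "$t" in
        nfs*) echo nfs ;;
        *)    echo local ;;
    esac
}

# Only enable the /home monitor when it is a local file system;
# NFS-backed homes get indexed once, from the file server itself.
if [ "$(fs_type /home)" = "local" ]; then
    echo "enable local /home monitor"
else
    echo "skip: /home is NFS, indexed centrally"
fi
```

The same check could toggle a local inputs.conf app under the UF's apps folder, or feed a lookup that assigns hosts to a deployment-server class, so only machines with local homes run the monitor stanza.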