All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Splunk Support,

Good day! We need to install a Heavy Forwarder to test out Splunk, and we need some guidance since this is the first time I am going to do this. Our infrastructure has two physical Hyper-V hosts (hyperV1 and hyperV2). Our client gave me a 300 GB Hyper-V VM disk/virtual machine file; they say this is the Splunk HF and it needs to be imported into our current Hyper-V infrastructure. This is a screenshot of the folder of what they gave me.

I would like to know how I can import the 300 GB HF image into our current Hyper-V setup. I would like to seek your expertise and assistance on this, and would appreciate a step-by-step guide on how to do it.

Best Regards,
Oasis
Hello, I have random problems when I try to execute a search. Following the job inspector, search.log shows errors like these:

ERROR SearchResultTransaction - Got status 502 from https://x.x.x.x:18089/services/streams/search?sh_sid=scheduler__nobody_SVNPX2F1ZGl0__RMD5e21e138c00642ff9_at_1609913400_774_BF9B9E7A-91AD-4585-AFE9-DA7D03E57F47
ERROR SearchResultTransaction - Got status 502 from https://y.y.y.y:18089/services/streams/search?sh_sid=scheduler__nobody_SVNPX2F1ZGl0__RMD5e21e138c00642ff9_at_1609913400_774_BF9B9E7A-91AD-4585-AFE9-DA7D03E57F47
ERROR SearchResultParser - HTTP error status message from https://x.x.x.x:18089/services/streams/search?sh_sid=scheduler__nobody_SVNPX2F1ZGl0__RMD5e21e138c00642ff9_at_1609913400_774_BF9B9E7A-91AD-4585-AFE9-DA7D03E57F47: Error connecting: Connect Timeout
SearchResultCollator - Failure received on retry collector. _unresolvedRetries=1
ERROR HttpListener - Exception while processing request from SH_IP:46819 for /services/search/jobs/searchhead_1609914001.2117_BF9B9E7A-91AD-4585-AFE9-DA7D03E57F47/search.log: Connection closed by peer

I've confirmed that the connection between the search head and the search peer is still normal, but sometimes the search peer shows the error "Broken pipe or Connection closed by peer" and the search head shows a 502 status when connecting to the search peer. I am using Splunk 7.3.7.1 on CentOS 7 with some tuning configuration:

net.core.somaxconn = 1024
net.ipv4.tcp_max_syn_backlog = 4096

server.conf on the search peer:

[httpServer]
maxSockets = 400000
maxThreads = 150000
keepAliveIdleTimeout = 7200
busyKeepAliveIdleTimeout = 12

limits.conf on the search head:

[search]
max_chunk_queue_size = 30000000

Can anybody help me fix this or understand why it is happening? Thanks.
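
A minimal sketch for scoping when the failures happen, assuming the search peers forward their _internal logs somewhere the search head can query:

    index=_internal sourcetype=splunkd log_level=ERROR (component=HttpListener OR component=SearchResultTransaction OR component=SearchResultParser)
    | timechart span=5m count by component

If the 502s cluster at specific times, correlating those windows with concurrent search load and with OS limits on the peers (open file descriptors, somaxconn) is a reasonable next step.
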
Hello, I need help integrating Apache Airflow metrics with Splunk. The configuration was done on the Airflow server by following this link: Metrics — Airflow Documentation (apache.org). statsd was installed on the Airflow server. On the Splunk side we used a UDP input on a Heavy Forwarder, but it was not successful. Could you please suggest the correct process to follow to pull the metrics? Is a UF necessary? Where should statsd be installed? I appreciate your reply, thanks in advance!
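
A minimal sketch of the Splunk-side input, assuming the HF listens on the default StatsD port 8125 and "airflow_metrics" is a hypothetical metrics index that already exists; this goes in inputs.conf on the HF, with Airflow's [metrics] statsd_host/statsd_port pointed at the HF:

    [udp://8125]
    sourcetype = statsd
    index = airflow_metrics
    no_appending_timestamp = true

With this setup a UF on the Airflow server is not strictly necessary: Airflow's built-in StatsD client emits the metrics over UDP itself, so a separate statsd daemon is only needed if something has to aggregate the metrics before they reach Splunk.
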
I have a Splunk application with 10 dashboards in it. I would like to restrict dashboards/views based on user roles. Is it possible to have 2 dashboards visible to one set of users, 5 dashboards visible to another set of users, and so on?
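
A minimal sketch using app metadata, with hypothetical role and dashboard names; the same thing can be done per dashboard in the UI via Edit > Edit Permissions. In metadata/local.meta of the app:

    [views/dashboard_one]
    access = read : [ role_team_a ], write : [ admin ]

    [views/dashboard_two]
    access = read : [ role_team_b ], write : [ admin ]

Each role then only sees the dashboards whose stanza grants it read access, so different sets of users can be shown different subsets of the app's 10 dashboards.
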
Hi, when I search in Splunk I only find logs from the last 52 days. I need to increase the retention period so that data is available and searchable for 6 months. How can I do it? Should I increase the cold data storage? I have 3 indexers (clustered); should I do it on all 3 indexers? Any advice please, thanks.
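
A minimal indexes.conf sketch, assuming a hypothetical index name "my_index"; in a cluster this should be pushed from the cluster master (master-apps plus apply cluster-bundle) so all 3 peers get the same setting:

    [my_index]
    # ~180 days in seconds; events older than this are frozen (deleted unless coldToFrozenDir is set)
    frozenTimePeriodInSecs = 15552000
    # an index also rolls data when it hits its size cap, so this may need to grow as well
    maxTotalDataSizeMB = 500000

Note that data ages out on whichever limit is reached first, so if events disappear at 52 days it may be the size cap rather than the time-based retention; both, plus the underlying cold storage capacity, may need to be raised for 6 months of data.
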
Hello Splunkers, I want to create a visualization for my search: a bar chart with the (src_ip/user) information on the x-axis and the count on the y-axis. I tried this search but it isn't working:

index=MyApp eventtype=My_App_Authentication_logs user=* action=failure | top 10 user by src | sort -count | where count>=4 | head 20 | table src user count

I need your help please. Thanks in advance.
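
A minimal sketch of one way to get a src/user pair on the x-axis, assuming the same index, eventtype, and field names as above:

    index=MyApp eventtype=My_App_Authentication_logs user=* action=failure
    | stats count by src, user
    | where count>=4
    | sort - count
    | head 20
    | eval src_user=src."/".user
    | table src_user count

With src_user as the first column, the column/bar chart visualization puts it on the x-axis and count on the y-axis.
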
Hello, in Splunk Enterprise, has anyone experienced cases where notable events are generated 10+ hours after the trigger time? The scenario is: I have created correlation searches which run every 10 minutes. For adaptive responses I have configured an email alert and a notable. Although the email alerts get triggered properly as per the schedule, the notables are generated 10-12 hours past the alert trigger time. Can anyone suggest how to proceed with troubleshooting this?
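
A minimal troubleshooting sketch: compare each notable's event time with the time it was actually written to the notable index, to see whether the delay happens at generation or at indexing (index=notable and search_name are the ES defaults; adjust if yours differ):

    index=notable
    | eval indexing_lag_sec=_indextime-_time
    | convert ctime(_indextime) as indexed_at
    | table _time indexed_at indexing_lag_sec search_name
    | sort - indexing_lag_sec

Checking the scheduler's own view of the correlation search (index=_internal sourcetype=scheduler) can also show whether runs are being skipped or deferred under load.
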
Hi, I want to access an Oracle DB through the Oracle CLI in Splunk. Can this be achieved, and how?
Hi everyone, I am trying to detect RDP connections to a remote host. Some web posts I read suggest looking for 4624 events with logon type 10. I made an RDP connection to a remote host; however, all the 4624 events I can see are logon type 3. Then I realized 4624 events can be collected from 3 places:

The workstation where the user is physically present
The AD domain controller, where the authentication takes place
The remote host the user wants to log in to, which is the destination host

I am wondering whether the logon type 10 events only occur on the remote host, and whether in the AD logs the 4624 event will have logon type 3 instead. Has anyone come across this kind of situation before? Thank you for the help. Cheers, Linsong
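
A minimal detection sketch, assuming a hypothetical index name and the Windows TA field names (Logon_Type, src_ip), and that the Security logs of the destination hosts are being collected:

    index=wineventlog sourcetype="WinEventLog:Security" EventCode=4624 Logon_Type=10
    | stats count earliest(_time) as first_seen latest(_time) as last_seen by host user src_ip
    | convert ctime(first_seen) ctime(last_seen)

Logon type 10 (RemoteInteractive) is recorded on the destination host itself; the domain controller typically logs the authentication as a network logon (type 3), which matches what you are seeing.
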
Can I use the Web Terminal for Splunk (CLI) app in Splunk to backfill summary indexes? If so, what would a sample command to backfill using the CLI look like?
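
A sketch of the standard backfill command, run from $SPLUNK_HOME/bin on the search head (the app, saved-search name, time range, and credentials below are placeholders):

    ./splunk cmd python fill_summary_index.py -app search -name "my summary search" -et -30d -lt now -j 8 -dedup true -auth admin:changeme

Whether this works from the Web Terminal for Splunk app depends on that app giving you a shell as the splunk user with access to $SPLUNK_HOME/bin; the fill_summary_index.py script itself is the same either way.
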
I passed the new exam in January 2019. My understanding was that, under the new system, the Admin certification expired 2 years from the date of passing the exam. Now that I am coming up on the 2-year mark, however, I see in the FAQ's Recertification section that "all certifications are now subject to a 3-year life cycle." I can't seem to find where my certification expiration date would be (though I see it is still active for now on the Splunk Partner+ My Accreditations & Certifications page) and have a few questions:

Does my Splunk Enterprise Certified Admin certification that I thought lasted 2 years now last 3? Or does it still only last 2, since when I earned it it only lasted 2?
Where can I view my certification's expiration date?
Do I have to retake the Admin courses to recertify, or can I go straight to the exam?

Thanks in advance
We have created two use cases and set up correlation searches; the trigger time is every 10 minutes. When the notable events are generated in the Incident Review tab, what we observe is a fluctuation in time: the trigger time and the notable event time are different. Please refer to the screenshot below for the highlighted time difference.

Trigger time: 1/6/21 4:06:45.000 AM - Audit User Account x was Locked out by Host

Adaptive Responses:
Response    Mode    Time                        User    Status
Notable     saved   2021-01-05T17:12:35+1100            success

We need to understand why there is a time difference.

Thanks,
Sahil
Hi, I am creating alerts in my Splunk instance using the Splunk Python SDK. I am using the example libraries/API as mentioned: splunklib.client and saved_searches.create. The alerts are created and also scheduled via cron successfully. Is there a way for me to allow users to test the newly created or existing alerts using the Python SDK interface? That is, as soon as alerts are created they should get fired, and the user who created them can get the results of all triggered alerts. Thanks
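
A minimal splunklib sketch, assuming the connection details and the saved-search name are placeholders; dispatch() runs the saved search on demand and returns a job whose results can be read back:

    import time
    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(host="localhost", port=8089, username="admin", password="changeme")
    alert = service.saved_searches["my_new_alert"]   # hypothetical alert name
    job = alert.dispatch()        # run it now instead of waiting for the cron schedule
    while not job.is_done():      # poll until the search finishes
        time.sleep(1)
    for row in results.ResultsReader(job.results()):
        print(row)

If the alert's actions (for example the email) should fire as well, the dispatch endpoint also accepts a trigger_actions parameter, which can be passed as alert.dispatch(**{"trigger_actions": 1}).
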
Hello All, I am new to splunk and looking for suggestion on search queries. In our environment, we have phantom app installed and fetches data from splunk cloud. Recently we have OOM event in splunk cloud and found that search queries triggered by phantom are consuming high memory and i have been asked to investigate the issue. As part of investigation, i want to list all search queries triggered by phantom in splunk and analyze query that is consuming high memory. Can you please help me with a search query to search in splunk to extract all search queries triggered by phantom playbook/use cases. Thanks in Advance!!
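
A minimal sketch using the audit log, assuming Phantom authenticates with its own service account (replace the hypothetical "phantom_svc" with whatever user the Phantom app actually connects as):

    index=_audit action=search info=completed user="phantom_svc"
    | table _time user search total_run_time event_count scan_count
    | sort - total_run_time

For per-search memory rather than run time, the splunk_resource_usage data in index=_introspection (component=PerProcess, data.search_props.sid, data.mem_used) can be correlated against the search ids of those audited searches.
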
Hi, I have started historical indexing by copying the .gz files onto the HF. After that, I am seeing the below in splunkd.log:

01-05-2021 18:43:00.728 -0500 WARN  TailReader - Could not send data to output queue (parsingQueue), retrying...
01-05-2021 18:43:01.039 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:06.013 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:11.049 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 20 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.
01-05-2021 18:43:20.032 -0500 WARN  TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to output group p2s has been blocked for 10 seconds. This will probably stall the data flow towards indexing and other network outputs. Review the receiving system's health in the Splunk Monitoring Console. It is probably not accepting data.

==> In metrics.log on the HF:

01-05-2021 18:47:08.734 -0500 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7457, largest_size=7703, smallest_size=6737
01-05-2021 18:47:08.735 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7443, largest_size=7482, smallest_size=6719
01-05-2021 18:47:08.735 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7476, largest_size=7489, smallest_size=6735
01-05-2021 18:47:08.736 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=367, largest_size=415, smallest_size=0
01-05-2021 18:48:59.729 -0500 INFO  Metrics - group=queue, ingest_pipe=1, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7676, largest_size=7703, smallest_size=6666
01-05-2021 18:48:59.730 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=357, largest_size=368, smallest_size=0
01-05-2021 18:52:03.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7241, largest_size=7491, smallest_size=6542
01-05-2021 18:52:03.736 -0500 INFO  Metrics - group=queue, ingest_pipe=2, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7468, largest_size=7478, smallest_size=6443
01-05-2021 18:52:03.737 -0500 INFO  Metrics - group=queue, ingest_pipe=3, name=aggqueue, blocked=true, max_size_kb=1024, current_size_kb=1023, current_size=360, largest_size=370, smallest_size=0
01-05-2021 18:55:01.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=indexqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7243, largest_size=7316, smallest_size=6545
01-05-2021 18:55:01.732 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=parsingqueue, blocked=true, max_size_kb=10240, current_size_kb=10239, current_size=1266, largest_size=1272, smallest_size=1030
01-05-2021 18:55:01.733 -0500 INFO  Metrics - group=queue, ingest_pipe=0, name=typingqueue, blocked=true, max_size_kb=20480, current_size_kb=20479, current_size=7238, largest_size=7323, smallest_size=6578

I have the below settings on the HF.

limits.conf:
[thruput]
maxKBps = 0

server.conf:
[general]
parallelIngestionPipelines = 4
[queue]
maxSize = 20MB
[queue=parsingQueue]
maxSize = 10MB

My HF is an on-prem server and the Splunk indexer cluster is on AWS. Can you please let me know how to speed up my indexing?
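
A minimal sketch for checking whether the back-pressure is actually coming from the indexers rather than the HF (run on a search head that can see the indexers' _internal data):

    index=_internal source=*metrics.log group=queue (name=parsingqueue OR name=aggqueue OR name=typingqueue OR name=indexqueue)
    | eval fill_pct=round(current_size_kb/max_size_kb*100, 1)
    | timechart span=5m max(fill_pct) by host

If the indexer-side queues are also pinned near 100%, the bottleneck is indexing/disk on the AWS peers or the network path to them, and raising queue sizes on the HF will only delay the blocking rather than remove it; if only the HF is blocked, the indexers' receiving inputs and the WAN link are the places to look.
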
I'd like to convert the output of the below SPL to reflect HH:MM:SS rather than just seconds. Any help is greatly appreciated!

index=* host=* user="username" sourcetype="WinEventLog:Security" EventCode="4624" OR EventCode=4634
| transaction user maxevents=2 startswith="EventCode=4624" endswith="EventCode=4634" maxspan=-1
| eval Logontime=if(EventCode="4624",_time,null())
| eval Logofftime=Logontime+duration
| convert ctime(Logontime) as Logontime
| convert ctime(Logofftime) as Logofftime
| table host, src_nt_host, user, Logontime, Logofftime, duration
| sort user, host, -duration
| rename duration AS "Duration (seconds)"
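
A minimal sketch of the conversion, assuming the search above: tostring with the "duration" option renders a number of seconds as HH:MM:SS (days are prefixed when the span exceeds 24 hours), so it can replace the final rename:

    | eval "Duration (HH:MM:SS)"=tostring(duration, "duration")
    | table host, src_nt_host, user, Logontime, Logofftime, "Duration (HH:MM:SS)"
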
I'd like to get the logon/logoff duration times of just one user. What would be the best SPL to determine this? Any help is greatly appreciated!
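
A minimal sketch reusing the transaction approach from the previous post, assuming "jdoe" is a hypothetical account name and the Windows Security data is in place:

    index=* sourcetype="WinEventLog:Security" user="jdoe" (EventCode=4624 OR EventCode=4634)
    | transaction user maxevents=2 startswith="EventCode=4624" endswith="EventCode=4634"
    | eval Logontime=_time, Logofftime=_time+duration
    | convert ctime(Logontime) ctime(Logofftime)
    | table host user Logontime Logofftime duration
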
I have a Splunk event with the following lines logged from a .txt file.

HeaderField1 | HeaderField2 | HeaderField3
HeaderValue1 | HeaderValue2 | HeaderValue3

How can I manipulate the event (and future events) using configuration files (props and/or transforms) so that the event text is replaced with the following extracted field names and values:

HeaderField1 = HeaderValue1
HeaderField2 = HeaderValue2
HeaderField3 = HeaderValue3

Note: The actual header field names are always the same. The header values change in each text file.
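
A minimal props.conf sketch for index-time extraction from a pipe-delimited file with a header line, assuming a hypothetical sourcetype name; structured parsing like this has to be configured on the forwarder that monitors the .txt files:

    [my_pipe_delimited_txt]
    INDEXED_EXTRACTIONS = psv
    FIELD_DELIMITER = |
    HEADER_FIELD_LINE_NUMBER = 1

This extracts each value under its header name (HeaderField1=HeaderValue1, and so on) for every new file. It does not rewrite the raw event text itself; if the raw text must actually be replaced, that would be a separate SEDCMD or TRANSFORMS step.
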
Hi Team, I have a table where employee names are grouped by manager name along with their project count. Below is the structure of my table, which I want to link to another dashboard (which has mgr and emp tokens for manager name and employee name respectively) based on the selection I make.

For selection of the manager value I added this in the link target: form.mgr=$click.value$

I'm not sure how to select the employee name together with his manager. Also, when I click on a count, I don't want any query string added to the target. For example, if I click on Khushboo, how would I get this link target: form.mgr=Shaurya&form.emp=Khushboo

Manager Name    Employee Name    Project Count
Shaurya         Khushboo         23
                Neha             30
                Sachin           12
Mohan           Virat            14
Harry           Larry            23
Meghan          Jack             12
                Nick             13
                Carson           14
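
A minimal Simple XML sketch, assuming a hypothetical target dashboard "employee_details" in the same app and that the table columns have been renamed to mgr, emp, and count (row tokens resolve more reliably on field names without spaces):

    <drilldown>
      <condition field="emp">
        <link target="_blank">employee_details?form.mgr=$row.mgr$&amp;form.emp=$click.value2$</link>
      </condition>
      <condition field="mgr">
        <link target="_blank">employee_details?form.mgr=$click.value2$</link>
      </condition>
      <condition field="count"></condition>
    </drilldown>

Here $click.value2$ is the value of the cell that was clicked, $row.mgr$ is that row's manager, and the empty condition on the count column makes clicks there do nothing.
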
Hi, I am trying to use the split command to separate and get a few fields. However, I am getting different field values due to nulls present in the event data. I applied split on the incoming events, and below are 2 cases from the event data:

1) F0,F1,F2,F3,F4,F5,F6 - Here I am able to get fields F5 and F6. Good.
2) F0,F1,,,F4,F5,F6 - Here I am getting the F4 field's data in F5 and the F5 field's data in F6, because of the null values in F2 and F3.

Is there any solution for this?

Thank you!!!
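
A minimal sketch of one workaround, assuming the raw comma-separated string is in a hypothetical field called raw_line: pad the empty positions with a placeholder before splitting, so every value keeps its index (replace is applied twice to catch back-to-back empty fields):

    | eval padded=replace(replace(raw_line, ",,", ",NULL,"), ",,", ",NULL,")
    | eval parts=split(padded, ",")
    | eval F5=mvindex(parts, 5), F6=mvindex(parts, 6)
    | eval F5=if(F5="NULL", null(), F5), F6=if(F6="NULL", null(), F6)

With this, "F0,F1,,,F4,F5,F6" becomes "F0,F1,NULL,NULL,F4,F5,F6" before the split, so index 5 and 6 always hold F5 and F6.
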