All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Good afternoon. My leadership informed me that CrowdStrike is sending our logs to Splunk. Has anyone written queries to show when a device is infected with malware? I don't know the CrowdStrike logs, but I'm hoping someone here can give me some guidance to get started.
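A hedged starting point, assuming the logs arrive via the CrowdStrike Falcon Event Streams add-on — the index name and field paths below are assumptions, so adjust them to whatever your admins actually configured:

```
index=crowdstrike "metadata.eventType"=DetectionSummaryEvent
| table _time, event.ComputerName, event.UserName, event.SeverityName, event.DetectName, event.FileName
| sort - _time
```

Running `index=crowdstrike | head 10` first (or whichever index the logs land in) will show the real sourcetypes and field names to build on.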
Hello all, I am using Splunk as a data source and trying to build dashboards in Grafana (v10.2.2 on Linux). Is there anything in Grafana that would let me avoid writing 10 queries for 10 panels? Ideally, one base query would fetch data from Splunk, and in Grafana I could layer additional commands or functions on top of that base query in each panel, so the load on Splunk is reduced. This is similar to "post-process search" in Splunk: Post Process Searching - How to Optimize Dashboards in Splunk (sp6.io). I followed the instructions below and was able to fetch data from Splunk, but it causes heavy load and stops working the next day, at which point all the panels show "No Data": Splunk data source | Grafana Enterprise plugins documentation. Your help will be greatly appreciated! Thanks in advance!
Hi team, I tried the search below but am not getting any results:

index=aws component=Metrics group=per_index_thruput earliest=-1w@d latest=-0d@d | timechart span=1d sum(kb) as Usage by series | foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024, 3)]
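One likely cause, offered as an assumption: per_index_thruput metrics are written to metrics.log in the _internal index, not to an aws index, so the base search may simply match nothing. A sketch against _internal:

```
index=_internal source=*metrics.log* group=per_index_thruput earliest=-1w@d latest=-0d@d
| timechart span=1d sum(kb) AS Usage BY series
| foreach * [ eval <<FIELD>> = round('<<FIELD>>'/1024/1024, 3) ]
```

Note that kb is already kilobytes, so dividing by 1024 twice converts to gigabytes; divide once if megabytes were the intent.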
I was wondering if anyone knew where I could find it, either in the logs or (even better) via the audit REST endpoint, when an automation account regenerates its auth token. I've looked through the audit logs but haven't seen an entry for it. Any leads or tips would be appreciated. Thank you.
Hello! I wanted to ask: what is the best way/configuration to get network device logs directly into Splunk? Thanks in advance!
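One common pattern (a sketch, not the only option): point the devices' syslog output at a Splunk instance and define a network input. A minimal inputs.conf fragment for a direct UDP syslog input, with an assumed index name:

```
# inputs.conf on a heavy forwarder (index "network" is an assumption;
# port 514 needs root privileges or a port redirect)
[udp://514]
sourcetype = syslog
index = network
connection_host = ip
```

For anything beyond a small deployment, a dedicated syslog layer (rsyslog/syslog-ng writing files that a universal forwarder monitors, or Splunk Connect for Syslog) is generally more robust than a direct UDP input, since restarts of Splunk won't drop packets.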
A snippet from strace output seems to indicate that the 30-40 minutes may be taken by the SSL certificate generation steps:

<<<snipped>>>>>
wait4(9855, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9855
stat("/opt/splunkforwarder/etc/auth/server.pem", 0x7ffdec4c4580) = -1 ENOENT (No such file or directory)
clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7f143df47e50) = 9857
wait4(9857,   < < < stuck here for 30-40 mins > > > >   0x7ffdec4c45f4, 0, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGWINCH {si_signo=SIGWINCH, si_code=SI_KERNEL} ---
wait4(9857, New certs have been generated in '/opt/splunkforwarder/etc/auth'. [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 9857
--- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=9857, si_uid=0, si_status=0, si_utime=11, si_stime=5} ---

Strangely, this is only happening on Linux on Azure. Using openssl, I am able to generate a self-signed cert within seconds on the same machine. Our on-premises Linux (on VMware) does not experience this performance issue. Any thoughts on what the issue may be? How should I troubleshoot? Thank you.
Thanks in advance for the assistance. I am very new to Splunk; it is a great tool, but I need some help. I am trying to create a filtered report with the following criteria: I am filtering the data down based on phishing, and now I need to grab each individual src_ip and count its occurrences over a 30-day period. Unfortunately, I do not have a pre-built list of IP addresses like the one used in all of the examples. My goal is to go down the list, count the number of occurrences, and show the report on a front panel. Also, are there any good books or video training courses for learning advanced filtering in Splunk? Thanks.
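No pre-built IP list is needed: stats builds the list of distinct src_ip values from the events themselves. A sketch with placeholder index and sourcetype names (replace them with your own):

```
index=<your_index> sourcetype=<your_sourcetype> "phishing" earliest=-30d@d latest=now
| stats count AS occurrences BY src_ip
| sort - occurrences
```

Saving this as a report and adding it to a dashboard panel covers the "front panel" requirement.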
I have another requirement: I want to show a bar chart with the total login count based on the time period we submit. For example, if we select 2 days, it should show a bar chart where the y-axis is the login count and the x-axis is the time selection at a one-day interval (e.g. 6th Feb, 7th Feb, and so on).
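A sketch, assuming a time-range picker on the dashboard and a placeholder base search for login events; timechart with a one-day span produces one bar per day for whatever range is selected:

```
index=<auth_index> action=success
| timechart span=1d count AS "Login Count"
```

Rendered as a column chart, _time becomes the x-axis (6 Feb, 7 Feb, ...) and "Login Count" the y-axis.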
Hello, is it possible to install SA-cim_vladiator on clustered search heads? Thanks.  
Is there any efficient way to block queries that don't specify a sourcetype? Educating users is not working, and we want to block such searches so that there is no degradation of the environment.
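There is no simple switch I know of that rejects searches lacking a sourcetype, but the offenders can at least be identified from the audit index (a sketch; the format of the search field in _audit can vary by version):

```
index=_audit action=search info=granted search=*
| search NOT search="*sourcetype=*"
| table _time, user, search
```

Role-level controls (search time-range limits, disk and job quotas in authorize.conf, or workload management rules) are the usual levers for limiting the damage such searches can do.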
I am attempting to identify when Splunk users are running searches against historic data (over 180 days old). Additionally, as part of the same request, I am looking to identify where users have recovered data from DDAA to DDAS in order to run searches against it. This is to build a greater understanding of how often historic data is accessed, to help guide data retention requirements in Splunk Cloud (i.e. is retention set appropriately, or can we extend/reduce retention periods based on the frequency of data access?).
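A sketch against the audit index; it assumes the api_et field (the search's earliest time as an epoch) is populated, which is not the case for every search type:

```
index=_audit action=search info=completed api_et=*
| eval days_back = round((now() - api_et) / 86400)
| where days_back > 180
| table _time, user, days_back, search
```

This surfaces who searched beyond 180 days and how far back. DDAA restore activity is tracked separately in the Splunk Cloud admin views, so that part may need a different source.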
Hi Team, Looking for help on configuring the statuspage.io addon to ingest incidents/Collect all scheduled maintenance from statuspage.io.  
I would like to join search query 1 and search query 2 and get the results, but no results are found:

index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "Allocated new applicationId"
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| eval _time=strftime(_time,"%Y-%m-%d %H:%M")
| table _time, App1
| rename _time as Time1
| join type=inner App1
    [ search index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager "OPERATION=Submit Application Request"
    | rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
    | eval _time=strftime(_time,"%Y-%m-%d %H:%M")
    | table _time, App2
    | search App2=App1
    | rename _time as Time2 ]
| table Time1, App1, Time2, App2
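One reason a join like this returns nothing: inside the subsearch, App1 does not exist yet, so `| search App2=App1` discards every row before the join even runs. A join-free sketch, reusing the rex patterns from the post (it assumes both extractions yield the same application-ID string):

```
index=imdc_gold_hadoopmon_metrics sourcetype=hadoop_resourcemanager
    ("Allocated new applicationId" OR "OPERATION=Submit Application Request")
| rex field=_raw "^(?:[^ \n]* ){4}(?P<App1>.+)"
| rex field=_raw "^(?:[^=\n]*=){6}\w+_\d+_(?P<App2>.+)"
| eval App = coalesce(App1, App2)
| eval Time = strftime(_time, "%Y-%m-%d %H:%M")
| stats values(Time) AS Times, dc(App1) AS allocated, dc(App2) AS submitted BY App
| where allocated > 0 AND submitted > 0
```

Searching both event types in one pass and grouping with stats avoids the subsearch limits that make join fragile on large result sets.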
Good morning, Let me tell you about my situation. We have a forwarder inside a Docker container python:3.11-slim-bullseye. We've noticed that when we deploy an application from the deployment server... See more...
Good morning. Let me tell you about my situation. We have a forwarder inside a Docker container (python:3.11-slim-bullseye). We've noticed that when we deploy an application from the deployment server to the forwarder by adding a stanza to the inputs.conf file, the forwarder's ExecProcessor doesn't detect the change. Could you please help me understand why? Thank you very much. Regards.
Hi all, how can we modify the search below to see only the enabled correlation searches that did not trigger a notable in the past X days?

| rest /services/saved/searches
| search title="*Rule" action.notable=1
| fields title
| eval has_triggered_notables = "false"
| join type=outer title
    [ search index=notable search_name="*Rule" orig_action_name=notable
    | stats count by search_name
    | fields - count
    | rename search_name as title
    | eval has_triggered_notables = "true" ]

Thanks.
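A sketch of one way to do it: filter on disabled=0 in the REST output, bound the notable lookback with earliest, and keep only the rows the join never marked as triggered (field names as in the original search):

```
| rest /services/saved/searches
| search title="*Rule" action.notable=1 disabled=0
| fields title
| eval has_triggered_notables = "false"
| join type=outer title
    [ search index=notable search_name="*Rule" orig_action_name=notable earliest=-30d@d
    | stats count BY search_name
    | fields - count
    | rename search_name AS title
    | eval has_triggered_notables = "true" ]
| where has_triggered_notables = "false"
```

Swap -30d@d for whatever X days should be; the outer join overwrites has_triggered_notables with "true" only for titles that matched a notable.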
Good morning. Let me tell you about my case. In my company we have five indexers: one for development and the other four for production. We have an inputs.conf on a forwarder inside a Docker container (python:3.11-slim-bullseye) with three stanzas that execute a script with arguments. One stanza sends data to development and runs every two minutes; the other two send data to production, one running every minute and the other every two minutes. We noticed that during a period last night we did not receive any data from the forwarder. The gap in the development stanza is expected, as the machine was being patched and Splunk was stopped during exactly that period. We observed that during those hours the forwarder did not execute any scripts. In that time frame we found these traces in the forwarder's watchdog.log:

02-05-2024 20:02:18.220 +0000 ERROR Watchdog - No response received from IMonitoredThread=0x7fabb87fec60 within 8000 ms. Looks like thread name='ExecProcessor' tid=1937852 is busy !? Starting to trace with 8000 ms interval.

Could you please help me understand why the forwarder did not execute any scripts during that time frame? Thank you very much. Best regards.
Colleagues, hi all! Can you give me some advice on editing dashboards? I have 4 static tables, and I need to arrange them so that the first three are on the left in order, and the right one is stretched so that it is large and long with no empty space. I tried to play around with the XML, but to no avail. The XML itself is below. If this can be done at all, great; if not, sorry for such a question! Thanks to all!

<form>
  <label>Testing</label>
  <row>
    <panel depends="$alwaysHideCSS$">
      <title>Width settings</title>
      <html>
        <style>
          #test_1{ width:50% !important; }
          #test_2{ width:50% !important; }
          #test_3{ width:50% !important; }
          #test_4{ width:50% !important; }
        </style>
      </html>
    </panel>
  </row>
  <row>
    <panel id="test_1">
      <title>Table 1</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=5 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_2">
      <title>Table 2</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=6 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel id="test_3">
      <title>Table 4</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=20 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
  <row>
    <panel id="test_4">
      <title>Table 3</title>
      <table>
        <search>
          <query>| makeresults count=10 | eval no=7 | table no</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
I'm submitting this community thread because we are stuck deploying the premium app IT Service Intelligence on Splunk Enterprise on-prem. Below are the troubles we ran into despite following the installation steps:
• I stopped the splunk service
• I extracted the ITSI .spl package according to the documentation
• I started the services, but the splunkd component wasn't able to activate the appserver, and therefore the web server

Digging into web_service.log and mainly into splunkd.log, I found these entries:

01-26-2024 17:26:50.164 +0000 ERROR UiPythonFallback [115369 WebuiStartup] - Couldn't start any appserver processes, UI will probably not function correctly!
01-26-2024 17:26:50.164 +0000 ERROR UiHttpListener [115369 WebuiStartup] - No app server is running, stop initializing http server.

So I proceeded to stop the services and uninstall the app component folders and their index storage repositories (according to the docs); then I started the services again and all components, including the web service, worked fine. We have deployed Splunk Enterprise on Ubuntu Server (package splunk-9.1.2-b6b9c8185839-linux-2.6-amd64.deb) and downloaded the ITSI app from its Splunkbase link: https://splunkbase.splunk.com/app/1841. Could you give us some hints about this? We'd like to verify some ITSI features as soon as possible. Thanks in advance and regards, Luigi
Hi Team, Our Splunk is hosted in Cloud. And my requirement is that if an index is getting created then i need to get an alert and similarly if an index is getting deleted from the Search head i need... See more...
Hi team, our Splunk is hosted in the cloud. My requirement is that if an index is created I need to get an alert, and similarly if an index is deleted from the search head I need to get an alert. Kindly help with the query.
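Splunk Cloud does not expose indexes.conf directly, so one workaround (a sketch, assuming a lookup file named known_indexes.csv that you seed once and then maintain) is a scheduled alert that diffs the current index list against the last known list:

```
| eventcount summarize=false index=*
| stats count BY index
| fields index
| eval observed = 1
| append
    [| inputlookup known_indexes.csv
     | eval known = 1 ]
| stats values(observed) AS observed, values(known) AS known BY index
| where isnull(observed) OR isnull(known)
| eval status = if(isnull(known), "index created", "index deleted")
| table index, status
```

Schedule it to alert when the result count is greater than 0, and refresh the lookup afterwards with `| eventcount summarize=false index=* | stats count BY index | fields index | outputlookup known_indexes.csv`.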