
All Posts

Hello, has anyone encountered incomplete log transmission when using UDP 514? Would changing to TCP be useful? I would appreciate your support. Greetings
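For reference, a minimal sketch of what the TCP equivalent could look like in inputs.conf on the receiving Splunk instance (the port and sourcetype here are illustrative assumptions, not taken from the post above):
[tcp://514]
sourcetype = syslog
connection_host = ip
Unlike UDP, TCP gives delivery acknowledgement, so silent drops under load are far less likely - though the syslog sources themselves must also be reconfigured to send over TCP.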
OS Version: Server 2019. I'm trying to install the Splunk UF in my test lab. Using the GUI installer, I enter all the options needed for my indexing server, and the install starts rolling back during what appears to be the last step. The server once had a successful install of 9.4.0. Since that uninstall, I can no longer get any version of the UF to install. I've tried:
- re-downloading the UF and using the "newer" download to install
- deleting the Splunk folder from c:\pro files
- restarting the VM after the failed install and starting over
- installing as "local system account" and "virtual account" -- both failed
I'm at my wits' end now.
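If anyone hits the same rollback, one generic troubleshooting step (standard Windows Installer syntax; the .msi filename below is a placeholder, not the actual file from this post) is to capture a verbose install log and look for the first error just before the rollback begins:
msiexec /i splunkforwarder-x.x.x-x64-release.msi /L*v uf_install.log
Use whatever the downloaded installer is actually called; the resulting log usually names the failing custom action.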
Hi Community, can someone please help me use stats instead of join for this search?
| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| join type=inner User
    [| rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | sort author
    | rename author AS User ]
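A common way to emulate an inner join is append plus stats; here is a sketch under the assumption that only users appearing in both REST calls should survive (all field names are taken from the search above, the src tags are illustrative):
| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| eval src="saml"
| append
    [| rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | rename author AS User
    | eval src="directory"]
| stats dc(src) AS sources BY User
| where sources=2
| fields User
The dc(src)=2 test plays the role of type=inner: a User is kept only if it was produced by both branches.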
All, our SentinelOne EDR started detecting Alert Manager Enterprise's vsw.exe as malware: https://www.virustotal.com/gui/file/1cb09276e415c198137a87ba17fd05d0425d0c6f1f8c5afef81bac4fede84f6a/detection. Has anyone else run into this before I start digging into it? Is there a proper course of action Splunkbase would like us to take if this turns out to be a true positive? Thanks, -Daniel
Hi Team, we have two search head clusters and a few reports scheduled with an email action. Reports running on one search head work fine and deliver emails as configured, but the other search head runs the report and the email is never delivered. I see the following ERROR logs in Inspect Job:
04-01-2025 01:00:10.298 ERROR HttpClientRequest [1028078 StatusEnforcerThread] - HTTP client error=Read Timeout while accessing server=https://127.0.0.1:8089 for request=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify.
04-01-2025 01:00:10.299 ERROR SearchNotification [1028078 StatusEnforcerThread] - OnResult notification failed uri=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify postData= method=POST err=Read Timeout status=502
Any idea how to fix this? The port on the SH is listening and accepting connections - tested with telnet. Thanks in advance for any help you may have.
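Since telnet only proves the port accepts connections, not that splunkd answers in time, one quick check worth trying is to time a simple REST call against the management port (the endpoint is standard; the credentials below are placeholders):
time curl -k -u admin:yourpassword https://127.0.0.1:8089/services/server/info?output_mode=json
If that also takes many seconds on the failing search head, the read timeout in the notify call is likely a symptom of a slow or overloaded splunkd rather than an email/SMTP problem.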
Hey @PickleRick, I like that approach - I hadn't thought of the subsearch in the timechart to achieve this, so I'm bookmarking that for future reference. Not to be pedantic... I'd probably go for <90000=1h instead of <86400, because if you select "Last 24 hours" you get slightly more than 24 hours (something like 31-03-2025 22:00:00 to 01-04-2025 22:09:12). @tkwaller1, if you want to get a single value out of it, you could do something like the below - I've added an appendcols to put the span info into a field so people know what it's an average of.
index=_internal
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count partial=f
| stats avg(count) as avgCount
| appendcols
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span="per ".case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | table span ]
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Why, for 36 hours, do you want an average of 36 hour-long counts, but for 192 hours you want day-long counts? Is there a threshold? Anyway, the time picker has its purpose, and providing a span isn't it. What does seem to work in my lab is generating the span with a subsearch. The trick is that you have to produce the whole "span=something" string. Like this:
index=winevents
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<86400,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count
Thanks @livehybrid, this is promising. Do you happen to know a search command that would give information on the artifacts associated with an incident?
Awesome @livehybrid, thanks!
We are trying to run the Splunk forwarder locally to fix a few vulnerabilities, and it stops with the following error message. Can you please help with a fix for this?
Dockerfile:
FROM splunk/universalforwarder:9:3
Commands run:
docker build -t suf .
docker run -d -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=changeme" --name uf suf
2025-04-01 06:40:50 TASK [splunk_universal_forwarder : include_tasks] ******************************
2025-04-01 06:40:50 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost
2025-04-01 06:40:50 Tuesday 01 April 2025  13:40:50 +0000 (0:00:00.045)       0:00:19.675 *********
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left).
2025-04-01 06:41:23 TASK [splunk_universal_forwarder : Check for required restarts] ****************
2025-04-01 06:41:23 fatal: [localhost]: FAILED! => {
2025-04-01 06:41:23     "attempts": 5,
2025-04-01 06:41:23     "changed": false,
2025-04-01 06:41:23     "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'. 'dict object' has no attribute 'status'"
2025-04-01 06:41:23 }
2025-04-01 06:41:23 MSG:
2025-04-01 06:41:23 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix
2025-04-01 06:41:23 PLAY RECAP *********************************************************************
2025-04-01 06:41:23 localhost                  : ok=68   changed=2    unreachable=0    failed=1    skipped=81   rescued=0    ignored=0
2025-04-01 06:41:23 Tuesday 01 April 2025  13:41:23 +0000 (0:00:33.184)       0:00:52.859 *********
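One detail worth flagging (an observation, not a confirmed root cause): the FROM line's tag, 9:3, contains a second colon, while Docker image tags conventionally use dots as version separators. Assuming 9.3 was the intended version, the usual form would be:
FROM splunk/universalforwarder:9.3
If the image that actually ran was pulled from a different tag, the running version may not match what the Dockerfile suggests.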
Hi @TGibbons87, while the Splunk Add-on for NetApp Data ONTAP documentation explicitly lists support for ONTAP 9.6 and 9.7, it's possible it may function with later versions like 9.15 or 9.16, especially if the underlying APIs used by the add-on haven't significantly changed in those newer ONTAP releases. It could be that the developers haven't been able to test it on the newer versions for some reason. The best way to confirm compatibility with ONTAP versions above 9.7 is to test the add-on (version 3.1.0) in a non-production environment connected to your newer ONTAP system. As this is an app built within (but not supported by) Splunk, you might be able to log a support case to ask if this could be updated, or to ask for more information on compatibility; support may then be able to speak internally with the developer(s) of the app.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Excellent information and writeup, thank you for the verbose explanation!
Hi @Ombessam, to get the desired visualisation comparing the counts of field1 and field2, you need to transform the results of your stats command so that the field names become values in one column and the counts become values in another. The transpose command is suitable for this.
(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| stats count(field1) as Field1, count(field2) as Field2
| transpose
| rename column as FieldName, "row 1" as Count
This structure (FieldName, Count) allows you to configure your bar chart visualisation. You could also achieve this with:
(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| eval x="x"
| stats count(field1) as Field1, count(field2) as Field2 by x
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
Not sure why you think this is improperly defined; it sounds pretty easy to me. What about a 36h search range? Total divided by 36 hours (just not 36h@h). Or 8 days? Total divided by 8 days (just not 8d@d). It sounds like the issue REALLY lies in the picker's standard formatting of time, in conjunction with Dashboard Studio, which gives less control over coding; it's easier to do in XML. No worries, thanks for the thoughts - I'll find a way to work around this.
Hi @martin_mueller, the App Exporter doesn't seem to work anymore - can that be fixed? PS: it seems like exactly the app I need. Many thanks in advance. Cheers, Bjarne
Unfortunately, your requirement might not only be hard or impossible to implement in Dashboard Studio (AFAIR SimpleXML dashboards allow custom JS, so you have much more freedom there), but it is also imprecisely defined. What about a 36h search range - average daily count or hourly? Or 8 days - daily or weekly? Anyway, off the top of my head: if you have a predefined set of time ranges, you could use a list instead of the time picker and fill in placeholders that way. An alternative approach (though I'm not 100% sure it will work) could be to use a subsearch to generate the span part of the search, and within that subsearch use addinfo to get the time range.
How can I get the following visualization? I've tried the following commands:
(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| fields field1 field2
| stats count(field1) as Field1, count(field2) as Field2
and I'm getting the following graph.
Does the Splunk Add-on for NetApp Data ONTAP work with ONTAP above 9.7? https://splunkbase.splunk.com/app/3418 says "The add-on supports the data collection from NetApp® Data ONTAP® Cluster Mode version 9.6 and 9.7", with the latest version 3.1.0 released September 13, 2024. Since ONTAP 9.16 is available now, and in September 2024 ONTAP 9.15 - and possibly 9.16 - was fully GA, I'm wondering why only 9.6 and 9.7 are listed, especially since 9.6 and 9.7 may be out of support from NetApp. The release notes for the latest version 3.1.0 don't even mention ONTAP releases.
I am trying to configure the Proofpoint - ET Splunk TA on Splunk Cloud, and during the setup it asks for an API key and an authorization code. While I have the API key, I noticed that the authorization code appears as "None", so I provided the Oinkcode instead. However, when I try to save the configuration, it does not get applied. Is there a specific way to configure this on Splunk Cloud? Any guidance on setting up ET Intelligence correctly would be greatly appreciated. Thank you
Hi @llopreiato, unfortunately I think you might need to speak to the vendor of the LB to see if they can work out how to make it work based on HTTP status code - this is fairly common behaviour for LB health checks, so I'd be surprised if it's not possible with your LB.
Did this answer help you? If so, please consider:
- Adding kudos to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing.
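If the LB happens to sit in front of HTTP Event Collector (an assumption - the thread doesn't say what is being balanced), one standard check target to suggest to the vendor is HEC's health endpoint, which returns HTTP 200 when the collector is up:
curl -k https://your-splunk-host:8088/services/collector/health
The hostname is a placeholder, and 8088 is only the default HEC port.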