All Posts

I need to upgrade the universal forwarder agents on multiple instances from the current 7.3.0 to the latest version. Can we upgrade directly, or do we need to go step by step? Please let me know the process, along with best practices for the upgrade.
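For reference, the mechanics of an in-place upgrade are the same regardless of the target version; whether 7.3.0 can jump straight to the latest release depends on the supported upgrade paths in the release notes, so check those first. A minimal sketch for an RPM-based Linux host, with the package file name as a placeholder:

# Sketch of an in-place UF upgrade on an RPM-based Linux host.
# The package file name is a placeholder; paths assume /opt/splunkforwarder.
/opt/splunkforwarder/bin/splunk stop
rpm -Uhv splunkforwarder-<version>-linux-2.6-x86_64.rpm
/opt/splunkforwarder/bin/splunk start --accept-license --answer-yes

For a large fleet, the same stop/install/start sequence is typically wrapped in whatever configuration management tool you already use.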
Do you mean something like this?

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| timechart span=1m count by source
This is a very confusing search. Is this some sort of homework? The reason I ask is that it is always better to describe what you are trying to achieve, what the data looks like, what the desired results are based on sample input, and what the logic is between input and desired output, as opposed to making volunteers read your mind from some SPL snippets.
With UDP you are limited to a single datagram's length. I think I saw one or two solutions which would split the event at the frame boundary and send the remaining part in another frame, but there's no reliable way to reassemble those later. Yes, switching to TCP should solve this issue, but it might cause performance problems in some cases.
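For reference, on the Splunk side the switch is a small inputs.conf change; a minimal sketch, assuming port 514 and a plain syslog sourcetype (the sending devices have to be reconfigured to use TCP as well):

# inputs.conf sketch: receive syslog over TCP instead of UDP.
# Port 514 and the sourcetype are assumptions; ports below 1024
# require the splunkd process to have the privilege to bind them.
[tcp://514]
sourcetype = syslog
connection_host = ip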
Hi, I am working on the query below to get the count of requests processed by each API service per minute:

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| bucket _time span=1m
| stats count by source, _time

I get the result below: one source is "name", a second source is "address", and a third source is "city". How can I represent the different API sources per minute in a good, understandable format, either as a graph or some other pictorial representation?

source _time count
data/name 2025-03-02 08:13:00 2
data/name 2025-03-02 08:14:00 57
data/name 2025-03-02 08:15:00 347
data/name 2025-03-02 08:16:00 62
data/name 2025-03-02 08:17:00 48
data/address 2025-03-02 08:18:00 21
data/city 2025-03-02 08:19:00 66
data/city 2025-03-02 08:20:00 55
data/address 2025-03-02 08:21:00 7

name event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"name","client":"Ha2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}

address event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"address","client":"Harggg2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}
Hello. Has anyone encountered incomplete log transmission using UDP 514? Would changing to TCP be useful? I would appreciate your support. Greetings
OS Version: Server 2019. I'm trying to install the Splunk UF in my test lab. Using the GUI installer, I enter all the options needed for my indexing server, and the install starts rolling back during what appears to be the last step. The server once had a successful install of 9.4.0; since that uninstall, I can no longer get any version of the UF to install. I've tried:

- re-downloading the UF and using the newer download to install
- deleting the Splunk folder from C:\Program Files
- restarting the VM after the failed install and starting over
- installing as "local system account" and as "virtual account" (both failed)

I'm at my wits' end now.
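One way to find out why the MSI rolls back is to run the installer from an elevated prompt with verbose Windows Installer logging; this is a sketch with placeholder file names and indexer address, using the documented UF MSI properties:

REM Sketch: silent UF install with verbose MSI logging to find the rollback cause.
REM The MSI file name and the indexer address are placeholders.
msiexec /i splunkforwarder-<version>-x64-release.msi AGREETOLICENSE=Yes RECEIVING_INDEXER="idx.example.com:9997" /quiet /L*v %TEMP%\uf_install.log

A rollback usually logs "return value 3" immediately after the action that failed, which narrows down what to look at next (often leftover services, registry keys, or folder permissions from the previous install).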
Hi Community, can someone please help me by using stats instead of join for this search?

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| join type=inner User
    [| rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | sort author
    | rename author AS User ]
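For what it's worth, the usual join-to-stats rewrite is to tag each side, append them, and keep only the users that appear in both; this is an untested sketch, and src is a helper field name introduced here:

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| eval src="saml"
| append
    [| rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | rename author AS User
    | eval src="directory" ]
| stats dc(src) AS sources BY User
| where sources=2
| fields User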
All, our SentinelOne EDR started detecting Alert Manager Enterprise's vsw.exe as malware: https://www.virustotal.com/gui/file/1cb09276e415c198137a87ba17fd05d0425d0c6f1f8c5afef81bac4fede84f6a/detection. Has anyone else run into this before I start digging into it? Is there a proper course of action Splunkbase would like if this ends up being a true positive? Thanks, Daniel
Hi Team, we have two search head clusters and a few reports scheduled with an email action. Reports running on one search head work fine and deliver emails as configured, but the other search head runs the report and the email is not delivered. I see the following ERROR logs in Inspect Job:

04-01-2025 01:00:10.298 ERROR HttpClientRequest [1028078 StatusEnforcerThread] - HTTP client error=Read Timeout while accessing server=https://127.0.0.1:8089 for request=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify.
04-01-2025 01:00:10.299 ERROR SearchNotification [1028078 StatusEnforcerThread] - OnResult notification failed uri=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify postData= method=POST err=Read Timeout status=502

Any idea how to fix this? The port on the SH is listening and accepting connections (tested with telnet). Thanks in advance for any help you may have.
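One way to narrow this down: telnet only proves the TCP connection, so it can help to time an actual REST request against the management port and see whether splunkd itself is slow to respond. A sketch with placeholder credentials:

# Quick check that splunkd answers REST requests in time, not just TCP connects.
# admin:yourpassword is a placeholder; -m 30 caps the wait at 30 seconds.
curl -k -u admin:yourpassword -m 30 -s -o /dev/null -w 'total: %{time_total}s\n' https://127.0.0.1:8089/services/server/info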
Hey @PickleRick, I like that approach. I hadn't thought of using a subsearch in the timechart to achieve that, so I'm bookmarking it for future reference.

Not to be pedantic, but I'd probably go for <90000=1h instead of <86400, because if you select "Last 24 hours" you get slightly more than 24 hours (something like 31-03-2025 22:00:00 to 01-04-2025 22:09:12).

@tkwaller1, if you want to get a single value out of it, you could do something like the below. I've added an appendcols to put the span info into a field, so people know what it's an average of.

index=_internal
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count partial=f
| stats avg(count) as avgCount
| appendcols
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span="per ".case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | table span ]
Why, for 36 hours, do you want an average from 36 hour-long counts, but for 192 hours you want day-long counts? Is there a threshold? Anyway, the time picker has its purpose, and providing a span is not it. What seems to be working in my lab is indeed generating the span with a subsearch. The trick is that you have to produce the whole "span=something" string. Like this:

index=winevents
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<86400,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count
Thanks @livehybrid, this is promising. Do you happen to know a search command that would give information on the artifacts associated with an incident?
Awesome @livehybrid, thanks!
We are trying to run the Splunk forwarder locally to fix a few vulnerabilities; it prints the following error message and stops. Can you please help with a fix for this?

Dockerfile:
FROM splunk/universalforwarder:9.3

Commands run:
docker build -t suf .
docker run -d -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=changeme" --name uf suf

2025-04-01 06:40:50 TASK [splunk_universal_forwarder : include_tasks] ******************************
2025-04-01 06:40:50 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost
2025-04-01 06:40:50 Tuesday 01 April 2025  13:40:50 +0000 (0:00:00.045)       0:00:19.675 *********
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left).
2025-04-01 06:41:23 TASK [splunk_universal_forwarder : Check for required restarts] ****************
2025-04-01 06:41:23 fatal: [localhost]: FAILED! => {
2025-04-01 06:41:23     "attempts": 5,
2025-04-01 06:41:23     "changed": false,
2025-04-01 06:41:23     "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'. 'dict object' has no attribute 'status'"
2025-04-01 06:41:23 }
2025-04-01 06:41:23 MSG:
2025-04-01 06:41:23 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix
2025-04-01 06:41:23 PLAY RECAP *********************************************************************
2025-04-01 06:41:23 localhost                  : ok=68   changed=2    unreachable=0    failed=1    skipped=81   rescued=0    ignored=0
2025-04-01 06:41:23 Tuesday 01 April 2025  13:41:23 +0000 (0:00:33.184)       0:00:52.859 *********
Hi @TGibbons87,

While the Splunk Add-on for NetApp Data ONTAP documentation explicitly lists support for ONTAP 9.6 and 9.7, it's possible it may function with later versions like 9.15 or 9.16, especially if the underlying APIs used by the add-on haven't significantly changed in those newer ONTAP releases. It could be that the developers haven't been able to test it on the newer versions for some reason.

The best way to confirm compatibility with ONTAP versions above 9.7 is to test the add-on (version 3.1.0) in a non-production environment connected to your newer ONTAP system.

As this is an app built within (but not supported by) Splunk, you might be able to log a support case to ask if this could be updated, or to request more information on compatibility; support may then be able to speak internally with the developer(s) of the app.
Excellent information and writeup, thank you for the verbose explanation!
Hi @Ombessam,

To get the desired visualisation comparing the counts of field1 and field2, you need to transform the results of your stats command so that the field names become values in one column and the counts become values in another. The transpose command is suitable for this.

(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| stats count(field1) as Field1, count(field2) as Field2
| transpose
| rename column as FieldName, "row 1" as Count

This structure (FieldName, Count) allows you to configure your bar chart visualisation.

You could also achieve this with:

(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| eval x="x"
| stats count(field1) as Field1, count(field2) as Field2 by x
Not sure why you think this is improperly defined; it sounds pretty easy to me. What about a 36h search range: total divided by 36 hours (just not 36h@h). Or 8 days: total divided by 8 days (just not 8d@d). It sounds like the issue REALLY lies in the picker's standard formatting of time, in conjunction with Dashboard Studio, which allows less control over the coding; it's easier to do in XML. No worries, thanks for the thoughts; I'll find a way to work around this.
Hi @martin_mueller, the App Exporter doesn't seem to work anymore; can that be fixed? PS: it seems like exactly the app I need. Many thanks in advance. Cheers, Bjarne