All Posts

Hi @AviSharma8 ,

Updated: According to the 8.2 docs, it's possible to upgrade a UF from 7.3 to 8.2 ("Upgrading a universal forwarder directly to version 8.2 is supported from versions 7.3.x, 8.0.x, and...") and 9.4 supports a direct upgrade from 8.2 ("Splunk supports a direct upgrade of a universal forwarder to version 9.4 from versions 8.2.x and hig..."). This is contrary to the broader upgrade path for a full Splunk installation (not a UF), which has more steps due to more migrations taking place.

It's also worth mentioning that Splunk Enterprise version 9.0 and higher requires Linux kernel version 3.x or higher and has an updated OS support list. Check supported OSes at https://docs.splunk.com/Documentation/Splunk/9.4.1/Installation/Systemrequirements

I have seen other people upgrade 7.3 -> 9.x without issue in a single step; however, your mileage may vary due to unknown migrations under the hood, so you may wish to follow the official stance as above.

Here's the general process for upgrading using the .tgz package on Linux, for each of your 7.3 -> 8.2 -> 9.4 steps:

1. Stop the existing Universal Forwarder:
$SPLUNK_HOME/bin/splunk stop

2. (Optional, but recommended) Back up configuration:
cp -rp $SPLUNK_HOME/etc /tmp/splunk_etc_backup_$(date +%F)

3. Download and extract the new version. Download the latest UF .tgz package for your Linux architecture from splunk.com, then extract it:
tar xzf splunkforwarder*.tgz -C /opt
Note: Ensure you extract it to the same parent directory as your existing $SPLUNK_HOME. For example, if your current UF is in /opt/splunkforwarder, extract the new version also into /opt.

4. Start the forwarder. This requires accepting the license on the first start after an upgrade:
$SPLUNK_HOME/bin/splunk start --accept-license --answer-yes

A few additional tips: test the upgrade on a non-critical forwarder or a small group first.
For upgrading many forwarders, consider using deployment tools like Ansible, Puppet, Chef, or SaltStack, or the Splunk Deployment Server. Always check the forwarder compatibility matrix to ensure the new UF version is compatible with your Splunk Enterprise indexers/deployment server.

Useful documentation links:
https://docs.splunk.com/Documentation/Forwarder/9.4.1/Forwarder/Upgradetheuniversalforwarder
https://lantern.splunk.com/Splunk_Platform/Product_Tips/Upgrades_and_Migration/Upgrading_the_Splunk_platform

Did this answer help you? If so, please consider: adding kudos to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
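If you do script the hops above across many hosts, the four steps can be sketched as a small shell function. This is a minimal sketch under assumptions: the default SPLUNK_HOME of /opt/splunkforwarder and the package path are placeholders, and you would run one hop per version step (7.3 -> 8.2, then 8.2 -> 9.4):

```shell
#!/bin/sh
# Sketch of one UF upgrade hop using the .tgz package on Linux.
# The default SPLUNK_HOME and the package filename are assumptions; adjust for your hosts.
upgrade_uf() {
  SPLUNK_HOME=${1:-/opt/splunkforwarder}
  PKG=${2:?path to the splunkforwarder .tgz package}
  "$SPLUNK_HOME/bin/splunk" stop                                  # 1. stop the forwarder
  cp -rp "$SPLUNK_HOME/etc" "/tmp/splunk_etc_backup_$(date +%F)"  # 2. back up configuration
  tar xzf "$PKG" -C "$(dirname "$SPLUNK_HOME")"                   # 3. extract over the existing install
  "$SPLUNK_HOME/bin/splunk" start --accept-license --answer-yes   # 4. restart, accepting the license
}
```

Wrapping the steps in a function keeps the destructive commands from running on source; a config-management tool would invoke the equivalent of upgrade_uf once per hop on each host.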
Hi there! We finally found the solution! On the SLM: Settings > General Setup > Edit the node > Disable monitoring, then re-enable it. Thanks to all.
Hi @Zoe_ ,

Yes, switching from UDP to TCP might resolve incomplete log events sent via syslog. UDP is a connectionless protocol and does not guarantee delivery or prevent truncation, especially for logs larger than the network's MTU (typically around 1500 bytes). TCP establishes a connection, ensures ordered and reliable delivery, and handles segmentation and reassembly of large messages, preventing truncation.

# Update your inputs.conf to receive TCP instead of UDP.
[tcp://514]
connection_host = ip  # Or dns, depending on your needs
sourcetype = syslog   # Set an appropriate sourcetype
# Add other settings like index as needed
disabled = 0

# Ensure your old UDP input is disabled
#[udp://514]
#disabled = 1

Remember to restart Splunk or reload the configuration after changes. Ensure any firewalls between the sending devices and the Splunk receiver allow traffic on the chosen TCP port (e.g., 514).
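To sanity-check the new TCP input end to end, you can send a deliberately oversized line from the source host and confirm it arrives as a single event. This is a minimal sketch under assumptions: the receiver hostname is a placeholder, and it assumes nc (netcat) is available on the sending host:

```shell
#!/bin/sh
# Send a ~4000-character test line over TCP to the Splunk receiver.
# Over UDP this would typically be truncated near the datagram/MTU limit;
# over TCP it should arrive as one complete event.
send_big_syslog() {
  HOST=${1:?receiver host, e.g. splunk-receiver.example.com}  # placeholder: your receiver
  PORT=${2:-514}
  MSG=$(printf 'TESTMSG-%.0s' $(seq 1 500))   # 500 x 8 chars = 4000-character payload
  printf '%s\n' "$MSG" | nc -w 2 "$HOST" "$PORT"
}
```

After sending, search the index on the receiver for TESTMSG- and check that the event contains the full payload rather than a truncated prefix.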
I need to upgrade the universal forwarder agents on multiple instances from the current 7.3.0 to the latest version. Can we upgrade directly, or do we need to go step by step? Please let me know the process and the best practice for upgrading.
Do you mean something like this?

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| timechart span=1m count by source
This is a very confusing search. Is this some sort of homework? The reason I ask is that it is always better to describe what you are trying to achieve, what the data looks like, what the desired result is based on sample input, and what the logic is between input and desired output, as opposed to making volunteers read your mind based on some SPL snippets.
With UDP you are limited to a single datagram's length. I think I saw one or two solutions which would split the event at frame boundary and send the remaining part in another frame but there's no reliable way to reassemble those later. Yes, switching to TCP should solve this issue but might cause performance problems in some cases.
Hi,

I am working on the below query to get the count of requests processed by each API service per minute:

index=np source IN ("/aws/lambda/api-data-test-*") "responseTime"
| eval source = if(match(source, "/aws/lambda/api-data-test-(.*)"), replace(source, "/aws/lambda/api-data-test-(.*)", "data/\\1"), source)
| bucket _time span=1m
| stats count by source, _time

I get the result below for one source ("name"), a second source ("address"), and a third source ("city"). How can I represent the different API sources per minute in a good, understandable format, either as a graph or another pictorial representation?

source        _time                count
data/name     2025-03-02 08:13:00  2
data/name     2025-03-02 08:14:00  57
data/name     2025-03-02 08:15:00  347
data/name     2025-03-02 08:16:00  62
data/name     2025-03-02 08:17:00  48
data/address  2025-03-02 08:18:00  21
data/city     2025-03-02 08:19:00  66
data/city     2025-03-02 08:20:00  55
data/address  2025-03-02 08:21:00  7

name event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"name","client":"Ha2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}

address event:
{"name":"log","awsRequestId":"aws","hostname":"1","pid":8,"level":30,"requestType":"GET","entity":"address","client":"Harggg2@gmail.com","domain":"name.io","queryParams":{"identifier":"977265"},"responseTime":320,"msg":"responseTime","time":"2025-03-02T03:23:40.504Z","v":0}
Hello. Has anyone encountered incomplete log transmission using UDP 514? Would changing to TCP be useful? I would appreciate your support. Greetings
OS Version: Server 2019

I'm trying to install the Splunk UF in my test lab. Using the GUI install, I put in all the necessary options for my indexing server, and the install starts rolling back during what appears to be the last step. The server once had a successful install of 9.4.0. Since the uninstall, I can no longer get any version of the UF to install.

I've tried:
- re-downloading the UF and using the "newer" download to install
- deleting the Splunk folder from c:\pro files
- restarting the VM after the failed install and starting over
- installing as "local system account" and "virtual account" -- both failed

I'm at my wits' end now.
Hi Community, can someone please help me by using stats instead of join for this search?

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| join type=inner User
    [| rest /servicesNS/-/-/directory
    | fields author
    | dedup author
    | sort author
    | rename author AS User ]
All,  Our SentinelOne EDR started detecting Alert Manager Enterprise's vsw.exe as Malware https://www.virustotal.com/gui/file/1cb09276e415c198137a87ba17fd05d0425d0c6f1f8c5afef81bac4fede84f6a/detection. Anyone else run into this before I start digging into this? Is there a proper course of action Splunkbase would like if this ends up being positive?  thanks -Daniel
Hi Team,

We have 2 search head clusters, and a few reports scheduled with an email action. Reports running on one search head are working fine and delivering emails as configured, but the other search head runs the report and the email is not delivered. I see the following ERROR logs in Inspect Job:

04-01-2025 01:00:10.298 ERROR HttpClientRequest [1028078 StatusEnforcerThread] - HTTP client error=Read Timeout while accessing server=https://127.0.0.1:8089 for request=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify.
04-01-2025 01:00:10.299 ERROR SearchNotification [1028078 StatusEnforcerThread] - OnResult notification failed uri=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify postData= method=POST err=Read Timeout status=502

Any idea how to fix this? The port on the SH is listening and accepting connections (tested with telnet).

Thanks in advance for any help you may have.
Hey @PickleRick

I like that approach - I hadn't thought of using a subsearch in the timechart to achieve this, so I'm bookmarking it for future reference.

Not to be pedantic, but I'd probably go for range<90000,"1h" instead of range<86400, because if you select "Last 24 hours" you get slightly more than 24 hours (something like 31-03-2025 22:00:00 to 01-04-2025 22:09:12).

@tkwaller1 if you want to get a single value out of it, you could do something like the below - I've added an appendcols to put the span info into a field so people know what it's an average of.

index=_internal
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count partial=f
| stats avg(count) as avgCount
| appendcols
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span="per ".case(range<60,"1s",range<3600,"1m",range<90000,"1h",1=1,"1d")
    | table span ]
Why, for 36 hours, do you want an average from 36 hour-long counts, but for 192 hours you want day-long counts? Is there a threshold? The time picker has its purpose, and it's not providing a span.

Anyway, what seems to be working in my lab is indeed generating the span with a subsearch. The trick is that you have to build the whole "span=something" string. Like this:

index=winevents
| timechart
    [| makeresults
    | addinfo
    | eval range=info_max_time-info_min_time
    | eval span=case(range<60,"1s",range<3600,"1m",range<86400,"1h",1=1,"1d")
    | eval search="span=\"".span."\""
    | table search ] count
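For clarity, here is the same threshold logic as the case() expression above, written as a standalone shell function you can poke at outside Splunk. This is purely an illustration of the mapping from the picked time range (in seconds) to a span, not something Splunk itself runs:

```shell
#!/bin/sh
# Mirror of the SPL case(): map a time range in seconds to a timechart span.
pick_span() {
  range=$1
  if   [ "$range" -lt 60 ];    then echo 1s   # under a minute: second buckets
  elif [ "$range" -lt 3600 ];  then echo 1m   # under an hour: minute buckets
  elif [ "$range" -lt 86400 ]; then echo 1h   # under a day: hour buckets
  else                              echo 1d   # a day or more: day buckets
  fi
}
pick_span 129600   # 36 hours -> prints "1d"
```

So a 36-hour range falls into the "1d" bucket with these thresholds, which is why the averaging question above matters: if you want hour-long buckets for 36 hours, you would raise the 86400 threshold accordingly.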
Thanks @livehybrid  this is promising. Do you happen to know a search command that would give information on the artifacts associated with an incident?
Awesome @livehybrid , Thanks
We are trying to run the Splunk forwarder locally to fix a few vulnerabilities, and it stops with the following error message. Can you please help with a fix for this?

Dockerfile:
FROM splunk/universalforwarder:9:3

Commands run:
docker build -t suf .
docker run -d -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=changeme" --name uf suf

2025-04-01 06:40:50 TASK [splunk_universal_forwarder : include_tasks] ******************************
2025-04-01 06:40:50 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost
2025-04-01 06:40:50 Tuesday 01 April 2025  13:40:50 +0000 (0:00:00.045)       0:00:19.675 *********
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left).
2025-04-01 06:41:23 TASK [splunk_universal_forwarder : Check for required restarts] ****************
2025-04-01 06:41:23 fatal: [localhost]: FAILED! => {
2025-04-01 06:41:23     "attempts": 5,
2025-04-01 06:41:23     "changed": false,
2025-04-01 06:41:23     "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'.
'dict object' has no attribute 'status'"
2025-04-01 06:41:23 }
2025-04-01 06:41:23 MSG:
2025-04-01 06:41:23 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix
2025-04-01 06:41:23 PLAY RECAP *********************************************************************
2025-04-01 06:41:23 localhost                  : ok=68   changed=2    unreachable=0    failed=1    skipped=81   rescued=0    ignored=0
2025-04-01 06:41:23 Tuesday 01 April 2025  13:41:23 +0000 (0:00:33.184)       0:00:52.859 *********
Hi @TGibbons87 ,

While the Splunk Add-on for NetApp Data ONTAP documentation explicitly lists support for ONTAP 9.6 and 9.7, it's possible it may function with later versions like 9.15 or 9.16, especially if the underlying APIs used by the add-on haven't significantly changed in those newer ONTAP releases. It could be that the developers haven't been able to test it on the newer versions for some reason.

The best way to confirm compatibility with ONTAP versions above 9.7 is to test the add-on (version 3.1.0) in a non-production environment connected to your newer ONTAP system. As this is an app built within (but not supported by) Splunk, you might be able to log a support case to ask if this could be updated, or ask for more information on compatibility; support may then be able to speak internally with the developer(s) of the app.
Excellent information and writeup, thank you for the verbose explanation!