All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


OS Version: Server 2019

I'm trying to install Splunk UF in my test lab. Using the GUI install, I put in all the options needed for my indexing server, and the install starts rolling back during what appears to be the last step of the install. The server once had a successful install of 9.4.0. Since the uninstall, I can no longer get any version of the UF to install. I've tried:
- re-downloading the UF and using the "newer" download to install
- deleting the Splunk folder from c:\pro files
- restarting the VM after the failed install and starting over
- installing as "local system account" and "virtual account" -- both failed
I'm at my wit's end now.
Hi Community, can someone please help me rewrite this search to use stats instead of join?

| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| join type=inner User
    [| rest /servicesNS/-/-/directory | fields author | dedup author | sort author | rename author AS User ]
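A pattern often suggested for replacing an inner join is to append the second result set and group with stats. A rough, untested sketch along those lines, assuming the same endpoints and field names as above and that both result sets fit within subsearch/append limits:

```
| rest /services/authentication/users splunk_server=local
| search type=SAML
| fields title
| rename title AS User
| search [| inputlookup 12k_line.csv | fields User ]
| eval src="saml"
| append
    [| rest /servicesNS/-/-/directory
     | fields author
     | dedup author
     | rename author AS User
     | eval src="directory" ]
| stats dc(src) AS sources BY User
| where sources=2
| fields User
```

Each User that ends up with sources=2 was returned by both REST calls, which is the same set an inner join on User would produce.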
All,

Our SentinelOne EDR started detecting Alert Manager Enterprise's vsw.exe as malware: https://www.virustotal.com/gui/file/1cb09276e415c198137a87ba17fd05d0425d0c6f1f8c5afef81bac4fede84f6a/detection. Has anyone else run into this before I start digging into it? Is there a proper course of action Splunkbase would like if this ends up being a true positive?

Thanks,
Daniel
Hi Team,

We have 2 search head clusters, with a few reports scheduled with an email action. Reports running on one search head work fine and deliver emails as configured, but on the other search head the report runs and the email is not delivered. I see the following ERROR logs in Inspect Job:

04-01-2025 01:00:10.298 ERROR HttpClientRequest [1028078 StatusEnforcerThread] - HTTP client error=Read Timeout while accessing server=https://127.0.0.1:8089 for request=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify.
04-01-2025 01:00:10.299 ERROR SearchNotification [1028078 StatusEnforcerThread] - OnResult notification failed uri=https://127.0.0.1:8089/servicesNS/nobody/botcop/saved/searches/SOMEREPORT/notify postData= method=POST err=Read Timeout status=502

Any idea how to fix this? The port on the SH is listening and accepting connections -- tested with telnet. Thanks in advance for any help you may have.
We are trying to run the Splunk forwarder locally to fix a few vulnerabilities, and it stops with the following error message. Can you please help with a fix for this?

Dockerfile:
FROM splunk/universalforwarder:9:3

Commands run:
docker build -t suf .
docker run -d -p 9997:9997 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=changeme" --name uf suf

2025-04-01 06:40:50 TASK [splunk_universal_forwarder : include_tasks] ******************************
2025-04-01 06:40:50 included: /opt/ansible/roles/splunk_universal_forwarder/tasks/../../../roles/splunk_common/tasks/check_for_required_restarts.yml for localhost
2025-04-01 06:40:50 Tuesday 01 April 2025  13:40:50 +0000 (0:00:00.045)       0:00:19.675 *********
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (5 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (4 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (3 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (2 retries left).
2025-04-01 06:41:23 FAILED - RETRYING: [localhost]: Check for required restarts (1 retries left).
2025-04-01 06:41:23 TASK [splunk_universal_forwarder : Check for required restarts] ****************
2025-04-01 06:41:23 fatal: [localhost]: FAILED! => {
    "attempts": 5,
    "changed": false,
    "changed_when_result": "The conditional check 'restart_required.status == 200' failed. The error was: error while evaluating conditional (restart_required.status == 200): 'dict object' has no attribute 'status'.
'dict object' has no attribute 'status'"
}
2025-04-01 06:41:23 MSG:
2025-04-01 06:41:23 GET/services/messages/restart_required?output_mode=jsonadmin********8089NoneNoneNone[200, 404];;; failed with NO RESPONSE and EXCEP_STR as Not supported URL scheme http+unix
2025-04-01 06:41:23 PLAY RECAP *********************************************************************
2025-04-01 06:41:23 localhost                  : ok=68   changed=2    unreachable=0    failed=1    skipped=81   rescued=0    ignored=0
2025-04-01 06:41:23 Tuesday 01 April 2025  13:41:23 +0000 (0:00:33.184)       0:00:52.859 *********
How can I get the following visualization? I've tried the following search:

(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| fields field1 field2
| stats count(field1) as Field1, count(field2) as Field2

and I'm getting the following graph.
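The intended chart isn't shown here, but if the goal is one bar (or slice) per field, one option is to transpose the single stats row into one row per field. A sketch, assuming the same index and field names as above:

```
(index="my_index" sourcetype="sourcetype1") OR (index="my_index" sourcetype="sourcetype2")
| stats count(field1) AS Field1, count(field2) AS Field2
| transpose column_name=Field
| rename "row 1" AS Count
```

transpose turns the one-row result into two rows (Field, Count), which most chart types can then render as one series with two categories.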
Does the Splunk Add-on for NetApp Data ONTAP work with ONTAP above 9.7? https://splunkbase.splunk.com/app/3418 says "The add-on supports the data collection from NetApp® Data ONTAP® Cluster Mode version 9.6 and 9.7", with Latest Version 3.1.0 released September 13, 2024. Since ONTAP 9.16 is available now, and in September 2024 ONTAP 9.15 (and possibly 9.16) was fully GA, I'm wondering why only 9.6 and 9.7 are listed, especially since 9.6 and 9.7 may be out of support from NetApp. The Release Notes for Latest Version 3.1.0 don't even mention ONTAP releases.
I am trying to configure the Proofpoint - ET Splunk TA on Splunk Cloud, and during the setup, it asks for an API key and an authorization code. While I have the API key, I noticed that the authorization code appears as "None", so I provided the Oink code instead. However, when I try to save the configuration, it does not get applied. Is there a specific way to configure this on Splunk Cloud? Any guidance on setting up ET Intelligence correctly would be greatly appreciated. Thank you     
Hello,

We're implementing a distributed clustered infrastructure with CM redundancy; between the indexers/search heads and the CMs there is a load balancer. The LB can send requests to the ha_active_status endpoint, but it needs a static response to "understand" which CM is active. The team working on the LB says the ha_active_status response is not static because it contains a timestamp, so the status check doesn't work. Any workarounds?

Thanks in advance,
Luca
Hi, I'm fairly new to AIX and I have been tasked with upgrading our customer's version of Splunk from 9.0.1 to 9.4.1. The steps below are what I did; they seemed to work, and the system now shows version 9.4.1:

Implementation Plan:
Create a copy/backup of the splunkforwarder folder:
cp -r /opt/splunk/splunkforwarder /opt/splunk/splunkforwarder_backup_$(date +%F)
mkdir /tmp/splunk_temp
tar -xvf /nim/media/SOFTWARE/splunk/Splunk-9.4.1/splunkforwarder-9.4.1-2f7817798b5d-aix-powerpc.tar -C /tmp/splunk_temp
/opt/splunk/splunkforwarder/bin/splunk stop
/opt/splunk/splunkforwarder/bin/splunk status
rsync -av /tmp/splunk_temp/splunkforwarder/ /opt/splunk/splunkforwarder/
rm -rf /tmp/splunk_temp
chown -R 510:510 /opt/splunk/splunkforwarder
chown -R root:system /opt/splunk/splunkforwarder/var
/opt/splunk/splunkforwarder/bin/splunk status      <<<< This command will kick in the migration and upgrade to 9.4.1
/opt/splunk/splunkforwarder/bin/splunk start
/opt/splunk/splunkforwarder/bin/splunk status      <<<< Shows Splunk has been upgraded to 9.4.1

I've also read the "Install the universal forwarder on AIX" instructions, but I just wanted to check: is the way I've upgraded Splunk actually going to work, even though it says it has been upgraded?

Thanks
Hello, I would like to clone the Cloud Monitoring Console dashboard (Forwarders tab) for only a specific set of HFs/UFs (and not all). I would like to make this dashboard available in one of the apps used by the service desk for monitoring. How can I achieve this? Thank you.
  Is it normal for this script to run all the time and take up a lot of memory? Is there any way to reduce memory usage?
Hi everyone,

In Splunk Cloud, I am trying to monitor Azure Load Balancer metrics using the Splunk Add-on for Microsoft Cloud Services. The Microsoft.Network/loadBalancers namespace has been included in the Azure Metrics input configuration, and from the _internal logs, it seems that the data is being retrieved correctly. For example, in the logs from mscs_azure_metrics_collector.py, I see the following entry, which suggests that the metrics are being collected:

2025-04-01 07:34:06,096 +0000 log_level=INFO, pid=3442478, tid=ThreadPoolExecutor-0_4, file=mscs_azure_metrics_collector.py, func_name=_index_resource_metrics, code_line_no=526 | Chunked metrics timespan: 2025-04-01T06:28:04Z/2025-04-01T06:33:06Z for resource /subscriptions/<subscription-id>/resourceGroups/<resource-group-id>/providers/Microsoft.Network/loadBalancers/<load-balancer-name>

However, when I try to find these metrics in Splunk, they are missing. I ran the following query to check the indexes:

| mcatalog values(resource_id) WHERE index=* by resource_id, index, metric_name, namespace

But there is no trace of the Load Balancer. The only namespaces I see are:
microsoft.compute/virtualmachines
microsoft.network/virtualnetworkgateways
microsoft.storage/storageaccounts

Even though the following namespaces have been configured in the input settings:
Microsoft.Compute/virtualMachines
Microsoft.Storage/storageAccounts
Microsoft.Storage/storageAccounts/blobServices
Microsoft.Storage/storageAccounts/fileServices
Microsoft.Storage/storageAccounts/queueServices
Microsoft.Storage/storageAccounts/tableServices
Microsoft.Network/loadBalancers
Microsoft.Network/applicationGateways
Microsoft.Network/virtualNetworkGateways
Microsoft.Network/azureFirewalls

Does anyone know where these metrics might be going? Is there another way to verify if Splunk is actually indexing Load Balancer metrics? Thanks in advance for any help!
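One quick sanity check is to ask mcatalog specifically for the load-balancer namespace rather than listing everything. A sketch (the namespace casing follows the lowercased form seen in the other results and may need adjusting):

```
| mcatalog values(metric_name) AS metrics WHERE index=* AND namespace="microsoft.network/loadbalancers" BY index
```

If this returns nothing across all metrics indexes, the data is likely not reaching any index at all, which points at the ingestion side rather than at the search.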
All,

We are investigating a move from v7 to v8. We currently rely heavily on the Investigation API; however, per the documentation it is no longer available in v8. The v8 API also seems to be missing a GET call for notable_events. Is there another way in the API that we can pull details on the Enterprise Security events, investigations, and assets for v8, or do we need to hold off on upgrading while the product matures?
I've scoured the internet trying to find a similar issue, to no avail.

| rex field=userRiskData.general "do\:(?<deviceOs>.+?)\|di\:(?<deviceId>.+?)\|db\:"
| eval validUser=if(isnotnull(userRiskData.uuid),"Valid","Invalid")
| eval op = case(deviceOs>"iOS 1" OR deviceOs<"iOS 999","iOS", deviceOs>"Android 0" OR deviceOs< "Android 999", "Android", 1=1, Other)
| eval FullResult=validUser. "-" .outcome. "-" .op

I am extracting a device OS from a general field; I don't have permissions to extract it as a permanent field. When trying to use the eval to truncate the different iOS and Android versions to just "iOS" and "Android", the case is only showing the first OS type in the query. If I change the order to Android, it'll show Android and no iOS; if I keep it as is, it only shows iOS. Is this due to the rex command, or am I messing up syntax somewhere?
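For what it's worth, those case() branches compare version strings lexicographically, and because each branch uses OR, the first branch matches almost any value (nearly every string satisfies deviceOs>"iOS 1" or deviceOs<"iOS 999"), so the first branch always wins. The unquoted Other fallback and the dotted field name in isnotnull() are also syntax hazards. One possible rewrite using prefix matching instead, assuming the extracted values look like "iOS ..." / "Android ...":

```
| rex field=userRiskData.general "do\:(?<deviceOs>.+?)\|di\:(?<deviceId>.+?)\|db\:"
| eval validUser=if(isnotnull('userRiskData.uuid'),"Valid","Invalid")
| eval op=case(like(deviceOs,"iOS%"),"iOS", like(deviceOs,"Android%"),"Android", 1=1,"Other")
| eval FullResult=validUser."-".outcome."-".op
```

Quoting 'userRiskData.uuid' in eval matters because an unquoted dotted name is parsed as a concatenation rather than a single field.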
Maybe a dumb question, but it's been making me mad; maybe I'm overthinking it. I have a very simple search:

index=poc channel="/event/Submission_Event"
| bucket _time span=$global_time.earliest$
| stats count by _time
| stats avg(count) as AverageCount

I just want the avg(count) over the time range that is selected. So if they picked 7 days it would give the 7-day average; if they picked 24h it would give the average over the 24-hour span, so I can use it in a single value visualization.

I keep getting:
Error in 'bucket' command: The value for option span (-24h@h) is invalid. When span is expressed using a sub-second unit (ds, cs, ms, us), the span value needs to be < 1 second, and 1 second must be evenly divisible by the span value.
because the token value is something like -24h@h, which isn't a valid span. How can I work around this? Any ideas? Thanks so much for the help!
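One workaround that sidesteps passing the time token to span is to derive the rate directly from the search window using addinfo, which attaches the window boundaries (info_min_time/info_max_time) to each event. A sketch, if the goal is average events per hour over whatever range is selected (swap 3600 for another divisor to change the unit):

```
index=poc channel="/event/Submission_Event"
| addinfo
| stats count, min(info_min_time) AS t0, max(info_max_time) AS t1
| eval AverageCount=round(count/((t1-t0)/3600), 2)
```

This yields one row regardless of the picked range, which suits a single value visualization.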
Is there a way to enable edit_webhook_allow_list capability on Splunk Cloud trial? I'm unable to find this setting under Settings -> Server Settings.
Hello Experts,

Is there any document available which can give me more in-depth knowledge about the itsi_summary index?
Hello Experts,

I'm looking for a query that finds the list of URLs blocked today that were allowed yesterday under a different category.
Fields: url, url-category, action (values: allowed, blocked) and time (to compare between yesterday and today).

Thank you in advance.
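A rough sketch of one way to approach this, assuming a single index (index=proxy is a placeholder) and the field names listed above, with the hyphenated field quoted for eval:

```
index=proxy earliest=-1d@d latest=now
| eval day=if(_time>=relative_time(now(),"@d"), "today", "yesterday")
| stats values(eval(if(day="yesterday" AND action="allowed", 'url-category', null()))) AS yesterday_allowed_category
        values(eval(if(day="today" AND action="blocked", 'url-category', null()))) AS today_blocked_category
        BY url
| where isnotnull(yesterday_allowed_category) AND isnotnull(today_blocked_category)
```

If "under a different category" means the two categories must differ, a final comparison of the two category fields can be added on top of this.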
I've got a question about lookup tables and how to audit them. I have a rather large lookup table that's being recreated daily from a scheduled correlation search. I don't know if any other correlation searches or anything else are actually using that lookup table. I wanted to see if there is a way to audit its use so I can delete the table, and remove the correlation search, if needed.
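One place worth checking is the audit index, for any searches (ad hoc or scheduled) that mention the lookup's filename. A sketch, with the lookup filename as a placeholder:

```
index=_audit action=search search="*my_large_lookup.csv*"
| stats count, latest(_time) AS last_used BY user
```

A similar check against | rest /servicesNS/-/-/saved/searches, filtering the search field for the lookup name, can catch saved references that haven't run within the audit retention window.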