All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, We have data from Change Auditor coming in via a HEC setup on a Heavy Forwarder. This HF instance was upgraded to version 9.2.2. Since then, I am seeing a difference in the way Splunk displays new events on the SH: it is now converting UTC -> PST.

I ran a search for the previous week, and for those events it converts the timestamp correctly, from UTC -> Eastern. I am a little confused, since both searches are run from the same search head against the same set of indexers. If there were a TZ issue, wouldn't Splunk have converted both incorrectly? I also ran the same searches on an indexer with identical output: recent events show in PST, whereas older events continue to show in EST. Here are some examples (sample events for the previous week and for recent events omitted). For recent events, Splunk shows a UTC -> PST conversion instead. I did test this manually via Add Data, and Splunk correctly formats it to Eastern.

How can I troubleshoot why recent events in search are showing a PST conversion? My current TZ setting on the SH is still set to Eastern Time. I also confirmed that the system time for the HF, indexers, and search heads is set to Eastern.

Thanks
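If the raw timestamps arrive without an explicit offset, one thing worth checking is whether a per-sourcetype TZ override on the Heavy Forwarder changed (or was lost) during the upgrade. A minimal sketch, assuming a hypothetical sourcetype name of `changeauditor`:

```
# props.conf on the Heavy Forwarder
# (the sourcetype name here is an assumption -- substitute your own)
[changeauditor]
TZ = UTC
```

Note that TZ is applied at index time, which would explain why events indexed before the upgrade keep their old conversion while newly indexed events differ.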
I have a log with a sample line like the following:

POST Uploaded File Size for project id : 123 and metadata id : xxxxxxxxxxxx is : 1234 and time taken to upload is: 51ms

So in this example, the project id is 123, the size is 1234, and the upload time is 51ms. I want to extract the project id, the size, and the upload time as fields. Also, regarding the upload time, I guess I just need the number, right?
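A `rex` extraction along these lines should pull out all three values from the sample line above; the field names are my choice, not anything mandated:

```
... | rex "project id : (?<project_id>\d+) and metadata id : \S+ is : (?<size>\d+) and time taken to upload is: (?<upload_time_ms>\d+)ms"
```

Since the regex captures only the digits before `ms`, `upload_time_ms` comes out as just the number.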
Hi Splunk community, I have a quick question about an app, such as the Microsoft Cloud Services app, in a multiple Heavy Forwarder environment. The app is installed on one Heavy Forwarder and makes API calls to Azure to retrieve data from an event hub and store it in an indexer cluster. If the Heavy Forwarder where the add-on is installed goes down, no logs are retrieved from the event hub.

So, what are the best practices for making this kind of app, which retrieves logs through API calls, more resilient? The same applies to some Cisco add-ons that collect logs from Cisco devices via an API.

For now, I will configure the app on another Heavy Forwarder without enabling data collection, but in case of failure, human intervention will be needed. I would be curious to know what solutions you implement for this kind of issue. Thanks, Nicolas
Hi guys, I hope someone can help me out or give me a pointer here. When I run my searches, I always get events in the future. I usually fix the time picker so it stops, but afterwards I have to place the events in order, and it's just an extra step for every search I make. Is there a way I can use some SPL to make sure that I only get events up to the current time instead of the future?
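One way to drop future-dated events in SPL itself, rather than via the time picker, is to filter on `_time` directly; a minimal sketch (the index name is a placeholder):

```
index=your_index
| where _time <= now()
| sort - _time
```

The `where` clause discards anything timestamped later than the moment the search runs, so no future events reach the results.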
Hi Team, the XML for my dashboard consists of multiple search queries within a panel. What can I add to make the dashboard automatically refresh along with the panels? I have followed the documentation (http://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML), included a refresh interval in the form attribute, and set the refresh type and refresh interval for individual panels using the <search> element:

<form refresh="30">
  <row>
    <panel>
      <table>
        <search>
          <query> ... </query>
          <earliest>-60m@m</earliest>
          <latest>now</latest>
          <refresh>60</refresh>
          <refreshType>delay</refreshType>
        </search>
      </table>
    </panel>
  </row>
</form>

Here, I am using a div for each table query and appending these child tables to a list under the parent table in a dropdown manner using JavaScript. With this implementation, the refresh does not happen at the specified interval, and the dropdown table collapses at every refresh interval, so we need to reload the entire dashboard to see the dropdown content in the child table.
Hello Splunkers!! I am getting a "Bad allocation" error on all of the Splunk dashboard panels. Please help me identify the potential root cause.
Hi there. This morning I did an SHC restart and found something very strange on the SHC members:

WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#1:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#2:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#3:8089 Authentication Failed
WARN DistributedPeer [1964778 DistributedPeerMonitorThread] - Peer:https://OLDIDX#4:8089 Authentication Failed
GetRemoteAuthToken [1964778 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#1:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetBundleListTransaction [1964778 DistributedPeerMonitorThread] - Unable to get bundle list from peer: https://OLDIDX#2:8089 due to: Connect Timeout; exceeded 60000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#3:8089 due to: Connect Timeout; exceeded 5000 milliseconds
GetRemoteAuthToken [2212932 DistributedPeerMonitorThread] - Unable to get auth token from peer: https://OLDIDX#4:8089 due to: Connect Timeout; exceeded 5000 milliseconds

All the OLDIDX hosts are old servers, turned off and shut down! None of the SHC members has OLDIDX#* in its distributed search configuration. I recently upgraded a v7 infrastructure to v8. I also searched all .conf files for the IPs of the OLDIDX#* hosts; none of them were found.

Where are those "artifacts" stored? Is there something in the "raft" state of the new SHC? Do I need to remove all the SHC configuration and redo it from the beginning?

These messages appear in splunkd.log ONLY DURING the restart of the SHC. During the day, while using the SHC, I never had, and still do not have, any similar message. Thanks.
Dear Splunkers, I would like to ask for support in giving specific users the capability to edit permissions for alerts and dashboards. We have several different users created, and this particular user has inherited the Power role. Despite that, these users are not allowed to modify permissions, even for their own dashboards or alerts.

Can you please suggest something? Thank you, Stives
I have a search X that shows requests; search Y shows responses.

Value A = number of results of X
Value B = number of results of Y

I want to calculate a new value C that is A - B (this would show the number of requests where the response is missing). How can I calculate C?
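One common pattern is to run the two counts side by side with `appendcols` and subtract; a sketch in which the index and sourcetypes are placeholders standing in for searches X and Y:

```
index=main sourcetype=requests
| stats count as A
| appendcols
    [ search index=main sourcetype=responses
      | stats count as B ]
| eval C = A - B
```

Since each subsearch reduces to a single row, `appendcols` lines A and B up in one result row and the `eval` computes C.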
We are looking to deploy Edge Processors (EP) in a high-availability configuration, with 2 EP systems per site and multiple sites. We need to use Edge Processors (or Heavy Forwarders, I guess?) to ingest and filter/transform the event logs before they leave our environment and go to our MSSP Splunk Cloud.

Ideally, I want the Universal Forwarders (UF) to use the local site EPs. However, if those are unavailable, I would like the UFs to fail over to the EPs at another site. I do not want the UFs to use the EPs at another site by default, as this would increase WAN costs, so I can't simply list all the servers in the defaultGroup. For example:

[tcpout]
defaultGroup = site_one_ingest

[tcpout:site_one_ingest]
disabled = false
server = 10.1.0.1:9997,10.1.0.2:9997

[tcpout:site_two_ingest]
disabled = true
server = 10.2.0.1:9997,10.2.0.2:9997

Is there any way to configure the UFs to prefer the local Edge Processors (site_one_ingest) but fail over to the second site (site_two_ingest) if those systems are not available? Is it also possible for the configuration to support automated failback/recovery?
Still a total newb here, so please be gentle. On Microsoft Windows 2019 servers we have an indexer cluster, and here is how the hot and cold volumes are defined in C:\Program Files\Splunk\etc\system\local\indexes.conf:

[default]

[volume:cold11]
path = E:\Splunk-Cold
maxVolumeDataSizeMB = 12000000

[volume:hot11]
path = D:\Splunk-Hot-Warm
maxVolumeDataSizeMB = 1000000

That I can live with, but here is how our search heads point at the volumes in C:\Program Files\Splunk\etc\apps\_1-LDC_COMMON\local\indexes.conf, and this doesn't look right to me:

[volume:cold11]
path = $SPLUNK_DB

[volume:hot11]
path = $SPLUNK_DB

Should the stanzas on the search heads match the ones on our indexers?
Does Splunk for Cisco Identity Services (ISE) support data containing IPv6 addresses?  
Hi Splunk team. I wonder which version of Cisco Cyber Vision is supported by the API release v2.0 for Splunk Enterprise.
Please forgive me, I am new to Splunk. I'm trying to create a dashboard that visualizes successful/failed logins. I don't have anyone I work with who is a professional or even knowledgeable/experienced enough to help, so I started to use ChatGPT to help develop these searches. After I got the base setup from ChatGPT, I tried to fill in the sourcetypes. But now I'm getting this error:

Error in 'EvalCommand': The expression is malformed.

Please let me know what I need to do to fix this. Ask away please; it'll only help me get better.

index=ActiveDirectory OR index=WindowsLogs OR index=WinEventLog
(
  (sourcetype=WinEventLog (EventCode=4624 OR EventCode=4625)) # Windows logon events
  OR (sourcetype=ActiveDirectory "Logon" OR "Failed logon") # Active Directory logon events (adjust keywords if needed)
)
| eval LogonType=case(
    EventCode=4624, "Successful Windows Login",
    EventCode=4625, "Failed Windows Login",
    searchmatch("Logon"), "Successful AD Login",
    searchmatch("Failed logon"), "Failed AD Login"
)
| eval user=coalesce(Account_Name, user) # Combine Account_Name and user fields
| eval src_ip=coalesce(src_ip, host) # Unify source IP or host
| stats count by LogonType, user, src_ip
| sort - count
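For what it's worth, one likely culprit in a query like this is the `#` characters: SPL has no `#` inline-comment syntax (comments are wrapped in triple backticks), so the parser treats `#` and the text after it as part of the expression. A sketch of the same eval pipeline with the comments simply removed:

```
| eval LogonType=case(
    EventCode=4624, "Successful Windows Login",
    EventCode=4625, "Failed Windows Login",
    searchmatch("Logon"), "Successful AD Login",
    searchmatch("Failed logon"), "Failed AD Login")
| eval user=coalesce(Account_Name, user)
| eval src_ip=coalesce(src_ip, host)
```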
I am investigating an issue of missing data in Splunk for a period of 3-4 hours, where gaps were observed in the _internal index as well as in performance metrics like network and CPU data, but I still can't find the potential root cause of the missing data. Please help me figure out what I need to investigate further to find the root cause of the data gap in Splunk.

- Gap in the _internal index data
- Gap visible in the network performance data
- Gap in the CPU performance data
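To narrow down when ingestion actually stopped and for which hosts, a `tstats` search over `_internal`, bucketed by time and split by host, can show exactly which forwarders went quiet during the window; a minimal sketch:

```
| tstats count where index=_internal by host, _time span=10m
```

Hosts that disappear from the results for the 3-4 hour window stopped sending even their own internal logs, which points at the forwarder or the network rather than the data sources.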
I want to make a sound alert in my dashboard studio dashboard. Is it even possible?
Our Splunk add-on app was created with Python modules (like cffi, cryptography, and PyJWT), where these modules are placed under the app's /bin/lib folder. This add-on is working as expected. When we try to upgrade Splunk Enterprise from 8.2.3 to 9.3, our add-on fails to load the Python modules and throws the error 'No module named '_cffi_backend''.

Note: we are running on Python 3.7 and updated the Splunk Python SDK to the latest, 2.0.2.
How do you get a saved search to ignore a specific automatic lookup? The reason for wanting to do this is that the lookup being used is very large and the enrichment is not needed for this particular search.

Using something like

| fields - FieldA FieldB

did not speed up the search (where FieldA and FieldB are the fields matched on in the automatic lookup). When the automatic lookup's permissions are changed to just one app, the saved search runs very fast, but I do not believe keeping it like that is an option. Ideally there would be a setting just for this one saved search so that it would not know the automatic lookup exists. Thanks in advance for any suggestions.