All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I'm trying to create a new account on Splunk On-Call with Global Admin permissions and share it with a group of higher-ups. I would like to know if there is any URL (like the Splunk one) to bypass SSO and log in with basic credentials. The reason is that multiple people will be using the same account, so we don't want SSO/MFA to be set up.
What are the different ways to calculate the size of a single index? I'm looking for solutions other than "license_usage.log". Appreciate your help. Thank you.
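A minimal sketch of one way to do this with the dbinspect command (the index name my_index is a placeholder):

| dbinspect index=my_index
| stats sum(sizeOnDiskMB) as size_MB
| eval size_GB = round(size_MB/1024, 2)

dbinspect reports per-bucket metadata, so the sum reflects size on disk rather than raw ingested volume.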
Please help me figure out how I can check whether a field value has been continuously increasing for 3 hours. I tried the query below, but it does not help. The perc_change values are extracted from logs, whereas prev_value and growing are calculated from the perc_change values.

| streamstats current=f window=1 latest(perc_change) as prev_value
| fillnull value=0
| eval growing = if(perc_change < prev_value, 1, 0)
| table _time GB change perc_change prev_value growing

I am getting these values:

perc_change  prev_value  growing
60           0           0
35           60          1
33           35          1
150          33          0

Expectation:

perc_change  prev_value  growing
60           35          1
35           33          1
33           150         0
150          0           0

I have to send a report if the perc_change values are continuously growing for 3 hours. Appreciate your help. Thank you.
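A minimal sketch of one possible approach, assuming the search runs over the last 3 hours and that events should be compared in chronological order (field names taken from the post):

| sort 0 _time
| streamstats current=f window=1 latest(perc_change) as prev_value
| fillnull value=0 prev_value
| eval growing = if(perc_change > prev_value, 1, 0)
| stats min(growing) as always_growing
| where always_growing=1

The idea is to sort oldest-first so prev_value really is the previous reading, flag each step as growing or not, and only return a row (which can drive the alert/report) when every step in the window was growing.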
Our Splunk implementation is a Splunk Enterprise deployment where the indexer is set up along with several universal forwarders, and on one of the servers where a universal forwarder is installed a JMS queue is set up. I need to read messages from this JMS queue; how can I achieve this in Splunk?
I am new to Splunk and am getting the error below; it seems we started getting it after a yum update. Any help or suggestion is really appreciated.

[root@XXXXXX ~]# systemctl status Splunkd.service
● Splunkd.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
   Loaded: loaded (/etc/systemd/system/Splunkd.service; enabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Sun 2023-09-17 13:46:55 MST; 3h 58min ago
  Process: 3466 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
  Process: 3464 ExecStartPost=/bin/bash -c chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/Splunkd.service (code=exited, status=0/SUCCESS)
  Process: 3463 ExecStart=/opt/splunk/bin/splunk _internal_launch_under_systemd (code=exited, status=1/FAILURE)
 Main PID: 3463 (code=exited, status=1/FAILURE)

Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Service RestartSec=100ms expired, scheduling restart.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Scheduled restart job, restart counter is at 5.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Stopped Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Start request repeated too quickly.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Splunkd.service: Failed with result 'exit-code'.
Sep 17 13:46:55 XXX.xx.xxx systemd[1]: Failed to start Systemd service file for Splunk, generated by 'splunk enable boot-start'.

[splunk@xxx.xx.xxx bin]$ ./splunk enable boot start
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
Did not find "disabled" setting of "kvstore" stanza in server bundle.
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
splunkd: symbol lookup error: splunkd: undefined symbol: SSL_load_error_strings
/opt/splunk/bin/splunkd: symbol lookup error: /opt/splunk/bin/splunkd: undefined symbol: SSL_load_error_strings
"/opt/splunk/bin/splunkd" returned 127
We had a case last week where a certain index was delayed by an hour, while the indexing delays are normally up to half a minute. I'm struggling with the parameters for the MLTK to capture these specific cases as outliers. Any ideas on how to set it up correctly? It's the tolerance that seems to be affected by the spike itself.
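A minimal sketch of one possible setup using the MLTK DensityFunction algorithm, assuming my_index is a placeholder for the affected index and the 5-minute span and threshold are values to tune:

index=my_index
| eval delay_sec = _indextime - _time
| bin _time span=5m
| stats avg(delay_sec) as delay by _time
| fit DensityFunction delay threshold=0.001 into indexing_delay_model

A scheduled search can then apply indexing_delay_model to new data and alert on buckets where the generated "IsOutlier(delay)" field is 1; a small threshold keeps routine half-minute delays from being flagged while the hour-long spike stays outside the fitted distribution.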
Hello! I need some help from Splunkers!!! I'm using the search index=notable | search status_label=Closed | top limit=5 rule_title in Splunk Enterprise Security to list the top 10 rule_title values. But I need to bring the "comment" field of each rule_title into the table. Can you please help me? Thanks!!!
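A minimal sketch of one way to carry the comment along, assuming the notable events expose a field called comment, by swapping top for stats:

index=notable status_label=Closed
| stats count values(comment) as comment by rule_title
| sort - count
| head 10

stats keeps the comment values per rule_title, while sort and head reproduce what top was doing with the count.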
There are some IP address values from `cim_Authentication_indexes`. This index is used for lookup. I want to check whether the IP addresses from `cim_Authentication_indexes` are in the second lookup index. I tried writing a query, but something is quite wrong with it.

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| table dest, dst, Ip, source_ip, src_ip, src
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| append [search index="tml_it-mandiant_ti" type=ipv4 | table value]
| stats count by IP_Addr, value
| where count >= 1

Please correct this and help me out. Thanks.
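A minimal sketch of one possible approach, keeping only authentication events whose IP also appears in the tml_it-mandiant_ti index (index and field names are taken from the post):

(`cim_Authentication_indexes`) tag=authentication NOT (action=success user=*$)
| eval IP_Addr = coalesce(dest, dst, Ip, source_ip, src_ip, src)
| search [ search index="tml_it-mandiant_ti" type=ipv4 | rename value as IP_Addr | fields IP_Addr ]
| stats count by IP_Addr

The subsearch expands into an IP_Addr="..." OR IP_Addr="..." filter, so only matching addresses survive; the stats by IP_Addr, value approach in the post is unlikely to match as intended because append keeps the two result sets in separate rows.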
Hello All, I need to identify the top log sources that are sending large volumes of data to Splunk. I tried the License Master dashboard, which isn't helping much. My requirement is to create a table that contains the following fields, e.g.: sourcetype, vol_GB, index, percentage.
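A minimal sketch of one possible approach based on the license usage log (run on the license manager; the rounding and sort order are arbitrary choices):

index=_internal source=*license_usage.log* type=Usage
| stats sum(b) as bytes by st, idx
| eval vol_GB = round(bytes/1024/1024/1024, 2)
| eventstats sum(vol_GB) as total_GB
| eval percentage = round(vol_GB/total_GB*100, 2)
| rename st as sourcetype, idx as index
| table sourcetype, vol_GB, index, percentage
| sort - vol_GB

In license_usage.log, b is the byte count, st the sourcetype, and idx the index, so this rolls ingested volume up into the requested columns.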
I've created an app action 'my_action_name' whose results I can collect in a playbook just fine: phantom.collect2(container=container, datapath=["my_action_name:action_result.data"], action_results=results). But I don't see the action_result.data datapath in the app documentation, nor can I pick it up in the VPE. I only have 'status' and 'message' available.
I got the following errors in my Splunk error logs:

Init failed, unable to subscribe to Windows Event Log channel Microsoft-Windows-Sysmon/Operational: errorCode=5

The UniversalForwarder is installed on a Windows 10 desktop (not part of a domain). I can see Sysmon logging in the Event Viewer, and I can forward the System and Security logs but not the Sysmon logs. What am I overlooking here?

inputs.conf:

[WinEventLog://Security]
disabled = 0

[WinEventLog://System]
disabled = 0

[WinEventLog://Microsoft-Windows-Sysmon/Operational]
disabled = 0
I have the Splunk query below, which gives me results. My SPL searches eventType IN (security.threat.detected, security.internal.threat.detected) and gives me src_ip results. But the same src_ip field has multiple user_id results in other eventTypes. I want my SPL to take the src_ip results, search the other eventTypes, and filter where user_id="*idp*". Example: if my src_ip=73.09.52.00, then the src_ip should be searched against the other available eventTypes and the result filtered where user_id=*idp*.

My current SPL:

index=appsrv_test sourcetype="OktaIM2:log" eventType IN (security.threat.detected, security.internal.threat.detected)
| rex field=debugContext.debugData.url "\S+username\=(?<idp_accountname>\S+idp-references)"
| search NOT idp_accountname IN (*idp-references*)
| regex src_ip!="47.37.\d{1,3}.\d{1,3}"
| rename actor.alternateId as user_id, target{}.displayName as user, client.device as dvc, client.userAgent.rawUserAgent as http_user_agent, client.geographicalContext.city as src_city client.geographicalContext.state as src_state client.geographicalContext.country as src_country, displayMessage as threat_description
| strcat "Outcome Reason: " outcome.reason ", Outcome Result: " outcome.result details
| stats values(src_ip) as src_ip count by _time signature threat_description eventType dvc src_city src_state src_country http_user_agent details
| `security_content_ctime(firstTime)`
| `security_content_ctime(lastTime)`
| `okta_threatinsight_threat_detected_filter`
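A minimal sketch of one possible correlation approach, widening the search to all eventTypes and then keeping only src_ip values that also appear with a user_id matching *idp* (index, sourcetype, and field names are taken from the post):

index=appsrv_test sourcetype="OktaIM2:log"
| rename actor.alternateId as user_id
| eventstats values(eval(if(like(user_id, "%idp%"), user_id, null()))) as idp_users by src_ip
| where (eventType="security.threat.detected" OR eventType="security.internal.threat.detected") AND isnotnull(idp_users)
| stats values(idp_users) as idp_users count by src_ip, eventType

eventstats gathers the idp user_ids seen for each src_ip across every eventType, so the later where clause can filter the threat-detection events using that correlated field.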
Could someone explain to me how the cluster command works in the backend? I couldn't find any resource that explains the technique/algorithm behind this cluster command. How does it cluster the matches (termlist/termset/ngramset)? How is t calculated? It doesn't seem to be probability based. What kind of clustering algorithm does it use? It would be best if someone could explain the full algorithm for this cluster command. Many thanks.
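For reference while experimenting with the parameters the post asks about, a minimal usage sketch (the index name is a placeholder; t is the similarity threshold and match selects how events are compared):

index=my_index
| cluster t=0.8 match=termlist showcount=true
| sort - cluster_count
| table cluster_count, _raw

Raising t makes clusters tighter, lowering it merges more events together, and showcount=true adds the cluster_count field used for sorting.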
I have six different SPL queries that I run on a specific IP address. Is it possible to save a search as a report, schedule that report to run, and provide the report with a seed file of IP addresses? I have 20+ IP addresses that I need to run the same 6 reports on. Right now I'm just running them interactively, which is a pain. Any suggestions?
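A minimal sketch of one possible pattern, assuming the IPs live in a lookup file called ip_seed.csv with a column named ip and that the reports filter on a field called src_ip (all of these names are placeholders):

index=my_index [ | inputlookup ip_seed.csv | rename ip as src_ip | fields src_ip ]
| stats count by src_ip

The subsearch expands into src_ip="..." OR src_ip="..." for every row in the file, so each of the six reports can be scheduled once and cover all 20+ addresses; updating the lookup updates every report.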
Hi, I want to use timechart or bucket span to view the result every 30 mins using the query below. Could you please let me know how I can use timechart or bucket span=30m _time here?

index=* handler=traffic <today timerange>
| stats dc(dsid) as today_Traffic
| appendcols [search index=* handler=traffic <yesterday timerange> | stats dc(dsid) as Previous_day_Traffic]
| eval delta_traffic = today_Traffic-Previous_day_Traffic
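A minimal sketch of one possible approach, binning both searches into 30-minute buckets before joining them; the earliest/latest values stand in for the post's <today timerange> and <yesterday timerange> placeholders:

index=* handler=traffic earliest=@d latest=now
| bin _time span=30m
| stats dc(dsid) as today_Traffic by _time
| appendcols
    [ search index=* handler=traffic earliest=-1d@d latest=@d
      | bin _time span=30m
      | stats dc(dsid) as Previous_day_Traffic by _time ]
| eval delta_traffic = today_Traffic - Previous_day_Traffic

appendcols pastes the two result sets together row by row, so this relies on both sides producing the same 30-minute buckets in the same order.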
Splunk newbie here. I have a search that works if I change it every day, but I would like to add it to a dashboard for monitoring without having to change the date. It looks for all the accounts newly created in the current/past day, depending on what date I put in. The search is index=Activedirectory whenCreated="*,9/15/23" | table whenCreated, name, manager, title, description, and then I search for the last 24 hours. The format of the time is %H:%M:%S AM/PM, %a %m/%d/%Y. The two issues I am having are: how do you specify the AM/PM, and how do you set up the search so that it searches the last 24 hours using the current date? I was thinking it is "time()", but I have not been successful in getting the results I need.
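A minimal sketch of one possible approach, parsing whenCreated with strptime and filtering relative to now; the format string uses %I (12-hour clock) with %p (AM/PM) and assumes the layout described in the post:

index=Activedirectory
| eval created_epoch = strptime(whenCreated, "%I:%M:%S %p, %a %m/%d/%Y")
| where created_epoch >= relative_time(now(), "-24h")
| table whenCreated, name, manager, title, description

Because the filter is computed from now(), the same search can sit on a dashboard and always cover the last 24 hours without editing the date.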
I am searching far and wide for recommendations, best practices, even just conversations on this topic - all for naught. Here's my big dilemma, which I would love to get something other than an "It Depends" answer to: which platform is more appropriate for metrics, Splunk Cloud or Splunk O11y? I am not an expert in Splunk, but I know that there are certain challenges with metrics indexes in Splunk Core/Cloud. And I can also see there are Metrics Rules in O11y that help with filtering, plus some pre-built dashboards. But I am still lost for direction. Short of actually ingesting the same amount of metrics into both platforms and seeing what comes out... do I want to do that? Well, "that depends"! Any guidance is deeply appreciated.
I recently upgraded my search head cluster to 9.x, and since then my skipped/deferred searches have skyrocketed.

index=_internal source=*scheduler.log status=*
| timechart span=60s count by status
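A sketch of a follow-up search that may help narrow down the cause, breaking the skipped searches down by reason and search name (these fields come from scheduler.log):

index=_internal source=*scheduler.log status=skipped
| stats count by reason, app, savedsearch_name
| sort - count

The reason field usually points at the limit being hit (for example, maximum concurrent searches reached), which is a good starting point for comparing scheduler and concurrency settings before and after the upgrade.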
Hello! I'm using a text input box to input a username. If I were to simply put that username into my base search, it works great and is very quick. I have other search input parameters, so my problem is that if I DON'T specify a username, I want it to include all values. This includes null values. I started by using an asterisk as the default input value, but that doesn't include null values.

The only way I've been able to make this partially work is by removing the username from the base search, then using an eval command to give the null entries a value, and then searching the base results for either "*" to include everything, or the username I typed in. This is horribly inefficient because I have to search my entire database for every entry before I can filter it. I also think this doesn't work properly because it has a limit on the number of results in the base search.

I've done a lot of searching for doing an eval command BEFORE the base search, but that doesn't seem to be possible. This can't be a unique scenario. How do I search for both "null" and "NOT null" values in the base search without removing my username input box?
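A sketch of one way to build the filter inside the base search itself, assuming the text input sets a token called $user_tok$ whose default value is * and that the field is called username (both are placeholders); the subsearch returns a field literally named search, which Splunk splices into the outer query as a search string, so the default turns into no filter at all and null usernames are kept:

index=my_index
    [ | makeresults
      | eval search=if("$user_tok$"="*", "", "username=\"$user_tok$\"")
      | fields search ]
| fillnull value="(none)" username
| stats count by username

When a username is typed in, the base search filters on it up front and stays fast; when the input is left at the default, the generated filter is empty and every event, including those with a null username, flows through.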
Hello, I wonder if somebody can please help me to sort the following data:

Into this table:

Any ideas are welcome.

I was trying to run this query, but it is not separating the values of the fields properly:

index=query_mcc
| eval data = split(_raw, ",")
| eval Date = strftime(_time, "%Y-%m-%d-%H:%M:%S")
| eval Category = mvindex(data, 1)
| eval Status = mvindex(data, -1)
| eval Command = mvindex(data, 0)
| table host, Date, Category, Status, Command

But it is giving me this, where it only shows the first line.
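A sketch of one possible cause and fix, assuming each event's _raw actually holds several comma-separated lines and only the first one is being read; the rex/mvexpand pair splits the event into one result per line before the existing field extraction runs (column positions are guesses based on the query in the post):

index=query_mcc
| rex max_match=0 field=_raw "(?<line>[^\r\n]+)"
| mvexpand line
| eval data = split(line, ",")
| eval Date = strftime(_time, "%Y-%m-%d-%H:%M:%S")
| eval Command = mvindex(data, 0), Category = mvindex(data, 1), Status = mvindex(data, -1)
| table host, Date, Category, Status, Command

If the lines are in fact separate events rather than one multi-line event, the original query should already work, so checking the sourcetype's line-breaking configuration is worth doing first.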