All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello there, I know it's possible to set up monitoring / health checks on Splunk Enterprise, but is this a function that can be set up for Splunk Cloud? If so, what would the process be, as all I can find are references to Splunk Enterprise. Many Thanks. Sting 663
Hello Community, I am writing today to ask for your help with a project I manage at my company: an availability calculation. I compute availability by retrieving the critical alerts from ITRS (the ITRS database), which are indexed in Splunk via the Splunk_DB_Connect application, and then applying a formula to derive unavailability, and from it availability, over a given period (24 hours or a week). My question is the following: when it comes to a cluster of servers (active, passive), I only want to take critical alerts into account if they are generated at the same time by both members of the cluster (the active and the passive server). At the moment, all the critical alerts generated by the two cluster members are indexed, but I want to filter them so as to keep an alert only if both servers emitted it at the same time, or if one server's alert was generated while the other server was already in a critical state, and vice versa. Do you have any idea how I can do this filtering? The final goal is to create an availability dashboard with graphics. Thank you in advance for your help.
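One way to express "both cluster members alerted in the same window" in SPL is to bucket events into fixed time spans and count distinct hosts per cluster and alert. This is only a sketch under assumed field names (`cluster`, `host`, `alert_name`, and `severity` are placeholders for whatever DB Connect actually extracts):

```
index=itrs severity=critical
| bin _time span=5m
| stats dc(host) AS members_alerting BY cluster alert_name _time
| where members_alerting >= 2
```

Widening `span`, or using `streamstats` with a `time_window`, would also catch the case where one member was already in a critical state when the other member's alert arrived.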
Unable to view the Zipkin app in Splunk. I want to integrate Zipkin with Splunk as a Splunk app. I wanted to know if I can connect Zipkin and Splunk so that the flow will look like: [diagram omitted] Is there any tutorial or documentation about this? Thank you
We are using a CentOS 6 32-bit machine with glibc 2.5. We are trying to monitor a PHP 7.0 application with AppDynamics. I need to know the GCC version required to install the PHP agent; we have GCC 4.1.2 on our machine. Can we monitor the PHP 7.0 application on a 32-bit machine? Are my system requirements OK?  ^ Edited by @Ryan.Paredez for formatting
Hello, I am attempting to identify the name of the SQL Server process and the SQL Agent process based on a CMDB lookup, and to store them in a KV Store. The results from the SPL itself are correct; however, on checking the KV Store, I find that only the process name related to the Agent gets stored. When filtering the results I am able to store the Server process name, but when running without filters only the Agent process name is stored. The last lines of the SPL are:

| inputlookup append=true lookup_host_process_monitoring_list
| dedup process snow_node
| outputlookup lookup_host_process_monitoring_list

Results from execution: [screenshot] Result stored in the KV Store: [screenshot] If, with the same SPL, I send the output to a CSV instead, all the results get saved. Has anyone seen or come across a similar situation? Thanks and regards, Prashant
[root@splunk-search-head-1 bin]# ./splunk bootstrap shcluster-captain -servers_list “https://##.###.##.191:8089,https://##.###.###.248:8089,https://##.###.###.248:8089” -auth admin:test123@ Invalid uri in servers list “https://##.###.##.191:8089 ++++++++++++++++++++++++++++++++++++++++++++++++ I'm trying to set up a SH cluster captain, but I'm getting "invalid uri in servers list". All the information in the servers list is valid and correct; I need some help with this issue. I also changed my http to https and I still get the same error output. Please help. Thank you.
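One thing worth noting about the command as pasted: the quotes around the server list are typographic quotes (“ ”), not straight ASCII quotes, and the error message itself shows the curly quote being treated as part of the URI (Invalid uri in servers list “https://...). A sketch of the same command retyped with plain quotes (the masked IPs are kept as placeholders; also note the pasted list contains the .248 address twice, which may be a separate paste error):

```
./splunk bootstrap shcluster-captain \
    -servers_list "https://<ip-1>:8089,https://<ip-2>:8089,https://<ip-3>:8089" \
    -auth admin:<password>
```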
Hello Splunk community, I hope that your experience with Splunk can help me. I'm working on creating an alert. The case is putting an alert on the "server message rate", and the goal is to alert on moving thresholds. I did a prediction (LLP5), generated the upper and lower bounds, and then put an alert on them. A few points worth mentioning: 1- We have data only from 2nd of Dec 2020 (less than 2 months). 2- We saw a behaviour where alerts from this week are no longer alerts when we run the query the next week. Some questions: 1- Is LLP5/LLP the best possible MLTK algorithm to try when we want to create alerts? I tried StateSpaceForecast and DecisionTreeRegressor as well, but with these algorithms we have to create the lower and upper bounds ourselves. 2- Can we create alerts on the basis of comparing week-by-week data? Predicting week by week and then comparing, for example, Mondays, Tuesdays, etc. with each other. Normally I know this is possible, but I don't know about prediction and alerting. 3- Is it normal that an alert we saw last week disappears this week? Does it mean that, since the model learns from historical data, an alert from last week can disappear this week? 4- In MLTK, when I choose 30 days in relative time in the time picker, the result is different from the preset "Last 30 days". Why? It's weird to me. 5- Finally, is less than 2 months of data enough?
The simplified query is as follows:

| tstats sum(msg) as msg where `sws_logs_indexes` sourcetype=sws* host="p*" mp_name="xxx" sag_instance="*xxx" by _time sag_instance span=10m
| streamstats current=false latest(msg) as previous_msg by sag_instance
| eval rate=msg-previous_msg
| timechart span=10m avg(rate) as "Server msg rate"
| predict "Server msg rate" as prediction algorithm=LLP5 holdback=0 future_timespan=0 upper75=upper75 lower75=lower75
| eval isOutlier = if(prediction!="" AND 'Server msg rate' != "" AND ('Server msg rate' < 'lower75(prediction)' OR 'Server msg rate' > 'upper75(prediction)'), 1, 0)
| table _time, isOutlier
| where isOutlier=1

Thanks in advance, Kind regards, Maryam
Hi @ehaddad_splunk, In our environment, we currently have Splunk App for Salesforce v3.0.0 and Splunk Add-on for Salesforce v2.0.0. We are now planning to upgrade the Splunk Add-on for Salesforce to v4.0.2, which supports both Splunk Enterprise v7.2.9.1 and v8.1.x, because we will upgrade Splunk Enterprise to v8.1.1 in the near future. With this, if we upgrade the Splunk Add-on for Salesforce to v4.0.2, would it still work with Splunk App for Salesforce v3.0.0? We checked Splunk App for Salesforce v3.0.0 and it only supports Splunk Enterprise v7.2.x, so we would like to confirm with you, as the author of the Splunk App for Salesforce, whether Splunk App for Salesforce v3.0.0 is still compatible with Splunk Add-on for Salesforce v4.0.2 as well as Splunk Enterprise v8.1.1. Looking forward to your feedback.
Hi All, I am trying to use the "REST API Modular Input v1.9.8" app to receive REST API data from Dynatrace, but I am not getting any data. When I checked splunkd.log, I could see the error below. Please help me fix this issue.

========================================
01-20-2021 08:30:34.943 -0800 WARN  LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" Exception in thread Thread-1:
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" Traceback (most recent call last):
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""   File "D:\Program Files\Splunk\Python-3.7\lib\threading.py", line 926, in _bootstrap_inner
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""     self.run()
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""   File "D:\Program Files\Splunk\Python-3.7\lib\threading.py", line 870, in run
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""     self._target(*self._args, **self._kwargs)
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""   File "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py", line 489, in do_run
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""     (item.split('=',1) for item in http_header_propertys_str.split(delimiter)))
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""   File "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py", line 488, in <genexpr>
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py""     http_header_propertys = dict((k.strip(), v.strip()) for k,v in
01-20-2021 08:30:37.628 -0800 ERROR ExecProcessor - message from ""D:\Program Files\Splunk\bin\Python3.exe" "D:\Program Files\Splunk\etc\apps\rest_ta\bin\rest.py"" ValueError: not enough values to unpack (expected 2, got 1)
===============================================

The data input details are below:
Endpoint URL - https://abcd.dynatrace-managed.com/e/xxxxxxxxxxxxxxxxx/api/v1
Activation Key - xxxxxxxxxxxxxxxxxxxxxxxxx
HTTP Method - GET
Authentication Type - none
HTTP Header Propertys - Authorization: Api-Token xxxxxxxxxxxxxxxxx
Response Type - json
Streaming Request - 0
Sequential Mode - 0
Index Error Responses - 0
Log Level - INFO
Interval - 60
Set Source Type - Manual
Source Type - DynatraceOAV
Host - $decideOnStartup
Index - main
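The traceback points at the parsing of the "HTTP Header Propertys" value: judging from the two source lines quoted in the stack trace, rest.py splits each header property on `=`, so a header entered in `Name: value` form has nothing to unpack and raises exactly this ValueError. A minimal Python reconstruction of that logic (the function name and delimiter default are assumptions; only the two quoted source lines come from the traceback):

```python
# Reconstruction of the header parsing shown in the rest.py traceback:
# each property must be a key=value pair, split on "=" once.
def parse_header_properties(props_str, delimiter=","):
    return dict(
        (k.strip(), v.strip())
        for k, v in (item.split("=", 1) for item in props_str.split(delimiter))
    )

# The "=" form parses cleanly:
print(parse_header_properties("Authorization=Api-Token abc123"))

# The ":" form reproduces the logged error:
try:
    parse_header_properties("Authorization: Api-Token abc123")
except ValueError as e:
    print("ValueError:", e)  # not enough values to unpack (expected 2, got 1)
```

So entering the property as `Authorization=Api-Token xxxxxxxxxxxxxxxxx` instead of `Authorization: Api-Token ...` should get past this particular exception.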
This link, https://docs.splunk.com/Documentation/SplunkCloud/latest/RESTTUT/RESTandCloud, states that Splunk Support opens port 8089 for REST access, and that once you have REST API access, you can make calls with a SAML account. Whereas this link, https://docs.splunk.com/Documentation/SplunkCloud/8.1.2011/User/Admintasks, states "You cannot use SAML authentication with the REST API". Are these two documents unrelated? Can SAML authentication be used with the REST API on a paid subscription to Splunk Cloud, or not?
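For what it's worth, the usual way around the SAML restriction is Splunk's token authentication: a SAML user creates an authentication token in the UI and passes it as a bearer token, rather than using SAML credentials against the REST port directly. A hedged sketch (the hostname and search string are placeholders, and whether tokens are enabled depends on the stack):

```
curl -k -H "Authorization: Bearer $SPLUNK_TOKEN" \
    https://<your-stack>.splunkcloud.com:8089/services/search/jobs \
    -d search="search index=_internal | head 1"
```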
Hi, I'm working on developing an app, and I observed that the only capability available for a user to access or edit the KV store is admin_all_objects. Can you please suggest whether there is any other capability to achieve this? As admins, we do not prefer giving users this capability (admin_all_objects) just to access the KV store. Even if I grant this capability, AppInspect flags it as an error:

check_authorize_conf_admin_all_objects_privileges
default/authorize.conf [role_cs_admin] contains `admin_all_objects = enabled`.
File: default/authorize.conf Line Number: 16
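One pattern that avoids admin_all_objects entirely, as a sketch: KV store collections are knowledge objects, so read/write can be granted per collection through the app's metadata rather than through a capability. Assuming a collection stanza named `mycollection` in the app's collections.conf, the corresponding metadata entry might look like this (collection and role names are hypothetical):

```
# metadata/default.meta
[collections/mycollection]
access = read : [ * ], write : [ cs_user ]
```

Users in the listed write role can then update the collection via lookup commands or the storage/collections REST endpoints without any admin capability.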
Hi Team, We have a requirement: a few of the sourcetypes configured on our end contain very important logs, so if there are no logs from those particular sourcetypes, we need to get an alert. How should we configure the alert? Kindly guide me with the search query for this. That is, the condition is that the search query would run every 15 minutes, and it should trigger an alert if any of these sourcetypes has not sent logs to Splunk in the last 15 minutes. Example: index=windows sourcetype=dns, index=firewall sourcetype=syslog, index=os sourcetype=top, etc. So kindly help me with the query.
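A common shape for this kind of "silent sourcetype" alert is a tstats search over a window longer than the alert interval, so a sourcetype that has gone quiet still shows up with an old last-seen time. A sketch using the index/sourcetype pairs from the example (the threshold is a placeholder):

```
| tstats latest(_time) AS last_seen
    WHERE (index=windows sourcetype=dns) OR (index=firewall sourcetype=syslog) OR (index=os sourcetype=top)
    BY index sourcetype
| where last_seen < relative_time(now(), "-15m")
```

Run it every 15 minutes over, say, the last 24 hours, and trigger when the result count is greater than zero. Note that a pair with no events at all in the search window will not appear in tstats output; covering that case needs the expected pairs kept in a lookup that the results are appended to.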
I want to find SSH sessions in the last 5 minutes that did not occur in the past month (excluding today). - I want to compare srcip and dstip. - Time range picker: last 5 minutes.

index=fw AND dstport=22 NOT [ search index=fw AND dstport=22 earliest=-1mon@mon latest=-1day@day | fields + srcip, dstip ]
| dedup srcip, dstip
| table _time, srcip, dstip, dstport, protocal, action, hostname

This is not working correctly.
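A subsearch like this can silently truncate when the srcip/dstip pairs exceed the subsearch result limit, which often makes "NOT [ search ... ]" look broken. One sketch that avoids the subsearch entirely: search the whole period once, label each event as recent or historic, and keep pairs seen only recently (the 5-minute and day boundaries mirror the original query but are still assumptions):

```
index=fw dstport=22 earliest=-1mon@mon latest=now
| eval era=case(_time >= relative_time(now(), "-5m"), "recent",
                _time < relative_time(now(), "-1d@day"), "historic")
| stats count(eval(era="recent")) AS recent count(eval(era="historic")) AS historic BY srcip dstip
| where recent > 0 AND historic = 0
```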
I've installed the v8.2.0 Splunk App for Unix and Linux on my search head per the instructions in the documentation. However, restarting Splunk throws an error of:

Bad regex value: '(?::){0}*', of param: props.conf / [(?::){0}*]; why: this regex is likely to apply to all data and may break summary indexing, among other Splunk features.

This appears to be coming from the {%appdir%}\default\props.conf file:

## Dropdowns
[(?::){0}*]
LOOKUP-dropdowns = dropdownsLookup host OUTPUT unix_category unix_group

As a result, the Metrics page in the app is non-functional. Is anyone else having this issue? What did you do to resolve it?
Hi @gcusello, We've observed slow Splunk indexing on one of our webMethods servers (aexxxxxx), causing certain test cases to fail. Normally indexing completes in 1-2 seconds, but on some servers it takes much longer (up to 2 minutes). Could you please help me with this? Regards, Rahul
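When chasing this kind of delay, comparing event time to index time usually shows where the lag is: `_indextime - _time` per host isolates whether only the webMethods server is affected. A sketch (the index name and host pattern are placeholders):

```
index=<your_index> host=aexxxxxx*
| eval lag_secs=_indextime - _time
| timechart span=5m max(lag_secs) BY host
```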
Hi All, I have 10 indexes, but in 1 index logs were not being indexed regularly. Restarting the HF fixes this issue for a few days, and then the same issue recurs. How can I find the root cause of this issue and fix it permanently? @isoutamo @saravanan90 @thambisetty @ITWhisperer @gcusello @bowesmana @to4kawa
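Before restarting the HF next time, it may be worth checking whether one of its pipeline queues is filling up, since this pattern (temporarily fixed by a restart) often shows up in metrics.log. A sketch (the host filter is a placeholder):

```
index=_internal host=<your_hf> source=*metrics.log* group=queue
| timechart span=10m max(current_size_kb) BY name
```

A queue that climbs toward its maximum before each stall points at the stage (parsing, aggregation, typing, indexing) to investigate.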
Hi Splunkers, I am trying to disable certain capabilities for roles via the REST API, but I haven't been successful yet. Please help. What is the REST endpoint URL that I should use to disable them? I tried something like https://127.0.0.1:8089/services/authorization/roles/power/capabilities/rtsearch/disable, but this is an invalid endpoint. The Splunk documentation has not been very helpful.
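As far as I know there is no per-capability disable endpoint; a role's capability set is edited by POSTing to the role itself, where the multi-valued `capabilities` form field replaces the existing list. So removing rtsearch from power means POSTing every capability the role should keep, minus rtsearch. A hedged sketch (credentials and the capability values shown are examples only):

```
# Re-POST the desired capability list for the role:
curl -k -u admin:changeme https://127.0.0.1:8089/services/authorization/roles/power \
    -d capabilities=schedule_search \
    -d capabilities=search
```

Listing the role first (GET .../services/authorization/roles/power) shows the current set to copy from.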
Hi friends! I'm doing a search like index=_internal from a custom app, even though I'm the admin user. I have a clustered Splunk architecture, and I still get messages like this:

Search results might be incomplete: the search process on the peer:XXXXX ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log and as well as the search.log for the particular search. Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.

But I can run the query from the Search app without any problem. Is there a way to enable the _internal index for other apps?
Hey there, Right now I have come close to completing an absolute epic in getting a multi-array JSON API response converted to a semi-reportable format. The final hurdle I am running into is getting seconds converted to hh:mm:ss for duration reporting. So far, I have tried the following approaches, working with a field that has purely numerical content:

eval TimeLoggedIn=tostring(TimeLoggedInSecs, "duration")
This does not return any value.

eval TimeLoggedIn=strftime(TimeLoggedInSecs,"%H:%M:%S")
This returns inaccurate values. E.g. it converts 25595 seconds into 1 hour, 6 minutes, and 35 seconds when it should be over 7 hours, but this is the only way that returns a value.

convert num(TimeLoggedInSecs) | eval TimeLoggedIn=TimeLoggedInSecs/60
This returns no value.

So far the community has been super helpful in explaining some less-than-common things and allowing me to learn and reverse engineer to solve the overall problem. But this is kind of frustrating, as it's the only thing left in the way, and it seems to be the hardest. Any help or ideas would be greatly appreciated!
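Two observations on the attempts above, offered as guesses rather than a definitive answer: tostring(X, "duration") needs a numeric X, and fields pulled out of a JSON response are strings, so wrapping in tonumber(), i.e. eval TimeLoggedIn=tostring(tonumber(TimeLoggedInSecs), "duration"), may be all that is missing; and the strftime() attempt misleads because strftime treats the number as a Unix timestamp and renders it in the local timezone, which is where the lost hours go. The underlying arithmetic, as a quick sanity check in Python:

```python
# Seconds -> h:mm:ss, the same arithmetic that tostring(<number>, "duration")
# performs in SPL; the function name here is illustrative only.
def seconds_to_duration(secs: int) -> str:
    hours, rem = divmod(secs, 3600)
    minutes, seconds = divmod(rem, 60)
    return f"{hours}:{minutes:02d}:{seconds:02d}"

print(seconds_to_duration(25595))  # -> 7:06:35, i.e. over 7 hours as expected
```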
Hey All, Sorry if this has been asked before, but I couldn't see a post on the same topic. I want to include some specific Windows logs and exclude all others. From what I can see, the config below is including other events too - is this because I have Application and System stanzas with no whitelists? Or do I need to include a blacklist within this stanza too?

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4776,4720,4723,1102,4624,4726,4625
renderXml = true

Thanks!
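On the question itself: whitelists are per-stanza, so a whitelist in the Security stanza has no effect on the Application and System inputs, and if those stanzas exist with no whitelist they forward everything from their channels. A sketch of the two usual fixes (the event IDs in option 2 are placeholders):

```
# Option 1: stop collecting the other channels entirely
[WinEventLog://Application]
disabled = 1

[WinEventLog://System]
disabled = 1

# Option 2: keep them, but whitelist per channel, e.g.
# [WinEventLog://Application]
# whitelist = 1000,1001
```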