All Posts


Hi @BlueSocket

The Splunk Web UI under Settings -> Server Settings -> Email Domains -> Allowed Domains specifically reads from and writes to the global configuration file located at $SPLUNK_HOME/etc/system/local/alert_actions.conf.

Since you configured allowedDomainList within an app context ($SPLUNK_HOME/etc/apps/my_app/local/alert_actions.conf), Splunk correctly applies this setting during its configuration layering process. This is why btool shows the setting as active and the warning message in Splunk Web disappears.

However, the UI page itself is designed only to display and manage the setting present in the system/local directory. It does not reflect settings inherited from app-level configurations. Your configuration is active and enforced, but it won't appear on that specific UI page unless you define it globally in $SPLUNK_HOME/etc/system/local/alert_actions.conf.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
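For reference, a minimal sketch of what that global stanza would look like - this simply mirrors the values already shown in the question into the file the UI page reads; adjust the domains to your own:

# $SPLUNK_HOME/etc/system/local/alert_actions.conf
[email]
allowedDomainList = mydomain.com,myotherdomain.com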
Hi Team, I am using the following curl command:

curl -k -u admin:password -X POST https://<host>:<port>/servicesNS/akanksha_goel1/search/saved/searches/Clickstream-Microsurvey-Failure-Alert-Rule-Dev -d "disabled=1" --max-time 60 -H "Content-Type: application/x-www-form-urlencoded"

But I am getting the error: Error: read ECONNRESET. Kindly help us resolve the issue!
Hi @livehybrid  First of all, thanks for your response. When I search using index="wazuh-alerts", I get lots of events. For the search index="wazuh-alerts" "Medium", I get 7 events.
I just upgraded to 9.4 and got the new 9.3+ warning in Splunk Web about the alert_actions.conf allowedDomainList setting not being set, telling me I should fix it. I have now set the list correctly in an app and deployed the app to the machine:

/opt/splunk/etc/apps/my_app/local/alert_actions.conf
[email]
allowedDomainList = mydomain.com,myotherdomain.com

I then restart Splunk and get no warnings. I then run the command:

/opt/splunk/bin/splunk cmd btool alert_actions list email

and I see the following:

[email]
allowedDomainList = mydomain.com,myotherdomain.com

I then go into Splunk Web and I do not see the allowedDomainList warning in the messages list - the issue is fixed. However, when I go into Settings->Server Settings->Email Domains->Allowed Domains, this setting is empty. I would expect to see "mydomain.com,myotherdomain.com" in the setting control. Even though I have set everything correctly, btool shows the right setting, and I have restarted Splunk, why is the setting not showing up?
Hi @rfolkert

Yes, you can use the following in the options{} of your visualisation:

"backgroundColor": "> primary | seriesByName('color') | lastPoint()"

You can use your own thresholding and logic to determine the "color" field, which in this example should render an HTML colour code (such as #00ff00).

*OR*, if you want to use the built-in colour editing capability in Dashboard Studio, set it up as you normally would under the "Color and Style" options for your viz, then once done go to the source code section and update it to the following:

"backgroundColor": "> primary | seriesByName('threshold') | lastPoint() | rangeValue(backgroundColorEditorConfig)"

Note: in this example I am using backgroundColor, but you can update this to majorColor or whatever other color styling type you wish to use.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
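For illustration, the first option could sit in a single value definition roughly like this; the data source name and the "color" field are assumptions for the sketch, not something prescribed by the original answer:

{
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_search_1" },
  "options": {
    "backgroundColor": "> primary | seriesByName('color') | lastPoint()"
  }
}

Here the search behind ds_search_1 would return both the value to display and a "color" field holding a colour code such as #00ff00.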
| makeresults
| fields - _time
| eval _raw="host,CPU,MeM,UsePct,Swapused
Apple1,5,3,2,7
Apple2,4,1,12,9
Apple3,1,2,4,8"
| multikv forceheader=1
| table host,CPU,MeM,UsePct,Swapused
| lookup hostmetrics.csv host
| foreach * [| eval fieldvalue=if(fieldname="<<FIELD>>",<<FIELD>>,fieldvalue)]
| eval metric=if(fieldvalue < value,"OK","Error")

I set up hostmetrics.csv like this:

| makeresults format=csv data="host,fieldname,value
Apple1,CPU,4
Apple3,MeM,2
Apple2,UsePct,8"
| outputlookup hostmetrics.csv
I am trying to create a new finding-based detection to group findings together when the risk score exceeds a threshold, similar to the RBA concept. However, I am encountering an issue: when the finding (notable) is created, no Entity appears in the Incident Review dashboard, even though the fields risk_object, normalized_risk_object, and risk_object_type have values. Has anyone experienced the same issue?
Hi, I'm having exactly the same problem. I can integrate ThousandEyes with AppDynamics and receive health status and recommendations for tests to be created. I can also create dashboards in AppDynamics with ThousandEyes metrics. I just can't sync TE with AppD RUM. Could you tell me how you solved your problem? Thank you very much in advance.
As the title suggests, I have a scenario where I have two fields for a single value panel: the first is a number I want to display, and the second field I want to use to color the visualization. The color field is a threshold - under the threshold should be green, over the threshold red - and it is returned as a simple boolean 0-1.

My basic stats output looks like this: two values, the first is my displayed number, the second the threshold I want to color off of.

| stats values(PercentChange) as PercentChange latest(threshold) as threshold

The question is: how do I tell Dashboard Studio to color off of the secondary field instead of the field defined as my display value?
Hi @msatish

Just to confirm - are you using SC4S?

I am not familiar with ExtremeCloud XIQ and it isn't a "known product" to SC4S, however we should still be able to update splunk_metadata.csv. Do you know if the data is being sent in CEF format? If possible, could you please provide a couple of lines of your events to help us work out the correct values for the splunk_metadata.csv file?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @hazardoom

If you aren't wanting to go down the app route, then another thing you could look at is using the REST API.

I've used this with clients in the past; here is an example to get you started if this is something you wanted to explore:

import configparser
import requests
from urllib.parse import quote

# ======= USER CONFIGURATION =========
SPLUNK_HOST = 'https://yoursplunkhost:8089'  # e.g., https://localhost:8089
SPLUNK_TOKEN = 'your-token-here'             # just the token string, no 'Splunk ' prefix
APP = 'search'                               # target app for the saved searches
CONF_FILE = 'savedsearches.conf'             # path to your savedsearches.conf file
VERIFY_SSL = True                            # set to False if using self-signed certs
USERNAME = 'yourUsername'                    # API requires this as a path component
# ====================================

# Map conf fields to REST API fields
def convert_field_name(field, value):
    """Map .conf fields to API fields and perform value translations."""
    if field == "enableSched":
        return "is_scheduled", "1" if value.strip().lower() in ("1", "true", "yes", "on") else "0"
    return field, value

def load_savedsearches(conf_path):
    cp = configparser.ConfigParser(strict=False, delimiters=['='])
    cp.optionxform = str  # preserve case and case sensitivity
    cp.read(conf_path)
    return cp

def upload_savedsearches(cp):
    headers = {'Authorization': f'Splunk {SPLUNK_TOKEN}'}
    base_url = f"{SPLUNK_HOST}/servicesNS/{USERNAME}/{APP}/saved/searches"
    for savedsearch_name in cp.sections():
        data = {'name': savedsearch_name}
        for field, value in cp[savedsearch_name].items():
            api_field, api_value = convert_field_name(field, value)
            data[api_field] = api_value
        search_url = f"{base_url}/{quote(savedsearch_name)}"
        # Check if the saved search exists (GET request)
        check = requests.get(search_url, headers=headers, verify=VERIFY_SSL)
        if check.status_code == 200:
            print(f"Updating existing savedsearch: {savedsearch_name}")
            r = requests.post(search_url, data=data, headers=headers, verify=VERIFY_SSL)
        else:
            print(f"Creating new savedsearch: {savedsearch_name}")
            r = requests.post(base_url, data=data, headers=headers, verify=VERIFY_SSL)
        if r.status_code not in (200, 201):
            print(f"Failed for {savedsearch_name}: {r.status_code} {r.text}")
        else:
            print(f"Success: {savedsearch_name}")

def main():
    cp = load_savedsearches(CONF_FILE)
    upload_savedsearches(cp)

if __name__ == "__main__":
    main()

We use this approach to upload files directly from Git pipelines, which is especially useful if you aren't an admin on the platform and so cannot upload an app - however it may also work well for your use case.

Note: you could use the Splunk Python SDK too, which basically does the same thing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
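As a rough sketch of the SDK route mentioned above (hostname, credentials, and the example search are placeholders; the extra keyword arguments map to the same savedsearches.conf field names the REST endpoint accepts):

import splunklib.client as client

# Connect to the management port (8089); credentials below are placeholders
service = client.connect(host="yoursplunkhost", port=8089,
                         username="yourUsername", password="yourPassword",
                         owner="yourUsername", app="search")

name = "example_saved_search"
spl = "index=_internal | head 5"

# Update the saved search if it already exists, otherwise create it
if name in service.saved_searches:
    service.saved_searches[name].update(search=spl, is_scheduled="1",
                                        cron_schedule="*/5 * * * *").refresh()
else:
    service.saved_searches.create(name, spl, is_scheduled="1",
                                  cron_schedule="*/5 * * * *")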
I think Splunk doesn't have a built-in/defined sourcetype for ExtremeCloud XIQ logs. Can we define a custom sourcetype, like `extremecloud:xiq`, on the syslog server (splunk_metadata.csv)? If so, how do we make sure the logs coming from the ExtremeCloud XIQ platform land in the "extreme" index and use the "extremecloud:xiq" sourcetype?
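Purely as an illustrative sketch: splunk_metadata.csv entries take the form key,metadata,value, so assuming SC4S is in the path and classifies these events under a key such as extreme_xiq (the actual key depends on how SC4S identifies or filters the events, so treat both lines as hypothetical), the desired routing could look like:

extreme_xiq,index,extreme
extreme_xiq,sourcetype,extremecloud:xiq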
At this location, we handle setting up ITSI, not SA, for teams' monitoring; they work with us as needed. I am in ITSI creating alerts with correlation searches, and our correlation searches have about 20 lines of required fields that show in the alerts after the calculations.

All I need to do is determine whether a field in the event meets or exceeds a percent criterion; if it does, it should generate a low or high severity based on what the team puts in the lookup.

I could do a case statement in the code, but I am trying not to hard-code. If I put it into the lookup and the customer later changes their mind on the percents, or wants a low alert instead of critical, they can modify the table without the code being touched.

With a custom KPI, I haven't been able to include the required fields that have to be in the alert for the monitoring group. Here, once the code for that index goes live it is considered production, which means one small change of code requires going through the testing process between us, the team, and the monitoring group who watches the alerts. It's a whole ordeal.

So, if I can create a table where the team can specify a field and the percent, then it is easier. Each event in the log the customer is creating has multiple fields to check. The only things I care about are the host, the field value, and the severity.

I am trying to avoid hard-coding. If I can't come up with a way to use the lookups, I will do it. I know that this is NOT what people normally do, but sometimes you have to think outside the box to make life easier. Teams don't know what they want and constantly change their minds. While we are working to onboard new indexes in the building for infrastructure and applications, our team of 4 doesn't have time to do a lot of changes when someone changes their mind.
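To make this concrete, here is a rough sketch of what I'm imagining; the lookup name and columns are placeholders, not an existing table. The team would maintain a lookup such as alert_thresholds.csv with host, metric, threshold_pct, severity, and the correlation search would reshape its per-host results and match against it:

| untable host metric percent
| lookup alert_thresholds.csv host metric OUTPUT threshold_pct severity
| eval alert_severity=if(percent >= threshold_pct, severity, null())
| where isnotnull(alert_severity)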
9.4.0/9.3.2/9.2.4/9.1.7 and above have the fix. Since you are already on 9.4.1, it also has the fix.
What is the best practice to migrate all alerts and dashboards from on-prem to cloud, other than using a custom app, which seems very restricted and dull?
We have used it in RegexGames, although I can't remember how many came up with a solution. Yes, regex may not be "pretty", but it can be fun trying to solve regex puzzles!
Hi @hazardoom, it's always bad practice to maintain objects in private folders; the only way is to move them into the app. Ciao. Giuseppe
But seriously, this solution is usually good enough unless you have a strict requirement to validate the IP format, in which case regex is not the best tool for the job (it can be done using regex, but it's neither pretty nor efficient).
Hi @Mfmahdi

Please do not tag/call out specific users on here - there are lots of people monitoring for questions being raised, and those you have tagged have day jobs and other priorities, so you risk your question being missed.

To troubleshoot the KV Store initialization issue, start by examining the logs on the search head cluster members for specific errors.

| rest /services/kvstore/status
| fields splunk_server, current*

Then check on each SHC member:

ps -ef | grep mongod
# Check mongod logs for errors
tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log
# Check splunkd logs for KV Store related errors
grep KVStore $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 200

Verify the mongod process: ensure the mongod process, which underlies the KV Store, is running on the search head members. Use the ps command or your operating system's equivalent. If it's not running, investigate why using the logs.

Check cluster health: ensure the search head cluster itself is healthy using the Monitoring Console or the CLI command splunk show shcluster-status run from the captain. KV Store issues can sometimes be symptomatic of underlying cluster communication problems. From your screenshot it looks like this is showing as the "starting" state, so hopefully the logs shine some light on the issue.

Check resources: verify sufficient disk space, memory, and CPU resources on the search head cluster members, particularly on the node currently acting as the KV Store primary.

Focus on the error messages found in mongod.log and splunkd.log, as they usually pinpoint the root cause (e.g., permissions, disk space, configuration errors, corrupted files). If the logs indicate corruption or persistent startup failures that restarts don't resolve, you may need to consider more advanced recovery steps, potentially involving Splunk Support.

Useful docs which might help:
Splunk Docs: Troubleshoot the KV Store
Splunk Docs: About the KV Store

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
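As a small illustrative addition to the resource and cluster checks above (paths assume a default /opt/splunk install; adjust to your environment):

# Search head cluster status, ideally run from the captain
/opt/splunk/bin/splunk show shcluster-status

# Disk space on the volume holding the KV Store data
df -h /opt/splunk/var/lib/splunk/kvstore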
Dears,

The KV Store initialization on our search head cluster was previously working fine. However, unexpectedly, we are now encountering the error "KV Store initialization has not been completed yet", and the KV Store status shows as "starting". I attempted a rolling restart across the search heads, but the issue persists. Kindly provide your support to resolve this issue. @gcusello @woodcock

Thank you in advance.