All Posts

I'd like to include this in an email alert. I've got various email alerts that fire when we go over, but I'd also like to show the number of warnings in that 60-day rolling window.
Have you checked that those indexes exist and that Splunk is running there without issues? Basically, if you have the GUI enabled on the indexer you can try the query from there, or use the CLI and run the queries on the command line too. Also check whether there are any issues in the internal logs. You can query those from the internal indexes, for example: index=_internal log_level IN (error, warn)
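If it helps, here is a minimal sketch of that internal-log check, extended with a simple breakdown by component; the stats and sort clauses are just an illustration, adjust to your environment:

index=_internal log_level IN (error, warn)
| stats count by host, component, log_level
| sort - count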
*SIGH* I guessed that this might be the reason. It is just annoying that the settings from other apps are shown in part of the settings, but not this one.
Hi @ranafge

Do those 7 medium events look like the ones you would expect to see in the dashboards? Without seeing the data it's hard for us to work out, so please provide redacted samples if you can. Is the data JSON structured? Does it have a field data -> vulnerability -> severity when looking at the event(s)?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
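If you want a quick way to confirm that field path, here is a minimal sketch, assuming the events are JSON and the severity sits at data.vulnerability.severity (adjust the index and path to match your data):

index="wazuh-alerts"
| spath path=data.vulnerability.severity output=severity
| stats count by severity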
Hi @akanksha01

The ECONNRESET error indicates that the TCP connection was abruptly closed by the Splunk server or an intermediary network device (such as a firewall or load balancer) before the request could be fully processed or the response sent. The curl command syntax itself for disabling the saved search appears correct.

Troubleshooting steps:
1. Verify network connectivity: Ensure the IP and port (typically 8089 for the Splunk management port) are correct and reachable from the machine running the curl command. Check for firewalls or network ACLs that might be blocking or resetting the connection at either the source or the destination.
2. Check Splunk server status: Ensure the Splunk instance is running and responsive. Are you able to reach the instance using netcat from your source?
3. Examine Splunk logs: Check $SPLUNK_HOME/var/log/splunk/splunkd.log on the Splunk server for any errors occurring around the time you ran the curl command; this might provide clues about why the server closed the connection (see the search sketch after this list).
4. Check intermediary devices: If you are connecting through a load balancer or proxy, check its logs and configuration. It might have shorter timeouts or specific rules causing the connection reset.
5. Simplify the request: Try the request without --max-time 60 initially to rule out timeout interactions, although disabling an alert should be very fast. You could also add -v to get more verbose output.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
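For step 3, a minimal sketch of a search over the internal logs around the time of the reset; the 15-minute window and the breakdown fields are just placeholders, adjust to your case:

index=_internal sourcetype=splunkd log_level=ERROR earliest=-15m
| stats count by host, component
| sort - count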
Hi @BlueSocket

The Splunk Web UI under Settings -> Server Settings -> Email Domains -> Allowed Domains specifically reads from and writes to the global configuration file located at $SPLUNK_HOME/etc/system/local/alert_actions.conf. Since you configured allowedDomainList within an app context ($SPLUNK_HOME/etc/apps/my_app/local/alert_actions.conf), Splunk correctly applies this setting during its configuration layering process. This is why btool shows the setting as active and the warning message in Splunk Web disappears.

However, the UI page itself is designed only to display and manage the setting present in the system/local directory; it does not reflect settings inherited from app-level configurations. Your configuration is active and enforced, but it won't appear on that specific UI page unless you define it globally in $SPLUNK_HOME/etc/system/local/alert_actions.conf.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
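If you want to double-check the effective value from the search head itself, here is a sketch using the configs REST endpoint via the rest command (endpoint and field names as I understand them; run as an admin and verify against your version):

| rest /services/configs/conf-alert_actions splunk_server=local
| search title=email
| table title allowedDomainList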
Hi Team, I am using the following curl command:

curl -k -u admin:password -X POST https://<host>:<port>/servicesNS/akanksha_goel1/search/saved/searches/Clickstream-Microsurvey-Failure-Alert-Rule-Dev -d "disabled=1" --max-time 60 -H "Content-Type: application/x-www-form-urlencoded"

But I am getting the error: Error: read ECONNRESET. Kindly help us resolve the issue!
Hi @livehybrid  First of all, thanks for your response. When I search using index="wazuh-alerts", I get lots of events. For the search index="wazuh-alerts" "Medium", I get 7 events.
I just upgraded to 9.4 and I got the new 9.3+ warning in SplunkWeb about the alert_actions.conf allowedDomainList setting not being set and that I should fix it. I have now set the list correctly in an app and deployed the app to the machine:

/opt/splunk/etc/apps/my_app/local/alert_actions.conf
[email]
allowedDomainList = mydomain.com,myotherdomain.com

I then restart Splunk and get no warnings. I then run the command:

/opt/splunk/bin/splunk cmd btool alert_actions list email

and I see the following:

[email]
allowedDomainList = mydomain.com,myotherdomain.com

I then go into SplunkWeb and I do not see the allowedDomainList warning in the messages list - the issue is fixed. I then go into Settings->Server Settings->Email Domains->Allowed Domains and this setting is empty. I would expect to see "mydomain.com,myotherdomain.com" in the setting control. Even though I have set everything correctly, btool shows the right setting, and I have restarted Splunk, why is the setting not showing up?
Hi @rfolkert

Yes, you can use the following in the options{} of your visualisation:

"backgroundColor": "> primary | seriesByName('color') | lastPoint()"

You can use your own thresholding and logic to determine the "color" field, which should return an HTML colour code (such as #00ff00).

*OR*, if you want to use the built-in colour editing capability in Dashboard Studio, set it up as you normally would under the "Color and Style" options for your viz, then go to the source code section and update it to the following:

"backgroundColor": "> primary | seriesByName('threshold') | lastPoint() | rangeValue(backgroundColorEditorConfig)"

Note, in this example I am using backgroundColor, but you can change this to majorColor or whatever other color styling option you wish to use.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
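To tie this to your search, here is a minimal sketch of how the "color" field could be produced for the first option, building on the stats output from your post; the 0/1 threshold logic and the colour codes are placeholders:

| stats values(PercentChange) as PercentChange latest(threshold) as threshold
| eval color=if(threshold=1, "#ff0000", "#00ff00")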
| makeresults
| fields - _time
| eval _raw="host,CPU,MeM,UsePct,Swapused
Apple1,5,3,2,7
Apple2,4,1,12,9
Apple3,1,2,4,8"
| multikv forceheader=1
| table host,CPU,MeM,UsePct,Swapused
| lookup hostmetrics.csv host
| foreach *
    [| eval fieldvalue=if(fieldname="<<FIELD>>",<<FIELD>>,fieldvalue)]
| eval metric=if(fieldvalue < value,"OK","Error")

I set up hostmetrics.csv like this:

| makeresults format=csv data="host,fieldname,value
Apple1,CPU,4
Apple3,MeM,2
Apple2,UsePct,8"
| outputlookup hostmetrics.csv
I am trying to create a new finding-based detection to group findings together when the risk score exceeds a threshold, similar to the RBA concept. However, I am encountering an issue: when the finding (notable) is created, no Entity appears in the Incident Review dashboard, even though the fields risk_object, normalized_risk_object, and risk_object_type have values. Has anyone experienced the same issue?
Hi, I'm having exactly the same problem. I can integrate ThousandEyes with AppDynamics and receive health status and recommendations for tests to be created. I can also create dashboards in AppDynamics with ThousandEyes metrics. I just can't sync TE with AppD RUM. Could you tell me how you solved your problem? Thank you very much in advance.
As the title suggests, I have a scenario where I have two fields for a single value panel: the first is the number I want to display, and the second field I want to use to color the visualization. The color field is a threshold flag, so if I am under the threshold it's green and over the threshold it's red, and it is returned as a simple boolean 0/1. My basic stats output looks like this - two values, the first is my displayed number, the second is the threshold I want to color off of:

| stats values(PercentChange) as PercentChange latest(threshold) as threshold

The question is: how do I tell Dashboard Studio to color off of the secondary field instead of the field defined as my display value?
Hi @msatish

Just to confirm - are you using SC4S? I am not familiar with ExtremeCloud XIQ and it isn't a "known product" to SC4S, however we should still be able to update splunk_metadata.csv. Do you know if the data is being sent in CEF format? If possible, could you provide a couple of lines of your events to help us work out the correct values for the splunk_metadata.csv file?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @hazardoom

If you aren't wanting to go down the app route then another thing you could look at is using the REST API. I've used this with clients in the past; here is an example to get you started if this is something you wanted to explore:

import configparser
import requests
from urllib.parse import quote

# ======= USER CONFIGURATION =========
SPLUNK_HOST = 'https://yoursplunkhost:8089'  # e.g., https://localhost:8089
SPLUNK_TOKEN = 'your-token-here'             # just the token string, no 'Splunk ' prefix
APP = 'search'                               # target app for the saved searches
CONF_FILE = 'savedsearches.conf'             # path to your savedsearches.conf file
VERIFY_SSL = True                            # set to False if using self-signed certs
USERNAME = 'yourUsername'                    # API requires this as a path component
# ====================================

def convert_field_name(field, value):
    """Map .conf fields to API fields and perform value translations."""
    if field == "enableSched":
        return "is_scheduled", "1" if value.strip().lower() in ("1", "true", "yes", "on") else "0"
    return field, value

def load_savedsearches(conf_path):
    cp = configparser.ConfigParser(strict=False, delimiters=['='])
    cp.optionxform = str  # preserve case sensitivity of setting names
    cp.read(conf_path)
    return cp

def upload_savedsearches(cp):
    headers = {'Authorization': f'Splunk {SPLUNK_TOKEN}'}
    base_url = f"{SPLUNK_HOST}/servicesNS/{USERNAME}/{APP}/saved/searches"

    for savedsearch_name in cp.sections():
        data = {}
        for field, value in cp[savedsearch_name].items():
            api_field, api_value = convert_field_name(field, value)
            data[api_field] = api_value

        search_url = f"{base_url}/{quote(savedsearch_name)}"

        # Check if the saved search already exists (GET request)
        check = requests.get(search_url, headers=headers, verify=VERIFY_SSL)
        if check.status_code == 200:
            # Update: POST to the existing entity; do not send 'name' when editing
            print(f"Updating existing savedsearch: {savedsearch_name}")
            r = requests.post(search_url, data=data, headers=headers, verify=VERIFY_SSL)
        else:
            # Create: POST to the collection and include the new entity's name
            print(f"Creating new savedsearch: {savedsearch_name}")
            r = requests.post(base_url, data={'name': savedsearch_name, **data}, headers=headers, verify=VERIFY_SSL)

        if r.status_code not in (200, 201):
            print(f"Failed for {savedsearch_name}: {r.status_code} {r.text}")
        else:
            print(f"Success: {savedsearch_name}")

def main():
    cp = load_savedsearches(CONF_FILE)
    upload_savedsearches(cp)

if __name__ == "__main__":
    main()

We use this approach to upload files directly from Git pipelines, which is especially useful if you aren't an admin on the platform and so cannot upload an app - however it may also work well for your use case. Note: you could use the Splunk Python SDK too, which basically does the same thing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I think Splunk doesn't have a built-in/defined sourcetype for ExtremeCloud XIQ logs. Can we define a custom sourcetype, like `extremecloud:xiq`, on the syslog server (splunk_metadata.csv)? If so, how do we make sure the logs coming from the ExtremeCloud XIQ platform land in the "extreme" index and use the "extremecloud:xiq" sourcetype?
At this location, we handle setting up ITSI, and not SA, for teams for monitoring; they work with us as needed. I am in ITSI creating alerts with correlation searches, and our correlation searches have about 20 lines of required fields that show in the alerts after the calculations. All I need to do is determine whether the fields for the event meet or exceed that percent criteria; if they do, the alert gets a severity of low or high based on what they put in the lookup.

I could do a case statement in the code, but I am trying not to hard-code. If I put it into the lookup and the customer changes their mind on the percents later, or they want it to be a low alert instead of critical, they can modify the table without the code being touched.

If you do a custom KPI, I haven't been able to include the required fields that have to be in the alert for the monitoring group. Here, once the code for that index goes live it is considered production, which means one small change of code requires going through the testing process between us, the team, and the monitoring group who watches the alerts. It's a whole ordeal.

So, if I can create a table where the team can specify a field and the percent, then it is easier. Each event in the log the customer is creating has multiple fields to check. The only things I care about are the host, the field value, and the severity. I am trying to avoid hard coding; if I can't come up with a way to use the lookups, I will do it.

I know that this is NOT what people normally do, but sometimes you have to think outside the box to make life easier. Teams don't know what they want and constantly change their minds. When we are working to onboard new indexes in the building for infrastructure and applications, our team of 4 doesn't have time to do a lot of changes when someone changes their mind.
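In case it's useful as a starting point, here is a minimal sketch of the lookup-driven approach described above. The lookup name (severity_thresholds.csv), its columns (host, fieldname, threshold_pct, severity), and the untable step are all assumptions for illustration, not a definitive implementation:

<your correlation search producing one row per host with its metric columns>
| untable host fieldname fieldvalue
| lookup severity_thresholds.csv host fieldname OUTPUT threshold_pct severity
| where tonumber(fieldvalue) >= tonumber(threshold_pct)
| table host fieldname fieldvalue severity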
9.4.0/9.3.2/9.2.4/9.1.7 and above have the fix. Since you are already on 9.4.1, it also has the fix.
What is the best practice for migrating all alerts and dashboards from on-prem to cloud, other than using a custom app, which seems very restricted and dull?