All Posts

As the title suggests, I have a scenario with two fields for a single value panel: the first is the number I want to display, and the second field is what I want to use to color the visualization. The color field is a threshold flag returned as a simple boolean 0/1, so under threshold should be green and over threshold should be red. My basic stats output looks like this - two values, the first is the number displayed, the second is the threshold I want to color off of:

| stats values(PercentChange) as PercentChange latest(threshold) as threshold

The question is: how do I tell Dashboard Studio to color off of the secondary field instead of the field defined as my display value?
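Roughly what I am imagining in the dashboard source is something like this - a sketch only, since I am guessing at the seriesByName/lastPoint/rangeValue selector syntax, and the viz/data source IDs and colours are just placeholders:

"viz_PercentChange": {
  "type": "splunk.singlevalue",
  "dataSources": { "primary": "ds_percent_change" },
  "options": {
    "majorValue": "> primary | seriesByName('PercentChange') | lastPoint()",
    "majorColor": "> primary | seriesByName('threshold') | lastPoint() | rangeValue(thresholdColors)"
  },
  "context": {
    "thresholdColors": [
      { "to": 1, "value": "#118832" },
      { "from": 1, "value": "#D41F1F" }
    ]
  }
}
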
Hi @msatish  Just to confirm - are you using SC4S? I am not familiar with ExtremeCloud XIQ and it isn't a "known product" to SC4S, however we should still be able to update splunk_metadata.csv. Do you know if the data is being sent in CEF format? If possible, please could you provide a couple of lines of your events to help us work out the correct values for the splunk_metadata.csv file?
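For reference, once we confirm which key SC4S assigns to these events, the override in splunk_metadata.csv would look roughly like the lines below - treat the key "extremecloud_xiq" (and the usual container location /opt/sc4s/local/context/splunk_metadata.csv) as placeholders until we know how SC4S classifies the data:

extremecloud_xiq,index,extreme
extremecloud_xiq,sourcetype,extremecloud:xiq
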
Hi @hazardoom  If you aren't wanting to go down the app route then another thing you could look at is using the REST API. I've used this with clients in the past; here is an example to get you started if this is something you want to explore:

import configparser
import requests
from urllib.parse import quote

# ======= USER CONFIGURATION =========
SPLUNK_HOST = 'https://yoursplunkhost:8089'  # e.g., https://localhost:8089
SPLUNK_TOKEN = 'your-token-here'             # just the token string, no 'Splunk ' prefix
APP = 'search'                               # target app for the saved searches
CONF_FILE = 'savedsearches.conf'             # path to your savedsearches.conf file
VERIFY_SSL = True                            # set to False if using self-signed certs
USERNAME = 'yourUsername'                    # API requires this as a path component
# ====================================

# Map conf fields to REST API fields
def convert_field_name(field, value):
    """Map .conf fields to API fields and perform value translations."""
    if field == "enableSched":
        return "is_scheduled", "1" if value.strip().lower() in ("1", "true", "yes", "on") else "0"
    return field, value

def load_savedsearches(conf_path):
    # interpolation=None so '%' characters in searches are not treated specially
    cp = configparser.ConfigParser(strict=False, delimiters=['='], interpolation=None)
    cp.optionxform = str  # preserve case sensitivity of option names
    cp.read(conf_path)
    return cp

def upload_savedsearches(cp):
    headers = {'Authorization': f'Splunk {SPLUNK_TOKEN}'}
    base_url = f"{SPLUNK_HOST}/servicesNS/{USERNAME}/{APP}/saved/searches"

    for savedsearch_name in cp.sections():
        data = {}
        for field, value in cp[savedsearch_name].items():
            api_field, api_value = convert_field_name(field, value)
            data[api_field] = api_value

        search_url = f"{base_url}/{quote(savedsearch_name)}"

        # Check if the saved search already exists (GET request)
        check = requests.get(search_url, headers=headers, verify=VERIFY_SSL)

        if check.status_code == 200:
            # 'name' must not be sent when updating an existing saved search
            print(f"Updating existing savedsearch: {savedsearch_name}")
            r = requests.post(search_url, data=data, headers=headers, verify=VERIFY_SSL)
        else:
            print(f"Creating new savedsearch: {savedsearch_name}")
            r = requests.post(base_url, data={'name': savedsearch_name, **data}, headers=headers, verify=VERIFY_SSL)

        if r.status_code not in (200, 201):
            print(f"Failed for {savedsearch_name}: {r.status_code} {r.text}")
        else:
            print(f"Success: {savedsearch_name}")

def main():
    cp = load_savedsearches(CONF_FILE)
    upload_savedsearches(cp)

if __name__ == "__main__":
    main()

We use this approach to upload files directly from Git pipelines, which is especially useful if you aren't an admin on the platform and so cannot upload an app - however it may also work well for your use case. Note: you could use the Splunk Python SDK too, which basically does the same thing.
I think Splunk doesn't have a built-in/defined sourcetype for ExtremeCloud XIQ logs. Can we define a custom sourcetype, like `extremecloud:xiq`, on the syslog server (splunk_metadata.csv)? If so, how do we make sure the logs coming from the ExtremeCloud XIQ platform land in the "extreme" index and use the "extremecloud:xiq" sourcetype?
At this location, we handle setting up ITSI, not SA, for teams for monitoring. They work with us as we need them. I am in ITSI creating alerts with correlation searches; our correlation searches have about 20 lines of required fields that show in the alerts after the calculations. All I need to know is how to determine whether the fields for the event meet or exceed the percent criteria; if they do, it'll generate a severity of low or high based on what they put in the lookup for the severity.

I could do a case statement in the code, but I am trying not to hard code. If I put it into the lookup and the customer changes their mind on the percents later, or wants it to be a low alert instead of critical, they can modify the table without the code being touched.

With a custom KPI, I haven't been able to include the required fields that have to be in the alert for the monitoring group. Here, once the code for that index goes live it is considered production, which means one small change of code requires going through the testing process between us, the team, and the monitoring group who watches the alerts. It's a whole ordeal.

So, if I can create a table where the team can specify a field and the percent, then it is easier. Each event in the log the customer is creating has multiple fields to check. The only things I care about are the host, the field value, and the severity. I am trying to avoid hard coding. If I can't come up with a way to use the lookups, I will do it. I know that this is NOT what people normally do, but sometimes you have to think outside the box to make life easier. Teams don't know what they want and constantly change their minds. When we are working to onboard new indexes in the building for infrastructure and applications, our team of 4 doesn't have time to do a lot of changes when someone changes their mind.
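To make it concrete, the shape of the lookup-driven check I have in mind is roughly this, assuming the events have already been reshaped into host / field_name / field_value rows and a hypothetical lookup threshold_config with columns host, field_name, percent, severity (all names here are placeholders):

| lookup threshold_config host field_name OUTPUT percent severity
| eval alert_severity=if(field_value >= percent, severity, null())
| where isnotnull(alert_severity)
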
9.4.0/9.3.2/9.2.4/9.1.7 and above have the fix. Since you are already on 9.4.1, you already have the fix.
What is the best practice to migrate all alerts and dashboards from on-prem to cloud, other than using a custom app, which seems very restricted and dull?
We have used it in RegexGames, although I can't remember how many came up with a solution. Yes, regex may not be "pretty", but it can be fun trying to solve regex puzzles!
Hi @hazardoom , it's always bad practice to maintain objects in private folders; the only way is to move them into the app. Ciao. Giuseppe
But seriously, this solution is usually good enough unless you have a strict requirement to validate the IP format, in which case regex is not the best tool for the job (it can be done with regex, but it's neither pretty nor efficient).
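Just to illustrate the "neither pretty nor efficient" part, a strict IPv4 check in SPL ends up looking something like this (the field name src_ip is only a placeholder):

| where match(src_ip, "^(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}$")
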
Hi @Mfmahdi  Please do not tag/call out specific users on here - there are lots of people monitoring for questions being raised, and those you have tagged have day jobs and other priorities, so you risk your question being missed.

To troubleshoot the KV Store initialization issue, start by examining the logs on the search head cluster members for specific errors.

| rest /services/kvstore/status
| fields splunk_server, current*

Then check on each SHC member:

ps -ef | grep mongod
# Check mongod logs for errors
tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log
# Check splunkd logs for KV Store related errors
grep KVStore $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 200

Verify mongod process: Ensure the mongod process, which underlies the KV Store, is running on the search head members. Use the ps command or your operating system's equivalent. If it's not running, investigate why using the logs.

Check cluster health: Ensure the search head cluster itself is healthy using the Monitoring Console or the CLI command splunk show shcluster-status run from the captain. KV Store issues can sometimes be symptomatic of underlying cluster communication problems. From your screenshot it looks like the KV Store is in a "starting" state, so hopefully the logs shine some light on the issue.

Check resources: Verify sufficient disk space, memory, and CPU resources on the search head cluster members, particularly on the node currently acting as the KV Store primary.

Focus on the error messages found in mongod.log and splunkd.log as they usually pinpoint the root cause (e.g., permissions, disk space, configuration errors, corrupted files). If the logs indicate corruption or persistent startup failures that restarts don't resolve, you may need to consider more advanced recovery steps, potentially involving Splunk Support.

Useful docs which might help:
Splunk Docs: Troubleshoot the KV Store
Splunk Docs: About the KV Store
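If the CLI is easier than the REST endpoint, the same status information is also available directly on each member:

$SPLUNK_HOME/bin/splunk show kvstore-status
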
Dears, The KV Store initialization on our search head cluster was previously working fine. However, unexpectedly, we are now encountering the error "KV Store initialization has not been completed yet", and the KV Store status shows as "starting". I attempted a rolling restart across the search heads, but the issue persists. Kindly provide your support to resolve this issue. @gcusello @woodcock Thank you in advance.
Hi @goudas  The discrepancy likely stems from differences in the search execution context between Postman and your JavaScript fetch call, such as the timeframe used for the search job or the app context. When these are not explicitly defined in the API request, Splunk might use default values that could differ based on user settings or how the API call is authenticated. Ensure you are searching the same earliest and latest time, and that you are using the same app context between your Web UI searches and API searches. Also check that any backslashes/quotes etc. are appropriately handled in your API requests.

To investigate any differences, in the web UI go to Activity (top right) -> Jobs to open the Job Manager and then locate the two searches - check that the search, earliest/latest and app all match. This should hopefully highlight if there is a discrepancy.
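As a rough illustration (host, token, username and the abbreviated search below are all placeholders), pinning the time range and namespace explicitly when creating the job from fetch removes most of these variables:

const params = new URLSearchParams({
  search: 'search host="hydra-notifications-engine-prod*" index="federated:rh_jboss" ... | timechart span=1d count by chartingField',
  earliest_time: '-7d',
  latest_time: 'now',
  output_mode: 'json'
});

const resp = await fetch('https://yoursplunkhost:8089/servicesNS/yourUsername/search/search/jobs/export', {
  method: 'POST',
  headers: { Authorization: 'Splunk your-token-here' },
  body: params  // sent as application/x-www-form-urlencoded
});

// The export endpoint streams results as newline-separated JSON objects
const results = await resp.text();
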
Good morning,  For now I have downloaded the app and I will delete what users requested to delete. I'll move everything from local to default, but what about the users folder? In it I have about 50 users - a folder for each username, and inside each there are history and metadata subfolders, with a local conf in metadata. What should I do with them?
How are the results different? What do you get? What were you expecting? Could it be something to do with backslashes? Can you get the results you were expecting by adding additional backslashes?
The following query returns the expected result in Postman but returns a different result with a JavaScript fetch:

search host="hydra-notifications-engine-prod*" index="federated:rh_jboss" "notifications-engine ReportProcessor :"
| eval chartingField=case(match(_raw,"Channel\s*EMAIL \|"),"Email",match(_raw,"Channel\s*GOOGLECHAT \|"),"Google Chat",match(_raw,"Channel\s*IRC \|"),"IRC",match(_raw,"Channel\s*SLACK \|"),"Slack",match(_raw,"Channel\s*SMS \|"),"SMS")
| timechart span="1d" count by chartingField

What is the issue?
If you have a timechart split by a field, then it's different to stats, because your field name is not called total. You need to use this type of construct:

| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

Here's an example you can run that generates some random data:

| makeresults count=1000
| eval p=random() % 5 + 1
| eval player="Player ".p
| streamstats c
| eval _time=now() - (c / 5) * 3600
| timechart span=1d count by player
| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

However, it's still not entirely clear what you are trying to do. You talk about a week of 700 but are timecharting by 1 day, and you say "if Lebron has 100 one week" - what are you trying to get with the values by day? Are you trying to normalise all players so they can be seen relative to each other, or something else? Perhaps you can flesh out what you are trying to achieve if you think of your data as a timechart.
True, but I didn't want to give away all my secrets! 
@pjac1029  You're most welcome! I'm glad to hear that it worked for you.
Hi @livehybrid , Yes, I do have the appIcon.png in the folder $SPLUNK_HOME/etc/apps/search/appserver/static/, but the error still appears. I'm also facing the same issue in my custom Splunk app located at $SPLUNK_HOME/etc/apps/Custom_app/appserver/static/. I tried adding the appIcon.png (36x36) there as well, restarted Splunk, and checked my custom app (and all Splunk apps), but the appIcon error still persists - even in the dashboards of the core Splunk app.