All Posts


Hi @hazardoom, it's always bad practice to maintain objects in private folders; the only way is to move them into the app. Ciao. Giuseppe
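For context, private (user-scoped) objects live under paths like the ones below; moving one into the app means merging its stanza into the app's own conf file (the username, app and conf file here are illustrative, not from this thread):

# user-scoped (private) object
$SPLUNK_HOME/etc/users/<username>/<app>/local/savedsearches.conf
# app-scoped destination - merge the stanza here, then reload or restart
$SPLUNK_HOME/etc/apps/<app>/local/savedsearches.conf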
But seriously, this solution is usually good enough unless you have a strict requirement to validate the IP format, in which case regex is not the best tool for the job (it can be done using regex, but it's neither pretty nor efficient).
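If you do need strict validation, one alternative in SPL is cidrmatch(), which should only match syntactically valid IPv4 addresses - a minimal sketch, where the ip value is just a test case:

| makeresults
| eval ip="000.999.123.987"
| eval is_valid=if(cidrmatch("0.0.0.0/0", ip), "valid", "invalid")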
Hi @Mfmahdi

Please do not tag/call out specific users on here - there are lots of people monitoring for questions being raised, and those you have tagged have day jobs and other priorities, so you risk your question being missed.

To troubleshoot the KV Store initialization issue, start by examining the status and the logs on the search head cluster members for specific errors:

| rest /services/kvstore/status
| fields splunk_server, current*

Then check on each SHC member:

ps -ef | grep mongod
# Check mongod logs for errors
tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log
# Check splunkd logs for KV Store related errors
grep KVStore $SPLUNK_HOME/var/log/splunk/splunkd.log | tail -n 200

Verify the mongod process: Ensure the mongod process, which underlies the KV Store, is running on the search head members. Use the ps command or your operating system's equivalent. If it's not running, investigate why using the logs.

Check cluster health: Ensure the search head cluster itself is healthy using the Monitoring Console or the CLI command "splunk show shcluster-status" run from the captain. KV Store issues can sometimes be symptomatic of underlying cluster communication problems. From your screenshot it looks like the KV Store is stuck in the "starting" state, so hopefully the logs shine some light on the issue.

Check resources: Verify sufficient disk space, memory, and CPU resources on the search head cluster members, particularly on the node currently acting as the KV Store primary.

Focus on the error messages found in mongod.log and splunkd.log, as they usually pinpoint the root cause (e.g. permissions, disk space, configuration errors, corrupted files). If the logs indicate corruption or persistent startup failures that restarts don't resolve, you may need to consider more advanced recovery steps, potentially involving Splunk Support.

Useful docs which might help:
Splunk Docs: Troubleshoot the KV Store
Splunk Docs: About the KV Store

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
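If the logs point at a stale rather than corrupt member, one documented recovery sequence is to resynchronise that member's KV Store from the rest of the cluster - a sketch, to be run on the affected member only, and only while the remaining members hold a healthy majority:

splunk stop
# Wipes the local KV Store data so the member re-syncs from the other cluster members on startup
splunk clean kvstore --local
splunk start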
Dear all, the KV Store initialization on our search head cluster was previously working fine. However, unexpectedly, we are now encountering the error "KV Store initialization has not been completed yet", and the KV Store status shows as "starting". I attempted a rolling restart across the search heads, but the issue persists. Kindly provide your support to resolve this issue. @gcusello @woodcock Thank you in advance.
Hi @goudas

The discrepancy likely stems from differences in the search execution context between Postman and your JavaScript fetch call, such as the timeframe used for the search job or the app context. When these are not explicitly defined in the API request, Splunk might use default values that differ based on user settings or how the API call is authenticated. Ensure you are searching the same earliest and latest time, and that you are using the same app context, between your web UI searches and API searches. Also check that any backslashes, quotes etc. are appropriately handled in your API requests.

To investigate any differences, in the web UI go to Activity (top right) -> Jobs to open the Job Manager, then locate the two searches and check that the search string, earliest/latest and app all match. This should hopefully highlight any discrepancy.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
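To make the fetch side deterministic, it can help to pin the app context and time bounds explicitly in the request - a minimal sketch, assuming token authentication is enabled; the host, token, search and time bounds are placeholders:

// Placeholder values - substitute your own management host, app and token
const SPLUNK_HOST = "https://splunk.example.com:8089";
const TOKEN = "your-token-here";

// Explicit app context (owner "-", app "search") and explicit time bounds,
// so the API job runs under the same conditions as the UI job
const body = new URLSearchParams({
  search: 'search index=_internal | head 5',
  earliest_time: "-24h",
  latest_time: "now",
  output_mode: "json",
});

const res = await fetch(`${SPLUNK_HOST}/servicesNS/-/search/search/jobs/export`, {
  method: "POST",
  headers: { Authorization: `Bearer ${TOKEN}` },
  body,
});
console.log(await res.text()); // newline-delimited JSON, one result per line

Comparing this against the exact parameters Postman sends (particularly earliest_time, latest_time and the /servicesNS path) usually surfaces the difference quickly.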
Good morning, for now I downloaded the app and I will delete what users requested to delete. I'll move everything from local to default, but what about the users folder? In it I have about 50 users - a folder per username, each containing history and metadata subfolders, and in metadata a local conf. What should I do with those?
How are the results different? What do you get? What were you expecting? Could it be to do with backslashes? Can you get the results you were expecting by adding additional backslashes?
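For instance, inside an ordinary JavaScript string literal every backslash in the SPL has to be doubled, or the regex arrives at Splunk mangled - a small sketch of the idea (the search itself is illustrative):

// Doubled backslashes: \s and \| in the SPL become \\s and \\| in the JS source
const spl = "search index=main | eval f=case(match(_raw,\"Channel\\s*EMAIL \\|\"),\"Email\")";
// String.raw avoids the doubling entirely
const splRaw = String.raw`search index=main | eval f=case(match(_raw,"Channel\s*EMAIL \|"),"Email")`;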
The following query returns the expected result in Postman but returns a different result with JavaScript fetch:

search host="hydra-notifications-engine-prod*" index="federated:rh_jboss" "notifications-engine ReportProcessor :"
| eval chartingField=case(match(_raw,"Channel\s*EMAIL \|"),"Email",match(_raw,"Channel\s*GOOGLECHAT \|"),"Google Chat",match(_raw,"Channel\s*IRC \|"),"IRC",match(_raw,"Channel\s*SLACK \|"),"Slack",match(_raw,"Channel\s*SMS \|"),"SMS")
| timechart span="1d" count by chartingField

What is the issue?
If you have a timechart split by a field, then it's different to stats, because your field name is not called total. You need to use this type of construct:

| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

Here's an example you can run that generates some random data:

| makeresults count=1000
| eval p=random() % 5 + 1
| eval player="Player ".p
| streamstats c
| eval _time=now() - (c / 5) * 3600
| timechart span=1d count by player
| foreach * [ | eval <<FIELD>>=round('<<FIELD>>'/7.0*.5, 2) ]

However, it's still not entirely clear what you are trying to do. You talk about a week of 700 but are timecharting by 1 day, and you say "if Lebron has 100 one week" - what are you trying to get with the values by day? Are you trying to normalise all players so they can be seen relative to each other, or something else? Perhaps you can flesh out what you are trying to achieve if you think of your data as a timechart.
True, but I didn't want to give away all my secrets! 
@pjac1029  You're most welcome! I'm glad to hear that it worked for you.
Hi @livehybrid, yes, I do have the appIcon.png in the folder $SPLUNK_HOME/etc/apps/search/appserver/static/, but the error still appears. I'm also facing the same issue in my custom Splunk app located at $SPLUNK_HOME/etc/apps/Custom_app/appserver/static/. I tried adding the appIcon.png (36x36) there as well, restarted Splunk, and checked my custom app (and all other Splunk apps), but the appIcon error still persists - even in the dashboards of the core Splunk app.
Hi @livehybrid  The screenshot I sent is from the Search Head and shows the exact same configuration deployed to the Heavy Forwarder. This is the first Heavy Forwarder that the data lands on. The data is sent to the Heavy Forwarder using rsyslog, and the Heavy Forwarder uses [monitor:] to monitor the logs.
Hi @livehybrid, I checked the query and it worked. I made a little change to display DD/MM/YYYY HH:MM:SS using the query below and it worked as expected. I am marking your answer as the solution since it gave me the base query to develop from, thank you very much!

| eval timestamp=strftime(now(), "%d/%m/%Y %H:%M:%S")
| table timestamp, <<intended fields>>
@hrawat The email titled "Splunk Service Bulletin Notification" was very poorly written. It explicitly states to upgrade to one of the listed versions; it doesn't say "or later". We have recently upgraded all our forwarders to 9.4.1, which according to the service bulletin email isn't fixed - only 9.4.0 is (was there a regression, or is the email wrong?).
Hi @ranafge

Okay, this is progress in terms of diagnosing. So - you see events if you search index="wazuh-alerts"? If you search index="wazuh-alerts" "Medium" - do you get any results then? I'm trying to determine if it's a field extraction issue or if the data is actually missing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @Alan_Chan

I've checked this config locally and it does work for your sample event, so something else isn't right here. I think there is a typo in what you posted, so I used the value from the screenshot, but please confirm you have the asterisk in the SEDCMD that is deployed. Is the screenshot you sent from the Search Head? Is the exact same config deployed to the Heavy Forwarder? And is this the only (or first) HF that the data lands on?

How is the data arriving at the HF? If it is via HEC using the event endpoint then this configuration will not work, and you would need to use INGEST_EVAL (see the sketch below) or move to the raw HEC endpoint.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
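For reference, a minimal sketch of the INGEST_EVAL alternative - the sourcetype, stanza name and pattern below are placeholders, not the actual config from this thread:

# props.conf
[your:sourcetype]
TRANSFORMS-mask = mask_with_ingest_eval

# transforms.conf
[mask_with_ingest_eval]
INGEST_EVAL = _raw=replace(_raw, "secret=\S+", "secret=****")

The point of the advice above is that SEDCMD isn't applied to data arriving via the HEC event endpoint, while an INGEST_EVAL transform is.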
Hi @pjac1029

Simply add "<prefix>production</prefix>" within your <input></input> block - a sketch of the placement follows below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
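A minimal sketch of the placement (the input type, token name and choices are placeholders - only the <prefix> element is the point here):

<input type="dropdown" token="env">
  <label>Environment</label>
  <prefix>production</prefix>
  <choice value="-east">East</choice>
  <choice value="-west">West</choice>
  <default>-east</default>
</input>

With this in place, $env$ resolves to e.g. production-east rather than just -east.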
That worked! Thanks so much for your help. I really appreciate it!
If I were to be nitpicky, I'd say that it captures stuff like 000.999.123.987, which is not a valid IP.