All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

@isoutamo @PickleRick  you mean that once logs transition to frozen storage, they are typically outside Splunk's control unless you have configured specific mechanisms to manage them? Can I handle frozen storage by automating deletion of old frozen data? Example: delete frozen data older than 1 year:

#!/bin/bash
# Path to the frozen storage directory
FROZEN_DIR="/data/splunk_frozen"
# Log file for the operation
LOGFILE="/var/log/splunk_frozen_cleanup.log"
# Retention period in days (365 days = 1 year)
RETENTION_DAYS=365

# Find and delete top-level bucket directories older than the retention period
echo "$(date): Starting cleanup of frozen data in $FROZEN_DIR" >> "$LOGFILE"
find "$FROZEN_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +"$RETENTION_DAYS" | while read -r dir; do
    rm -rf "$dir"
    echo "$(date): Deleted $dir" >> "$LOGFILE"
done
echo "$(date): Cleanup complete" >> "$LOGFILE"
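Before running anything destructive against frozen storage, it may be worth a dry run that only lists what would be deleted. This is a minimal sketch using temporary test directories (the bucket names and the GNU-specific `touch -d` backdating are illustrative assumptions, not real Splunk data):

```shell
#!/bin/bash
# Dry-run sketch: show what the cleanup WOULD delete, without running rm -rf.
# All paths here are throwaway test data created under mktemp.
FROZEN_DIR=$(mktemp -d)
mkdir -p "$FROZEN_DIR/db_old_bucket" "$FROZEN_DIR/db_new_bucket"
# Backdate one "bucket" by two years (GNU touch syntax, assumed Linux)
touch -d "2 years ago" "$FROZEN_DIR/db_old_bucket"
RETENTION_DAYS=365
# -mindepth 1 keeps find from ever matching FROZEN_DIR itself;
# -maxdepth 1 stops it descending into bucket contents
stale=$(find "$FROZEN_DIR" -mindepth 1 -maxdepth 1 -type d -mtime +"$RETENTION_DAYS")
echo "would delete: $stale"
rm -rf "$FROZEN_DIR"
```

Only once the printed list looks right would you swap the `echo` for the actual `rm -rf` loop.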
I'll just reply to myself here: The issue was that the hostname for some reason doesn't resolve properly in the inputs.conf file. It is supposed to automatically insert the actual hostname, but it doesn't. I created the file "$SPLUNK_HOME/etc/system/local/inputs.conf" (as it didn't exist yet) and entered the following lines (replace [HOSTNAME] with the name of your host system running Splunk):

[default]
host = [HOSTNAME]

This overrides the default configuration in "$SPLUNK_HOME/etc/system/default/inputs.conf" (settings in local/ take precedence over default/). Afterwards, everything worked correctly.
Hi, this .conf19 presentation shows how to find all knowledge objects (KOs) via the REST API: https://github.com/paychex/Splunk.Conf19 r. Ismo
Hi all, I have a bar chart like this one. In some conditions it may have a lot of values that need to be reported, but, as you can imagine, it is not very readable. Is it possible to specify a minimum size for each bar and enable a scroll bar, so all events can be seen (clearly...)? Thanks.
My goal was to test the Splunk REST API. Since I just needed to create an endpoint to access it, I used the hostname directly. I don't need to use the web UI. Does this affect the Splunk configuration? I am not sure what the issue is here, or why I would get an internal server error. Any hints appreciated!
Move the trigger condition from the alert to the search.  IOW, put this at the end of the query: | where count >= 250 AND count <= 500
Thank you for the tip! Do you have any suggestions on how to format this query? I'm not sure of the best way to do this when I need the alert to fire based on the number of results.
It's just like @PickleRick said. When Splunk moves buckets into frozen, it expects that the script (or whatever you are using) will return zero. After that, it removes the original bucket. If the return value is something else, Splunk tries it again after some time.
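That exit-code contract can be sketched in shell. This is a minimal illustration, not a production coldToFrozenScript: the function name, the demo paths, and the bare `cp -r` archiving step are all assumptions. The key point is returning 0 only after the copy succeeds, so Splunk only deletes the original bucket when the archive is safe:

```shell
#!/bin/bash
# Sketch of the contract a coldToFrozenScript must honor: Splunk passes the
# bucket path as the first argument; exit 0 = "archived, safe to delete",
# anything else = "failed, retry this bucket later".
archive_bucket() {
  local bucket="$1" archive_dir="$2"
  [ -d "$bucket" ] || return 1          # bucket path must exist
  mkdir -p "$archive_dir" || return 1   # destination must be creatable
  cp -r "$bucket" "$archive_dir/" || return 1  # copy must fully succeed
  return 0                              # only now is it safe to return zero
}

# Demo with throwaway temporary paths (hypothetical bucket name)
tmp=$(mktemp -d)
mkdir -p "$tmp/db_bucket"
archive_bucket "$tmp/db_bucket" "$tmp/frozen" && echo "archived ok"
```

If the copy step fails halfway, the nonzero return means Splunk keeps the bucket and retries, rather than losing data.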
Hi, it's possible, but what is the real issue you are solving this way? How is your stream generated on the source side, and is there only one source or several? r. Ismo
Also, you should define what you mean by "common hours".
In props.conf, when you are using a sourcetype as the stanza name, use just the name of the sourcetype instead of adding the prefix sourcetype::.
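For example, with a hypothetical sourcetype called my_app_logs, the stanza header is just the sourcetype name (the TIME_FORMAT attribute is only an illustration):

```ini
# Correct: plain sourcetype name as the stanza
[my_app_logs]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S

# Wrong: the sourcetype:: prefix is not valid in props.conf stanzas
# [sourcetype::my_app_logs]
```

The source:: and host:: prefixes, by contrast, are required when the stanza matches on source or host rather than sourcetype.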
Was this ever solved? I am currently facing the same issue. I have already spent an afternoon trying to fix the permissions but nothing seems to work.
Glad to know the issue is resolved. Please accept the solution so it's marked as Resolved.
Hello @avoelk, alert throttling should help here. Doc: https://docs.splunk.com/Documentation/Splunk/9.3.2/Alert/ThrottleAlerts. If not alert throttling, you can also dump the results into a lookup or summary index and exclude those events in consecutive runs. Please let me know if you have any questions. Please hit Karma if this helps!
That's called throttling.  When you edit the alert, click the "Throttle" box and specify how long alerts should be silenced.  Splunk will not send an alert for the same conditions during the throttle period.
Hi @avoelk, you should write triggered alerts to a summary index or a lookup and then filter results based on this index. Ciao. Giuseppe
I currently have the issue that I want to trigger a certain alert, let's call it unusual processes or logins. I've created a search in which I find the specific events that are considered suspicious, and I save it as a scheduled search; as an action, I write the results into the triggered alerts. The timeframe is -20m@m till -5m@m and the cron job runs every 5 minutes. Now I see that there is an issue here: if I cron the job every 5 minutes, given the look-back timeframe, I'm getting at least 3 of the same events triggered as an alert.

My question is: is there an option/way to trigger based on whether or not an event has already occurred? Basically, the search should check: did I trigger this event before already? If yes, don't write it into the triggered alerts; otherwise, write it.

Any help is appreciated.
@mpc7zh if you check /opt/splunkforwarder/etc/splunk.version you will see:

VERSION=9.3.0
BUILD=51ccf43db5bd
PRODUCT=splunk
PLATFORM=Linux-x86_64
Hi, is it possible to log to Splunk using Laminas\Log\Writer? I'm trying to do it but running into some problems. Do you have any example of how to do it?
\"webaclId\":\s\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/([^\/]+)\/

Your example data has a space after the colon: "webaclId": " — verified on regex101.
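A quick way to sanity-check that pattern outside regex101 is bash's `[[ =~ ]]` with a POSIX ERE version of it (ERE has no `\s`, so the literal space is written out; the ARN below is made-up sample data shaped like the question's):

```shell
#!/bin/bash
# Hypothetical sample event fragment, shaped like the data in the question
line='"webaclId": "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/my-acl/abc-123"'
# POSIX ERE version of the suggested pattern; note the literal space after the colon
re='"webaclId": "[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional/webacl/([^/]+)/'
captured=""
if [[ $line =~ $re ]]; then
  captured="${BASH_REMATCH[1]}"   # first capture group: the web ACL name
  echo "captured: $captured"
fi
```

With the sample line above this prints the web ACL name segment between regional/webacl/ and the trailing slash.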