All Posts

It’s just like @PickleRick said. When Splunk moves buckets to frozen, it expects that script (or whatever you are using) to return zero; only then does it remove the original bucket. If the return value is anything else, Splunk tries again after some time.
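For reference, the script is wired up in indexes.conf; a minimal sketch, where the index name and script path are placeholders rather than anything from this thread:

# indexes.conf (sketch; index name and script path are placeholders)
[my_index]
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/bin/coldToFrozenExample.py"
# Splunk deletes the original bucket only when the script exits with status 0;
# any other exit code makes it retry the same bucket later.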
Hi, it’s possible, but what is the real issue you are trying to solve this way? How is your stream generated on the source side, and is there only one source or several? r. Ismo
Also, you should define what you mean by “common hours”.
In props.conf, when you are using a sourcetype as the stanza name, use just the name of the sourcetype; do not add the sourcetype:: prefix.
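For example (a sketch; "my_sourcetype" and the settings shown are placeholders):

# props.conf (sketch)
[my_sourcetype]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
SHOULD_LINEMERGE = false
# Do not write [sourcetype::my_sourcetype]; the source:: and host:: prefixes
# exist only for source and host stanzas, e.g. [source::/var/log/app.log].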
Was this ever solved? I am currently facing the same issue. I have already spent an afternoon trying to fix the permissions but nothing seems to work.
Glad to know the issue is resolved. Please accept the solution so it's marked as Resolved.
Hello @avoelk, alert throttling should help here. Doc: https://docs.splunk.com/Documentation/Splunk/9.3.2/Alert/ThrottleAlerts. If not alert throttling, you can also dump the results into a lookup or summary index and exclude those events in subsequent runs. Please let me know if you have any questions.   Please hit Karma if this helps!
That's called throttling.  When you edit the alert, click the "Throttle" box and specify how long alerts should be silenced.  Splunk will not send an alert for the same conditions during the throttle period.
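If you manage alerts in config rather than the UI, the same throttle settings live in savedsearches.conf; a sketch, where the stanza name, suppression field and period are assumptions:

# savedsearches.conf (sketch)
[Unusual Logins]
alert.suppress = 1
alert.suppress.period = 30m
# suppress per value of this field, so different hosts can still alert
alert.suppress.fields = host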
Hi @avoelk, you should write the triggered alerts to a summary index or a lookup and then filter your results against it. Ciao. Giuseppe
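A rough sketch of the lookup variant (the lookup name already_alerted.csv and the field event_id are my own placeholders, not from this thread): the scheduled search drops anything it has alerted on before and records what is left.

index=your_index your_suspicious_conditions
| search NOT [ | inputlookup already_alerted.csv | fields event_id ]
| outputlookup append=true already_alerted.csv

The lookup has to exist before the first run (a one-off empty "| outputlookup already_alerted.csv" is enough), and outputlookup passes the results along, so the alert action still fires on the new events.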
I currently have the issue that I want to trigger a certain alert, let's call it unusual processes or logins. I've created a search in which I find the specific events that are considered suspicious, saved it as a scheduled search, and as an action I write it into the triggered alerts. The timeframe is -20m@m till -5m@m and the cron job runs every 5 minutes. Now I see an issue: because the job runs every 5 minutes with that look-back timeframe, I get at least 3 of the same events triggered as an alert. My question is: is there an option/way to trigger based on whether or not an event already occurred? So basically the search checks - did I already trigger on that event? If yes, don't write it into the triggered alerts; otherwise, write it into the triggered alerts. Any help is appreciated.
@mpc7zh if you check /opt/splunkforwarder/etc/splunk.version you will see:
VERSION=9.3.0
BUILD=51ccf43db5bd
PRODUCT=splunk
PLATFORM=Linux-x86_64
Hi, is it possible to log to Splunk using Laminas\Log\Writer? I'm trying to do it but running into some problems. Do you have an example of how to do it?
\"webaclId\":\s\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/([^\/]+)\/ Your example data has a space "webaclId": " Verified from regex101  
Sorry about the delay. Unfortunately this does not seem to be the problem. I have changed the local.meta to
# simquery command
[commands/simquery]
access = read : [ * ], write : [ admin ]
and restarted - that's why it took so long - but I still get the same error.
The current universal forwarder 9.0.9 included in SOAR 6.2.2 is being flagged for an OpenSSL vulnerability. Does anyone know which UF version is packaged in the SOAR 6.3.1 release?
Why is that? Serious question. I've never tried to do so but it shouldn't need to index anything locally.
Ok, that's way too much logic for me to follow on a Monday morning before I've even had coffee. I would split the fields into unique multivalue options, then start evaluating a new field based on your logic flow. Anything with a TRUE outcome can be your final results.
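Very roughly, something like this (a sketch; the field names and conditions are placeholders, since I don't know your actual data):

| makemv delim="," options_field
| eval options_field=mvdedup(options_field)
| eval outcome=case(
    mvcount(options_field) > 1 AND other_field="something", "TRUE",
    true(), "FALSE")
| where outcome="TRUE"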
Can you provide an anonymized sample of what this search displays and an example record of what you want the final output to be?
Contact Methods for US/CAN: https://www.splunk.com/en_us/about-splunk/contact-us.html?locale=en_us
Non US/CAN locations: https://www.splunk.com/en_us/about-splunk/contact-us.html?locale=en_us#customer-support
Hi @Sabahat - apologies for the very late response. I hope this has already been resolved, but if not: this is visible to the sc_admin role; I am not sure about Power User. Thank you.