All Posts

I suppose you should try to move those timestamp extractions under each source:: definition. Then they should work. The definitions you have put on that new sourcetype stanza only take effect at search time, if they are settings that can be applied at search time. But, for example, those _time settings work only in the indexing phase.
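As an illustration (not from the original posts), a minimal props.conf sketch of that idea; the source path, time format and lookahead are placeholders for your own data:

[source::/var/log/myapp/*.log]
TIME_PREFIX = ^\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25

These are index-time settings, so they need to live on the first "heavy" component that parses the data (HF or indexer), not on the search head.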
Thanks @isoutamo. As I understand it, since these definitions are used only at search time, I only need the add-on installed on the search head. On the HEC layer I will put a props.conf with the TIME_PREFIX regex, so the timestamp is extracted from the incoming logs before they are sent to the indexers.
Thanks @PickleRick. I have very little visibility and limited access on the source side. While I understand it is easiest to do this on the client side, I am trying to understand what possibilities I have on my HF, over which I have full control.
Hi @gcusello, I understand your points. As a Splunk SME I have created the rules, and the urgency values have been set in the correlation searches. But do we really need Asset/Identity management? Asset/Identity management is handled by a different team. I have enabled the use cases and they have triggered alerts; the only issue is that we are unable to see values in the Urgency field. What is the best practice for viewing the Urgency field? Thank you so much for responding to my queries.
Hi @Sankar, Urgency depends on Severity (from the Correlation Search) and Priority (from Asset and Identity Management). Did you define Priority in Asset and Identity Management? Ciao. Giuseppe
Hmm... And if you run your whole script with `splunk cmd`?
Hi @gcusello, the first query is working, but I don't see any severity in the urgency field; the urgency field is empty for all alerts. In the rule, under Adaptive Response Actions --> Notable --> Severity, we do set a value (e.g. High, Medium, Low, Informational). We have 40+ indexes, so for each alert I want to see Search Name, Index, Urgency, and count. I hope you can share the right information.
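If it helps, a rough SPL sketch for that summary; this assumes an ES search head where the `notable` macro enriches notable events with urgency, and the field carrying the original index may be named differently in your environment, so treat orig_index as a placeholder:

`notable`
| stats count by search_name, urgency, orig_index
| rename search_name AS "Search Name", orig_index AS "Index", urgency AS "Urgency", count AS "Count"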
Yes, I am getting output for the below commands:
sudo /usr/bin/crictl ps -a
splunk cmd sudo /usr/bin/crictl ps -a
OK. So the first steps to debug such an issue would be to:
1) Run (as the splunk user): sudo /usr/bin/crictl - I'm assuming you already checked that.
2) Run (again, as the splunk user): splunk cmd sudo /usr/bin/crictl (the actual parameters for crictl are not important here; we just want to see whether the command is spawned properly at all).
Having said that, I'm not a big fan of escalating privileges that way from Splunk. As I understand it, this is a scripted input. I'd rather have the script spawned by cron, capture its output to a file, and then ingest that file with a normal monitor input; a sketch of that setup is below.
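A rough sketch of that alternative, assuming the script writes its output to /var/log/crictl/containers.log (the schedule, paths, index and sourcetype are placeholders):

crontab entry:
*/5 * * * * /opt/scripts/crictl_containers.sh > /var/log/crictl/containers.log 2>&1

inputs.conf on the forwarder:
[monitor:///var/log/crictl/containers.log]
index = containers
sourcetype = crictl:ps
disabled = false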
Yes, my script is working with sudo only. Below is the script for reference:

#!/bin/bash
# Set the working directory to the script's directory
cd "$(dirname "$0")" || exit 1

# Full paths for commands
CRICTL_PATH="/usr/bin/crictl"  # Adjust the path if necessary

# Get container list
container_list=$(sudo "$CRICTL_PATH" ps -a)
echo "$container_list" | sed '1s/POD ID/POD_ID/g'

IFS=$'\n'
for container_info in $(echo "$container_list" | tail -n +2); do
    container_id=$(echo "$container_info" | awk '{print $1}')
    container_name=$(echo "$container_info" | awk '{print $4}')
done

##############
cat /etc/sudoers.d/splunk
splunk ALL=(ALL) NOPASSWD: /usr/bin/crictl,/usr/bin/podman
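For reference, the scripted input stanza wiring this up would look roughly like the following; the script location, interval, index and sourcetype here are assumptions on my part, not taken from the post:

[script://./bin/crictl_containers.sh]
interval = 300
index = containers
sourcetype = crictl:ps
disabled = false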
OK. Check shcluster-status. Check splunkd.log on those instances (and mongodb.log). If the state of the SHC is not in sync... that means something is off with replication or the overall cluster health.
Hi @AShwin1119, it's really strange; there was probably a replication issue. Did you check the replication status in the Cluster Manager console? Did one or more of the indexers stop at some point? Do you have a multisite or a single-site indexer cluster? If everything looks OK, open a case with Splunk Support. Ciao. Giuseppe
There is no such thing as "practice exams" in the sense of real exam questions. Everyone attempting an exam signs an NDA, so even if someone leaks information about the exam despite this, there is no guarantee that the questions are accurate, that the answers are really as they were on the exam, and so on. Not to mention, of course, the legality of such a thing. The official path to certification is the certification track - https://www.splunk.com/en_us/training/certification-track/splunk-certified-cybersecurity-defense-analyst.html - where you will find a PDF listing all the recommended courses, which should cover the material needed for the exam. There are of course third-party trainings on this and similar topics, but since they are not officially aligned with Splunk there is no guarantee about their content or their adequacy for a particular exam.
OK, whether that's ugly or not is a matter of personal taste, of course, but be aware that it's a very unintuitive way to handle data, and someone tasked with maintaining this later might have a hard time understanding it.
While there might be a solution using props/transforms (most probably not with ingest actions alone), it seems this would be better done at an earlier layer: configure the split in your syslog receiver and adjust the metadata when sending to HEC or writing to files for pickup by your HF.
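If you do end up trying it on the HF anyway, a rough props/transforms sketch for overriding the destination index based on event content; the sourcetype, regex and index name are placeholders, not from the thread:

props.conf:
[my_syslog_sourcetype]
TRANSFORMS-route_index = route_firewall_events

transforms.conf:
[route_firewall_events]
REGEX = %ASA-\d
DEST_KEY = _MetaData:Index
FORMAT = netfw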
I am taking the SPLK-5001 Cybersecurity Defense Analyst exam. Where can I find useful and accurate practice exams to prepare? I find that some of those available online are AI-generated, unrealistic, or too hard or too easy. Any general study tips would also be very helpful.
And did you check what sudo told you? Does your sudo work at all?
My use case requires strict relationships.

| inputlookup append=t mylookup
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| rename info_* AS *
| where _time >= min_time AND _time <= max_time

This works for my use case, if a bit clunky. Thanks all.
It's a bit of a philosophical issue. Firstly, many different things can be done on HFs. Some people run modular inputs on them, some have them just for receiving HEC, others use them as a "parsing layer" before sending data to the indexers. So there are several different use cases.

As a rule of thumb, it's best to use the DS to distribute apps to forwarders, regardless of what kind of forwarders they are. There are some caveats, though. Most importantly, many modular inputs require interactive configuration from the web UI, and those can create configuration items which might:
1) Hold sensitive data, like authorization info for external services
2) Be encrypted in a way that is not transferable between forwarders
So you might end up in a situation where you do not want to distribute a particular app and its settings centrally.

As for me, assuming you can do that (because the above points either do not apply or are of no concern), I'd deploy such an app on a testing rig, create the configuration for a given input, capture the resulting conf files, and add them to an app pushed from the DS to the production environment; a rough sketch of the DS side is below.
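Purely as an illustration of the deployment server side, a minimal serverclass.conf sketch; the server class name, host pattern and app name are hypothetical:

[serverClass:heavy_forwarders]
whitelist.0 = hf-*.example.com

[serverClass:heavy_forwarders:app:my_modular_input_app]
restartSplunkd = true
stateOnClient = enabled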
@pipehitter- Please accept my answer if it helped you with your question by clicking on "Accept as Solution".