All Posts

Hi @gcusello  I understand your points. As a Splunk SME I have created the rules, and the urgency values have been set in the correlation searches. But do we really need Asset/Identity management? Asset/Identity Management is handled by different teams. I have enabled the use cases and they have triggered alerts too; the only issue is that we are unable to see the Urgency field values. What is the best practice for viewing the urgency field? Thank you so much for responding to my queries.
Hi @Sankar, Urgency depends on Severity (from the Correlation Search) and Priority (from the Asset/Identity framework). Did you define Priority in Asset and Identity Management? Ciao. Giuseppe
Hmm... And if you run your whole script with `splunk cmd`?
Hi @gcusello  The first query is working, but I don't see any severity in the urgency field; the urgency field is empty for all alerts, even though in the rule we set the severity value under Adaptive Response Actions --> Notable --> Severity (e.g. High, Medium, Low, Informational). We have 40+ indexes, so for each alert I want Search Name, Index, Urgency, and count. Hope you can share the right info.
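(For reference, a report like that could be sketched roughly as follows - this assumes Enterprise Security's notable index and its usual search_name/urgency field names, which may need adjusting for your environment:)

index=notable
| stats count by search_name, urgency

Whether the originating index can be added as a split-by field depends on whether your notable events carry such a field; that varies with how the correlation searches are built.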
Yes, I am getting output for both commands below:
sudo /usr/bin/crictl ps -a
splunk cmd sudo /usr/bin/crictl ps -a
OK. So the first steps to debug such an issue would be to:
1) Run (as the splunk user) sudo /usr/bin/crictl - I'm assuming you already checked that.
2) Run (again, as the splunk user) splunk cmd sudo /usr/bin/crictl
(The actual parameters for crictl are not important here; we just want to see whether the command gets spawned properly at all.)
Having said that - I'm not a big fan of escalating privileges that way from Splunk. As I understand it, this is a scripted input. I'd rather have the script spawned by cron, capture its output to a file, and then ingest that file with a normal monitor input - see the sketch below.
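(A rough sketch of that cron + monitor approach - the script path, schedule, output file, index and sourcetype below are placeholders, not anything from your environment:)

# crontab entry (for a user allowed to run crictl), refreshing the output every 5 minutes
*/5 * * * * /opt/scripts/crictl_inventory.sh > /var/log/crictl_inventory.out 2>/dev/null

# inputs.conf on the forwarder, picking up that file
[monitor:///var/log/crictl_inventory.out]
index = containers
sourcetype = crictl:ps
disabled = false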
Yes, my script works with sudo only. Below is the script for reference:

#!/bin/bash
# Set the working directory to the script's directory
cd "$(dirname "$0")" || exit 1

# Full paths for commands
CRICTL_PATH="/usr/bin/crictl"  # Adjust the path if necessary

# Get container list
container_list=$(sudo "$CRICTL_PATH" ps -a)
echo "$container_list" | sed '1s/POD ID/POD_ID/g'

IFS=$'\n'
for container_info in $(echo "$container_list" | tail -n +2); do
  container_id=$(echo "$container_info" | awk '{print $1}')
  container_name=$(echo "$container_info" | awk '{print $4}')
done

##############
cat /etc/sudoers.d/splunk
splunk ALL=(ALL) NOPASSWD: /usr/bin/crictl,/usr/bin/podman
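(For context, a scripted input like this is usually wired up with an inputs.conf stanza along these lines - the app name, script name and interval here are placeholder assumptions, not taken from your setup:)

[script://$SPLUNK_HOME/etc/apps/my_app/bin/crictl_inventory.sh]
interval = 300
index = containers
sourcetype = crictl:ps
disabled = false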
OK. Check shcluster-status. Check splunkd.log on those instances (and mongodb.log). If the state of the SHC is not in sync... that means something is off with replication or the overall cluster health.
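(The usual places to look for those checks - paths assume a default installation, and the KV store log file is typically named mongod.log:)

splunk show shcluster-status
tail -n 200 $SPLUNK_HOME/var/log/splunk/splunkd.log
tail -n 200 $SPLUNK_HOME/var/log/splunk/mongod.log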
Hi @AShwin1119, it's really strange; there was probably some replication issue. Did you check the replication status in the Cluster Manager console? Did one or more of the indexers stop at some point? Do you have a multisite or a single-site indexer cluster? If it all looks OK, open a case with Splunk Support. Ciao. Giuseppe
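(On the Cluster Manager you can also check this quickly from the CLI, for example:)

splunk show cluster-status --verbose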
There is no such thing as "practice exams" in the sense of real exam questions. Everyone attempting an exam signs an NDA, so even if someone does leak information about the exam despite this, there's no guarantee that the questions are accurate, that the answers are really as they were on the exam, and so on - not to mention the legality of such a thing. So the official way to certification is by completing the certification track - https://www.splunk.com/en_us/training/certification-track/splunk-certified-cybersecurity-defense-analyst.html - there you have a PDF which lists all the recommended courses, which should cover the material needed for the exam. There are of course third-party trainings on this and similar topics, but since they are not officially aligned with Splunk, there's no guarantee about their contents or their adequacy for a particular exam.
OK, whether that's ugly or not is a matter of personal taste of course, but be aware that it's a very unintuitive way to handle data, and someone tasked with maintaining this later might have a hard time understanding it.
While there might be a solution using props/transforms (most probably not with just ingest actions), it seems this would be better done at an earlier layer - configure such a split in your syslog receiver and adjust the metadata when sending to HEC or when writing to files for pickup by your HF.
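(If you do end up trying the props/transforms route on the HF, it would look roughly like this - the sourcetype, regex and destination index are purely hypothetical placeholders:)

# props.conf
[my:syslog:sourcetype]
TRANSFORMS-route_subset = route_subset_to_other_index

# transforms.conf
[route_subset_to_other_index]
REGEX = some_distinguishing_pattern
DEST_KEY = _MetaData:Index
FORMAT = other_index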
I am taking the SPLK-5001 Cybersecurity Defense Analyst exam. Where can I find useful and accurate practice exams to prepare? I find that some of those available online are AI-generated, not realistic, too hard, or too easy. Any general study tips would be very helpful.
And did you check what sudo told you? Does your sudo work at all?
My use case requires strict relationships.

| inputlookup append=t mylookup
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| rename info_* AS *
| where _time >= min_time AND _time <= max_time

This works for my use case, if a bit clunky. Thanks all.
It's a bit of a philosophical issue. Firstly, there are many things that can be done on HFs. Some people run modular inputs on them, some have them just for receiving HEC, others use them as a "parsing layer" before sending data to the indexers. So there are several different use cases.

As a rule of thumb it's best to use the DS to distribute apps to forwarders, regardless of what kind of forwarders they are. There are some caveats, though. Most importantly, many modular inputs require interactive configuration from the web UI, and those can create configuration items which might:
1) Hold sensitive data like authorization info for external services
2) Be encrypted in a way that is not transferable between forwarders
So you might end up in a situation where you don't want to distribute a particular app and its settings centrally.

As for me, assuming you can do it (because either the above points do not apply or are of no concern), I'd deploy an app like that on a testing rig, create the configuration for the given input, capture the resulting conf files, and add them to an app pushed from the DS to the production environment - roughly as sketched below.
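(A minimal serverclass.conf sketch for that last step on the deployment server - the server class, hostname pattern and app name here are hypothetical:)

[serverClass:heavy_forwarders]
whitelist.0 = hf-*.example.com

[serverClass:heavy_forwarders:app:my_captured_input_app]
stateOnClient = enabled
restartSplunkd = true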
@pipehitter- Please accept my answer if it helped you with your question by clicking on "Accept as Solution".
Your time picker will not work. The time picker is responsible for setting the earliest/latest parameters for the search. Those parameters only affect fetching events from indexes at the beginning of the search pipeline, when the events are generated with search or tstats (maybe there's another command they affect, but I can't think of one right now). They don't "filter" the events anywhere after that. Most importantly, if you're doing inputlookup or rest, the time picker will not affect your search results in any way. And you can't do anything about it (maybe except some very, very ugly bending over backwards with addinfo and filtering with where, but that's not something any sane person would do).
Yes, packaging your content into an app is good practice, but it shouldn't matter much whether it's in apps/<app>/local or system/local for actually running the config (unless the settings get overwritten, of course). And no, the timestamp doesn't have to be in US format - that's what the time-parsing sourcetype settings are for. But back to @joewetzel63's issue - did you try running the script "as Splunk"? With splunk cmd /opt/splunkforwarder/bin/scripts/whatever.sh
Yes. But are those the results of some searches that you want to "merge", or do you simply have two different sourcetypes from which different sets of fields are extracted? If it's the latter, your solution should be relatively simple:

<some restriction on index(es)> sourcetype IN (sourcetype1, sourcetype2)
| stats values(colA) as colA values(colB) as colB values(col1) as col1 values(col2) as col2 [...] by common_column

If you want all columns, you might simply go with values(*) as *
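(For example, the all-columns variant would look something like this - the index and sourcetype names are placeholders:)

index=main sourcetype IN (sourcetype1, sourcetype2)
| stats values(*) as * by common_column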