All Posts



Hello, I am building a Splunk app where I want to have my own custom aggregate function for the stats command. Here is my use case:

| makeresults count=10
| eval event_count=random()%10
| stats mysum("event_count") as total_count

Does anyone know what my Python code should look like, if it is feasible to create a mysum function? Thanks!
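For what it's worth, stats itself does not accept user-defined aggregate functions, so the usual route is a custom reporting command (invoked as, say, `| mysum event_count` rather than inside stats) built with the Splunk Python SDK's splunklib.searchcommands.ReportingCommand. Below is a minimal sketch of just the aggregation logic as a plain function; in a real app this body would live in the command class's reduce() method, and the function and field names here are hypothetical:

```python
# Hypothetical sketch: the core of a custom "mysum" aggregation.
# In a real Splunk app this logic would sit inside a
# splunklib.searchcommands.ReportingCommand subclass's reduce() method.

def mysum_reduce(records, fieldname):
    """Sum the named field across input records, like stats sum()."""
    total = 0.0
    for record in records:
        value = record.get(fieldname)
        if value in (None, ""):
            continue  # skip events that lack the field, as stats does
        total += float(value)
    # A reporting command emits a single aggregated row
    return {"total_count": total}
```

The command class, commands.conf stanza, and app packaging around this are described in the Splunk SDK for Python documentation.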
The search appears to be working a treat. Now just need to understand why, so lots to learn  Thank you very much for your help. Kind regards Chris
Hi Splunkers! The issue I am having is that alerts (triggered when some condition is met) return different results from a manual run of the same query over the same time frame. This is a repeated issue across several search queries using different functions: an alert fires, and when I view the alert's results it shows, for example, 3000 events scanned and 2 results in the statistics section, while manually running the same search shows 3500 events scanned and 0 results. I can't find any solution online, and this issue is causing several of my alerts to fire falsely. Here is an example query that exhibits the issue, in case that is helpful:

index="index" <search> earliest=-8h@h
| stats count(Field) as Counter earliest(Field) as DataOld by FieldA, FieldB
| where DataNew!=DataOld OR isnull(DataOld)
| table Counter, DataOld, FieldA, FieldB

Any help is very appreciated!
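Two things worth checking in that query: DataNew is never assigned by the stats command, so `DataNew!=DataOld` cannot match and rows only pass on `isnull(DataOld)`; and `earliest=-8h@h` resolves relative to the run time, so the scheduled alert and a later manual run scan different 8-hour windows, which would explain the differing event counts. A guess at the intended query follows; `latest(Field) as DataNew` is an assumption about what was meant:

```
index="index" <search> earliest=-8h@h
| stats count(Field) as Counter
        earliest(Field) as DataOld
        latest(Field) as DataNew
        by FieldA FieldB
| where DataNew!=DataOld OR isnull(DataOld)
| table Counter DataOld DataNew FieldA FieldB
```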
Hi, I don't think I explained it properly before; if you can have another look now it would be helpful. I am already receiving the data in JSON format. I am getting cloud Logstash data with sourcetype httpevent, and below is the output I already see in JSON format in Splunk's search results:

@timestamp: 2025T19:31:30.615Z
environment: dev
event: { [+] }
host: { [+] }
input: { [+] }
kubernetes: { [+] }
message: +0000 FM [com.abc.cmp.event.message.base.Abs] DEBUG Receiver not ready, try other receivers or try later audit is disabled

Sometimes the message field contains plain text like the above, and some logs have JSON data in it as well; the other JSON fields shown above also come through. Now I want to know how to turn this into structured data in Splunk so that I can use it. How do I do that?
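As a hedged example: if the raw events are valid JSON, search-time field extraction can often be enabled with a props.conf stanza like the one below on the search head. KV_MODE is a standard props.conf setting; whether it fits here depends on whether the whole raw event is pure JSON:

```
# props.conf (search head): extract JSON fields at search time
[httpevent]
KV_MODE = json
```

For the mixed-content message field, the spath command can parse embedded JSON at search time on a per-search basis.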
I suppose you should try to move those timestamp extractions under each source:: definition; then they should work. The definitions you have put on that new sourcetype definition apply at search time (for those settings that can apply at search time), but the _time settings, for example, work only in the indexing phase.
Thanks @isoutamo. As I understand it: since these definitions are used only at search time, I only need the add-on installed on the search head. On the HEC instance I will put a props.conf with the TIME_PREFIX-related regex, so the time will be extracted from the incoming logs before they are sent to the indexers.
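For reference, a minimal sketch of what such a timestamp stanza might look like in props.conf on the parsing tier; the source path, the TIME_PREFIX regex, and the TIME_FORMAT here are hypothetical placeholders to adjust to the actual log layout:

```
# props.conf on the instance that parses the data (HEC/HF or indexer)
[source::/var/log/myapp/*.log]
TIME_PREFIX = ^\[timestamp=
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```

TIME_PREFIX, TIME_FORMAT, and MAX_TIMESTAMP_LOOKAHEAD are index-time settings, which is why they must live on the parsing tier rather than on the search head.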
Thanks @PickleRick. I have very little visibility and there are access issues on the source side. While I understand it would be easiest to do this on the client side, I am trying to understand what possibilities I have on my HF, over which I have full control.
Hi @gcusello, I understand your points. As a Splunk SME I have created the rules, and the urgency values have been set in the correlation search. But do we really need Asset/Identity management? Asset/Identity Management is taken care of by different teams. I have enabled the use cases and they trigger alerts; the only thing is that we are unable to see the Urgency field values. What is the best practice to view the urgency field? Thank you so much for responding to my queries.
Hi @Sankar, Urgency depends on Severity (from the Correlation Search) and Priority (from the Asset/Identity framework). Did you define Priority in Asset and Identity Management? Ciao. Giuseppe
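As an illustration of how that combination works: Enterprise Security derives urgency from a lookup that crosses the asset/identity priority with the event severity. A rough sketch of that kind of matrix in Python follows; the exact mapping in a given environment is defined by the urgency lookup in ES, so treat these values as hypothetical:

```python
# Hypothetical ES-style urgency matrix:
# urgency is looked up from (asset/identity priority, event severity).
URGENCY = {
    ("low", "low"): "low",
    ("low", "medium"): "low",
    ("low", "high"): "medium",
    ("medium", "low"): "medium",
    ("medium", "medium"): "medium",
    ("medium", "high"): "high",
    ("high", "low"): "medium",
    ("high", "medium"): "high",
    ("high", "high"): "critical",
}

def urgency(priority, severity, default="medium"):
    """Look up urgency; fall back to a default when the pair is unknown."""
    return URGENCY.get((priority, severity), default)
```

This also shows why urgency can come out empty or defaulted when no priority has been assigned to the asset or identity involved.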
Hmm... And if you run your whole script with `splunk cmd`?
Hi @gcusello, the first query works, but I don't see any severity in the urgency field; the urgency field is empty for all alerts, even though in the rule we set the severity value under Adaptive Response Actions --> Notable --> Severity (e.g. High, Medium, Low, Informational). We have 40+ indexes, so for each alert I want Search Name, Index, Urgency, and count. I hope you can share the right info.
Yes, I am getting output for both commands below:

sudo /usr/bin/crictl ps -a
splunk cmd sudo /usr/bin/crictl ps -a
OK. So the first steps to debug such an issue would be to:

1) Run (as the splunk user): sudo /usr/bin/crictl - I'm assuming you already checked that.

2) Run (again, as the splunk user): splunk cmd sudo /usr/bin/crictl

(The actual parameters for crictl are not important here; we just want to see whether the command is spawned properly at all.)

Having said that, I'm not a big fan of escalating privileges that way from Splunk. As I understand it, this is a scripted input. I'd rather have the script spawned by cron, capture its output to a file, and then ingest that file with a normal monitor input.
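A sketch of that cron-plus-monitor approach, with hypothetical paths and schedule to adapt to your environment:

```
# /etc/cron.d/crictl-snapshot - run every 5 minutes as root,
# so the script itself no longer needs sudo
*/5 * * * * root /usr/local/bin/crictl_snapshot.sh >> /var/log/crictl/containers.log 2>&1
```

```
# inputs.conf on the forwarder: ingest the captured output
[monitor:///var/log/crictl/containers.log]
sourcetype = crictl:ps
index = containers
```

This keeps privilege escalation entirely outside Splunk and turns the problem into an ordinary file-monitoring input.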
Yes, my script works with sudo only; below is the script for reference:

#!/bin/bash
# Set the working directory to the script's directory
cd "$(dirname "$0")" || exit 1

# Full path for crictl; adjust if necessary
CRICTL_PATH="/usr/bin/crictl"

# Get the container list (requires sudo)
container_list=$(sudo "$CRICTL_PATH" ps -a)

# Print the list for Splunk, normalising the "POD ID" header to "POD_ID"
echo "$container_list" | sed '1s/POD ID/POD_ID/'

# Walk the remaining lines without clobbering the global IFS
# (the id/name variables are extracted but not yet used further)
while IFS= read -r container_info; do
    container_id=$(echo "$container_info" | awk '{print $1}')
    container_name=$(echo "$container_info" | awk '{print $4}')
done < <(echo "$container_list" | tail -n +2)

##############
# cat /etc/sudoers.d/splunk
splunk ALL=(ALL) NOPASSWD: /usr/bin/crictl,/usr/bin/podman
OK. Check shcluster-status. Check splunkd.log on those instances (and mongodb.log). If the state of the SHC is not in sync... that means something is off with replication or the overall cluster health.
Hi @AShwin1119, it's really strange; probably there was some replication issue. Did you check the status of replication in the Cluster Manager console? Did one or more of the indexers stop at some point? Do you have a multisite or a single-site indexer cluster? If it's all OK, open a case with Splunk Support. Ciao. Giuseppe
There is no such thing as "practice exams" in the sense of real exam questions. Everyone attempting an exam signs an NDA, so even if someone does leak information about the exam despite this, there's no guarantee that the questions are accurate, that the answers are really as they were on the exam, and so on; not to mention, of course, the legality of such a thing. So the official way to certification is by completing the certification track (https://www.splunk.com/en_us/training/certification-track/splunk-certified-cybersecurity-defense-analyst.html), where you have a PDF listing all the recommended courses, which should cover the material needed for the exam. There are of course third-party trainings on this and similar topics, but since they are not officially aligned with Splunk, there's no guarantee about their content or their adequacy for a particular exam.
OK, whether that's ugly or not is a matter of personal taste, of course, but be aware that it's a very unintuitive way to handle data, and someone tasked with maintaining this later might have a hard time understanding it.
While there might be a solution using props/transforms (most probably not with just ingest actions), it seems this could be better done at an earlier layer: configure such a split in your syslog receiver and adjust the metadata when sending to HEC or when writing to files for pickup by your HF.
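As one illustration of that receiver-side split, here is a minimal rsyslog sketch that writes each sending host's traffic to its own file for HF pickup; the directory layout, template name, and listening port are hypothetical:

```
# rsyslog: write each sending host to its own file, so the HF
# can assign metadata (index, sourcetype) per directory or file
template(name="perHostFile" type="string"
         string="/var/log/remote/%HOSTNAME%/syslog.log")

ruleset(name="fromRemote") {
    action(type="omfile" dynaFile="perHostFile")
}

# Listen on UDP 514 and route incoming messages into the ruleset
module(load="imudp")
input(type="imudp" port="514" ruleset="fromRemote")
```

The HF then gets one monitor stanza per directory, each with its own metadata, instead of untangling mixed streams with transforms.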
I am taking the SPLK-5001 Cybersecurity Defense Analyst exam. Where can I find useful and accurate practice exams to prepare? I find that some of those available online are AI-generated, unrealistic, or too hard or too easy. Any general study tips would also be very helpful.