All Posts

Use the strftime() function
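For the _time-in-email question in this thread, a minimal sketch of the idea — evaluate _time into the string format you want before the table is built (the column names after `table` are assumptions):

```
... | eval _time=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N") | table _time, host, message
```

Because the eval happens in the search, the email's inline table renders the already-formatted string instead of applying its own default time format.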
Not sure where the missing + is coming from, but this shows you had two = signs in your search.
Hi, I am very new to this environment and I am having trouble logging in, as I have forgotten the password and admin details. Is there any way I can reset it?    Thanks
Not sure where MQ events come into it. When you tried with sort and head 1,  what did you get?
I see, there was a missing "+", but I got the same empty results.
As I said, you have two = signs in your regex; your event only has one.
Try adding supports_rawargs = true. Other than that, do you have any documentation for the mitrepurplelab custom command that would indicate what values should be there?
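For reference, a sketch of what the commands.conf stanza shown elsewhere in this thread might look like with that setting added (verify attribute names against the commands.conf spec for your Splunk version):

```
[mitrepurplelab]
chunked = true
python.version = python3
filename = mitrepurplelab.py
supports_rawargs = true
```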
I tried this as well earlier; maybe the issue is with the MQ events.
What is the difference between my query above and the query you provided? I did the same thing, right?
I've found a workaround in the meantime. Since I know what I'm getting, I clean up the arguments before loading them into my Python script.
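For illustration, one way that cleanup could look in Python, assuming arguments arrive as in the log output shown in this thread (`clean_args` and the marker set are hypothetical helpers, not part of any Splunk SDK):

```python
# Hypothetical cleanup for a Splunk custom command's argv.
# Assumes arguments arrive like the log shows:
#   ['.../mitrepurplelab.py', '__GETINFO__', '"T1059.003"']

def clean_args(argv):
    """Drop the script path and protocol markers, strip stray quotes."""
    markers = {"__GETINFO__", "__EXECUTE__"}
    cleaned = []
    for arg in argv[1:]:        # argv[0] is the script path itself
        if arg in markers:      # protocol markers are not real arguments
            continue
        cleaned.append(arg.strip("\"'"))
    return cleaned
```

With the logged argv, `clean_args` would leave just the technique id, e.g. `['T1059.003']`.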
Sending an email as an action for an alert includes the result as a table. The _time field is one of the columns in this table and shows a format like "DDD MMM 24hh:mm:ss YYYY". Opening the alert in Search shows a different format: "YYYY-MM-DD 24hh:mm:ss.sss". Is there a way to format the _time field in the email's inline table?
When I do that I have this again:

2024-02-09 14:39:40,578 - DEBUG - Arguments reçus: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py']
2024-02-09 14:39:40,578 - ERROR - Usage incorrect: python script.py <technique_id>

[mitrepurplelab]
chunked = true
python.version = python3
filename = mitrepurplelab.py
You may need to go back to basics to get your time buckets right. Start with something like this:

index="cloudflare"
| bin _time span=10m
| stats count by _time ClientRequestHost ClientRequestPath
| eventstats sum(count) as total by _time ClientRequestHost
| eval percent = round(count / total * 100, 2)
| rename count as "Events", ClientRequestPath as "Path", percent as "%"
Try removing the extra stuff you put in and set chunked to true again
I tested

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab "$technique_id$"</query>

and I got:

2024-02-09 14:24:52,100 - DEBUG - Arguments reçus: ['/opt/splunk/etc/apps/Ta-Purplelab/bin/mitrepurplelab.py', '__GETINFO__', '"T1059.003"']
2024-02-09 14:24:52,100 - ERROR - Usage incorrect: python script.py <technique_id>

This time the technique is retrieved correctly, but I guess the syntax is not what the script expects.
Your event data has only one = after request id, not two as you have used in your rex. Try this:

sourcetype="my_source" [search sourcetype="my_source" "failed request, request id=" | rex "failed request, request id=(?<request_id>[\w-]+)" | top limit=100 request_id | fields request_id]
There is nothing in this search that ensures you only have one event - you could have two events with exactly the same _time value - try something like this | sort 0 -_time | head 1
What did you get in the mitrepurplelab.log when you tried

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab "$technique_id$"</query>

and

<query>| makeresults | eval technique_id="$technique_id$" | where isnotnull(technique_id) | mitrepurplelab technique_id</query>
This is how a sample event looks:

2024:02:09:13:47:07.078 ERROR boundedElastic-6362 c.v.v.h.UserErrorHandler -24jan-rre2655-5b684rfb9b-jcfd4 failed request, request id=0a1-0b2-0a3, error: ValidationException{message=\'Status: 915 User not part of registery. at least one seller did not have the same user id as the initial OT1 inquiry user, OT1 user=[user id=985238, seller id=134550], all merchants=[(user id=10, seller id=20), (user id=10, seller id=20), (user id=10, seller id=20), (user id=10, seller id=20)]\'}

Now my task is to extract that request id and then find the events for it.
By "Search head" you mean something like an all-in-one Splunk install, right? Indexer, SH, etc... all in a single box? I think a quick outline of how this should work might help The HF with DB Con... See more...
By "Search head" you mean something like an all-in-one Splunk install, right? Indexer, SH, etc... all in a single box? I think a quick outline of how this should work might help The HF with DB Connect will have an outputs.conf set up that points to your Indexer/SH The Indexer/SH will have receiving turned on So when your HF "collects" data using DB Connect, it puts that data into an index that's on the indexer. So if I had to make a primary guess, it's that you didn't hook the HF up to the SH/IDX to send it's data there, so what it's doing is saving the data into an index _on the local HF_ instead of on your primary SH/Indexer. Some docs to read - I promise they're pretty good too!  But double check you've done all the steps in both - https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Enableareceiver https://docs.splunk.com/Documentation/Splunk/9.2.0/Forwarding/Deployaheavyforwarder If that doesn't work, let us know what you did and what it's now doing.  Also you could check DB Connect's various "monitoring" pages to see what they say about what's happening.  Maybe you just have some other error or forgot to schedule the DB input or something.