All Posts

Hi @Ram2  Same questions as @PickleRick: it looks like you are reading app logs from Linux through a Splunk UF (or HF).
1) Please confirm: are you using a UF or an HF (or reading directly at the indexer)?
2) Are you ingesting these logs through a Splunk TA / App / Add-on?
Closing this post to move it from unanswered to answered, thanks.
Hi @FAnalyst
>>> now I want to open .spl file to look into these Use Cases
Yes, as mentioned in the two replies above, you can untar the file (.spl is a tar file), look into it, check the contents, try to understand the searches, etc.
>>> but do not want to upload the file as an app
Right, you do not need to upload it to Splunk. If you do want to edit a use case, you can also tar it back up as a .spl file and upload it to Splunk afterwards; that is possible too. Just be careful while editing (please preserve the format) and all should be good, thanks.
Best Regards
Sekar
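For reference, a minimal sketch of the extraction step on Linux, assuming the package is called usecases.spl (the filename and directory are placeholders):

    # .spl packages are gzipped tarballs, so tar can extract them directly
    mkdir extracted_app
    tar -xzf usecases.spl -C extracted_app
    # the correlation searches are typically in default/savedsearches.conf inside the app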
1. How are you ingesting your events?
2. Where (on which component) did you put these settings?
(And all of this may be why the ES team included triage and resolution metrics but excluded detection.)
Hi @vikas_gopal,

The previous response provides searches to calculate time differences between known notable time values. Original event time values may not be available. For example, the Expected Host Not Reporting rule uses the metadata command to identify hosts with a lastTime value between 2 and 30 days ago. The lastTime field is stored in the notable, and we can use it to calculate time-to-detect by subtracting the lastTime value from the _time value.

An example closer to your description, the Excessive Failed Logins rule, does not store the original event time(s). We could evaluate the notable action definition for the rule to find and execute a drill-down search, which would in turn give us one or more _time values, but as with the rules themselves, success depends on the implementation of the action and drill-down search.

When developing rules, understanding event lag is usually a prerequisite. We typically calculate lag by subtracting event _time from event _indextime. The lag value is used as a lookback in rule definitions. For example, a 90th percentile lag of 5 minutes may suggest a lookback of 5 minutes. A rule scheduled to search the last 20 minutes of events would then search between the last 25 and the last 5 minutes. Your mean time-to-detect should be approximately equal to your mean lag time plus rule queuing and execution time.

You'll need to adjust your lookback threshold relative to your tolerance for missed detections (false negatives), but this is generally how I would approach the problem. As an alternative, you could enforce design constraints within your rules and require all notables to include original event _time values.
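As a rough illustration of the lag calculation described above, a hedged SPL sketch (the index, sourcetype, and time range are placeholders, and the percentile choice is only an example):

    index=your_index sourcetype=your_sourcetype earliest=-24h
    | eval lag_seconds = _indextime - _time
    | stats perc90(lag_seconds) AS lag_p90 max(lag_seconds) AS lag_max BY sourcetype

A lag_p90 of roughly 300 seconds would then suggest the 5-minute lookback mentioned above.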
Ah, so you don't want to move events but copy them. You can't easily do that. You could duplicate events using CLONE_SOURCETYPE, but that works per sourcetype, not per destination index. So depending on your use case you could either try to duplicate events before ingesting them into Splunk, or batch-copy them post-indexing using the collect command in a scheduled search. You are aware that those events will consume your license twice?
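If the batch-copy route fits, a minimal sketch of the scheduled search, assuming hypothetical index names and a 15-minute schedule:

    index=source_index earliest=-15m@m latest=@m
    | collect index=copy_index
    (results are written to copy_index with sourcetype "stash" by default)

The CLONE_SOURCETYPE alternative is configured in props.conf/transforms.conf on the parsing tier rather than as a search.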
Hi @PickleRick
Firstly, what do you mean by move? — We want the logs to be in both EM and EP Splunk.
Secondly, why don't you just send the data to the right index in the first place? — We don’t want to create 4 indexes; we want to reroute to 1 index only.
Firstly, what do you mean by move? Secondly, why don't you just send the data to the right index in the first place?
Sample logs:
Mon Sep 04 13:23:40 2024 -- (eMonitoringLATN_install.sh) Your current directory is [/d/onedemand/etc]
Mon Sep 05 12:21:30 2024 -- (eMonitoringLATN_install.sh) Final Destination reached logs.
Mon Sep 06 12:21:30 2024 -- (eMonitoringLATN_install.sh) logs ingestion started.

We tried the props below for the above sample logs, but line breaking is not happening correctly. And please let us know how to specify the time format as well.
SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)\w+\s\w+\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4}
TIME_PREFIX=^
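For what it's worth, a hedged sketch of the kind of props.conf stanza that would match the sample lines above (the sourcetype name is a placeholder and the regex is only a guess based on the three sample events):

    [your_sourcetype]
    # parsing settings like these need to be on the indexer or heavy forwarder, not the UF
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\w{3}\s\w{3}\s\d{2}\s\d{2}:\d{2}:\d{2}\s\d{4})
    TIME_PREFIX = ^
    TIME_FORMAT = %a %b %d %H:%M:%S %Y
    MAX_TIMESTAMP_LOOKAHEAD = 25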
Hi guys, I wrote the correlation search query and added the Adaptive Response Actions (notable, risk analysis and send to SOAR), but when the event goes to SOAR there's no event_id in the artifacts.
SPL Query:
[ | tstats summariesonly=true values("Authentication.user") as "user", values("Authentication.dest") as "dest", count("Authentication.dest") as "dest_count", count("Authentication.user") as "user_count", count from datamodel="Authentication"."Authentication" where nodename="Authentication.Failed_Authentication" by "Authentication.src","Authentication.app"
| rename "Authentication.src" as "src", "Authentication.app" as "app"
| where 'count'>5 ]
When you're trying to install apps from Splunkbase, it asks for your splunk.com user so that Splunk can pull the app from there. It has nothing to do with your local users.
I just installed Splunk and the password/username works fine for logging in, but when I try installing add-ons it tells me "Incorrect username or password". Is there a second password??? [solved]

Thanks to TheBravoSierra (Path Finder, 09-17-2021 07:25 AM):
Possible Fix: Go to splunk.com or Splunkbase and log in, view your profile, and then you can see your username. It is probable that the username is different than you remember. This was the case for me.

Note: I couldn't figure out how to delete the comment, so I modified it; hope it's helpful for someone.
Hi guys. I'd like to know if there is a way to schedule an input script to run multiple times even if the previous run has not yet exited. Let me explain: I have some scripts which need to wait some time before exiting with an output and/or an exit code. At the same time, I need to rerun the same script even if the previous run is still running in the background. Splunk can't do this, since it monitors the launched script and waits for an exit code before running the next one. Example:

[script://./bin/my_script.sh]
index=blablabla
sourcetype=blablabla
source=blablabla
interval=60
...

Let's say "my_script.sh" contains something simple (it's only an example):

#!/bin/bash
date
sleep 90

Now, with all the methods I have tried, including running it as [script://./bin/my_script.sh &], or with a launcher.sh which detaches a child process via "bash -c 'my_script.sh &' &" or "exec my_script.sh &", the "sleep 90" prevents splunkd from rerunning the script every 60 seconds, since it takes 90s for the previous script's sleep to finish. So in my indexed data I only get a data point every 2 minutes because of the 90s sleep:
10:00 splunkd launches "my_script.sh" and waits for its exit code to index the data
10:01 splunkd tries to launch a new "my_script.sh", but skips it since the previous "sleep 90" is still running
10:02 splunkd indexes the previous 10:00 data and reschedules a new "my_script.sh"
10:03 as 10:01
10:04 as 10:02
... and so on.
Is there a way to force a re-run even if a previous script PID is still running, and get data for
10:00 output from "my_script.sh"
10:01 output from "my_script.sh"
10:02 output from "my_script.sh"
10:03 output from "my_script.sh"
...?
Thanks.
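One way this is commonly worked around, sketched below under the assumption that the long-running work can write to a file instead of stdout (all paths and names are placeholders): have the scripted input launch a launcher that fully detaches the real script and exits immediately, and pick the output up with a file monitor instead of the scripted input's stdout.

    #!/bin/bash
    # launcher.sh - a minimal sketch; my_script.sh and the paths are hypothetical
    OUT=/var/log/my_script/my_script.out
    # Fully detach the long-running job and redirect its stdout away from the
    # scripted-input pipe, so this launcher exits at once and splunkd can
    # schedule the next run on time.
    setsid nohup /opt/scripts/my_script.sh >> "$OUT" 2>&1 < /dev/null &
    exit 0

The detached script's output would then be collected with a [monitor:///var/log/my_script/my_script.out] stanza rather than through the scripted input itself.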
Yes, I would understand it this way: the search will be run, but the alert will not be triggered for the throttle period. So side effects of the search (outputlookup, sendemail, whatever) will be run on each search run.
Just about every Splunk query contains a pipe, so I question the accuracy of the post you quoted. My theory (and I can't find any documentation to support it) is that the collect command does not return the field being throttled, so the alert cannot be throttled. The collect command is used to write results to an index and is used to populate summaries more than it is to trigger alerts. I suggest having two searches - one to populate testindex and the other to read testindex and trigger the alert, as sketched below.
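A rough sketch of that two-search split, with hypothetical source data, field names, and threshold:

    Search 1 (scheduled, populates the summary index):
    index=main sourcetype=your_data
    | stats count BY host
    | collect index=testindex

    Search 2 (scheduled alert, reads the summary and can be throttled on host):
    index=testindex
    | stats sum(count) AS count BY host
    | where count > 100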
Thanks Mitesh,

The "failed to create input" error was because I already had an identical input from previous testing! The rest of the app is still a mystery to me; I have emailed Cisco but got no reply.

Thanks again
FQzy
Hi @FAnalyst, as @PickleRick also said, .spl is a tar.gz file that you can open with a tool like 7zip and then search the savedsearches.conf file, in which you can find the correlation searches. But what's your requirement? Ciao. Giuseppe
There are license reports mentioned by @gcusello which are quite useful, but they might contain some summarized data. You can search your raw data and calculate it manually:
index=whatever | stats sum(eval(len(_raw))) BY host
Some caveats though:
1. It is slow - it must read all events and check their lengths. You can work around this problem by creating an indexed field containing each event's length (see the sketch below).
2. It shows the size of searchable data. It won't show data you don't have access to (if you're searching through multiple indexes and have rights to only some of them), won't show you the size of "removed" data if someone used | delete, and will probably be limited by your role's search filters, and so on.
3. Be cautious about _time vs. _indextime.
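On caveat 1, a hedged sketch of what the indexed-length-field approach could look like (stanza and field names are placeholders; INGEST_EVAL runs at index time, so it only covers data ingested after the change):

    # transforms.conf
    [add_event_length]
    INGEST_EVAL = event_length=len(_raw)

    # props.conf
    [your_sourcetype]
    TRANSFORMS-event_length = add_event_length

    # fields.conf (search head), so the field is treated as indexed
    [event_length]
    INDEXED = true

    # then a fast size-by-host search becomes possible:
    | tstats sum(event_length) AS total_bytes WHERE index=whatever BY host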