All Posts



Let me first say that your requirement is very unusual. Typically with scripts people have the opposite requirement - that you don't run another instance if the previous one is still running. In your case I think I'd rather go for an external cron-launched script, with each instance writing to its own log file (and some tmpreaper or manual script to clean the log files after some time), and a monitor input to ingest those files. It'll probably be way easier (but not pure-Splunk).
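A sketch of what that approach could look like; the script path, log directory, sourcetype, and index are all hypothetical placeholders, not values from this thread:

```
# crontab entry on the host: each run writes to its own timestamped log file
# (the % must be escaped as \% inside crontab)
*/5 * * * * /opt/scripts/myscript.sh > /var/log/myscript/run_$(date +\%s).log 2>&1

# inputs.conf on the Universal Forwarder: monitor the whole directory
[monitor:///var/log/myscript/run_*.log]
sourcetype = myscript:output
index = main
```

A separate cron job (or tmpreaper) would then delete files older than some retention period so the directory doesn't grow forever.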
@gcusello You don't have to do it separately for each sourcetype. If you use output_format=hec with collect you can either retain the original sourcetype or modify it dynamically. @Ash1 giving shared access to those 4 indexes would probably be the way to go. If you don't want your users to have to type in all four index names, just define a macro or eventtype.
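For illustration, such a macro or eventtype could be defined like this (using the hypothetical app-ep-index* names from this thread):

```
# macros.conf
[ep_indexes]
definition = index IN (app-ep-index1, app-ep-index2, app-ep-index3, app-ep-index4)

# eventtypes.conf (alternative)
[ep_events]
search = index=app-ep-index1 OR index=app-ep-index2 OR index=app-ep-index3 OR index=app-ep-index4
```

Users would then write `ep_indexes` (in backticks) or eventtype=ep_events instead of listing all four indexes in every search.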
For today this is the volume used. There are 3 indexers, each of them has 16 CPUs.
Hi @KhalidAlharthi , you can check IOPS using an external tool such as Bonnie++ or others. About resource sharing, it is a configuration in VMware: even if these machines are only for Splunk, if they are in a VMware infrastructure where there are other VMs, Splunk requires that their resources be dedicated (https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware#Virtualized_Infrastructures ). Anyway, the issue is probably in the performance of your virtual storage. Then: how many logs (daily average) are you indexing? How many indexers are you using, and how many CPUs are there in each indexer? Splunk requires at least 12 CPUs for each indexer (more if there's ES or ITSI), and you can index at most 200 GB/day with one indexer (less if you have ES or ITSI), so how many logs you are indexing is relevant. Ciao. Giuseppe
Thanks @gcusello for responding. I didn't mess up with disk storage or add any additional partitions. Last week I created a new index from the CM and pushed it to the indexers. About IOPS, I don't know how I can check that using Splunk. For the virtual infrastructure, Splunk has its own configuration and is not shared with other resources (vSphere).
Hi @KhalidAlharthi , if you have enough disk space, the issue could be related to the resources of the indexers: do you have performant disks on your indexers? Splunk requires at least 800 IOPS (better 1200 or more!), and this is the bottleneck of every Splunk installation. If you are using a shared virtual infrastructure, are the resources of the Splunk servers dedicated to them or shared? They must be dedicated, not shared. Ciao. Giuseppe
I am facing an issue: while trying to create an Automation User, this option is not available. I need to create a server, but for that an authorization configuration is required. To reach this authorization configuration, we need the following options, as in previous versions of Splunk. However, I am getting the options below. Please suggest: what am I doing wrong? Best Regards
I have checked the main partitions of the system and the hot/cold/frozen partitions; they have enough space, and I think it's not the issue. Thanks @gcusello
Hi @Ash1 , as @PickleRick also said, copying logs from one index into another means you pay your license twice (if you want to maintain the same sourcetype); is this acceptable for you? Why do you want to do this? If the reason is the access grants, you could use 4 indexes for EP data and one for both EP and EM data; this way you don't need to duplicate them. Anyway, there is one way to copy logs from one index to another, and it isn't relevant that they come from 4 indexes and must be copied into one: schedule a search and add the collect command at the end, something like this:
index IN (app-ep-index1, app-ep-index2, app-ep-index3, app-ep-index4) <condition_of_the_log_to_be_copied> | collect index=app-em-index sourcetype=your_sourcetype
This solution has three limits: you pay the license twice, there's a delay in the data availability in app-em-index, and you have to schedule one search for each sourcetype you want to copy. My hint is to send common logs to one index and give grants to both groups on this index. Ciao. Giuseppe
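For reference, a scheduled copy like that could be defined in savedsearches.conf roughly as follows; the stanza name, index names, sourcetype, and schedule here are all hypothetical:

```
# savedsearches.conf (sketch, hypothetical names and schedule)
[copy_ep_to_em]
search = index IN (app-ep-index1, app-ep-index2, app-ep-index3, app-ep-index4) sourcetype=your_sourcetype | collect index=app-em-index sourcetype=your_sourcetype
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -20m@m
dispatch.latest_time = -5m@m
```

The overlapping-but-delayed time window is one way to reduce the chance of missing late-arriving events between runs; you would tune it to your actual ingestion lag.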
I'd say list them for now. Thanks for the tips, I need them; I'm trying to get better with SPL.
>>> Looks like you are reading app logs from linux, thru Splunk UF (or HF)
Yes, from Splunk UF.
>>> 1) pls confirm are you using UF or HF (or directly reading at indexer?)
Using UF.
>>> 2) are you ingesting these logs thru some Splunk TA / App / Add-on's?
No.
I just had this same issue.  I tried a password reset, but got the same lockout message.  I only need my cloud instance for one assignment in a class, so I am just using another email to start over.  No idea what could have caused the lockout.
Hi @Ram2 Same questions as @PickleRick: Looks like you are reading app logs from linux, thru Splunk UF (or HF). 1) Please confirm: are you using UF or HF (or directly reading at the indexer)? 2) Are you ingesting these logs thru some Splunk TA / App / Add-ons?
Closing this post, to move it from unanswered to answered, thanks 
Hi @FAnalyst
>>> now I want to open .spl file to look into these Use Cases
Yes, as said in the two replies above, you can untar the file (.spl = tar file) and look into it, check the contents, try to understand the searches, etc.
>>> but do not want to upload the file as an app
Right, you don't need to upload it to Splunk. If you want to edit some use case, you can then tar it back into a .spl file (tar file) and upload it to Splunk; that is also possible. Be careful while editing (please preserve the format), and all should be good, thanks.
Best Regards
Sekar
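As a concrete sketch of the untar step: a .spl package is just a gzipped tar archive, so standard tar can list and unpack it. "myapp" is a hypothetical app name, and the first three lines only fabricate a tiny fake package so the commands are runnable as-is:

```shell
# --- setup for illustration only: build a tiny fake app package ---
mkdir -p myapp/default
printf '[My Use Case]\nsearch = index=main | stats count\n' > myapp/default/savedsearches.conf
tar -czf myapp.spl myapp

# --- inspect the package without installing it in Splunk ---
tar -tzf myapp.spl                          # list the package contents
mkdir -p /tmp/spl_inspect
tar -xzf myapp.spl -C /tmp/spl_inspect      # extract for offline reading
cat /tmp/spl_inspect/myapp/default/savedsearches.conf
```

The searches (use cases) typically live in default/savedsearches.conf inside the extracted app directory.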
1. How are you ingesting your events? 2. Where (on which component) did you put these settings?
(And all of this may be why the ES team included triage and resolution metrics but excluded detection.)
Hi @vikas_gopal,

The previous response provides searches to calculate time differences between known notable time values. Original event time values may not be available. For example, the Expected Host Not Reporting rule uses the metadata command to identify hosts with a lastTime value between 2 and 30 days ago. The lastTime field is stored in the notable, and we can use it to calculate time-to-detect by subtracting the lastTime value from the _time value. An example closer to your description, the Excessive Failed Logins rule, does not store the original event time(s). We could evaluate the notable action definition for the rule to find and execute a drill-down search, which would in turn give us one or more _time values, but as with the rules themselves, success depends on the implementation of the action and drill-down search.

When developing rules, understanding event lag is usually a prerequisite. We typically calculate lag by subtracting event _time from event _indextime. The lag value is used as a lookback in rule definitions. For example, a 90th percentile lag of 5 minutes may suggest a lookback of 5 minutes. A rule scheduled to search the last 20 minutes of events would then search between the last 25 and the last 5 minutes. Your mean time-to-detect should be approximately equal to your mean lag time plus rule queuing and execution time.

You'll need to adjust your lookback threshold relative to your tolerance for missed detections (false negatives), but this is generally how I would approach the problem. As an alternative, you could enforce design constraints within your rules and require all notables to include original event _time values.
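The lag calculation described here can be sketched in SPL; the index name and time range are placeholders:

```
index=your_index earliest=-24h
| eval lag = _indextime - _time
| stats avg(lag) AS mean_lag_seconds perc90(lag) AS p90_lag_seconds by sourcetype
```

The 90th-percentile value per sourcetype is then a reasonable starting point for the lookback offset in the corresponding rule definitions.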
Ah, so you don't want to move events but copy them. You can't easily do that. You could duplicate events using CLONE_SOURCETYPE, but that works per sourcetype, not per destination index. So depending on your use case you could either try to duplicate events before ingesting them into Splunk, or batch-copy them post-indexing using the collect command with a scheduled search. You are aware that those events will consume your license twice?
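For reference, CLONE_SOURCETYPE is configured in props.conf/transforms.conf on the first component that parses the data (heavy forwarder or indexer). A sketch with hypothetical sourcetype and index names: the first transform clones the events under a new sourcetype, and a second transform keyed on that cloned sourcetype routes the copies to the other index:

```
# props.conf
[your_sourcetype]
TRANSFORMS-clone = clone_to_em

[your_sourcetype_cloned]
TRANSFORMS-route = route_to_em

# transforms.conf
[clone_to_em]
REGEX = .
CLONE_SOURCETYPE = your_sourcetype_cloned

[route_to_em]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = app-em-index
```

Note the caveat above still applies: each cloned event is indexed (and licensed) a second time.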
Hi @PickleRick
Firstly, what do you mean by move? — We want the logs to be in both EM and EP Splunk.
Secondly, why don't you just send the data to the right index in the first place? — We don’t want to create 4 indexes; we want to reroute to 1 index only.