Expanding on @PickleRick's answer, we can use Splunk as the scheduler by "forking" the script to the background. The detached background process will continue to run after the parent script exits with 0 (no error):

```
#!/bin/bash
if [ "${FORK:="0"}" = "0" ]
then
    FORK=1 nohup "$0" "$@" >/dev/null 2>&1 &
    exit 0
fi
BASENAME=$(basename -- "$0")
logger --id=$$ -t "${BASENAME}" "start"
sleep 90
logger --id=$$ -t "${BASENAME}" "finish"
```

I've used the logger command in the example. On standard Linux configurations, this logs messages to /var/log/messages or /var/log/syslog, depending on the local syslog daemon configuration. We can use any log file, but since the background process is detached from splunkd, we can't use stdout. The scripted input can use either intervals or cron expressions; a minimal stanza is sketched below. The file input, or whatever input matches where your script writes its output, would be configured separately as required. Just be careful not to unintentionally fork bomb yourself: check Splunk (limits.conf) and host (ulimit) limits. We can also write a long-lived script or modular input that manages its own child processes.
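For completeness, a minimal inputs.conf sketch for the scripted input; the app and script paths are hypothetical, and interval accepts either a number of seconds or a cron expression:

```
# inputs.conf, hypothetical app and script paths
[script://$SPLUNK_HOME/etc/apps/my_app/bin/forking_script.sh]
# run every 5 minutes via a cron expression; a plain number of seconds also works
interval = */5 * * * *
disabled = 0
```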
If we don't care about ties, we can filter the pre-sorted values field in place:

```
index=foo
| fieldsummary maxvals=0 username src dst port mail etc
| fields field values
| eval values="[".mvjoin(mvindex(json_array_to_mv(values), 0, 2), ",")."]"
```

EDIT: See @PickleRick's answer re: maxvals=3. My only caution here is that distinct_count will no longer be exact. We haven't used the field in this result, but its behavior changes nonetheless.
If you just want to list both result sets in one table, you need to combine two separate searches because datamodel is an event-generating command. So it's either append (which has its limitations) or multisearch (but I'm not sure if you can use multisearch with datamodel); an append sketch follows.
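For illustration, a minimal append sketch; the model and dataset names (Authentication, Network_Traffic/All_Traffic) and the grouping fields are placeholders for whatever you are actually querying:

```
| datamodel Authentication Authentication search
| stats count BY Authentication.user
| append
    [| datamodel Network_Traffic All_Traffic search
    | stats count BY All_Traffic.src]
```

Keep in mind that append runs the second search as a subsearch, with subsearch result and runtime caps; that's the main limitation mentioned above.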
You can just use the maxvals=3 argument to fieldsummary.
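For example, reusing the search pattern from this thread (index name assumed):

```
index=foo
| fieldsummary maxvals=3 username src dst port mail etc
| fields field values
```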
Hi @siv, The fieldsummary command summarizes field values and counts as a JSON array. We can use that to return the top three values for each field. In the case of a tie, all tied values are returned:

```
index=foo
| fieldsummary maxvals=0 username src dst port mail etc
| fields field values
| eval values=json_array_to_mv(values)
| eval count=mvindex(mvdedup(mvmap(values, spath(values, "count"))), 0, 2)
| mvexpand values
| mvexpand count
| where spath(values, "count")==count
| eval value=spath(values, "value")
| fields field value count
```

The use of mvexpand makes this a suboptimal solution, but we can build on this with better use of JSON and multivalue eval functions; one possible direction is sketched below.
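As a hedged example of that direction, this sketch keeps the top three entries as a multivalue field and avoids mvexpand entirely. It ignores ties, and it assumes the values array produced by fieldsummary is sorted by count in descending order:

```
index=foo
| fieldsummary maxvals=0 username src dst port mail etc
| fields field values
| eval values=mvindex(json_array_to_mv(values), 0, 2)
| eval top3=mvmap(values, spath(values, "value")." (".spath(values, "count").")")
| fields field top3
```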
Please could you give an example of what your desired output would look like?
For example, I have these fields and values. With stats count by username I got this:

username
root | 102
admin | 71
yara | 34

It's the same for src:

src
168.172.1.1 | 132
10.10.0.1 | 60
168.0.8.1 | 12

I want to see it in one table, but I want it to check all the fields, like dst, port, mail... it could be anything on the event. The goal is to get, for each field, the top values that are repeated the most.
Let me first say that your requirement is very unusual. Typically with scripts, people have the opposite requirement: that you don't run another instance if the previous one is still running. In your case I think I'd rather go for an external cron-launched script, with each instance writing to its own log file (and some tmpreaper or manual script to clean the log files up after some time), and a monitor input to ingest those files (a minimal stanza is sketched below). It'll probably be way easier (but not pure Splunk).
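A minimal monitor stanza for that approach might look like this; the path, sourcetype, and index are all assumptions:

```
# inputs.conf, hypothetical path and names
[monitor:///var/log/my_script/*.log]
sourcetype = my_script:output
index = main
disabled = 0
```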
@gcusello You don't have to do it separately for each sourcetype. If you use output_format=hec with collect, you can either retain the original sourcetype or modify it dynamically (see the sketch below). @Ash1 giving shared access to those 4 indexes would probably be the way to go. If you don't want your users to have to type in all four index names, just define a macro or eventtype.
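For example, a hedged sketch of such a scheduled copy, reusing the index names from this thread; with output_format=hec, collect should retain the original sourcetype rather than stamping events with the stash sourcetype:

```
index IN (app-ep-index1, app-ep-index2, app-ep-index3, app-ep-index4)
| collect index=app-em-index output_format=hec
```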
For today, this is the volume used. There are 3 indexers; each of them has 16 CPUs.
Hi @KhalidAlharthi, you can check IOPS using an external tool such as Bonnie++ or others. About resource sharing: it is a configuration in VMware. Even if these machines are only for Splunk, if they are in a VMware infrastructure where there are other VMs, Splunk requires that they be dedicated (https://docs.splunk.com/Documentation/Splunk/9.3.0/Capacity/Referencehardware#Virtualized_Infrastructures). Anyway, the issue is probably in the performance of your virtual storage. Then, how many logs (daily average) are you indexing? How many indexers are you using, and how many CPUs are there in each indexer? Splunk requires at least 12 CPUs for each indexer, and more if there's ES or ITSI; you can index at most 200 GB/day with one indexer (less if you have ES or ITSI), so it's relevant how many logs you are indexing; the search below can estimate your daily average. Ciao. Giuseppe
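To estimate that daily average from Splunk itself, a standard license-usage search like this should work (run it wherever the _internal index is searchable; the 30-day window is just an example):

```
index=_internal source=*license_usage.log* type=Usage earliest=-30d
| eval GB=b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_GB
```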
Thanks @gcusello for responding. I didn't mess with the disk storage or add any additional partitions. Last week I created a new index from the CM and pushed it to the indexers. About IOPS, I don't know how I can check that using Splunk. As for the virtual infrastructure, Splunk has its own configuration and is not shared with other resources (vSphere).
Hi @KhalidAlharthi, if you have enough disk space, the issue could be related to the resources of the indexers: do you have performant disks on your indexers? Splunk requires at least 800 IOPS (better 1200 or more!), and this is the bottleneck of every Splunk installation. If you are using a shared virtual infrastructure, are the resources of the Splunk servers dedicated to them or shared? They must be dedicated, not shared. Ciao. Giuseppe
I am facing an issue: while trying to create an Automation User, the option is not available. I need to create a server, but that requires an authorization configuration. To get this authorization configuration, we need the options that were available in previous versions of Splunk; however, I am getting a different set of options. Please suggest: what am I doing wrong? Best Regards
I have checked the main partitions of the system and the hot/cold/frozen partitions; they have enough space, and I think that's not the issue... Thanks @gcusello
Hi @Ash1, as @PickleRick also said, by copying logs from one index into another you pay your license twice (if you want to maintain the same sourcetype); is this acceptable for you? Why do you want to do this? If the reason is access grants, you could use 4 indexes for EP data and one for both EP and EM data; in this way you don't need to duplicate them. Anyway, there is a way to copy logs from one index to another, and it doesn't matter that they come from 4 indexes and must be copied into one: schedule a search and add the collect command at the end, something like this:

```
index IN (app-ep-index1, app-ep-index2, app-ep-index3, app-ep-index4) <condition_of_the_log_to_be_copied>
| collect index=app-em-index sourcetype=your_sourcetype
```

This solution has three limits: you pay the license twice, there's a delay in data availability in the app-em-index, and you have to schedule one search for each sourcetype you want to copy. My hint is to send common logs to one index and grant both groups access to that index. Ciao. Giuseppe
I'd say list them for now. Thanks for the tips, I need them; I'm trying to get better with SPL.
"Looks like you are reading app logs from linux, thru Splunk UF (or HF)"
Yes, from Splunk UF.

"1) pls confirm are you using UF or HF (or directly reading at indexer?)"
Using UF.

"2) are you ingesting these logs thru some Splunk TA / App / Add-on's?"
No.
I just had this same issue.  I tried a password reset, but got the same lockout message.  I only need my cloud instance for one assignment in a class, so I am just using another email to start over.  No idea what could have caused the lockout.
Hi @Ram2, same questions as @PickleRick: it looks like you are reading app logs from Linux through a Splunk UF (or HF). 1) Please confirm: are you using a UF or an HF (or reading directly at the indexer)? 2) Are you ingesting these logs through some Splunk TA / App / Add-on?