
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi @SplunkExplorer, did you check the "Restart Splunkd" option for your new input app in the app settings? The Splunk forwarder needs to be restarted to pick up new inputs.
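If you manage the app from serverclass.conf directly on the deployment server, the equivalent setting looks roughly like this (the server class and app names here are made up):

# serverclass.conf on the DS -- minimal sketch
[serverClass:my_serverclass:app:my_monitor_app]
restartSplunkd = true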
Hi Splunkers, we have a Windows log source with a UF installed on it. We have no access to this log source: we only know that we collect Windows logs via the UF and that it works properly. The collected logs are the usual ones: Security, Application, and so on. Starting from today, we need to add a monitor input: some files are stored in a folder and we need to collect them. So, on our DS, we created another app inside the deployment-apps folder, with a proper inputs.conf and props.conf, and then deployed it (a minimal sketch of what we deployed is shown below). Why did we create another app instead of simply adding a monitor stanza to the Windows add-on's inputs.conf? Simply because the Windows add-on is deployed on many hosts, while we need to monitor the path on only one specific host, so we preferred to deploy a dedicated app with its own server class and so on. The DS gives no errors; the app is shown as deployed with no issues. At the same time, we see no errors in splunkd.log or the _internal index. Nevertheless, the logs are not collected. Of course, we are going to reach the host owner and perform basic checks, such as: 1. Is the provided path the right one? 2. Does the user account running the UF have read permission on that folder? 3. Is the app we deployed visible in the UF's apps folder? But before that, I have a doubt: regarding point 2 above, in case of permission denied, I should see some error message in the _internal logs, right? Because currently I don't see any error message related to this issue. The behavior is as if the inputs.conf we set in the deployment app were totally ignored: searching _internal and/or splunkd.log, I cannot find anything related to the path we have to monitor.
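For reference, a minimal version of the monitor stanza we deployed looks like this (the path, index, and sourcetype here are placeholders, not our real values):

# inputs.conf in the dedicated deployment app -- sketch
[monitor://C:\Logs\myapp\*.log]
disabled = 0
index = my_windows_index
sourcetype = my_custom_logs

If the app really landed, running splunk btool inputs list monitor --debug on the UF should show the stanza, and splunkd.log normally has a TailingProcessor "Adding watch on path" line when a monitor starts.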
Hello, I have put the SMTP server name in my email settings in Splunk, but the issue is a bit complex: all the previously created alerts/reports arrive on time, but only the ones I created recently are not arriving. Any suggestions?
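For what it's worth, I can search _internal; these are the checks I plan to run (the alert name is a placeholder). First, whether the scheduler actually ran the new alert:

index=_internal sourcetype=scheduler savedsearch_name="My New Alert" | table _time status

Then, whether there were any send errors:

index=_internal sendemail ERROR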
Hi, can we do the same for the BMC Remedy add-on? Does the BMC integration work as an ad hoc adaptive response?
Hi all, I am trying to set up SSE (v3.8.0); however, all the searches that use sseanalytics are failing:

ERROR SearchMessages - orig_component="script" app="Splunk_Security_Essentials" sid="1713345298.75124_FB5E91CC-FD94-432D-8605-815038CDF897" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'sseanalytics' returned error code 1.
Hi! Could you recommend a good way to archive logs from k8s to an S3 bucket? Would it be better to write a custom script or to use some Splunk tooling (like Hadoop)?
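If a custom script turns out to be the way to go, I was thinking of something minimal like this with boto3 (the bucket name and log path are placeholders):

# Hypothetical sketch: push rotated k8s log files to S3 with boto3
import os
import boto3

BUCKET = "my-log-archive"        # placeholder bucket name
LOG_DIR = "/var/log/containers"  # typical k8s node log path

s3 = boto3.client("s3")
for name in os.listdir(LOG_DIR):
    path = os.path.join(LOG_DIR, name)
    if os.path.isfile(path):
        # a date-based key prefix would help with S3 lifecycle rules
        s3.upload_file(path, BUCKET, f"k8s-logs/{name}")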
@harishlnu one way I have done this in the past is to use two pieces of automation. The first automation sends the email with a key nested in the HTML, such as "SOARKEY=", plus useful information for the other side, usually b64 encoded, at a minimum containing the original container id. It sends the email and then stops. Then you ingest the replies into SOAR (you may need to set up mail rules to push replies to SOAR emails into a dedicated inbox), look for the SOARKEY in the HTML body, get the encoded string, decode it, and then <do something>. A sketch of the idea follows. -- Hope this helps! Happy SOARing! --
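A rough sketch of the idea in plain Python (the names are illustrative, not a ready-made playbook):

# Embed a b64-encoded key in outgoing HTML and recover it from the reply
import base64
import re

def make_email_html(container_id: int, body: str) -> str:
    token = base64.b64encode(f"container_id={container_id}".encode()).decode()
    # hide the key in the HTML so the reply quotes it back untouched
    return f"<html><body>{body}<p style='display:none'>SOARKEY={token}</p></body></html>"

def extract_container_id(reply_html: str):
    match = re.search(r"SOARKEY=([A-Za-z0-9+/=]+)", reply_html)
    if not match:
        return None
    decoded = base64.b64decode(match.group(1)).decode()
    return int(decoded.split("=", 1)[1])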
Hi, here is the order in which those are applied at search time: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Searchtimeoperationssequence You must ensure that a field is already defined before you can use it, e.g. in transforms.conf. For example, if you define ALIAS-field1 in props.conf, you cannot use field1 as a SOURCE_KEY in transforms.conf. In this kind of situation you should extract the information from _raw instead of from a field that is defined in a later phase of the sequence. I'm not sure whether your event.url field is the same one this TA defines; if it is, you can see in props.conf that it is defined as EVAL-url = Host+URL, and in that case it does not exist yet when you try to use it in transforms.conf. r. Ismo
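As a sketch, assuming a hypothetical sourcetype and regex, the working pattern looks like this:

# props.conf -- "mysourcetype" is a placeholder
[mysourcetype]
REPORT-url_from_raw = url_from_raw

# transforms.conf -- read from _raw, because calculated fields
# (EVAL-url) and aliases are not defined yet at this stage
[url_from_raw]
SOURCE_KEY = _raw
REGEX = (?i)url=(\S+)
FORMAT = url::$1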
Hi Team, I have a requirement where I need to send an email for approval; if the recipient replies to that email, I need to read the reply and continue with the next steps. Could you please help me with your suggestions? Thanks in advance.
Hi, when you are creating an app in Splunk with the GUI, there are two separate templates to choose from. Based on your choice, it will create different files and directories under that app. When you start creating your own apps and TAs, I strongly propose that you use e.g. git to store them and keep track of your changes. Then you could use an editor like Visual Studio Code to write them, together with the Simple XML editor and/or Dashboard Studio. There are some old .conf presentations on how to do this. Also read the instructions about this on dev.splunk.com. r. Ismo
Hi, you could use the apps @richgalloway presented. I think there was a presentation about this at the last or a previous .conf. Another option is just to use REST requests to get the information you want to show (see the example below). On Splunk Cloud you don't have REST access to the indexers, and only a restricted set of endpoints is available, so you cannot get all of that information this way. IMHO, you should keep all of this kind of configuration in a version control system like git. Create the apps and TAs needed to store it, maybe with separate TAs based on your needs for HF/UF, indexers, and SH. Then just use any suitable method or process to install them into the correct environment. Try to avoid configuring this kind of information via both the GUI and conf files. In the long run you will avoid a lot of issues by using git + apps/TAs with conf files! r. Ismo
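For example, something like this lists saved searches across apps over REST from a search (it only reaches the endpoints your role has access to):

| rest splunk_server=local /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner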
Hi, you should use the UF package that is downloaded from your SCP stack. Just install it on all your UFs and HFs that are directly connected to your cloud stack and use its defaults to send to SCP. Don't mess with it! r. Ismo
Hi, maybe this is what you are after: grantableRoles. This limits what you can see and set. On Cloud you cannot see all users, as some of them are reserved for Splunk's own use. r. Ismo
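In authorize.conf it looks roughly like this (the role names are examples; double-check the exact list syntax against the authorize.conf spec):

[role_user_admin]
# roles that users holding this role are allowed to grant to others;
# I believe the list is semicolon-separated like importRoles
grantableRoles = user;power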
I've tried doing that, but after filling out the form and submitting it, I get an error message stating that I don't have any entitlement.
Hi, we have a Splunk installation with SmartStore enabled. We have plenty of cache on disk, so we are nowhere near the eviction padding setting. Still, I have seen bucket downloads from S3, which I did not expect. So my question is: does Splunk pre-emptively evict buckets even when there is enough space? I see no documentation stating that it does anything other than LRU. Regards, André
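For context, these are the cache manager settings I believe are in play (the values shown are the defaults as I understand them):

# server.conf on the indexers -- SmartStore cache settings (sketch)
[cachemanager]
# default policy is LRU
eviction_policy = lru
# free-space threshold in MB below which eviction kicks in
eviction_padding = 5120
# 0 means unlimited; a nonzero value caps the cache and could trigger
# evictions even while the disk itself still has free space
max_cache_size = 0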
Dear Cansel, the query you shared runs properly for one collector, but what if there are multiple collectors? It shows me the wait states with their numeric IDs and gives a count for each. Also, can I show the name of the query along with its ID? Please check whether the query is right or wrong, because it is still not showing. One more thing: I want to let you know my setup is on-prem. Please find the attachment below. Thanks & Regards, Hardik
@SOARt_of_Lost the only way I can think of initially is to have a scheduled playbook that checks for containers from notables without artifacts and then runs the relevant playbook against them. The Timer app would be used to create the container that kicks the utility playbook off as regularly as you want; a rough sketch follows.
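Something along these lines, as a SOAR playbook sketch (the artifact_count filter is an assumption -- verify the field names against what your /rest/container endpoint actually returns):

import phantom.rules as phantom

def on_start(container):
    # _filter_artifact_count=0 is assumed here; check your REST schema
    url = phantom.build_phantom_rest_url('container') + '?_filter_artifact_count=0'
    response = phantom.requests.get(url, verify=False)
    for c in response.json().get('data', []):
        phantom.debug('container {} has no artifacts'.format(c['id']))
        # from here, phantom.playbook(...) could launch the utility
        # playbook against the matching container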
I want to deploy a single Splunk collector in my AWS ECS cluster which will:
1. Collect all resource metrics for the other tasks running within the same cluster.
2. Receive, process, and forward all custom OTel metrics sent to it by the applications themselves.
Is this possible? Thanks
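For point 2 at least, I imagine the collector config would be something like this minimal sketch (the realm and token handling are placeholders):

# collector config sketch: receive OTLP, export metrics to Splunk Observability
receivers:
  otlp:
    protocols:
      grpc:
      http:
exporters:
  signalfx:
    access_token: ${SPLUNK_ACCESS_TOKEN}
    realm: us0
service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [signalfx]

From what I've read, the ECS task-metrics receiver only sees the task it runs in, which is why the docs suggest a sidecar per task; point 1 with a single central collector is exactly what I'd like to confirm.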
Hi Tony, based on your first screenshot this is normal: yes, the tier was created, but the agent is not working anymore. Can you please answer the questions below so we can understand the situation?
1. Is this a monolithic Java app?
2. Do you have more than one JVM instance on the same host?
Thanks, Cansel