All Posts


Hi, the issue has re-occurred. I modified a few of the scheduled searches running on All Time, staggered the cron schedules, etc., and it helped for a while. But over the past few days, the count of delayed-search errors has increased to over 15,000. Could you please help me resolve this permanently?

Also, is there a query I can use to find out which searches are getting delayed? I am using this one:

index=_internal sourcetype=scheduler savedsearch_name=* status=skipped | stats count BY savedsearch_name, app, user, reason | sort -count

Can someone please help me out with this? @PickleRick @richgalloway
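(Note: the search above filters on status=skipped, which reports skipped rather than delayed executions. A sketch of the delayed variant, assuming your Splunk version logs status=delayed in scheduler events, would be:

```
index=_internal sourcetype=scheduler savedsearch_name=* status=delayed
| stats count BY savedsearch_name, app, user, reason
| sort -count
```

This is an assumption about the scheduler's status values on your version; verify the available values with `| stats count by status` first.)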
Thanks so much for trying to help me. I agree with what you stated; something is wrong. However, I did: I posted what was in the search, then I posted what was ingested from the logs. I'm not sure what more information you need from me.
It's very difficult to help with a search when we don't know what is being searched.  Something in your indexed data must be showing when an account is locked out.  Show us those events and we can help you craft a search for them.
I use version 9.3.2, but the issue triggers in the remote environment, not locally. And yes, I tried clearing the cache and opening the page in a new tab; it doesn't help.
I understand you're not able to help. Thanks for your help anyway.
We have been given new Connection Strings to enter into our TA-MS-AAD inputs, running on Splunk Cloud's IDM host, pulling from a client's Event Hub. The feeds were down for several days before we were given the new Strings. The IDM is now connecting to the Event Hub again, but no data is flowing; the IDM's logs say "The supplied sequence number '5529741' is invalid. The last sequence number in the system is '4121'". Is there anything we can do about this?
@PickleRick, I appreciated your detailed response. Regarding point 6 -- where can I implement that on the syslog server? Can I write props and transforms on the syslog server? We will be installing a UF on the syslog server to forward the data to our Splunk deployment. Can you please specify the location and process?
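(For context: index-time props/transforms are applied by the first full Splunk instance that parses the data — a heavy forwarder or the indexers — not by a universal forwarder. A minimal sketch, with a hypothetical sourcetype name syslog_custom and a hypothetical noise filter, would be:

```
# $SPLUNK_HOME/etc/system/local/props.conf (on the HF or indexers)
[syslog_custom]
TRANSFORMS-drop_noise = drop_debug_events

# $SPLUNK_HOME/etc/system/local/transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
```

The stanza and attribute names are standard Splunk configuration; the sourcetype and regex are placeholders to adapt to your data.)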
This does not show evidence of the microseconds not matching. The Time field is merely displayed to milliseconds, not microseconds.
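(To inspect the full sub-second precision rather than the default display, a quick sketch using Splunk's strftime subsecond specifier would be:

```
| eval micro_time = strftime(_time, "%Y-%m-%d %H:%M:%S.%6N")
| table _time micro_time
```

%6N renders six subsecond digits; note that _time only retains as much precision as was parsed at index time.)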
I have solved this issue. To get the notables across the SHC, you need to send the notable data to an index in the indexer cluster using outputs.conf. Once the data is sent, new notables will be available on all SHs.
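(As a minimal sketch of that outputs.conf on the search heads — server names and group name are hypothetical:

```
# $SPLUNK_HOME/etc/system/local/outputs.conf
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

Forwarding all search head data to the indexer tier is standard practice anyway, so locally written events such as notables become searchable from every cluster member.)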
I have deployed a SH cluster with two SHs; everything was working fine until now. Now I have added a new member to the cluster. All configurations are replicated, but the apps are not replicated.

Q1: Will the apps be replicated to the new member automatically, or should I run the deploy bundle command on the deployer?

Q2: When I run the command from the deployer, I get a network layer error and the Splunk service stops automatically.
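(For reference: apps pushed from the deployer are not carried by SHC configuration replication; they must be pushed with apply shcluster-bundle. A sketch of the command, with a hypothetical target member URI and credentials:

```
splunk apply shcluster-bundle -target https://sh1.example.com:8089 -auth admin:changeme
```

The -target can be any cluster member; it distributes the bundle to all members. The network error in Q2 is worth checking against management-port (8089) connectivity between the deployer and the members.)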
But that's for a dashboard, right? I would need it for the Add-on Builder app, which I'm not sure I can modify, as it is a Splunk app.
Can you please check this discussion if it helps you?
@bowesmana Thanks for the solution and for investing your valuable time. But the microseconds are still not matching.
A date range/date selector
Hello @marnall, I see what you mean, but doing it this way will apply the same login password to all inputs. That's not what I'm looking for. The "best way" for me is to use this solution and stop editing my application with Add-on Builder. And if I need to update my application with Add-on Builder, I have to restore the files (globalConfig and <input>_rh_account) manually. Normally, I'll be taking part in the application building course on Monday. If I get the answer, I'll update this post. Have a good day!
Hi @BalajiRaju , probably the condition I supposed isn't correct, correct it for your data, e.g. as @yuanliu hinted, but the approach is correct. Ciao. Giuseppe
Small improvements. The wildcard should apply to <anything>Tags{}. mvfind uses regex; if you need a string match, it is too much work to convert an arbitrary string into a regex. But Splunk's equality operator applies in multivalue context. So, using the foreach suggested by @ITWhisperer, you can do

| foreach *Tags{}
    [| eval fields=mvappend(fields, if('<<FIELD>>' == "Tag4", "<<FIELD>>", null()))]

Your sample data will give these fields:

Info.Apps.MessageQueue.ReportTags{}
Info.Apps.MessageQueue.UserTags{}

Since 8.2, Splunk has provided a set of JSON functions. You can use a more formal, semantic approach with them, although the algorithm is messier because iteration capabilities in SPL are limited. (It is also limited in that SPL doesn't support recursion.) Here is an illustration.

| eval key = json_array_to_mv(json_keys(_raw))
| mvexpand key
| eval key1 = json_array_to_mv(json_keys(json_extract(_raw, key)))
| mvexpand key1
| eval key = if(isnull(key1), key, key . "." . key1)
| eval key1 = json_array_to_mv(json_keys(json_extract(_raw, key)))
| mvexpand key1
| eval key = if(isnull(key1), key, key . "." . key1)
| eval key1 = json_array_to_mv(json_keys(json_extract(_raw, key)))
| mvexpand key1
| eval key = if(isnull(key1), key, key . "." . key1)
| eval key1 = json_array_to_mv(json_keys(json_extract(_raw, key)))
| eval key = if(isnull(key1), key, key . "." . key1)
| eval value = json_array_to_mv(json_extract(_raw, key))
| where value == "Tag4"

The above code assumes a path depth of 5, even though your data only has a depth of 4.
The result is:

key                                  value
Info.Apps.MessageQueue.ReportTags    Tag1 Tag4
Info.Apps.MessageQueue.UserTags      Tag3 Tag4 Tag5

Here is an emulation you can play with and compare with real data:

| makeresults
| eval _raw = "{ \"Info\": { \"Apps\": { \"ReportingServices\": { \"ReportTags\": [ \"Tag1\" ], \"UserTags\": [ \"Tag2\", \"Tag3\" ] }, \"MessageQueue\": { \"ReportTags\": [ \"Tag1\", \"Tag4\" ], \"UserTags\": [ \"Tag3\", \"Tag4\", \"Tag5\" ] }, \"Frontend\": { \"ClientTags\": [ \"Tag12\", \"Tag47\" ] } } } }"
| fields - _time
| spath
``` data emulation above ```
The correct resolution for this issue is to force-update the content as described in this article (if one force update does not work, try it 3-4 times and it will resolve the issue): https://splunk.my.site.com/customer/s/article/SSE-Security-Content-not-loading-issue-KB-will-complet... Note that the Splunk machines also need internet connectivity to use this app. ---- Apply karma and mark as solution if it works.
Which type of input are you trying to create, @meirclaroty @albjimen?
Was this successful? Would the Splunk Security Essentials Add-on work too?