All Posts

OK, I don't use delete very often (nobody does), but you could try something like this: index=test [| search index=test | stats min(change_set) as change_set by source | format]
The problem is not in the use of case, but in the regex you applied. (I think this very same problem was discussed recently. Is this another homework question?) There is an unnecessary asterisk (*) at the end of several expressions, but that's not necessarily a real problem. There is also a code choice of case vs if; the latter would be more expressive and concise in your use case. But that's not a problem, either. The problem is that the regexes probably do not match the data. For volunteers to help you, you need to post the output from index=mulesoft environment=DEV applicationName="Test" | stats values(content.FileName) as Filename1 values(content.ErrorMsg) as errormsg values(content.Error) as error values(message) as message values(priority) as priority min(timestamp) AS Logon_Time, max(timestamp) AS Logoff_Time BY correlationId (Anonymize as needed.) If you ask a data analytics question, you need to illustrate the data.
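As a hedged illustration of the case-vs-if point (the field name msg and the match patterns below are hypothetical): if covers a simple two-way test, while case chains several condition/value pairs, with true() as the fallback:

```
| eval is_failed=if(match(msg, "ERROR"), "yes", "no")
| eval status=case(match(msg, "ERROR"), "failed",
                   match(msg, "WARN"),  "warning",
                   true(),              "ok")
```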
This was my original query to get the list of APIs that failed for a client. I have more details of the client in the lookup table. How can I extract that in the `chart`?

index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error | rex field=message "Message=.* \((?<apiName>\w+?) -" | lookup My_Client_Mapping client OUTPUT ClientID ClientName Region | chart count over ClientName by apiName

This shows the data like:

ClientName  RetrievePaymentsA  RetrievePaymentsB  RetrievePaymentsC
Client A    2                  1                  4
Client B    2                  0                  3
Client C    5                  3                  1

How can I add other fields to the output, like this?

ClientID  ClientName  Region  RetrievePaymentsA  RetrievePaymentsB  RetrievePaymentsC

Any help will be appreciated.
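One hedged way to carry the extra fields through to the output, assuming ClientName is also a usable key into My_Client_Mapping, is to re-apply the lookup after the chart and reorder the columns:

```
index=application_na sourcetype=my_logs:hec source=my_Logger_PROD retrievePayments* returncode=Error
| rex field=message "Message=.* \((?<apiName>\w+?) -"
| lookup My_Client_Mapping client OUTPUT ClientID ClientName Region
| chart count over ClientName by apiName
| lookup My_Client_Mapping ClientName OUTPUT ClientID Region
| table ClientID ClientName Region *
```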
I tried something like this: index=abc ("Aggregator * is Error" OR "Aggregator * is Up") NJ12GC102 | rex field=_raw "Aggregator\s(?<aggregator>[^\s]+)\sis\s(?<aggregator_status>\w+)\s" | streamstats current=t global=f window=2 range(_time) as time_diff by aggregator,aggregator_status | streamstats current=t global=f window=2 range(_time) as time_diff2 by aggregator | table _time aggregator aggregator_status time_diff time_diff2 | But the output is not what I needed. For that I would need to change window=2, but that brings more issues.
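As a hedged variant: streamstats also accepts a time-based window, which may fit a 15-minute comparison better than a fixed event count like window=2 (field names taken from the search above):

```
| streamstats time_window=15m range(_time) as time_diff by aggregator
```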
Start here - it shows the basics: https://www.splunk.com/en_us/blog/customers/splunk-clara-fication-search-best-practices.html  Here are all the different SPL commands, with examples - once you have developed the basic concepts, you can start to apply the various commands to your use cases: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/ListOfSearchCommands
You haven't answered my key questions about data. Is there a data ingestion problem that causes the corrupt JSON snippet? (The data in your original illustration is NOT compliant.) Do you have an "event" field from Splunk? If yes, can you post an example? (Anonymize as needed.) Can you post a corrected raw event? (Anonymize as needed.) Without correct data, you cannot expect any good result.
If it's not in the event data, it's difficult to say what the root cause is; Splunk only reports what's in the logs, not the root cause, but that could be elsewhere in some log. That said, it's normally mistyped passwords, bad passwords, etc. Check the Group Policy settings related to account lockout policies, password policies, and Kerberos policies with the AD admin. Ensure that these policies are configured correctly and not excessively restrictive. It could also be malware or unauthorized access that's causing it, so it could be a number of things. It might be worth speaking to the user and asking them to show you what they are doing, so you can spot any obvious mistakes they may be making. I have also experienced in the past that odd keyboard keys/characters or locale settings could be the cause.
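If the domain controller logs are in Splunk, a hedged starting point is the lockout event itself: EventCode 4740 usually names the computer the bad credentials came from. The index and field names below are assumptions and depend on your inputs and add-on:

```
index=wineventlog EventCode=4740 user="locked_user"
| table _time user Caller_Computer_Name src
```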
What is the best approach to run Splunk queries?
As you're not an expert, it might be better for you to explore Splunk's Add-on Builder, which has options to create what you need, including credential handling. Have a look at the links below, as they may help.  https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/CreateAlertActions https://docs.splunk.com/Documentation/AddonBuilder/4.2.0/UserGuide/ConfigureDataCollection
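For orientation before opening Add-on Builder, a minimal sketch of the shape of a custom alert action script, assuming the documented behavior that Splunk invokes the script with --execute and passes a JSON payload on stdin. The sample payload below, including api_url, is made up for illustration:

```python
import json

# Hypothetical example of the JSON payload Splunk passes on stdin
# when it runs a custom alert action with --execute.
SAMPLE_PAYLOAD = """
{
  "session_key": "dummy-session-key",
  "configuration": {"api_url": "https://api.example.com/notify"},
  "result": {"host": "web01", "count": "42"}
}
"""

def parse_payload(raw: str) -> dict:
    """Extract the pieces an alert action typically needs."""
    payload = json.loads(raw)
    return {
        # The session key can authenticate calls back to splunkd,
        # e.g. to read stored credentials from /storage/passwords.
        "session_key": payload["session_key"],
        # Parameters the user set on the alert action (here: the API URL).
        "config": payload.get("configuration", {}),
        # The first triggering result row.
        "result": payload.get("result", {}),
    }

if __name__ == "__main__":
    parsed = parse_payload(SAMPLE_PAYLOAD)
    print(parsed["config"]["api_url"])
```

In a real deployment the script would read the payload from sys.stdin instead of a hard-coded sample, and the credentials would live in Splunk's credential store rather than in the script.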
I've tried both raw and event,  no joy.
So, I have data like this after I ran a query. For each aggregator, if the aggregator_status is Error and, within 15 minutes, the aggregator_status becomes Up, the alert should not run. But if the aggregator_status is still Error, or no new event comes, the alert should trigger. The Time field is epoch time, which I am thinking can be used to find the difference between the Up and Error status times. How do I create such a query for the alert? I am thinking of using the foreach command or some sort of streamstats, but I am unable to resolve this issue. The alert needs to run once every 24 hours.
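A hedged sketch for the alert search itself, reusing the rex from earlier in the thread: keep only the latest status per aggregator and trigger when it has been Error, with no subsequent Up, for more than 15 minutes. The index name and the 900-second threshold are assumptions:

```
index=abc ("Aggregator * is Error" OR "Aggregator * is Up")
| rex field=_raw "Aggregator\s(?<aggregator>[^\s]+)\sis\s(?<aggregator_status>\w+)\s"
| stats latest(aggregator_status) as last_status latest(_time) as last_seen by aggregator
| where last_status="Error" AND (now() - last_seen) > 900
```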
Hi Deepak C, thank you so much for your kind and prompt reply. It's more than appreciated. Splunk has been set up to extract the logs and get all the needed information from the AD event logs, including event ID, user ID, etc., in order to troubleshoot any problems on the AD DC, such as user account lockouts. The image from my previous question is from a search of the user's ID, and in this case it pulled EventCode 4776, basically saying the account is locked out. The question is: how do I investigate to get to the root cause and find out what is locking the account out? If you are able to help, that would be of great significance, as I would like to get the user up and running on Monday without any further problems. Regards.
Hi @Habanero, I’m a Community Moderator in the Splunk Community. This question was posted 5 years ago, so it might not get the attention you need for your question to be answered. We recommend that you post a new question so that your issue can get the visibility it deserves. To increase your chances of getting help from the community, follow these guidelines in the Splunk Answers User Manual when creating your post. Thank you!
I had a look at that one, but I am not really an expert, so I couldn't get much of an idea there. For example, where would my API credentials reside, and how do I call the API from a custom alert action?
BTW, I just noticed you're testing with the /raw endpoint. If the solutions you're trying to get events from claim to support "native Splunk HEC functionality", they might be trying to post to the /event endpoint. And if they do it wrong, the input won't accept the data.
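The difference between the two endpoints, as a hedged sketch (the host, port and token are placeholders): /event expects a JSON envelope with an "event" key, while /raw takes the request body verbatim and line-breaks it with the sourcetype's usual rules:

```python
import json

# Placeholder endpoint URLs (hypothetical host/port):
EVENT_URL = "https://splunk.example.com:8088/services/collector/event"
RAW_URL = "https://splunk.example.com:8088/services/collector/raw"

# /event wants a JSON envelope; a bare string posted here is rejected:
event_body = json.dumps({"event": "hello world", "sourcetype": "demo"})

# /raw takes the body as-is, no envelope:
raw_body = "hello world"

print(event_body)
```

So a sender advertised as "native HEC" but pointed at the wrong endpoint will post one shape of body to an endpoint expecting the other, and the input rejects it.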
@PickleRick “Hear, hear!” :-) @danroberts, go with @PickleRick's option: same results, but his is more efficient.
As an expansion of @kprior201 's answer - a bit of an explanation. Since DS is the component directly responding to queries from the deployment clients, it maintains and displays the list of clients that already "phoned home". But if you restart the DS service, it has to rebuild its database. On the other hand, MC does not interact directly with the deployment clients in any way. It only monitors the _internal index for logs forwarded from all components in your environment. So you might have a situation where some forwarders do phone home and get apps from the DS but cannot properly send their events to the indexer layer.
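Two hedged searches to compare the two views (the first must run on the DS itself; endpoint and field names may vary by version):

```
| rest /services/deployment/server/clients splunk_server=local
| table hostname ip lastPhoneHomeTime

index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_connected by sourceHost
```

A forwarder that appears in the first list but not the second is phoning home to the DS without successfully sending events to the indexer layer.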
Come on, flip the pickle, Morty, you're not gonna regret it! Haha! Thanks for the reply.  I have not tried actually capturing/sniffing traffic yet, although I'm headed in that direction.  As far as allowed IPs (for HEC ingestion) I set it to allow all for my testing, so I don't think that's the issue.
I get this error: Error in 'delete' command: This command cannot be invoked after the command 'eventstats', which is not distributable streaming. Is there anything else I can use here? My data is huge; I can't use a join or subsearch. Thanks in advance @ITWhisperer
Since you obviously can't do a tcpdump on the receiving side, and I'm not sure about the _internal contents in Cloud, you can try to observe the traffic on the source side (as you're sending to the Cloud using TLS, you're not going to see the payload, of course, but you'll at least be able to see the overall request-response cycle, or the lack thereof). You can also install a temporary local instance, mirror the configuration, and test it in unencrypted form to verify that your source systems handle posting to HEC well. Also, I'm not sure whether you have to enable sending from an allowed set of IPs to be able to receive traffic in Cloud in the first place (but I'm not a Cloud expert, don't quote me on that ;-))
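To make the local-instance test concrete, a minimal sketch (Python just for illustration; the host, port and token are placeholders) that posts one event to an unencrypted HEC and surfaces the request/response cycle, or its failure, directly:

```python
import json
import urllib.error
import urllib.request

# Hypothetical local test instance with SSL disabled on HEC (port 8088)
# and a placeholder token.
url = "http://localhost:8088/services/collector/event"
req = urllib.request.Request(
    url,
    data=json.dumps({"event": "connectivity test"}).encode(),
    headers={"Authorization": "Splunk 00000000-0000-0000-0000-000000000000"},
)
try:
    with urllib.request.urlopen(req, timeout=2) as resp:
        # A healthy HEC answers with a JSON body like {"text":"Success","code":0}.
        print(resp.status, resp.read().decode())
except (urllib.error.URLError, OSError) as err:
    # A refused connection, bad token, or wrong endpoint shows up here as a
    # clear error, which is exactly the visibility you lose behind TLS.
    print("request failed:", err)
```

The same request shape, pointed at the Cloud stack over HTTPS, is what your source systems should be producing; if the cleartext test works locally and the Cloud post does not, the problem is in transport or allowlisting rather than in the payload.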