All Posts

Hi @uagraw01 , if you were in license violation, indexing didn't stop, only searching stopped, so you should still have all the logs, including those from the unlicensed period. If you don't (as your screenshot suggests), there is another reason for this, as I described in my answer. Ciao. Giuseppe
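If you want to double-check whether events from the violation period were actually indexed, a minimal sketch (the index name is a placeholder) that counts events per day with tstats, independently of any dashboard search:
| tstats count where index=main by _time span=1d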
okay, let me try
So you mean after restarting Splunk, the previous data should be visible?
The numbers in the panels are the same when trying different time ranges, as I mentioned with the search query above.
The principle of what you are doing is correct. So, if it is not working, it may come down to the actual data, which understandably you might not want to share. How are the values which are getting through different from the ones which are being removed? How large is your lookup table? Are there any special characters being used?
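As a way of checking, a minimal sketch (the index, sourcetype, lookup definition authorized_users and field user are all hypothetical) that flags events whose user is not in the lookup instead of filtering them out, so you can compare the matching and non-matching values side by side:
index=myindex sourcetype=mysourcetype
| lookup authorized_users user OUTPUT user AS lookup_user
| eval authorized=if(isnull(lookup_user), "no", "yes")
| stats count by user, authorized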
It does not retrieve the blacklist, but rather it retrieves some of the whitelist. I want to make it pass through the lookup table and show the users who are not authorized to enter.
Hi @PickleRick , I'll try to upgrade to 9.1.3, hoping that it will solve the issue! Ciao. Giuseppe
No one said that the indexes and sourcetypes have to be different. The outer search can be whatever you want. What is important in this example is that the subsearch returns its results as additional conditions to the outer search.
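For example, a minimal sketch (the index, sourcetype, and field names are placeholders) where the subsearch returns a set of user values that become filter conditions on the outer search:
index=web sourcetype=access_combined
    [ search index=security sourcetype=auth action=failure
      | stats count by user
      | fields user ]
| stats count by user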
Hi @vihshah , answering your requirements:
1. Get all failed request IDs. Using the following search you have all the failed request_ids and the count of each one; you can save it as an alert and send the results as an attachment (csv or pdf) by email:
sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| stats count BY request_id
2. Iterate over all the request IDs to get more details. You can get the list of all events with other information using the table command. I don't know which fields you have, but you can complete the search:
sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| table _time request_id field1 field2 field3
3. Extract the required fields from those details, show them in tabular form and generate an email. Using the previous search, you can create an alert that sends the results as an attachment (csv or pdf) by email.
Ciao. Giuseppe
OK it looks like it should work - what is your question?
What report is this? Licensing errors can make your environment stop searching but they shouldn't prevent you from indexing as far as I remember.
I haven't seen this before, but with your keywords this is what pops up from Google: https://github.com/wazuh/wazuh/issues/21383
That's... strange. As you know (and @isoutamo already pointed out as well), you quarantine search peers on your search head(s) so that searches do not get distributed to that search peer. So the HF shouldn't have anything to do with quarantine. The swidtag directory is part of a normal Splunk distribution and has been around for a long time. If you didn't have it before... Are you sure someone didn't try to ineptly "upgrade" your Splunk installation?
The earliest and latest settings in the search are overriding the values chosen from the time picker, and since these are the same, the numbers in your panels are the same.
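A minimal sketch of the difference, with a hypothetical index name: the first search always covers the last 24 hours regardless of the time picker, because earliest and latest inside the search string take precedence; the second uses whatever range is chosen in the time picker:
index=myindex earliest=-24h latest=now | stats count
index=myindex | stats count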
Then probably this https://splunkbase.splunk.com/app/5037 is what you need to look at. I haven't tried it, but I have used/modified some internally built alert actions at one of my clients. It's not so hard to do it yourself if needed. Just read Jira's REST API reference and do what is needed.
Hi @gcusello , thank you for the answer, can you please let me know how I can rephrase my query?
Hi @vihshah , I don't see differences between the 2nd and 3rd requirements: using my second search you can get all the details you need, grouped by request_id. You only have to save this search as an alert or a report and you'll receive the results by email. Ciao. Giuseppe
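If you prefer one row per request_id instead of one row per event, a hedged variant of that second search (the field names are placeholders) using stats values:
sourcetype="mykube.source" "failed request"
| rex "failed request:(?<request_id>[\w-]+)"
| stats count values(field1) AS field1 values(field2) AS field2 by request_id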
Hi @isoutamo, thanks for your help! Yes, I already saw the above link; that's why I opened the case: the URL describes an action on the search head, but I don't have SHs, and the HFs' distsearch.conf doesn't contain the described lines. I suppose it's a quarantine issue because I have many messages in splunkd.log that speak of quarantined files, but I don't know how to unquarantine the machine. I'm waiting for the call from Splunk Support, hoping that they can guide me. Have you ever experienced this issue? The local MC doesn't give any quarantine message, only that "the downstream queue is not accepting data", but I can reach Splunk Cloud by telnet, so it isn't a firewall issue. Thank you again, please suggest any check you can think of (if you have one). Ciao. Giuseppe
Hi, thank you for your reply. I understand it and I'll try, but for now I couldn't find any Splunk-supported add-on on Splunkbase that would help my case... e.g. "Splunk Add-on for Jira Cloud" and "Splunk Add-on for Jira Data Center" are only for getting data from Jira into Splunk, not for sending data from Splunk to Jira, and the "Jira" add-on might be only for Splunk SOAR. If needed, I'd like to check other add-ons supported by their respective developers, but to be honest, I'm hoping for a Splunk-supported add-on for my case...
Hi, there are several Jira apps/TAs on Splunkbase. See https://splunkbase.splunk.com/apps?keyword=jira It's hard to say which one is the best or best suited for your case. If no one else can give you a hint, then you must just read through those descriptions and select which one best suits your needs. Maybe it's best to start with the Splunk-supported add-ons? r. Ismo