All Posts

Hello @gcusello, thank you for your reply.
Hi @Siddharthnegi, it's possible to create a common search to use in multiple panels (for more info, see https://docs.splunk.com/Documentation/Splunk/9.2.1/Viz/Savedsearches#Post-process_searches), but only if the base search is the same and each panel applies different calculations to it, e.g. one panel uses stats and another uses table. Are your searches different or similar? If they are similar, please share them; otherwise, it isn't possible. Ciao. Giuseppe
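For illustration, a minimal Simple XML sketch of the pattern Giuseppe describes, with one base search feeding a chart panel and a table panel (the index, sourcetype, and field names are placeholder assumptions, not from this thread):

```
<dashboard>
  <label>Post-process search sketch</label>
  <!-- Base search: runs once; "fields" limits what the post-process searches receive -->
  <search id="base">
    <query>index=my_index sourcetype=my_sourcetype | fields _time host source</query>
    <earliest>-24h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <!-- Post-process: aggregates the shared base results -->
        <search base="base">
          <query>stats count by host</query>
        </search>
      </chart>
    </panel>
    <panel>
      <table>
        <!-- Post-process: tabular view of the same base results -->
        <search base="base">
          <query>table _time host source</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>
```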
Thanks, I have tried that to reformat the start field, but it results in an empty field.
| xyseries guid property value
| fields guid start end duration status
| eval start=strftime(strptime(start, "%FT%T.%Q%Z"), "%F %T")
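For what it's worth, strptime() returns null whenever its format string doesn't exactly match the value, and a null strptime() leaves the evaluated field empty, which matches the symptom above. A minimal sketch for testing the format in isolation (the sample value is an assumption; note that %z matches a numeric offset like +0000, while %Z expects a timezone name such as UTC):

```
| makeresults
| eval start="2024-03-05T12:34:56.789+0000"
| eval start_fmt=strftime(strptime(start, "%Y-%m-%dT%H:%M:%S.%Q%z"), "%F %T")
```

Writing to a new field (start_fmt) while testing makes a format mismatch visible instead of silently blanking start.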
Hi @hazem, in an indexer cluster (single-site or multisite), retention is usually the same in both sites, because you should have at least one searchable copy of the data in each site. If you have to design a multisite indexer cluster, engage a Splunk Architect (or Splunk PS); it's always better. Ciao. Giuseppe
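One concrete reason retention can't easily differ by site: in an indexer cluster, indexes.conf for clustered indexes is distributed to every peer from the cluster manager's configuration bundle, so retention settings apply cluster-wide. A minimal sketch (index name and values are placeholders):

```
# On the cluster manager, e.g. $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf
# (manager-apps on newer versions); pushed to all peers at both sites
[my_index]
frozenTimePeriodInSecs = 7776000
# 90 days; applies to HQ and DR peers alike
maxTotalDataSizeMB = 500000
```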
This gives almost the same result: the time of success events. This is useful for the future. For now I'll hold off on this query, because it seems we need a 'Duration' field in the message to check the performance of the response call.
Hello!
1. You're on the right track. https://splunkbase.splunk.com/app/1151 is what you need to be using. The documentation for this add-on explains how the ldapsearch part works. You can run ldapsearch commands via the command line of wherever this is configured. If you want to import certain LDAP data, you'll need to create scheduled searches (on the HF) to pull that data into Splunk. Read through https://docs.splunk.com/Documentation/SA-LdapSearch/3.0.8/User/AbouttheSplunkSupportingAdd-onforActiveDirectory to get a good background on how to do that.
2. Yes, this is possible. The easiest way is probably to separate the data into different indexes using the collect command. Whatever data you want user1 to have, run a query for that data and collect it into a certain index; whatever data you want user2 to have, run a separate query to collect it into a different index. There are other ways to do this as well, but that's the simplest I could think of.
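For illustration, a hedged sketch of both steps as one scheduled search on the heavy forwarder (the domain, LDAP filter, attributes, and index name are placeholders; check the SA-ldapsearch docs for the exact arguments your version supports):

```
| ldapsearch domain=default search="(objectClass=user)" attrs="sAMAccountName,mail,memberOf"
| table sAMAccountName mail memberOf
| collect index=ad_data_user1
```

Schedule one such search per audience, each collecting into its own index, then restrict each role's allowed indexes so user1 and user2 only see their own data.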
Is there anyway to reassign all the Knowledge Objects owner by a specific user ? instead of transferring one Knowledge object at a time ? Also, is the "/my_search" in the example mentioned below t... See more...
Is there any way to reassign all the Knowledge Objects owned by a specific user, instead of transferring one Knowledge Object at a time? Also, is the "/my_search" in the example mentioned below the title of the Knowledge Object?
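For reference, a hedged sketch of the kind of per-object REST call that docs example describes, where my_search stands for the knowledge object's name and everything else is a placeholder (not verified against any particular environment):

```
# Reassign one saved search from olduser to newuser (sketch; owner and sharing
# are both required when POSTing to an object's acl endpoint)
curl -k -u admin:yourpassword \
  https://localhost:8089/servicesNS/olduser/search/saved/searches/my_search/acl \
  -d owner=newuser -d sharing=app
```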
We plan to have a multisite clustering setup across HQ and DR, so the question is: can I configure the indexers located at DR with a shorter retention policy than the indexers located at HQ?
I have the following environment: 1 HF -> 1 indexer -> 1 SH, version 9.1. How do I onboard the AD domain controller data into my HF? I am using the Add-on for Active Directory. Any ldap commands? Any recommendations? Is this the right tool?
Try this:
| rex "\"changes\":(?<changes>\{.*?\}\})"
Thank you for your message. You are correct, I need everything between the {} as the value of a field that I can include in the table.
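If the events are valid JSON, a possible alternative to the rex is spath, which can pull the changes object out without a regex (assuming changes is a top-level key; spath returns the subtree's JSON for object-valued paths):

```
| spath input=_raw path=changes output=changes
| table changes
```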
This is a different question. Please start a new question with as much detail as possible.
It works, thank you very much. One more thing: the time filter isn't working. I mean, if I set it to 24h, the search returns logs for all time.
Exactly what have you tried and exactly what doesn't work? What results / error messages do you get?
Hi @romainbouajila, the journalCompression setting applies only to newly created warm buckets. The freezing process just copies a warm bucket's rawdata from the warm folder to the frozen folder when its freezing rules (size or age) are met. In your case, it seems your zstd setting was applied after 28 Feb; that is why previously created buckets are gzipped. You should see zstd files in your frozen buckets after some time.
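For reference, the setting in question lives in indexes.conf and, as noted, only affects buckets created after it takes effect (the index name is a placeholder; valid values include gzip, lz4, and zstd depending on version):

```
# indexes.conf
[my_index]
# Applies to newly created buckets only; existing buckets keep their compression
journalCompression = zstd
```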
I have tested the format directly on the value, and it works. My concern is applying it after the xyseries, i.e. after => | fields guid start end duration status. If I put the eval at the end, it doesn't work on the resulting start field.
Thanks for the help. Turns out I was using the "splunk:splunk" user and group instead of "splunkfwd". A clean install and correctly set permissions helped.
Have you already extracted accountId and response? If response does not have any value (null), does the event come from log1? If so, you could try something like this:
| eval state=coalesce(response, "Start auth")
| chart count by accountId state
OK, assuming your start and end fields match the timestamp format you are parsing with, then this should work for both fields (but your example data doesn't show them as such). Have you tried it?
Yes, but it is about applying the result of the date reformatting provided in this answer to one of those fields :). But I can open a new topic if necessary.