All Posts

@Ka21 is this fixed? Have you found a solution?
I'm not a Cloud guru, but index removal as such is still done by relaying the request from the search-head layer to the indexers (you don't see this as the customer/user in Cloud, but it's happening under the hood). So the actual removal you are finding in _internal is one thing, but it is triggered by something, and that something is what you should look for in the _audit index. In an on-premise installation you get: Audit:[timestamp=05-16-2024 10:25:24.161, user=admin, action=indexes_edit, info=granted object="deletetest" operation=remove] I suppose you should get something similar in Cloud as well, so look for index=_audit action=indexes_edit operation=remove
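If you want that as a small report, a sketch along these lines should work (assuming the audit fields are extracted in Cloud the same way as in the on-premise event above):

index=_audit action=indexes_edit operation=remove
| table _time user object operation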
I'm not sure what you mean, but if you want the value of the newValue field from the first object in the modifiedProperties array, you can use spath to extract the value from a particular node of your JSON structure: | spath targetResources{0}.modifiedProperties{0}.newValue{0} (if I remember the path syntax correctly; writing from memory - spath uses curly braces to index into arrays)
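Spelled out with an explicit output field, something like this (a sketch; firstNewValue is just an illustrative name, not a field from your data):

| spath output=firstNewValue path=targetResources{0}.modifiedProperties{0}.newValue{0}
| table firstNewValue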
Try something like this:

index=unix [| inputlookup hostname_list.csv]
| stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
| convert ctime(latest_time) as latest_time
| table host, latest_time, source, event

The subsearch expands the lookup into host="hostname_a*" OR host="hostname_b*" OR ..., and wildcards in search terms match as wildcards, whereas join compares field values literally.
The only available options for the configuration tracker are here: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf#Configuration_Change_Tracker It doesn't mention whether the config change tracker follows symlinks or not. If your experience shows that it doesn't, it probably doesn't, and you can't change that (I didn't test it myself). You can submit an idea at https://ideas.splunk.com/
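To verify which paths the tracker actually records, something along these lines should work (a sketch, assuming the default _configtracker index and the data.path field of its JSON events):

index=_configtracker sourcetype=splunk_configuration_change
| stats count by data.path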
Yes, of course you should not do some things. That doesn't mean they are not done, though. Also, default was just one of the possible cases. Another is the log.cfg I mentioned. Yet another could be manually fixing some errors within sendemail.py (yes, I had to do something like that a few versions back at one customer's site). Reality can be complicated, and "rpm, then sync files" lets you do the process in a controlled manner, whereas "install files with rpm over existing directory contents" does not. So I'm sticking with my order of doing this.
With Splunk updates I must disagree with this. There are clear instructions for how you should/must update your configurations etc. (don't update any default files, and so on). It's better to get those issues fixed before you start production in a new environment, or you will find them on the first real update.
Hello everyone, with the introduction of the new ConfigurationTracker in Splunk 9.0, we noticed that some of our apps are not being logged. The system is a Linux single Splunk Enterprise server, and the apps which are not being logged are not located directly under /opt/splunk/etc/apps. Instead, we only have symbolic links to another folder on the system. It works for everything else, but the configuration tracker seems to ignore symbolic links. It is also not a permission issue of the linked folder: the linked folder has the same splunk group and permissions assigned.

/opt/splunk/etc/apps/symboliclinkapp -> /anotherfolder/symboliclinkapp

Is there an option to make the configuration tracker also consider symbolic links?
Hi @gcusello Yes, I enabled notables as adaptive response actions for my correlation searches. I tried checking the Incident Review dashboard and ran search queries, but the output is 0.
Overwriting existing files with rpm -i is not a very good idea, especially if someone thought it smart to - for example - edit system/default files (as far as I can see, the splunk package doesn't have any files marked as config). Or tweaked log.cfg. Or...
Your wording might be a bit confusing. You apparently wanted to search for all events that do not have any fields matching the PROD* pattern, whereas @ITWhisperer showed you how to show all events without showing the PROD* fields. You can do the opposite of @ITWhisperer's previous solution:

<your initial search>
| eval keep=1
| foreach PROD* [ eval keep=0 ]
| where keep=1
| fields - keep

This will filter out any event containing PROD* fields. But this is a very inefficient way of searching your data.
Hi @ash2, max_match is an option of the rex command; it tells rex to take only the first extracted value - it isn't a field to display. Please try this, adding the new field (newValue2) to the table to see the difference: | rex max_match=1 "(?ms)newValue:\s+\[*\"(?<newValue2>[^\"]+)" Ciao. Giuseppe
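For instance, combined into one pipeline so you can compare both extractions side by side (assuming the original newValue field is also extracted in your search):

| rex max_match=1 "(?ms)newValue:\s+\[*\"(?<newValue2>[^\"]+)"
| table newValue newValue2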
Hi @blank, some very stupid questions: did you enable Notables as Adaptive Response actions for your Correlation Searches? Are any Correlation Searches triggered? Ciao. Giuseppe
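A quick way to check whether the correlation searches fire at all is to look at the scheduler logs (a sketch, assuming the standard scheduler events in _internal; field names may vary by version):

index=_internal sourcetype=scheduler alert_actions=*notable*
| stats count by savedsearch_name status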
Quoting the OP's clarification: "I have some events that have `PROD*` in field names (e.g. `PROD error`, `Production warning`, etc.) Other events don't have `PROD*` in field names, e.g. they have `DEV error`, `Development warning`, etc. The end goal is to build a dashboard that will compare statistics across different envs, that's why I need data from other fields like `ERROR_CODE`" It really doesn't make sense, because the above kind of contradicts what is asked in the OP. A good way - in fact the very basic way - to ask an answerable question is to illustrate your data with a table (you can give a few fields of relevance and the value variants of importance), then illustrate the desired output from that data with a table, then explain the logic to arrive at the desired output from the illustrated data as if you had no Splunk.
Currently, this is my SPL query and it just displays different results. This is my hostname_list.csv:

host
hostname_a*
hostname_b*
hostname_c*

| inputlookup hostname_list.csv
| fields host
| join type=inner host
    [ search index=unix
    | stats latest(_time) as latest_time, latest(source) as source, latest(_raw) as event by host
    | convert ctime(latest_time) as latest_time ]
| table host, latest_time, source, event

and it displays like this (latest_time, source and event are all empty):

host          latest_time  source  event
hostname_a*
hostname_b*
hostname_c*

I assume that the wildcard "*" is acting like a literal string. I'm expecting results like this:

host           latest_time  source  event
hostname_a12   test         test    test
hostname_a23   test         test    test
hostname_c123  test         test    test

Please help, thanks!
@ITWhisperer sure, sorry if it wasn't clear. I have some events that have `PROD*` in field names (e.g. `PROD error`, `Production warning`, etc.). Other events don't have `PROD*` in field names; e.g. they have `DEV error`, `Development warning`, etc. The end goal is to build a dashboard that will compare statistics across different envs; that's why I need data from other fields like `ERROR_CODE`. Hope it makes sense, and thanks a lot for your help and patience, it's much appreciated!
The Incident Review dashboard is displaying no values despite having correlation searches enabled. Upon investigation, I noticed that the notable index has 0 bytes. Could someone kindly guide me on how to troubleshoot this issue? Thanks!
Hi Giuseppe, thank you for the query; unfortunately, it's not working for me.
Hi @Ismail_BSA, did you try to use the evals in the requested sequence directly in a search? It should run. Then you could put these evals in a macro and call the macro every time you need it. Ciao. Giuseppe
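A generic sketch of such a macro, since the actual evals aren't shown here (the macro name my_evals and the eval expressions are purely hypothetical placeholders):

In macros.conf:
[my_evals]
definition = eval status_upper=upper(status) | eval region=coalesce(region, "unknown")

Then call it in any search: index=main | `my_evals`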
Hi @ash2, please try this: | rex max_match=1 "(?ms)newValue:\s+\[*\"(?<newValue>[^\"]+)" Ciao. Giuseppe