All Posts

The answer is "it depends". Let's start from the end. You should _not_ rename the default directory. If you want tp override any default settings you create a new directory called local and place co... See more...
The answer is "it depends". Let's start from the end. You should _not_ rename the default directory. If you want tp override any default settings you create a new directory called local and place config items there. For more info about config file precedence see here https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles For the first three questions the answer is "it depends". It depends on whether the add-on contains search-time definitions (then you deploy it on SH-tier) and whether it contains index-time definitions (then you deploy it in your indexing pipeline - where exactly it depends on your ingestion process).
In 7/10 cases of "the secret is OK" it turns out to not be OK. In 2/10 cases it's a network problem (lack of firewall rules or routing problems). The remaining 10% is some other misconfiguration.
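If you want to rule out the first category, the secret and master_uri live in server.conf on the peer. A minimal sketch, using the master_uri from the question below (the key value is a placeholder - it must match the cluster master's pass4SymmKey exactly, in plaintext before restart):

# $SPLUNK_HOME/etc/system/local/server.conf on the indexer (cluster peer)
[clustering]
mode = slave
master_uri = https://10.0.209.11:8089
pass4SymmKey = <same-secret-as-on-the-master>

Splunk encrypts the key on restart; afterwards check splunkd.log on the peer for the exact handshake error.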
"ecs" is not a native Splunk command. Whatever add-on it came from you need to look in its docs. The only Splunk-related thing is that the string which apparently contains some command for external s... See more...
"ecs" is not a native Splunk command. Whatever add-on it came from you need to look in its docs. The only Splunk-related thing is that the string which apparently contains some command for external service must be properly escaped. Other than that it's beyond Splunk realm.
I tried to search data with a dynamic script:

| ecs "opensearch_dashboards_sample_data_flights" "{ \"from\": 0, \"size\": 1000, \"query\": { \"match_all\": {} }, \"script_fields\": { \"fields\": { \"script\": { \"source\": \\\"def fields = params['_source'].keySet(); def result = new HashMap(); for (field in fields) { def value = params['_source'][field]; if (value instanceof String && value.contains('DE')) { result.put(field, value.replace('DE', 'Germany')); } else { result.put(field, value); }} return result;\\\" } } }, \"track_total_hits\": true }" "only" | table *

But it's not working. I think the problem is in my source command, but I don't know how to fix this part:

\"source\": \\\"def fields = params['_source'].keySet(); def result = new HashMap(); for (field in fields) { def value = params['_source'][field]; if (value instanceof String && value.contains('DE')) { result.put(field, value.replace('DE', 'Germany')); } else { result.put(field, value); }} return result;\\\"

Hope someone can help me fix this. Thank you very much for spending time on my issue.
Could not contact master. Check that the master is up, the master_uri=https://10.0.209.11:8089 and secret are specified correctly on IDX.

I went in and fixed the previous password error, but I still get this error. I would like to learn how to troubleshoot my issue. Would someone be willing to come on Zoom and assist me?
That is a Splunk-supported app so the best way to report a failure like this is to file a case with Splunk Support. If you do not have a support entitlement, submit it at https://ideas.splunk.com.
Hello Esteemed Splunkers, I have a long question, and I wish to have a long and detailed discussion ^-^
First of all, we have a distributed environment:
Deployer with 3x search heads.
Indexer master with 3x indexers.
Deployment server with 2x heavy forwarders.
We want to deploy "Splunk_TA_fortinet_fortigate"; the below is the content. The questions are:
Should we deploy this app from the deployer to all search heads?
Should we deploy this app from the indexer master to all indexers?
Should we deploy this app from the deployment server to all heavy forwarders?
Should we change the name of the default folder to local?
In a nutshell, what should we do and what considerations should we look at? Thanks in advance!
All, I am currently working with the Splunk Add-on for Microsoft Office 365. The default regex in transforms.conf for extract_src_user_domain and extract_recipient_domain will only extract the last two parts of an email domain, so a domain like bank.co.in comes back as co.in.
Current:
[extract_src_user_domain]
SOURCE_KEY = ExchangeMetaData.From
REGEX = (?<SrcUserDomain>[a-zA-Z]*\.[a-zA-Z]*$)
[extract_recipient_domain]
SOURCE_KEY = ExchangeMetaData.To{}
REGEX = (?<RecipientDomain>[a-zA-Z]*\.[a-zA-Z]*$)
MV_ADD = true
I suggest updating these to be in line with the messagetrace rex:
[extract_messagetrace_src_user_domain]
SOURCE_KEY = SenderAddress
REGEX = @(?<src_user_domain>\S*)
[extract_messagetrace_recipient_domain]
SOURCE_KEY = RecipientAddress
REGEX = @(?<recipient_domain>\S*)
Thanks,
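For clarity, a hedged sketch of what the updated stanzas could look like with the same anchored-@ approach applied (untested against the add-on; SOURCE_KEYs kept from the defaults above):

[extract_src_user_domain]
SOURCE_KEY = ExchangeMetaData.From
REGEX = @(?<SrcUserDomain>\S*)
[extract_recipient_domain]
SOURCE_KEY = ExchangeMetaData.To{}
REGEX = @(?<RecipientDomain>\S*)
MV_ADD = true

This captures everything after the @, so bank.co.in comes back whole instead of being truncated to its last two labels.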
I tried to upload a zip file. It showed "Upload failed ERROR: Read Timeout." I am using Windows and the file size is 1910KB. Also, I successfully uploaded some files (not zip), but they were not showing up in the data summary. Please help. Thank you.
The query looks like it would meet the requirements.  The only change I would make is to add userSesnId=* to the base query. What is it about the logs you don't need that makes them match the query?  Can you share them (sanitized)? What is wrong with the one specific log "Request recd"?  It meets the requirements.
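For illustration only - a minimal sketch of what that change might look like (the index, sourcetype, and trailing command are hypothetical, since the original query isn't shown in this thread):

index=my_app sourcetype=my_app_logs userSesnId=*
| table _time userSesnId _raw

Putting userSesnId=* in the base search restricts results to events where that field exists at all, rather than filtering them out later.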
  | rex "(?<json>\{.*\})" | spath input=json​ so the above command works fine right for mixed pattern (json and xml) for my example? currently and for upcoming events? is there any other way to hid... See more...
  | rex "(?<json>\{.*\})" | spath input=json​ so the above command works fine right for mixed pattern (json and xml) for my example? currently and for upcoming events? is there any other way to hide this query apart from macro?
I mean that KV_MODE=something works only when the _whole event_ is just a blob of structured data, without any additional parts to it. So KV_MODE=json will work if your whole event consists of {"my":"data","is":"json"} but will not work if it's <144>2014-11-11 11:23 Some lousy[24]: pseudo-syslog header with {"json":"data","further":"down","the":"street"}
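To make the first case concrete - a minimal props.conf sketch, assuming a hypothetical sourcetype whose events are pure JSON with nothing around them:

# props.conf on the search head - KV_MODE is a search-time setting
[my_pure_json_sourcetype]
KV_MODE = json

For the second, header-prefixed event this setting would simply extract nothing, which is where the rex + spath approach comes in.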
I suspect the HTML entities were due to some copy-paste magic, not part of the regexes themselves. As for the regex - I don't understand what @puneetgupz means by "unexpected close tag". When unescaped, the regex works perfectly well in regex101 - https://regex101.com/r/mR5JiJ/1 (you don't need to escape the quotes in the regex itself; just in a string in Splunk). EDIT: OK, escaping is needed but in another place | rex field=SERVICE_RESPONSE "\"status\"\\s*:\\s*(?P<ERROR_CODE>\\d+)"
Still getting the same error
Hi @PickleRick, Then what is the use of KV_MODE = json that needs to be set in props.conf (I saw it somewhere a while ago)? Please help me understand whether my data contains both json and xml or only json. Because when I used the spath command provided by @ITWhisperer it extracted the fields... is that wrong (if both json and xml are present in my example event)? Any idea on this?
Unfortunately, for now Splunk cannot perform structured data extraction if the whole event is not structured data (in other words - if you have json or XML data preceded by some header, like in your example, Splunk cannot automatically extract fields from it). There is an idea about it at https://ideas.splunk.com/ideas/EID-I-208 - while it's already marked as "Future Prospect", you can give your vote to show your support for it. At the moment the only thing you could do would be to cut the whole header away with SEDCMD during ingestion so that all that's left is a valid json structure. But that's not always what you want.
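A minimal sketch of that SEDCMD workaround, assuming a syslog-style header in front of the json (the sourcetype name is hypothetical and the pattern would need tuning to the real header):

# props.conf in the ingestion pipeline - SEDCMD runs at index time
[my_header_plus_json]
# strip everything up to the first opening brace so only the json remains
SEDCMD-strip_header = s/^[^{]+//
KV_MODE = json

Note the trade-off: the header is removed from _raw for good, so anything in it is lost.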
The regex used in the rex command goes through multiple layers of parsing so it needs multiple escape characters for embedded quotation marks. Solution 1: | rex field=SERVICE_RESPONSE "\\\"status\\\"\s*:\s*(?P<ERROR_CODE>\d+)"  Solution 2 won't work because regular expressions don't honor URL encoding.
Hi, I am trying to instrument a service in Kubernetes that runs on Apache. I have looked for a Docker image I can use, but I could not find one. Could someone point me in the right direction?
Coming in years after this question was asked, because I've been trying to do the same and I finally figured it out today! The TA is currently on version 4.1.1. To get additional fields to appear in AD_Obj_User you need to do the following:
1. Edit the macro `ms_obj_admon_base_out_user` and include the fields you want in the SPL for "fields" and "table".
2. Do the same for the macro `ms_obj_user_base_migrate`, just in case.
3. The part I was missing for years up until now: you have to edit the KV store to specify what fields are allowed to be stored. Edit the lookup (KV store) AD_Obj_User (collection name is AD_Obj_User_LDAP_list_kv) and add the desired fields.
4. Rebuild your lookup and you're good to go!
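For anyone maintaining this in .conf files rather than the UI - a hedged sketch of where those KV store fields are declared (the two extra field names are hypothetical examples; keep whatever fields the TA already lists):

# local/collections.conf in the TA - declare the extra fields the collection may store
[AD_Obj_User_LDAP_list_kv]
field.department = string
field.telephoneNumber = string

The lookup definition's fields_list in transforms.conf has to mention the new fields as well, alongside the ones already there.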