All Posts

Appreciated, @PickleRick and @ITWhisperer. Please answer my last question: | rex "(?<json>\{.*\})" | spath input=json. Does the above command work correctly for the mixed pattern (JSON and XML) in my example, both currently and for upcoming events? And is there any other way to hide this query apart from a macro?
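If the goal is simply to hide that rex/spath pair, a search macro is indeed the standard mechanism. A minimal sketch, where the macro name `extract_json` is just an example; define it under Settings > Advanced search > Search macros, or in macros.conf:

```
[extract_json]
definition = rex "(?<json>\{.*\})" | spath input=json
```

Then call it inline as index=main sourcetype=mixed | `extract_json` (backticks included). The main alternatives are an event type or saved search, which cannot carry piped commands, or doing the extraction at the props/transforms level instead.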
I am pretty new to Splunk and my project is also new. Can someone please explain the configurations on our cluster manager? We have a syslog server which receives logs from F5 WAF devices, and a UF on the syslog server forwards the data to our cluster manager.
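As a side note, a UF would normally forward to the indexer tier rather than to the cluster manager itself (the cluster manager is typically only contacted for indexer discovery). A typical outputs.conf on the syslog server's UF might look like this, with placeholder hostnames:

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```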
Same here, having this issue since we upgraded to 9.2.3. It looks like the Splunk installation files installed Python 2.7 by default.
@PickleRick thank you 
I am trying to integrate Group-IB with Splunk, for which I installed the app and entered my API key and username, after which it redirects to the homepage. But all my dashboards are empty and no indexes are created. How can I troubleshoot or fix it?
Hi Rick,

Ok understood, to sum it up:
- The search-time extraction settings are much simpler and put less load on our environment compared to index-time extraction.
- For index-time extraction there would need to be additional configuration in our props and transforms .conf files as well, and most likely that's why our existing ones didn't work.

We resolved it by manually moving the regex settings to props.conf on our search heads (search-time extraction), in the /opt/splunk/etc/system/local directory, as below:

[aws:elb:accesslogs]
EXTRACT-aws_elb_accesslogs = ^(?P<Protocol>\S+)\s+(?P<Timestamp>\S+)\s+(?P<ELB>\S+)\s+(?P<ClientPort>\S+)\s+(?P<TargetPort>\S+)\s+(?P<RequestProcessingTime>\S+)\s+(?P<TargetProcessingTime>\S+)\s+(?P<ResponseProcessingTime>\S+)\s+(?P<ELBStatusCode>\S+)\s+(?P<TargetStatusCode>\S+)\s+(?P<ReceivedBytes>\S+)\s+(?P<SentBytes>\S+)\s+\"(?P<Request>[^\"]+)\"\s+\"(?P<UserAgent>[^\"]+)\"\s+(?P<SSLCipher>\S+)\s+(?P<SSLProtocol>\S+)\s+(?P<TargetGroupArn>\S+)\s+\"(?P<TraceId>[^\"]+)\"\s+\"(?P<DomainName>[^\"]+)\"\s+\"(?P<ChosenCertArn>[^\"]+)\"\s+(?P<MatchedRulePriority>\S+)\s+(?P<RequestCreationTime>\S+)\s+\"(?P<ActionExecuted>[^\"]+)\"\s+\"(?P<RedirectUrl>[^\"]+)\"\s+\"(?P<ErrorReason>[^\"]+)\"\s+(?P<AdditionalInfo1>\S+)\s+(?P<AdditionalInfo2>\S+)\s+(?P<AdditionalInfo3>\S+)\s+(?P<AdditionalInfo4>\S+)\s+(?P<TransactionId>\S+)

Thank you for the help.
I am trying to add an EXTRACT field in Splunk Cloud. I added the regex; it works in search and captures the value. But the field is not populated when it is applied in the props.conf file. The value I want to extract is "Stage=number". The regex I created is:

EXTRACT-Stage = Stage=(?<Stage>\d+)

What could be the reason?
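For reference, a search-time EXTRACT has to live under a stanza matching the data's sourcetype (or source::/host:: pattern) on the search tier; an EXTRACT placed under the wrong stanza is silently ignored. A minimal sketch, where your_sourcetype is a placeholder for the actual sourcetype of the events:

```
[your_sourcetype]
EXTRACT-Stage = Stage=(?<Stage>\d+)
```

In Splunk Cloud you would normally ship this in a private app or create it via Settings > Fields > Field extractions rather than editing props.conf on disk.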
1. See the "Inspect Job" dashboard.
2. See the job log.
3. Know how Splunk indexing works.

Generally, it's a huge topic.
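Beyond inspecting the job, the biggest wins usually come from restricting the index and time range and filtering as early as possible. Two hypothetical searches over the same data, the index and host names being examples only:

```
Slow:   index=* | search host=web* | stats count by host
Faster: index=web host=web* earliest=-24h@h | stats count by host
```

The second form lets the indexers discard non-matching events before anything reaches the search head.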
If you want a list of all UFs you ever installed anywhere, regardless of whether they are alive or not, that's impossible, of course, without third-party inventory software. You can get a list of deployment clients using the deployment server, but these will only be the clients that use said DS to get their apps. You can also use the Forwarder Monitoring section in the Monitoring Console, but this will only populate after some time of forwarder activity (and you have to enable this functionality in MC).
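A common way to list forwarders that have connected recently is to search the tcpin_connections metrics in _internal, roughly like this (adjust the time range to taste; hostname, sourceIp, and version are the standard fields in those metrics.log events):

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(version) AS version latest(sourceIp) AS ip max(_time) AS last_seen BY hostname
```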
Hi @Paramy

If you are using a deployment server to manage your forwarders, you can access the forwarder management interface through Splunk Web on the deployment server and view the forwarders which are connected as clients. You can also check from the CLI with the below command:

splunk list deploy-clients
How can I troubleshoot slow search performance in Splunk when searching across large datasets?
Hello, can you help me out? How can I find a listing of all universal forwarders in my Splunk environment?
The answer is "it depends". Let's start from the end. You should _not_ rename the default directory. If you want to override any default settings, you create a new directory called local and place the config items there. For more info about config file precedence, see https://docs.splunk.com/Documentation/Splunk/latest/admin/Wheretofindtheconfigurationfiles

For the first three questions the answer is "it depends". It depends on whether the add-on contains search-time definitions (then you deploy it on the SH tier) and whether it contains index-time definitions (then you deploy it in your indexing pipeline; where exactly depends on your ingestion process).
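Concretely, the override layout looks like this. The default files are overwritten on every app upgrade, while local survives and is merged on top with higher precedence:

```
Splunk_TA_fortinet_fortigate/
    default/props.conf   <- shipped settings, leave untouched
    local/props.conf     <- your overrides, take precedence
```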
In 7/10 cases of "the secret is OK", it turns out to not be OK. In 2/10 cases it's a network problem (missing firewall rules or routing issues). The remaining 10% is some other misconfiguration.
"ecs" is not a native Splunk command, so whatever add-on it came from, you need to look in its docs. The only Splunk-related thing is that the string, which apparently contains a command for an external service, must be properly escaped. Other than that, it's beyond the Splunk realm.
I tried to search data with a dynamic script:

| ecs "opensearch_dashboards_sample_data_flights" "{ \"from\": 0, \"size\": 1000, \"query\": { \"match_all\": {} }, \"script_fields\": { \"fields\": { \"script\": { \"source\": \\\"def fields = params['_source'].keySet(); def result = new HashMap(); for (field in fields) { def value = params['_source'][field]; if (value instanceof String && value.contains('DE')) { result.put(field, value.replace('DE', 'Germany')); } else { result.put(field, value); }} return result;\\\" } } }, \"track_total_hits\": true }" "only" | table *

But it is not working. I think the problem is in the source part of my command, but I don't know how to fix it:

\"source\": \\\"def fields = params['_source'].keySet(); def result = new HashMap(); for (field in fields) { def value = params['_source'][field]; if (value instanceof String && value.contains('DE')) { result.put(field, value.replace('DE', 'Germany')); } else { result.put(field, value); }} return result;\\\"

Hope someone can help me fix this. Thank you very much for spending time on my issue.
Could not contact master. Check that the master is up, the master_uri=https://10.0.209.11:8089 and secret are specified correctly on IDX.

I went in and fixed the previous error with the password, but I still have this error. I would like to learn how to troubleshoot my issue. Would someone be willing to come on Zoom and assist me?
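For reference, the peer-side settings that have to line up with the manager live in server.conf on each indexer; the "secret" in that error is pass4SymmKey and must match the [clustering] stanza on the manager. A sketch with placeholder values (on 9.x the preferred setting names are manager_uri and mode = peer; the legacy names are shown because the error message uses master_uri):

```
[clustering]
mode = slave
master_uri = https://10.0.209.11:8089
pass4SymmKey = <same secret as on the manager>
```

You can verify what Splunk actually resolved on the indexer with: splunk btool clustering list --debug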
That is a Splunk-supported app so the best way to report a failure like this is to file a case with Splunk Support. If you do not have a support entitlement, submit it at https://ideas.splunk.com.
Hello Esteemed Splunkers, I have a long question, and I wish to have a long and detailed discussion ^-^

First of all, we have a distributed environment:
- Deployer with 3x search heads
- Indexer master with 3x indexers
- Deployment server with 2x heavy forwarders

We want to deploy "Splunk_TA_fortinet_fortigate"; the below is the content. The questions are:
1. Should we deploy this app from the deployer to all search heads?
2. Should we deploy this app from the indexer master to all indexers?
3. Should we deploy this app from the deployment server to all heavy forwarders?
4. Should we change the name of the default folder to local?

In a nutshell, what should we do, and which considerations should we look at? Thanks in advance!
All, I am currently working with the Splunk Add-on for Microsoft Office 365. The default regex in transforms.conf for extract_src_user_domain and extract_recipient_domain will only extract the last two parts of an email domain, resulting in domains like bank.co.in returning as co.in.

Current:

[extract_src_user_domain]
SOURCE_KEY = ExchangeMetaData.From
REGEX = (?<SrcUserDomain>[a-zA-Z]*\.[a-zA-Z]*$)

[extract_recipient_domain]
SOURCE_KEY = ExchangeMetaData.To{}
REGEX = (?<RecipientDomain>[a-zA-Z]*\.[a-zA-Z]*$)
MV_ADD = true

Suggest updating it to be in line with the messagetrace rex:

[extract_messagetrace_src_user_domain]
SOURCE_KEY = SenderAddress
REGEX = @(?<src_user_domain>\S*)

[extract_messagetrace_recipient_domain]
SOURCE_KEY = RecipientAddress
REGEX = @(?<recipient_domain>\S*)

Thanks,
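A quick way to sanity-check the proposed regex against an awkward domain before touching transforms.conf:

```
| makeresults
| eval From="user@bank.co.in"
| rex field=From "@(?<SrcUserDomain>\S+)"
```

This returns bank.co.in in SrcUserDomain, whereas the current two-part regex anchored at the end of the string yields co.in.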