All Topics

I have a requirement to forward Okta logs to S3 buckets, in addition to ingesting them into Splunk. I see two possible ways to forward logs to S3: one during the input phase, where data can be cloned and forwarded to an S3 bucket, and the other after the logs have been indexed in Splunk. For the first option, outputs.conf has the parameter below, which looks like it could fulfil the requirement, but it is still under development:

remote_queue.sqs.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is still under development.

However, I am unsure whether either option has a clearly documented, reliable process to achieve the outcome. Can you please advise on this?
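For the input-phase option, here is a minimal sketch of how the cloning itself is usually done, assuming a hypothetical relay that handles the actual S3 upload (the group and host names below are made up). Splunk sends a full copy of the data to every group listed in defaultGroup:

# outputs.conf on the forwarder -- hypothetical sketch
[tcpout]
defaultGroup = splunk_indexers, s3_relay

[tcpout:splunk_indexers]
server = idx1.example.com:9997

[tcpout:s3_relay]
# e.g. an aggregator or Kinesis/Firehose agent that writes to the S3 bucket
server = relay.example.com:5140
sendCookedData = false

Splunk itself does not write to S3 here; the second output group only delivers the cloned stream to whatever component performs the upload.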
Hi, I have a dashboard with a time range picker and a filter for the CI branch. Whatever time range I select in the time picker, I want applied to the CiBranch filter as well. As of now, the filter always uses the last 24 hours, and I don't see any option to assign the time_range to the CiBranch filter. These are the options under CiBranch. Thanks, SG
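A minimal Simple XML sketch of one way this is typically wired up, assuming the time picker's token is named time_range and the dropdown is populated by a search (the index, sourcetype, and field names are placeholders):

<input type="dropdown" token="ci_branch">
  <label>CiBranch</label>
  <search>
    <query>index=ci sourcetype=builds | stats count BY CiBranch</query>
    <!-- reuse the shared time picker's tokens -->
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
  </search>
  <fieldForLabel>CiBranch</fieldForLabel>
  <fieldForValue>CiBranch</fieldForValue>
</input>

The key point is the <earliest>/<latest> elements inside the input's populating search; without them the dropdown's search falls back to its own default window.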
Hi Splunkers. We have an issue whereby a TAXII feed has stopped being incorporated into the Enterprise Security Threat Intelligence module. The feed had been working (i.e. downloading and importing indicators) for some time, but recently only the download works:
- The Threat Intelligence Audit dashboard in ES shows the download with no errors (exit_status of 0).
- We can see the downloaded .xml file with TAXII indicators in the SA-ThreatIntelligence/local/data/threat_intel directory.
- threatlist.log also shows a successful download.
I don't see anything specific in the logs showing an issue processing the downloaded files. In Security Intelligence --> Threat Intelligence --> Threat Artifacts we only see the earlier files. Any other suggestions for where to look to diagnose/resolve this issue? Cheers!
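One more place worth checking, on the assumption that the ES intel manager logs its parsing stage to _internal (the log name below is from memory, so treat it as an assumption):

index=_internal source=*threat_intelligence_manager.log* (ERROR OR WARN)

A clean download with no subsequent parse activity in that log would point at the import stage rather than the TAXII connection.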
Hi, I am using Splunk_TA_aws to pull two CloudWatch metrics for the EC2 service: CPUUtilization and CPUCreditBalance. The first one is getting ingested, but CPUCreditBalance is not coming through. I am able to get this metric from the command line, so an access issue can be ruled out. The metric is available for the EC2 machines at a 5-minute interval (confirmed on the CloudWatch console). Here is my input stanza. Can you please advise what can be changed to get it working?

[aws_cloudwatch://Cloudwatch_EC2_CPUCreditBalance_d44a1ce1-ed7c-4d0c-b516-077446146b6b]
aws_account = xxx-splunk-collector-03-uat-collector_3
aws_iam_role = xxx_AWS_EC2_metrics_AssumeRole
aws_region = ap-southeast-2
index = au_test_aws_metrics
# Splunk_TA_aws
metric_dimensions = [{"InstanceId":[".*"]}]
metric_names = ["CPUCreditBalance"]
metric_namespace = AWS/EC2
period = 300
polling_interval = 600
sourcetype = aws:cloudwatch:metric
statistics = ["Average","Sum","SampleCount","Maximum","Minimum"]
use_metric_format = true
metric_expiration = 3600
query_window_size = 24
The header row is also getting indexed as an event while onboarding CSV data, so the fields are not extracted properly.
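A minimal props.conf sketch of the usual fix, assuming a hypothetical sourcetype name and a header on the first line of the file:

# props.conf -- sourcetype name is a placeholder
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
# If the header sits further down the file, point Splunk at it instead:
# HEADER_FIELD_LINE_NUMBER = 3

Note that with INDEXED_EXTRACTIONS the stanza must be deployed to the forwarder that reads the file, not only to the indexers, since structured parsing happens at the input tier.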
Hi All, I have a search that is not returning what I would like. I need to unnest some JSON but am having issues. Here is an example of the JSON:

{"configuration": {
   "targetResourceType": "AWS::EC2::Volume",
   "targetResourceId": "resource123",
   "configRuleList": [
     {"configRuleId": "config1", "configRuleArn": "removed", "configRuleName": "config1rule", "complianceType": "COMPLIANT"},
     {"configRuleId": "config2", "configRuleArn": "removed", "configRuleName": "config2rule", "complianceType": "COMPLIANT"},
     {"configRuleId": "config3", "configRuleArn": "removed", "configRuleName": "config3rule", "complianceType": "NON_COMPLIANT"}
   ],
   "complianceType": "NON_COMPLIANT"},
 "configurationItemStatus": "OK",
 "configurationStateId": 11111111,
 "configurationStateMd5Hash": "",
 "supplementaryConfiguration": {},
 "resourceId": "AWS::EC2::Volume/resource123",
 "resourceType": "AWS::Config::ResourceCompliance",
 "relatedEvents": [],
 "tags": {},
 "relationships": [{"resourceType": "AWS::EC2::Volume", "name": "Is associated with ", "resourceId": "resource123"}],
 "configurationItemVersion": "1.3",
 "configurationItemCaptureTime": "2021-01-23T06:28:07.415Z",
 "awsAccountId": "removed",
 "awsRegion": "removed"}

Here is the logic I am using:

MY SEARCH
| spath configuration{} output=configuration
| stats count by resourceId configuration
| eval _raw=configuration
| spath configRuleList{} output=configRuleList
| stats count by resourceId configuration configRuleList
| eval _raw=configRuleList
| spath complianceType output=complianceType
| spath configRuleArn output=configRuleArn
| spath configRuleId output=configRuleId
| spath configRuleName output=configRuleName
| table resourceId complianceType configRuleArn configRuleId configRuleName

The desired result would be a table that accounts for the 3 different rules, with 3 separate rows, one for each rule.
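A sketch of one way to get per-rule rows, assuming the raw event is the JSON above (MY SEARCH stands in for the real base search): pull the configRuleList array out with spath, expand it so each rule becomes its own event, then extract the per-rule fields from the expanded value.

MY SEARCH
| spath path=resourceId output=resourceId
| spath path=configuration.configRuleList{} output=rule
| mvexpand rule
| spath input=rule
| table resourceId configRuleId configRuleName configRuleArn complianceType

The mvexpand is what turns the three array elements into three rows; the second spath then reads each element's own JSON, so its complianceType does not collide with the top-level one.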
I was wondering if anyone has successfully deployed a clustered instance of Splunk Enterprise on AWS ECS Fargate. I'm looking to get rid of server management altogether for my cluster without having to go to Splunk Cloud. I read through a Splunk blog post on deploying a single instance to Fargate, but it didn't really cover things like dealing with SmartStore or the high memory requirements for indexers and search heads. If anyone has experience with this, I'd much appreciate the lessons learned.
Hi everyone! Hoping I might be missing something simple. We're running Splunk Enterprise 8.1.0 with the officially distributed Docker image. All is well with our search head cluster, save for one slightly difficult-to-track-down issue that has been causing frequent restarts of our search head tasks. Everything starts up cleanly, we have a good search head cluster, UIs are returning results normally, etc. But it appears that an Ansible health check that runs at the very end of the playbook is failing to validate that the splunkweb UI is up and running (it is):

included: /opt/ansible/roles/splunk_search_head/tasks/../../../roles/splunk_common/tasks/wait_for_splunk_instance.yml for localhost
Monday 23 August 2021 23:48:24 +0000 (0:00:00.045) 0:00:59.426 *********
FAILED - RETRYING: Check Splunk instance is running (60 retries left).

This eventually fails after 60 retries and forces the container to restart, briefly disrupting the search head cluster. We haven't overridden many options on the web.conf side aside from setting up ProxySSO (this was happening before configuring SSO also). According to the file, this is the configured check:

---
- name: Check Splunk instance is running
  uri:
    url: "{{ scheme | default(cert_prefix) }}://{{ splunk_instance_address }}:{{ port | default(splunk.svc_port) }}"
    method: GET
    validate_certs: false
    use_proxy: no
  register: task_response
  until:
    - task_response.status == 200
  retries: "{{ wait_for_splunk_retry_num }}"
  delay: "{{ retry_delay }}"
  ignore_errors: true
  no_log: "{{ hide_password }}"

I can't see anything in the Ansible logs detailing what that URL renders as. My suspicion is that this check is attempting to contact either the wrong hostname or is using SSL (we have disabled SSL and are terminating on a reverse proxy), but I can't find any evidence to back that up. Is there something else I can do to force that check to use http://localhost:8000? Thanks!!
I currently use the Monitoring Console to tell me if a forwarder has not reported in the last 15 minutes, in which case I consider that forwarder gone, and I also check the list of decommissioned hosts to consider a forwarder plus its host gone for good. But what if the forwarder software has an issue and the host itself is fine? Is there an SPL search or another way to tell that the forwarder agent/software is broken, so I can at least troubleshoot or reinstall it? Thank you for your help in advance.
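A sketch that may help separate "host down" from "forwarder down", assuming the indexers' _internal metrics are searchable (the field names are from memory; verify them against your data):

index=_internal source=*metrics.log* group=tcpin_connections
| stats max(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15

If a host still answers ping/SSH/WMI but stops showing up in this list, the agent rather than the machine is the likely culprit, which is the cue to check splunkd on that host or reinstall the forwarder.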
Need help: I have a Splunk query where I want to evaluate today's day of week using now(), and then use it to compare data for the past 4 weeks on the same day of week. If today is Monday, I want to compare data for the past 4 Mondays with today.
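A minimal sketch of one approach, with a placeholder index name: keep the last four weeks of events, filter to the ones whose weekday matches today's, then bucket by day.

index=your_index earliest=-28d@d latest=now
| where strftime(_time, "%A") = strftime(now(), "%A")
| bin _time span=1d
| stats count BY _time

This yields five daily buckets: today plus the four previous same weekdays. Swapping the last two lines for timechart span=1d count works equally well if a chart is the goal.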
Good afternoon Splunkers, let me start by saying that I hope this is the right sub-forum for this question. I'm working on a dashboard within Splunk to visualize our AWS Web Application Firewall data. The purpose of this dashboard is to show general statistics and information about the requests our AWS WAF solution is processing. Ultimately, we would like to use this dashboard to debug and tune our WAF as we move it into enforcement mode. One of the many charts/tables I'm trying to put together is a list of AWS WAF rule-sets, and their sub-rules that have been triggered, by website our WAF is monitoring. A concrete example of what I'm looking to create:

Webpage         | WAF Rulegroup Triggered | Sub-Rule Triggered    | Count
SomeWebpage.com | AWSManagedCommonRuleSet | GenericRFI_Body       | 5
                |                         | SomeOtherVuln         | 10
                |                         | NoUserAgent_HEADER    | 15
                | AWSAnonymousIpList      | HostingProviderIpList | 20

The biggest issue I'm currently facing is that the AWS WAF data, while in JSON format from AWS, does not follow proper JSON key/value pairings and has nested arrays containing multiple types of information. Specifically, the nested array that contains the rule evaluation information for a particular request contains all of the rules evaluated, even if the rules did not match or no sub-rules fired. Example below:

"ruleGroupList": [
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesCommonRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesSQLiRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesLinuxRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAdminProtectionRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesKnownBadInputsRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAmazonIpReputationList", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [],
   "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList",
   "terminatingRule": {"action": "BLOCK", "ruleId": "HostingProviderIPList", "ruleMatchDetails": null}}
]

As you can see, even though only one AWS rulegroup fired for this request ("AWSManagedRulesAnonymousIpList"), and within that group the sub-rule "HostingProviderIpList" fired, all of the rule-groups assigned to the WAF are present within the array. Therefore, if I were to search for something like

$search
| stats count by nonTerminatingMatchingRules{}.ruleId, ruleGroupList{}.terminatingRule.ruleId
| stats list(ruleGroupList{}.terminatingRule.ruleId), list(count) by nonTerminatingMatchingRules{}.ruleId

I would get back a list of each rule-set, but I would also get back every sub-rule that has fired, even if the sub-rule is not part of the rule-set that fired. What commands can I use to transform this data into proper key-value pairs on a per-rulegroup basis? Based on what I've read I think I want to use spath and mvexpand, I'm just not sure of the best path forward. For full transparency, here's an entire WAF log in JSON format, so you can see all of the fields. Here's the guide for understanding these fields as well.
{
  "action": "ALLOW",
  "formatVersion": 1,
  "httpRequest": {
    "args": "",
    "clientIp": "8.8.8.8",
    "country": "CA",
    "headers": [
      {"name": "Authorization", "value": "SomeToken"},
      {"name": "User-Agent", "value": "Site24x7"},
      {"name": "Cache-Control", "value": "no-cache"},
      {"name": "Accept", "value": "*/*"},
      {"name": "Connection", "value": "Keep-Alive"},
      {"name": "Accept-Encoding", "value": "gzip"},
      {"name": "Content-Type", "value": "application/json; charset=UTF-8"},
      {"name": "X-Site24x7-Id", "value": "Redacted"},
      {"name": "Content-Length", "value": 1396},
      {"name": "Host", "value": "mywebpage.com"}
    ],
    "httpMethod": "POST",
    "httpVersion": "HTTP/1.1",
    "requestId": "RedactedID",
    "uri": "/big/uri/path"
  },
  "httpSourceId": "RedactedID",
  "httpSourceName": "ALB",
  "labels": [
    {"name": "awswaf:managed:aws:anonymous-ip-list:HostingProviderIPList"}
  ],
  "nonTerminatingMatchingRules": [
    {"action": "COUNT", "ruleId": "AWSCommonRuleSet", "ruleMatchDetails": []},
    {"action": "COUNT", "ruleId": "AWSAnonymousIpList", "ruleMatchDetails": []}
  ],
  "rateBasedRuleList": [],
  "requestHeadersInserted": null,
  "responseCodeSent": null,
  "ruleGroupList": [
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesCommonRuleSet",
     "terminatingRule": {"action": "BLOCK", "ruleId": "GenericRFI_BODY", "ruleMatchDetails": null}},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesSQLiRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesLinuxRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAdminProtectionRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesKnownBadInputsRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAmazonIpReputationList", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [...],
     "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList",
     "terminatingRule": {"action": "BLOCK", "ruleId": "HostingProviderIPList", "ruleMatchDetails": null}}
  ],
  "terminatingRuleId": "Default_Action",
  "terminatingRuleMatchDetails": [],
  "terminatingRuleType": "REGULAR",
  "timestamp": 1629751363362,
  "webaclId": "RedactedID"
}
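A sketch of the rulegroup/sub-rule breakdown, assuming events shaped like the log above (the host header is buried in the headers array, so the "Webpage" column is left out here; add it with your own extraction). Expanding ruleGroupList first and then filtering to entries with a terminatingRule avoids counting rulegroups that never fired:

<your base search>
| spath path=ruleGroupList{} output=rulegroup
| mvexpand rulegroup
| spath input=rulegroup
| where isnotnull('terminatingRule.ruleId')
| rename ruleGroupId AS rule_group, terminatingRule.ruleId AS sub_rule
| stats count BY rule_group, sub_rule

Because each expanded row carries only its own rulegroup's fields, the sub-rule counts can no longer bleed across rulegroups the way they do when stats runs against the flattened multivalue fields.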
I'm trying to install Splunk Phantom on a CentOS server but I'm getting the error below.

About to proceed with Phantom install
Do you wish to proceed [y/N] y
sed: can't read /opt/phantom/bin/stop_phantom.sh: No such file or directory
Enter username: vikram@abc.com
Enter password: **********
./phantom_setup.sh: line 357: python: command not found
./phantom_setup.sh: line 358: python: command not found
21 files removed
Updating phantom repo package
Error updating Phantom Repo package
Errors during downloading metadata for repository 'phantom-base':
- Status code: 404 for https://***@repo.phantom.us/phantom/4.5/base/8/x86_64/repodata/repomd.xml (IP: 54.165.15.205)
Error: Failed to download metadata for repo 'phantom-base': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Hi Splunk Community, I have a query with 5 event types:

index=apple source=Data AccountNo=* eventType=Dallas OR eventType=Houston OR eventType="New York" OR eventType=Boston OR eventType="San Jose"
| table AccountNo eventType _time

An account has to pass eventType=1 to reach the next stage, i.e. eventType=2, and so on; only then can we consider it a successful account. Now I want a query for the unsuccessful accounts, meaning accounts that did not pass eventType=1 but reached later stages like eventType=2 or eventType=3. Currently I'm using this query but it's not working:

index=apple source=Data AccountNo=* eventType!=1

Please help
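A sketch of one way to find accounts that skipped stage 1, assuming the numeric stage values from the question (mvfind returns null when no multivalue entry matches the regex):

index=apple source=Data AccountNo=*
| stats values(eventType) AS stages BY AccountNo
| where isnull(mvfind(stages, "^1$")) AND isnotnull(mvfind(stages, "^[2-5]$"))

The per-account stats is the important step: eventType!=1 alone filters individual events, so it still returns accounts that did pass stage 1 but also have later-stage events.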
Hello, I am attempting to combine 2 reports (one is a normal stats search and the other is a pie chart built from the data produced by the first report's search). I have searched and tried numerous different things but none have solved the issue. For example:
- Windows Monthly Data
- Windows Monthly Data Pie Chart
I also have to combine 4 reports of firewall logs into 1 report, as in the sketch after this list. For example:
- Firewall: Building G
- Firewall: Building F
and so on for the remaining 2 firewall log reports. If anyone could offer any advice or suggestions it would be greatly appreciated! Thank you
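If a dashboard is an acceptable way to combine them, here is a minimal Simple XML sketch with one base search feeding both a table and a pie chart; the query, labels, and report names are placeholders, not the actual reports:

<dashboard>
  <label>Windows Monthly Data (combined)</label>
  <!-- Base search: replace with the search behind "Windows Monthly Data" -->
  <search id="base">
    <query>index=wineventlog | stats count BY EventCode</query>
    <earliest>-30d@d</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <title>Windows Monthly Data</title>
      <table>
        <search base="base"/>
      </table>
    </panel>
    <panel>
      <title>Windows Monthly Data Pie Chart</title>
      <chart>
        <search base="base"/>
        <option name="charting.chart">pie</option>
      </chart>
    </panel>
  </row>
</dashboard>

The same pattern extends to the four firewall reports: one row with four panels, each referencing its own saved search, keeps them on a single page without merging the underlying reports.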
Hi, I am new to Splunk and inherited the infrastructure. I noticed that bucket creation keeps failing, and the hot/warm file system is at 70% on one site and 90% on the other. Can anyone help, please? Thank you
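A hedged starting point for surfacing the underlying errors, assuming they land in _internal (the component names below are common ones for bucket problems, not an exhaustive or guaranteed list):

index=_internal sourcetype=splunkd log_level=ERROR
    (component=DatabaseDirectoryManager OR component=HotBucketRoller OR component=IndexWriter)
| stats count BY host, component

Also worth noting: a volume at 90% may already be near the minFreeSpace threshold in server.conf, and Splunk stops indexing outright when free disk falls below that value.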
Hello, I am using the Storage Passwords mechanism to store important keys and passwords used in an app. The passwords.conf file is generated properly from the SDK. When decrypting passwords for use in the app, I am not getting a clear_password back from the Splunk Storage Passwords mechanism; I just get an exception whose message is "clear_password". The piece of code that is not working is given below; the app runs on Splunk Enterprise 8.0.8:

import sys
import splunk.entity as entity

def get_passwords(app):
    '''Retrieve the user's API keys from storage/passwords'''
    pwd_dict = {}
    try:
        sessionKey = sys.stdin.readline().strip()
        # list all credentials
        entities = entity.getEntities(['storage', 'passwords'], namespace=app,
                                      owner='nobody', sessionKey=sessionKey)
        # return the set of credentials
        for i, c in entities.items():
            pwd_dict[c['username']] = c['clear_password']
    except Exception as e:
        # the KeyError raised above is what surfaces here as "clear_password"
        raise Exception("Could not get %s passwords from storage. Error: %s" % (app, str(e)))
    return pwd_dict

Any suggestions?
Hello, I'm upgrading a search head from 7.3.0 to 8.2.1. First I upgraded it to 8.1.5 and didn't experience any problems. Then I upgraded to 8.2.1, and the knowledge bundle replication to the search peers failed with the following errors in the logs.

In the search head's splunkd.log:

08-23-2021 18:48:56.228 +0200 WARN BundleTransaction [2589 BundleReplThreadPoolWorker-1] - Upload bundle="/opt/splunk/current/var/run/sh01-1629737334.bundle" to peer name=idx01 uri=https://10.10.22.14:8089 failed; http_status=409 http_description="Conflict"
08-23-2021 18:48:56.234 +0200 ERROR ClassicBundleReplicationProvider [2589 BundleReplThreadPoolWorker-1] - Unable to upload bundle to peer named idx01 with uri=https://10.10.22.14:8089.

In the indexers' splunkd.log:

08-23-2021 18:48:56.225 +0200 ERROR DistBundleRestHandler - Checksum mismatch: received copy of bundle="/opt/splunk/var/run/searchpeers/sh01-1629737334.bundle" has transferred_checksum=15251024310319607191 instead of checksum=5204570444500435281 -- removing temporary file="/opt/splunk/var/run/searchpeers/sh01-1629737334.bundle.c2ead49153e7b186.tmp". This should be fixed with the next knowledge bundle replication. If it persists, please check your filesystem and network interface for errors.

The bundle size is not big, but the size reported in the .info file is quite different from the size on the filesystem:

[splunk@sh01 run]$ ls -l
...
-rw------- 1 splunk splunk 4280079 Aug 23 18:48 sh01-1629737334.bundle
-rw------- 1 splunk splunk 42 Aug 23 18:48 sh01-1629737334.bundle.info
[splunk@sh01 run]$ cat sh01-1629737334.bundle.info
checksum,size
5204570444500435281,6574080

The indexers are in a cluster and all nodes are running version 7.3.0. I know Splunk recommends the manager node be on a version higher than or equal to the peers, but I'm validating some custom apps on a test search head, which I wanted to do on version 8.2. In another non-production environment, a search head on 8.2 works (no bundle replication problems) with 7.3.0 indexers.
I have a simple TA that makes a request to a REST endpoint and writes the data to an index (there is no UI associated with this, only indexing). I'm exploring a distributed Splunk environment (with a forwarder, an indexer, and a search head), but I'm unsure where to install the TA: on the forwarder, on the indexer, or somewhere else? Reading similar forum posts, it appears the answer can depend on the TA, but what about a TA determines which tier it should be installed on?
I have my paging policies set to send a push notification to all of my devices, but I am only getting the audio alert through my Bluetooth. I have the current Splunk app v7.52 and an Android Galaxy Note20.
I need a Splunk ID for taking a Splunk Certification exam on PearsonVUE. How do I get the 6-digit ID?