Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hello, we have created many custom correlation searches in our client's deployed instance. Right now they are generating too many notable events, even with the "window limitation" (throttling window duration) configured. Can somebody help?
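A minimal throttling sketch in savedsearches.conf, assuming the notables should be suppressed per destination for a day; the stanza name and the dest field are hypothetical, and in the ES UI these map to the correlation search's throttling fields and window duration:

# savedsearches.conf
[Our Custom Correlation Search]
alert.suppress = 1
# suppress further notables carrying the same value of this field...
alert.suppress.fields = dest
# ...for 24 hours after the first match
alert.suppress.period = 86400s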
Hey team, I'm looking to ingest Microsoft unified labeling logs into Splunk. MSFT unified labeling is an Azure AIP (Azure Information Protection) based app. Any kind of help/info would be appreciated.
Hi team, I have the data below in Splunk, and I want to get the time duration for a range where ACT starts with "AUTOSAVEFORM_trigReq_AutoSaveForm" and ends with "AUTOSAVEFORM_after_sendRequest". I have tried the query below, but it doesn't return the correct result:

index=*bizx_application AND sourcetype=perf_log_bizx AND PID="PM_REVIEW" AND PLV=EVENT AND ACT="AUTOSAVEFORM_*" AND C_ACTV="*commentEdit*" OR ACT="*SendRequest"
| reverse
| transaction CMID SID UID startswith="AUTOSAVEFORM_trigReq_AutoSaveForm" endswith="AUTOSAVEFORM_after_sendRequest"
| table _time duration eventcount

Can anyone please help provide a solution?
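A sketch of a corrected search, assuming the field names above: the OR clause needs parentheses (AND binds tighter, so the unparenthesized filter matches more than intended), and the endswith value must match the full event text ("AUTOSAVEFORM_after_sendRequest", not a truncated "...sendReques"):

index=*bizx_application sourcetype=perf_log_bizx PID="PM_REVIEW" PLV=EVENT
    ((ACT="AUTOSAVEFORM_*" AND C_ACTV="*commentEdit*") OR ACT="*SendRequest")
| reverse
| transaction CMID SID UID startswith=(ACT="AUTOSAVEFORM_trigReq_AutoSaveForm") endswith=(ACT="AUTOSAVEFORM_after_sendRequest")
| table _time duration eventcount

Using startswith=(ACT="...") instead of a bare string also ties the boundary test to the ACT field rather than to raw event text.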
Hi, I am trying the app "Lookup File Editor" and have problems with the match_type settings. Normal lookups can be configured to use match_type=CIDR or something else, but I can't find similar settings in the "Lookup File Editor" app from Splunkbase. Am I doing something wrong, or is this feature not included? Thanks
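For what it's worth, a lookup's match_type normally lives in transforms.conf rather than in any editor UI; a minimal sketch, assuming a lookup file my_cidr_lookup.csv with an ip column (both names hypothetical):

# transforms.conf
[my_cidr_lookup]
filename = my_cidr_lookup.csv
match_type = CIDR(ip)

The Lookup File Editor can then still be used to edit the CSV contents; the match_type applies whenever the lookup definition is invoked.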
This is the table. How can I group similar names together into one entry so that the counts are added for both of them? For example, 5-Mock Activity and 6-Mock activity should appear in one row as "Mock Activity", and the count for that field should be 19+5, i.e. 24.
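A minimal sketch of one way to do this, assuming the table's columns are called name and count (hypothetical names) and the variants differ only by a leading "<number>-" prefix and letter case:

... | eval name=lower(replace(name, "^\d+-\s*", ""))
| stats sum(count) AS count BY name

Stripping the numeric prefix and lowercasing gives both variants the same grouping key, so stats can sum their counts into a single row.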
Hi, we have a requirement to send alerts to our Teams channel. I have tested both the Splunk Teams add-on and a generic webhook, but neither of them can send out the alert. Can anyone please help with this? Thanks, Xin
We’re running Splunk 8.1.2 on RHEL 8.x and are using some dashboards that make use of a lookup file “itsp_compliance_settings.csv”, with an example below:

host_environment,title,setting,must,value
…
Production,IP default-gateway,default_gateway,equal,1.2.3.4
Production,IP default-gateway,default_gateway,equal,5.6.7.9
…

This is an extract of the search behind the dashboard using the above lookup:

index="cisco_ios_config" sourcetype="ApplianceConfigurations:Cisco:IOS"
| dedup host
| fields - tag, -_raw, - tag::eventtype
| rex field=source "\/usr\/local\/rancid\/var\/(?<host_environment>\w+)\/configs\/"
| rex field=source "\/usr\/local\/rancid\/var\/\w+\/configs\/\w+-\w+-(?<extra_host_environment_check>\w+)-"
| lookup ITSP:Compliance_Settings host_environment
| eval zip=mvzip(title, setting, "||")
| eval zip=mvzip(zip, must, "||")
| eval zip=mvzip(zip, value, "||")
| mvexpand zip
| makemv delim="||" zip
| eval title=mvindex(zip,0)
| eval setting=mvindex(zip,1)
| eval must=mvindex(zip,2)
| eval value=mvindex(zip,3)
| foreach * [ eval field=if("<<FIELD>>"==setting,<<MATCHSTR>>,field)]
| fillnull value="Setting not found" field
| mvexpand field
| eval fail=if(trim(field)==trim(value),if(must=="equal",0,1),if(must=="equal",1,0))
| stats sum(fail) AS "Count" by title
| rename title AS "Setting"
| eval Status=if(Count > 0, "error", "ok")

Can someone please tell me whether it is possible to adapt the search to take into account more than one possible value in the lookup (both default gateways in the above example are valid)? Thanks
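A sketch of one possible adaptation, assuming the only change needed is to treat multiple lookup rows for the same setting as alternatives rather than as independent requirements. After the mvexpand, each candidate value becomes its own row, so a host that matches one of the two gateways still produces one failing row; collapsing per host and title with min(fail) keeps the best outcome for must="equal" rules (a "not equal" rule would want max(fail) instead):

... everything up to and including the existing fail eval ...
| stats min(fail) AS fail BY host, title
| stats sum(fail) AS "Count" by title
| rename title AS "Setting"
| eval Status=if(Count > 0, "error", "ok")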
Hi experts. Question: does anyone know how to change the STS endpoint to the private VPC endpoint (VPCe) interface address when adding an account to the Add-on for AWS during setup?
I am trying to deploy Splunk on a VM in a private subnet (no route to the internet) in a VPC in AWS, and to index data from S3 (and more later). Currently, I have set up VPC endpoints (interface) for S3 and STS, and confirmed those two endpoints are accessible from the VM with an account via the AWS CLI. When I tried to add an account in the add-on's account setup, the add-on tried to talk to the public STS endpoint, which the private network has no route to. I would like to change the add-on configuration so that it talks to the private STS VPCe address to complete the setup and add the account. If there is another way to have Splunk run in a private subnet, I would like to know about it as well. Any comment would be appreciated. Thank you!
I am looking for a solution to transfer logs from Splunk and store them in MongoDB. Can anyone suggest one?
I have a requirement to forward Okta logs to S3 buckets, in addition to ingesting them into Splunk. As I see it, there might be two ways to forward logs to S3 buckets:
1. During the input phase, where data can be cloned and forwarded to an S3 bucket. In outputs.conf there is the parameter below, which looks like it could fulfil this requirement; however, it is still under development:
remote_queue.sqs.large_message_store.endpoint = <URL>
* Currently not supported. This setting is related to a feature that is still under development.
2. After indexing the logs into Splunk.
However, I am unsure whether there is a clearly documented, reliable process to achieve the outcome for either option. Can anyone please advise on this?
Hi, I have a dashboard with a time range picker and a filter for the CI branch. I want the CiBranch filter to use the same time range that I select in the time_range picker. As of now it takes the last 24 hours, and I don't see any option to assign the time_range to the CiBranch filter. These are the options under CiBranch. Thanks, SG
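If this is a Simple XML dashboard, the filter's populating search can be scoped to the shared time picker by referencing its token; a minimal sketch, assuming the time picker's token is named time_range and the populating search and index name are hypothetical:

<input type="dropdown" token="ci_branch">
  <label>CiBranch</label>
  <search>
    <query>index=my_ci_index | stats count BY CiBranch</query>
    <earliest>$time_range.earliest$</earliest>
    <latest>$time_range.latest$</latest>
  </search>
  <fieldForLabel>CiBranch</fieldForLabel>
  <fieldForValue>CiBranch</fieldForValue>
</input>

With the <earliest>/<latest> elements pointing at the token, the dropdown repopulates whenever the time picker changes instead of defaulting to the last 24 hours.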
Hi Splunkers. We are having an issue whereby a TAXII feed has stopped being incorporated into the Enterprise Security Threat Intelligence module. The feed had been working OK (i.e. downloading and importing indicators) for some time, but recently only the download is working:
- The Threat Intelligence Audit dashboard in ES shows the download with no errors (exit_status of 0).
- We can see the downloaded .xml file with TAXII indicators in the SA-ThreatIntelligence/local/data/threat_intel directory.
- threatlist.log also shows a successful download.
I don't see anything specific in the logs showing an issue processing the downloaded files, and in Security Intelligence --> Threat Intelligence --> Threat Artifacts we only see the earlier files. Any other suggestions for where to look to diagnose/resolve this issue? Cheers!
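One more place worth checking, assuming a standard ES install (verify the log file name on your version): the threat intelligence manager logs its parse/merge activity separately from the download, and that log often explains why a downloaded file never turns into threat artifacts:

index=_internal source=*threat_intelligence_manager.log* (ERROR OR WARN OR WARNING)
| table _time, _raw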
Hi, I am using Splunk_TA_aws to pull two CloudWatch metrics for the EC2 service: CPUUtilization and CPUCreditBalance. The first one is getting ingested, but CPUCreditBalance is not coming through. I am able to get this metric from the command line, so an access issue can be ruled out. The metric is available for the EC2 machines at a 5-minute interval (confirmed on the CloudWatch console). Here is my input stanza; can you please advise what can be changed to get it working?

[aws_cloudwatch://Cloudwatch_EC2_CPUCreditBalance_d44a1ce1-ed7c-4d0c-b516-077446146b6b]
aws_account = xxx-splunk-collector-03-uat-collector_3
aws_iam_role = xxx_AWS_EC2_metrics_AssumeRole
aws_region = ap-southeast-2
index = au_test_aws_metrics
metric_dimensions = [{"InstanceId":[".*"]}]
metric_names = ["CPUCreditBalance"]
metric_namespace = AWS/EC2
period = 300
polling_interval = 600
sourcetype = aws:cloudwatch:metric
statistics = ["Average","Sum","SampleCount","Maximum","Minimum"]
use_metric_format = true
metric_expiration = 3600
query_window_size = 24
The header row is also getting indexed as an event when onboarding CSV data, so the fields are not extracted properly.
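A minimal props.conf sketch, assuming the header is on the first line of the file and a hypothetical sourcetype name; with INDEXED_EXTRACTIONS = csv, Splunk reads the first row as field names instead of indexing it as an event (this stanza needs to live on the forwarder that monitors the file):

# props.conf
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
FIELD_DELIMITER = ,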
Hi all, I have a search that is not returning what I would like. I need to unnest some JSON but am having issues. Here is an example of the JSON:

{"configuration": {"targetResourceType": "AWS::EC2::Volume", "targetResourceId": "resource123", "configRuleList": [{"configRuleId": "config1", "configRuleArn": "removed", "configRuleName": "config1rule", "complianceType": "COMPLIANT"}, {"configRuleId": "config2", "configRuleArn": "removed", "configRuleName": "config2rule", "complianceType": "COMPLIANT"}, {"configRuleId": "config3", "configRuleArn": "removed", "configRuleName": "config3rule", "complianceType": "NON_COMPLIANT"}], "complianceType": "NON_COMPLIANT"}, "configurationItemStatus": "OK", "configurationStateId": 11111111, "configurationStateMd5Hash": "", "supplementaryConfiguration": {}, "resourceId": "AWS::EC2::Volume/resource123", "resourceType": "AWS::Config::ResourceCompliance", "relatedEvents": [], "tags": {}, "relationships": [{"resourceType": "AWS::EC2::Volume", "name": "Is associated with ", "resourceId": "resource123"}], "configurationItemVersion": "1.3", "configurationItemCaptureTime": "2021-01-23T06:28:07.415Z", "awsAccountId": "removed", "awsRegion": "removed"}

Here is the logic I am using:

MY SEARCH
| spath configuration{} output=configuration
| stats count by resourceId configuration
| eval _raw=configuration
| spath configRuleList{} output=configRuleList
| stats count by resourceId configuration configRuleList
| eval _raw=configRuleList
| spath complianceType output=complianceType
| spath configRuleArn output=configRuleArn
| spath configRuleId output=configRuleId
| spath configRuleName output=configRuleName
| table resourceId compianceType configRuleArn configRuleId configRuleName

The desired result would be a table that accounts for the 3 different rules and creates 3 different rows, one for each.
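A sketch of the usual spath + mvexpand pattern for this, assuming each event is the raw JSON shown above (and note the original table command misspells complianceType): pull the configRuleList array into a multivalue field, expand it to one row per rule object, then parse each object:

MY SEARCH
| spath output=resourceId path=configuration.targetResourceId
| spath output=rule path=configuration.configRuleList{}
| mvexpand rule
| spath input=rule
| table resourceId, complianceType, configRuleArn, configRuleId, configRuleName

After mvexpand, each row's rule field holds exactly one JSON object, so spath input=rule extracts complianceType, configRuleArn, configRuleId, and configRuleName per rule without the values bleeding across rows.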
I was wondering if anyone has successfully deployed a clustered instance of Splunk Enterprise on AWS ECS Fargate. I'm looking to get rid of server management altogether for my cluster without having to go to Splunk Cloud. I read through a Splunk blog post on deploying a single instance to Fargate, but it didn't really cover things like dealing with SmartStore or the high memory requirements for indexers and search heads. If anyone has experience with this, I'd much appreciate the lessons learned.
Hi everyone! Hoping I might be missing something simple. We're running Splunk Enterprise 8.1.0 with the officially distributed Docker image. All is well with our search head cluster, apart from one slightly difficult-to-track-down issue that has been causing frequent restarts of our search head tasks.

Everything starts up cleanly: we have a good search head cluster, UIs are returning results normally, etc. But it appears that an Ansible health check that runs at the very end of the playbook is failing to validate that the splunkweb UI is up and running (it is):

included: /opt/ansible/roles/splunk_search_head/tasks/../../../roles/splunk_common/tasks/wait_for_splunk_instance.yml for localhost
Monday 23 August 2021 23:48:24 +0000 (0:00:00.045) 0:00:59.426 *********
FAILED - RETRYING: Check Splunk instance is running (60 retries left).

This eventually fails after 60 retries and forces the container to restart, briefly disrupting the search head cluster. We haven't overridden many options on the web.conf side, aside from setting up ProxySSO (this was happening before configuring SSO too). According to the file, this is the configured check:

---
- name: Check Splunk instance is running
  uri:
    url: "{{ scheme | default(cert_prefix) }}://{{ splunk_instance_address }}:{{ port | default(splunk.svc_port) }}"
    method: GET
    validate_certs: false
    use_proxy: no
  register: task_response
  until:
    - task_response.status == 200
  retries: "{{ wait_for_splunk_retry_num }}"
  delay: "{{ retry_delay }}"
  ignore_errors: true
  no_log: "{{ hide_password }}"

I can't see anything in the Ansible logs detailing what that URL renders as. My suspicion is that the check is either contacting the wrong hostname or using SSL (we have disabled SSL and are terminating on a reverse proxy), but I can't find any evidence to back that up. Is there something else I can do to force that check to use http://localhost:8000? Thanks!!
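One thing that may be worth trying, on the assumption that the check inherits https:// from the splunk.http_enableSSL default in splunk-ansible (the cert_prefix in that URL template): explicitly declare splunkweb as plain HTTP in your default.yml overrides so the health check builds an http:// URL. A sketch only; the env-var spelling SPLUNK_HTTP_ENABLESSL also exists in docker-splunk, but verify both against your image version:

# default.yml (mounted into the container or served via SPLUNK_DEFAULTS_URL)
splunk:
  http_enableSSL: false   # assumption: cert_prefix resolves to http when this is false
  http_port: 8000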
I currently use the Monitoring Console to tell me whether a forwarder has not reported in the last 15 minutes, in which case I consider that forwarder gone; I also check the list of decommissioned hosts to consider a forwarder plus its host gone for good. But what if the forwarder software has an issue and the host itself is just fine? Is there an SPL search or another way to tell that the forwarder agent/software is broken, so I can at least troubleshoot or reinstall the forwarder? Thank you for your help in advance.
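A sketch of one common approach, assuming the forwarders' internal logs reach your indexers: every connected forwarder leaves group=tcpin_connections entries in the indexers' metrics.log, so a host that is still alive but whose forwarder process has died simply stops appearing there:

index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS last_seen BY hostname
| eval minutes_silent = round((now() - last_seen) / 60)
| where minutes_silent > 15

Cross-referencing the silent hostnames against a ping or inventory check then tells you whether the whole host is down or just the forwarder software.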
Need help: I have a Splunk query where I want to evaluate today's day of week using now() and then use it to compare data for the past 4 weeks on the same day of week. If today is Monday, I want to compare data for the past 4 Mondays with today.
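A minimal sketch, assuming a hypothetical index name and a fixed 4-week window: filter to events whose day of week matches today's, then bucket them by how many weeks ago they occurred (0 = today, 1-4 = the previous four same-weekdays):

index=my_index earliest=-28d@d
| where strftime(_time, "%A") == strftime(now(), "%A")
| eval weeks_ago = floor((relative_time(now(), "@d") - relative_time(_time, "@d")) / 604800)
| stats count BY weeks_ago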
Good afternoon Splunkers. Let me start by saying that I hope this is the right sub-forum for this question. I'm working on a dashboard within Splunk to visualize our AWS Web Application Firewall data. The purpose of this dashboard is to show general statistics and information about the requests our AWS WAF solution is processing. Ultimately, we would like to use this dashboard to debug and tune our WAF solution as we move it into enforcement mode. One of the many charts/tables I'm trying to put together is a list of AWS WAF rule-sets, and their sub-rules that have been triggered, by website our WAF is monitoring. A concrete example of what I'm looking to create would be:

Webpage         | WAF Rulegroup Triggered | Sub-Rule Triggered    | Count
SomeWebpage.com | AWSManagedCommonRuleSet | GenericRFI_Body       | 5
                |                         | SomeOtherVuln         | 10
                |                         | NoUserAgent_HEADER    | 15
                | AWSAnonymousIpList      | HostingProviderIpList | 20

The biggest issue I'm currently facing is that the AWS WAF data, while in JSON format from AWS, does not follow proper JSON key/value pairings and has nested arrays containing multiple types of information. Specifically, the nested array that contains the rule evaluation information for a particular request contains all of the rules evaluated, even if the rules did not match or no sub-rules were fired. Example below:

"ruleGroupList": [
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesCommonRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesSQLiRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesLinuxRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAdminProtectionRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesKnownBadInputsRuleSet", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAmazonIpReputationList", "terminatingRule": null},
  {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList", "terminatingRule": {"action": "BLOCK", "ruleId": "HostingProviderIPList", "ruleMatchDetails": null}}
]

As you can see, even though only one AWS rule group fired for this request ("AWSManagedRulesAnonymousIpList"), and within that group the sub-rule "HostingProviderIPList" fired, all of the rule groups assigned to the WAF are present within the array. Therefore, if I were to search for something like

$search
| stats count by nonTerminatingMatchingRules{}.ruleId, ruleGroupList{}.terminatingRule.ruleId
| stats list(ruleGroupList{}.terminatingRule.ruleId), list(count) by nonTerminatingMatchingRules{}.ruleId

I would get back a list of each rule-set, but I would also get back each sub-rule that has fired, even if the sub-rule is not part of the rule-set that fired. What commands can I use to transform this data into proper key-value pairs on a per rule-group basis? Based on what I've read I think I want to use spath and mvexpand, I'm just not sure of the best path forward. For full transparency, here's an entire WAF log in JSON format, so you can see all of the fields. Here's the guide for understanding these fields as well.
{
  "action": "ALLOW",
  "formatVersion": 1,
  "httpRequest": {
    "args": "",
    "clientIp": "8.8.8.8",
    "country": "CA",
    "headers": [
      {"name": "Authorization", "value": "SomeToken"},
      {"name": "User-Agent", "value": "Site24x7"},
      {"name": "Cache-Control", "value": "no-cache"},
      {"name": "Accept", "value": "*/*"},
      {"name": "Connection", "value": "Keep-Alive"},
      {"name": "Accept-Encoding", "value": "gzip"},
      {"name": "Content-Type", "value": "application/json; charset=UTF-8"},
      {"name": "X-Site24x7-Id", "value": "Redacted"},
      {"name": "Content-Length", "value": "1396"},
      {"name": "Host", "value": "mywebpage.com"}
    ],
    "httpMethod": "POST",
    "httpVersion": "HTTP/1.1",
    "requestId": "RedactedID",
    "uri": "/big/uri/path"
  },
  "httpSourceId": "Redacted ID",
  "httpSourceName": "ALB",
  "labels": [
    {"name": "awswaf:managed:aws:anonymous-ip-list:HostingProviderIPList"}
  ],
  "nonTerminatingMatchingRules": [
    {"action": "COUNT", "ruleId": "AWSCommonRuleSet", "ruleMatchDetails": []},
    {"action": "COUNT", "ruleId": "AWSAnonymousIpList", "ruleMatchDetails": []}
  ],
  "rateBasedRuleList": [],
  "requestHeadersInserted": null,
  "responseCodeSent": null,
  "ruleGroupList": [
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesCommonRuleSet", "terminatingRule": {"action": "BLOCK", "ruleId": "GenericRFI_BODY", "ruleMatchDetails": null}},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesSQLiRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesLinuxRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAdminProtectionRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesKnownBadInputsRuleSet", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [], "ruleGroupId": "AWS#AWSManagedRulesAmazonIpReputationList", "terminatingRule": null},
    {"excludedRules": null, "nonTerminatingMatchingRules": [ … ], "ruleGroupId": "AWS#AWSManagedRulesAnonymousIpList", "terminatingRule": {"action": "BLOCK", "ruleId": "HostingProviderIPList", "ruleMatchDetails": null}}
  ],
  "terminatingRuleId": "Default_Action",
  "terminatingRuleMatchDetails": [],
  "terminatingRuleType": "REGULAR",
  "timestamp": 1629751363362,
  "webaclId": "RedactedID"
}
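A sketch of the core unnesting step for this, assuming the events are the raw JSON above: expand ruleGroupList so each rule group becomes its own row, parse each object, and keep only the groups where a terminating sub-rule actually fired. The per-webpage split could be layered on top by extracting the Host header (e.g. with rex, or a second spath/mvexpand pass over httpRequest.headers{}) before this step:

<your WAF search>
| spath output=rg path=ruleGroupList{}
| mvexpand rg
| spath input=rg
| where isnotnull('terminatingRule.ruleId')
| rename "terminatingRule.ruleId" AS subRule
| stats count BY ruleGroupId, subRule

Because each expanded row carries exactly one rule group, the sub-rule counts can no longer bleed across rule-sets the way they do with the flat ruleGroupList{}.terminatingRule.ruleId multivalue field.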