All Topics


Hello, my index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
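One thing worth noting while reading this config: with coldToFrozenDir set, Splunk only copies expiring buckets into that directory; it never monitors or prunes the frozen path, so a full frozen disk has to be handled outside Splunk (a cron cleanup job, or a coldToFrozenScript in place of coldToFrozenDir). What can be watched from SPL is how close buckets are to the freeze boundary. A minimal sketch, using the index name from the stanza above and an arbitrary 80-day warning threshold:

| dbinspect index=custom_index
``` age of each bucket in days, based on its oldest event ```
| eval age_days=round((now()-startEpoch)/86400,1)
``` count buckets approaching the 90-day frozenTimePeriodInSecs limit ```
| stats count as total_buckets, count(eval(age_days>80)) as buckets_near_freeze by state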
I have been trying to set up Splunk on my Kubernetes cluster so I can use it with a Python script to access the REST API. I have a Splunk Enterprise standalone instance running, and I used a Traefik ingress to expose port 8089:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-ingress
  namespace: splunk
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: common-traefik
  tls:
    - hosts:
        - splunk.example.com
      secretName: app-certificate
  rules:
    - host: splunk.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-stdln-standalone-service
                port:
                  number: 8089

When I curl the endpoint, it returns an internal server error:

curl -X POST https://splunk.example.com/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

* Host splunk.example.com:443 was resolved.
* IPv6: (none)
* IPv4: xx.xx.xxx.xxx
* Trying xx.xx.xxx.xxx:443...
* Connected to splunk.example.com (xx.xx.xxx.xxx) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
* subject: CN=splunk.example.com
* start date: Dec 6 23:53:06 2024 GMT
* expire date: Mar 6 23:53:05 2025 GMT
* issuer: C=US; O=Let's Encrypt; CN=R10
* SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://splunk.example.com/services/auth/login
* [HTTP/2] [1] [:method: POST]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: splunk.example.com]
* [HTTP/2] [1] [:path: /services/auth/login]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [content-length: 34]
* [HTTP/2] [1] [content-type: application/x-www-form-urlencoded]
> POST /services/auth/login HTTP/2
> Host: splunk.example.com
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/2 500
< content-length: 21
< date: Mon, 09 Dec 2024 06:54:50 GMT
<
* Connection #0 to host splunk.example.com left intact
Internal Server Error%

When I port-forward to localhost, the curl works:

curl -X POST https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:8089 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8089...
* Connected to localhost (::1) port 8089
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 / [blank] / UNDEF
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
* subject: CN=SplunkServerDefaultCert; O=SplunkUser
* start date: Dec 9 02:21:04 2024 GMT
* expire date: Dec 9 02:21:04 2027 GMT
* issuer: C=US; ST=CA; L=San Francisco; O=Splunk; CN=SplunkCommonCA; emailAddress=support@splunk.com
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* using HTTP/1.x
> POST /services/auth/login HTTP/1.1
> Host: localhost:8089
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/1.1 200 OK
< Date: Mon, 09 Dec 2024 06:59:54 GMT
< Expires: Thu, 26 Oct 1978 00:00:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, max-age=0
< Content-Type: text/xml; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 204
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
<response>
  <sessionKey>{some sessionKey...}</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>
* Connection #0 to host localhost left intact

I am using default confs and am not sure if I need to update my server.conf for this. For more context: I checked splunkd.log from when I made the request, and I see these entries:

12-09-2024 17:19:36.904 +0000 WARN SSLCommon [951 HTTPDispatch] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='bad certificate'.
12-09-2024 17:19:36.904 +0000 WARN HttpListener [951 HTTPDispatch] - Socket error from 192.168.xx.xx:52528 while idling: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate
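For anyone debugging a similar setup, those two WARN lines can be tracked from Splunk itself while testing ingress changes. A minimal sketch, assuming the default _internal index and the standard component/log_level field extractions from splunkd.log:

index=_internal sourcetype=splunkd log_level=WARN (component=SSLCommon OR component=HttpListener)
``` one row per component and host, with the most recent occurrence ```
| stats count, latest(_time) as last_seen by component, host
| convert ctime(last_seen)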
Hi Team, we have a requirement to mask/filter data before ingestion in a Splunk Cloud environment. The customer has Splunk Cloud. I am reading through the Ingest Processor and Ingest Actions documentation, and it sounds like both provide pretty much the same capability in Splunk Cloud. Is there any major difference between them? Thanks, SGS
We are using a metrics index to store metric events. These metric events are linked to a different parent dataset through a unique ID dimension. This ID dimension can have tens of thousands of unique values, and the parent dataset primarily consists of string values. Given the cardinality issues associated with metric indices (where it's best to avoid dimensions with a large range of unique values), what would be the best practice in this scenario? https://docs.splunk.com/Documentation/Splunk/latest/Metrics/BestPractices#Cardinality_issues  Would it be a good idea to use a key-value store (kvstore) for the parent data and perform lookups from the metric data? How would this approach impact performance?
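For what it's worth, the pattern being described, keeping only the low-information ID in the metric index and resolving the descriptive parent attributes at search time, would look roughly like this in SPL. This is a sketch under assumptions: a metric named service.latency, a dimension called id, and a KV Store-backed lookup named parent_lookup keyed on id (all hypothetical names):

``` aggregate the metric first, so only one row per distinct id remains ```
| mstats avg(service.latency) as avg_latency WHERE index=my_metrics BY id
``` enrich with the parent attributes held in the KV Store lookup ```
| lookup parent_lookup id OUTPUT name category
| table id name category avg_latency

Because the lookup runs after aggregation, its cost scales with the number of distinct id values in the result rather than with raw metric volume, which is usually acceptable even for tens of thousands of keys.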
I downloaded Splunk Enterprise on EC2 into the /opt folder using the tgz file, unzipped it with tar, and then started it on port 8000. It shows it successfully started on 8000. But after enabling port 8000 in the EC2 security groups and using the public IP of the EC2 instance with :8000, I can't access the web page; it just shows "this site can't be reached". Please help me.
I am using S3SPL from datapunctum and am trying to get some data to be searchable. In the internal index there are no errors logged. I have set up my ingest actions with .json or .ndjson and also configured my prefix correctly to reflect the timestamp. I am using MinIO.

root@esprimo-piere:/opt/splunk/etc/apps/splunk_ingest_actions/local# cat outputs.conf
[rfs:splunk]
batchSizeThresholdKB = 131072
batchTimeout = 30
compression = none
dropEventsOnUploadError = false
format = json
format.json.index_time_fields = true
format.ndjson.index_time_fields = true
partitionBy = day
path = s3://splunk/
remote.s3.access_key = XXXX
remote.s3.encryption = none
remote.s3.endpoint = https://localhost:9000
remote.s3.secret_key = XXX
remote.s3.signature_version = v4
remote.s3.supports_versioning = false
remote.s3.url_version = v1

root@esprimo-piere:/opt/splunk/etc/apps/SA-DP-s3spl/local# cat s3spl_bucket.conf
[s3spl_bucket://splunk]
aws_access_key = XXXXX
aws_secret_key = ********
bucket_ia = True
bucket_name = splunk
endpoint_url = https://localhost:9000
max_events_per_file = -1
max_files_read = -1
max_total_events = 1000
prefix = /year=${_time:%Y}/month=${_time:%m}/day=${_time:%d}/
timezone = Europe/Berlin
verify_ssl = False
Hi, from a Java service I want to call Splunk Cloud REST API endpoints. I need help with how to create an authentication token for Splunk Cloud and then pass that token while calling the search endpoints to execute a query and get the results back. When logging into my <org-name>.splunkcloud.com in a browser, this is done via SSO. Can anybody please provide a sample curl command I can use? I went through the documentation but didn't get much out of it; I'm new to this, and any help is highly appreciated. #splunkcloud @splunkclouduser @splunkcloudnoob Thanks.
During an upgrade of our Splunk Enterprise (production) 9.2.4 to 9.3.0, the installer throws an error: not found SSLEAY32.dll (+libeay32.dll). NB: Splunk is installed on drive "d:\program files\Splunk".

Rebooted our Windows 2019 server and tried again, but with the same result. Yes, I found SSLEAY32 and LIBEAY32 files in the folder "D:\Program Files\splunk\bin"!? I have no idea what to do now and am very reluctant to experiment further, although I have found similar problems on the internet, not specifically related to Splunk. Does anyone have a tip or a suggestion for what to do next? For example: can I skip 9.3.0 and continue with 9.3.1 or 9.3.2? Thanks for all the responses, AshleyP
Hi everyone, I'm currently working on extracting the webaclId field from AWS WAF logs and setting it as the host metadata in Splunk. However, I've been running into issues where the regex doesn't seem to work, and Splunk throws an error.

Log example: below is an obfuscated example of an event from the logs I'm working with:

{
  "timestamp": 1733490000011,
  "formatVersion": 1,
  "webaclId": "arn:aws:wafv2:region:account-id:regional/webacl/webacl-name/resource-id",
  "action": "ALLOW",
  "httpRequest": {
    "clientIp": "192.0.2.1",
    "country": "XX",
    "headers": [
      { "name": "Host", "value": "example.com" }
    ],
    "uri": "/v2.01/endpoint/path/resource",
    "httpMethod": "GET"
  }
}

I want to extract the webacl-name from the webaclId field and set it as the host metadata in Splunk. For the above example, the desired host value should be: webacl-name

Here's my current Splunk configuration:

inputs.conf:
[monitor:///opt/splunk/etc/tes*.txt]
disabled = false
index = test
sourcetype = aws:waf

props.conf:
[sourcetype::aws:waf]
TRANSFORMS-set_host = extract_webacl_name

transforms.conf:
[extract_webacl_name]
REGEX = \"webaclId\":\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/([^\/]+)\/
FORMAT = host::$1
DEST_KEY = MetaData:Host
SOURCE_KEY = _raw

What I've tried: I've validated the regex on external tools like regex101, and it works for the log structure. For example, the regex successfully extracts webacl-name from:

"webaclId":"arn:aws:wafv2:region:account-id:regional/webacl/webacl-name/resource-id"

Manual rex testing in Splunk:

index=test sourcetype=aws:waf
| rex field=_raw "\"webaclId\":\"[^:]+:[^:]+:[^:]+:[^:]+:[^:]+:regional\/webacl\/(?<webacl_name>[^\/]+)\/"
| table _raw webacl_name

Questions:
Does my transforms.conf configuration have any issues I might be missing?
Is there an alternative or more efficient way to handle this extraction and rewrite the host field?
Are there any known limitations or edge cases with using JSON data for MetaData:Host updates?

I'd greatly appreciate any insights or suggestions. Thank you for your help!
I have created a dashboard that takes input from users in four textbox inputs and stores it in a lookup file. My requirement is that tokens should be passed to the search query only after the submit button is clicked by the user, but the submit button is not working as expected. Sometimes the query executes automatically when we click outside the text boxes, or it executes when the page is reloaded. My second requirement is to clear the textboxes once the submit button is clicked. I searched the community for similar questions and made changes in the code as suggested, but it is not working. Thanks in advance.

<fieldset submitButton="true" autoRun="false">
  <input type="text" token="usecasename" searchWhenChanged="false">
    <label>Enter UseCaseName Here</label>
  </input>
  <input type="text" token="error" searchWhenChanged="false">
    <label>Enter Error/Exception here</label>
  </input>
  <input type="text" token="impact" searchWhenChanged="false">
    <label>Enter Impact here</label>
  </input>
  <input type="text" token="reason" searchWhenChanged="false">
    <label>Enter Reason here</label>
  </input>
</fieldset>
<row depends="$hide$">
  <panel>
    <table>
      <title></title>
      <search>
        <query>| stats count | fields - count | eval useCaseName="$usecasename$", "Error/Exception in logs"="$error$", Impact="$impact$", Reason="$reason$" | append [| inputlookup lookup_exceptions_all_usecase1.csv] | outputlookup lookup_exceptions_all_usecase1.csv</query>
        <earliest>-24h</earliest>
        <latest></latest>
        <sampleRatio>1</sampleRatio>
      </search>
      <option name="drilldown">none</option>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
Hello community, I want to make offboarding clients more efficient. Is there an SPL search to find ALL of the knowledge objects (KOs) created in a particular app?
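A possible starting point, assuming the app is named target_app (hypothetical): the REST directory endpoint aggregates most knowledge object types in one listing, though it may not cover absolutely everything (lookup table files, for example, surface through their transforms):

| rest /servicesNS/-/target_app/directory count=0 splunk_server=local
``` keep only objects whose ACL places them in the app of interest ```
| search eai:acl.app=target_app
| table title eai:type eai:acl.owner eai:acl.sharing updated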
Hi, my dashboard has two inputs: a dropdown and a time picker. I have a requirement where I need to provide both inputs before my panels appear. I tried the below dashboard code: when the dashboard first loads, I choose both inputs and the panel appears. After that, when I choose another item from the dropdown (keeping the same time), nothing happens; I have to choose a different time before the respective panel appears. What should I change in the code so that even if I change only the dropdown item, the panel appears for the same chosen timeframe?

Dashboard code:

<form version="1.1" theme="light">
  <label>Time Picker Input</label>
  <description>Replicate time picker issue</description>
  <fieldset submitButton="false">
    <input type="dropdown" token="item" searchWhenChanged="true">
      <label>Select Item</label>
      <choice value="table1">TABLE-1</choice>
      <choice value="table2">TABLE-2</choice>
      <choice value="table3">TABLE-3</choice>
      <change>
        <condition value="table1">
          <set token="tab1">"Table1"</set>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table2">
          <set token="tab2">"Table2"</set>
          <unset token="tab1"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition value="table3">
          <set token="tab3">"Table3"</set>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
        <condition>
          <unset token="tab1"></unset>
          <unset token="tab2"></unset>
          <unset token="tab3"></unset>
          <unset token="time"></unset>
          <unset token="form.time"></unset>
          <unset token="is_time_selected"></unset>
        </condition>
      </change>
    </input>
    <input type="time" token="time" searchWhenChanged="true">
      <label>Select Time</label>
      <change>
        <set token="is_time_selected">true</set>
      </change>
    </input>
  </fieldset>
  <row depends="$tab1$$is_time_selected$">
    <panel>
      <table>
        <title>Table1</title>
        <search>
          <query>| makeresults | eval Table = "Table1" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab2$$is_time_selected$">
    <panel>
      <table>
        <title>Table2</title>
        <search>
          <query>| makeresults | eval Table = "Table2" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row depends="$tab3$$is_time_selected$">
    <panel>
      <table>
        <title>Table3</title>
        <search>
          <query>| makeresults | eval Table = "Table3" | eval e_time = "$time.earliest$", l_time = "$time.latest$" | table Table e_time l_time</query>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

Thanks & Regards, Shashwat
Hello. I've been trying for two days now to activate a trial Splunk Cloud instance. I don't get the email to activate. I've even tried creating another account. Any thoughts, or is there a known issue with the trial service automation?
How can I get the total sum of the Duration field values? Regards.
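Since the event format is not shown, here is a sketch covering the two usual cases: if Duration is already numeric, a plain stats sum works; if it is an HH:MM:SS string, it needs converting to seconds first. Everything beyond the Duration field name is an assumption:

``` split "HH:MM:SS" into parts; fall back to treating the field as numeric ```
| eval parts=split(Duration, ":")
| eval dur_sec=if(mvcount(parts)=3, tonumber(mvindex(parts,0))*3600 + tonumber(mvindex(parts,1))*60 + tonumber(mvindex(parts,2)), tonumber(Duration))
| stats sum(dur_sec) as total_seconds
``` render the total back as a human-readable duration ```
| eval total_duration=tostring(total_seconds, "duration")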
Hello, I want to make a drilldown with those services, and I have to apply a drilldown for (s3-bucket / vpc / ec2). I've tried several things but nothing works:

<row>
  <panel>
    <title>AWS Services Monitoring</title>
    <table>
      <search>
        <!--done> <set token="Services">$click.name$</set> </done-->
        <query>index="aws_vpc_corp-it_security-prd" sourcetype="aws:s3:csv" ShortConfigRuleName="*"
| eval Services = case(
    match(ShortConfigRuleName, "s3-bucket"), "s3-bucket",
    match(ShortConfigRuleName, "iam-password"), "iam-password",
    match(ShortConfigRuleName, "iam-policy"), "iam-policy",
    match(ShortConfigRuleName, "iam-user"), "iam-user",
    match(ShortConfigRuleName, "guardduty"), "guardduty",
    match(ShortConfigRuleName, "ec2"), "ec2",
    match(ShortConfigRuleName, "vpc"), "vpc",
    match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot",
    match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots",
    match(ShortConfigRuleName, "cloudtrail"), "cloudtrail",
    match(ShortConfigRuleName, "subnet"), "subnet",
    match(ShortConfigRuleName, "lambda-function"), "lambda-function",
    1=1, "Other")
| search Services!=Other
| lookup aws_security_all_account_ids account_id AS AccountId OUTPUT name
| table name AccountId Services ShortConfigRuleName ComplianceType OrderingTimestamp ResultRecordedTime
| dedup AccountId Services ShortConfigRuleName ComplianceType
| rename name as "AWS Account Name", "ComplianceType" as "Status", "OrderingTimestamp" as "Last Check", "ResultRecordedTime" as "Next Check"
| fillnull value="N/A"
| search $ResourceName$ $Services$ $Status$</query>
        <earliest>$earliest$</earliest>
        <latest>$latest$</latest>
      </search>
      <option name="count">100</option>
      <option name="drilldown">row</option>
      <option name="refresh.display">progressbar</option>
      <option name="wrap">true</option>
      <format type="color" field="Status">
        <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
      </format>
      <drilldown>
        <condition match="$Services$ != &quot;s3-bucket&quot;">
          <set token="Services">s3-bucket</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;S3_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
        <condition match="$Services$ != &quot;vpc&quot;">
          <set token="Services">vpc</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;VPC_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
        <condition match="$Services$ != &quot;ec2&quot;">
          <set token="Services">ec2</set>
          <link target="_blank">/app/search/dev_vwt_dashboards_uc48_details?ShortConfigRuleName=$row.ShortConfigRuleName$&amp;AccountId=$row.AccountId$&amp;Services=$row.Services$&amp;EC2_details=true&amp;earliest=$earliest$&amp;latest=$latest$&amp;Status=$row.Status$</link>
        </condition>
      </drilldown>
    </table>
  </panel>
</row>

The drilldown is supposed to 'point' to a second dashboard in the following way:

</panel>
<panel depends="$VPC_details$">
  <title>VPC DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
| search ShortConfigRuleName=$ShortConfigRuleName$
| search AccountId=$AccountId$
| search ComplianceType=$Status$
| eval Services = case(
    match(ShortConfigRuleName, "s3-bucket"), "s3-bucket",
    match(ShortConfigRuleName, "iam-password"), "iam-password",
    match(ShortConfigRuleName, "iam-policy"), "iam-policy",
    match(ShortConfigRuleName, "iam-user"), "iam-user",
    match(ShortConfigRuleName, "guardduty"), "guardduty",
    match(ShortConfigRuleName, "ec2"), "ec2",
    match(ShortConfigRuleName, "vpc"), "vpc",
    match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot",
    match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots",
    match(ShortConfigRuleName, "cloudtrail"), "cloudtrail",
    match(ShortConfigRuleName, "subnet"), "subnet",
    match(ShortConfigRuleName, "lambda-function"), "lambda-function",
    1=1, "Other")
| where ResourceName!="N/A"
| table AccountId ResourceName Services ComplianceType
| rename ResourceName as "InstanceName"
| table AccountId Services ComplianceType
| dedup AccountId Services ComplianceType
| appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*"
  | search AccountId=$AccountId$
  | table InstanceId InstanceName Platform State
  | dedup InstanceId InstanceName Platform State]
| table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter matches the selected service exactly -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
<panel depends="$EC2_details$">
  <title>EC2 DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
| search ShortConfigRuleName=$ShortConfigRuleName$
| search AccountId=$AccountId$
| search ComplianceType=$Status$
| eval Services = case(
    match(ShortConfigRuleName, "s3-bucket"), "s3-bucket",
    match(ShortConfigRuleName, "iam-password"), "iam-password",
    match(ShortConfigRuleName, "iam-policy"), "iam-policy",
    match(ShortConfigRuleName, "iam-user"), "iam-user",
    match(ShortConfigRuleName, "guardduty"), "guardduty",
    match(ShortConfigRuleName, "ec2"), "ec2",
    match(ShortConfigRuleName, "vpc"), "vpc",
    match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot",
    match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots",
    match(ShortConfigRuleName, "cloudtrail"), "cloudtrail",
    match(ShortConfigRuleName, "subnet"), "subnet",
    match(ShortConfigRuleName, "lambda-function"), "lambda-function",
    1=1, "Other")
| where ResourceName!="N/A"
| table AccountId ResourceName Services ComplianceType
| rename ResourceName as "InstanceName"
| table AccountId Services ComplianceType
| dedup AccountId Services ComplianceType
| appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*"
  | search AccountId=$AccountId$
  | table InstanceId InstanceName Platform State
  | dedup InstanceId InstanceName Platform State]
| table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter matches the selected service exactly -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
<panel depends="$SERVICES_details$">
  <title>SERVICES DETAILS : ShortConfigRuleName=$ShortConfigRuleName$ Service=$Services$</title>
  <table>
    <search>
      <query>index="aws_vpc_corp-it_security-prd"
| search ShortConfigRuleName=$ShortConfigRuleName$
| search AccountId=$AccountId$
| search ComplianceType=$Status$
| eval Services = case(
    match(ShortConfigRuleName, "s3-bucket"), "s3-bucket",
    match(ShortConfigRuleName, "iam-password"), "iam-password",
    match(ShortConfigRuleName, "iam-policy"), "iam-policy",
    match(ShortConfigRuleName, "iam-user"), "iam-user",
    match(ShortConfigRuleName, "guardduty"), "guardduty",
    match(ShortConfigRuleName, "ec2"), "ec2",
    match(ShortConfigRuleName, "vpc"), "vpc",
    match(ShortConfigRuleName, "ebs-snapshot"), "ebs-snapshot",
    match(ShortConfigRuleName, "rds-snapshots"), "rds-snapshots",
    match(ShortConfigRuleName, "cloudtrail"), "cloudtrail",
    match(ShortConfigRuleName, "subnet"), "subnet",
    match(ShortConfigRuleName, "lambda-function"), "lambda-function",
    1=1, "Other")
| where ResourceName!="N/A"
| table AccountId ResourceName Services ComplianceType
| rename ResourceName as "InstanceName"
| table AccountId Services ComplianceType
| dedup AccountId Services ComplianceType
| appendcols [ search index="aws_vpc_corp-it_security-prd" source="s3://vwt-s3-secuprod-*"
  | search AccountId=$AccountId$
  | table InstanceId InstanceName Platform State
  | dedup InstanceId InstanceName Platform State]
| table AccountId Services ComplianceType InstanceId InstanceName Platform State</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="count">100</option>
    <option name="drilldown">cell</option>
    <option name="refresh.display">progressbar</option>
    <format type="color" field="ComplianceType">
      <colorPalette type="map">{"NON_COMPLIANT":#D94E17}</colorPalette>
    </format>
    <format type="color" field="State">
      <colorPalette type="map">{"stopped":#D94E17,"running":#55C169}</colorPalette>
    </format>
    <drilldown>
      <condition>
        <!-- Check that the filter matches the selected service exactly -->
        <eval token="S3_details">if(match($click.value$, "s3-bucket"), "true", "false")</eval>
        <eval token="VPC_details">if(match($click.value$, "vpc"), "true", "false")</eval>
        <eval token="EC2_details">if(match($click.value$, "ec2"), "true", "false")</eval>
      </condition>
    </drilldown>
  </table>
</panel>
</row>

When s3-bucket is selected, we point to the 'S3_details' panel, and so on. The link target works fine, but it's the click value at the beginning, with the service selection, that doesn't work.
I want to focus your attention on the method of collecting CPU utilization data in Splunk_TA_nix (cpu_metric.sh). I have been dealing with many false positive alerts regarding CPU usage in our organization. We have ITSI implemented and use Splunk_TA_nix to collect data. An alert is generated when 2 consecutive values of CPU usage are > 90%. We collect values every 5 minutes.

The script for collecting this data (Splunk_TA_nix/bin/cpu_metric.sh) uses the command sar -P ALL 1 1, which reports the CPU load over 1 second. Used for CPU monitoring in our setup (every 5 minutes), we only have information about 1 second out of every five minutes, and based on this data we evaluate CPU usage. Normally CPU usage fluctuates depending on how commands are started, how long they run, and how demanding they are. With this method of measurement, it happens quite often that 2 values in a row cross the threshold, and an alert is subsequently generated. For monitoring, however, it is important to know the average CPU utilization, not random peaks. When collecting average values, such false positive alerts would not occur (as long as the CPU is not overloaded).

The standard way good administrators test CPU usage is, for example, sar 120 1, which gives the average CPU usage over 2 minutes. Data collection via sar in cron was once recommended to be set up like this:

*/10 * * * * root /usr/lib64/sa/sa1 -S XALL 600 1

This setup collected the average CPU usage over a 10-minute period, wrote this value to a sar file, and repeated this every 10 minutes. Such a setting gives a real overview of how loaded the CPU is.

Splunk does not provide a reasonable way to set these values in the cpu_metric.sh script. The only way to solve it is to copy this script and modify it yourself; however, the connection to Splunk_TA_nix will then be lost. What happens when Splunk_TA_nix is upgraded?

My preference is to enable CPU data collection by introducing the following stanza in our application (deployed via the deployment server), which is linked to Splunk_TA_nix:

[script://$SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh]
disabled = false
index = unix_perfmon_metrics

But this method does not give us the possibility to set OPTIONS for sar. It would be ideal if something like this could be done:

[script://./bin/my_cpu_metric.sh]
disabled = false
index = unix_perfmon_metrics

./bin/my_cpu_metric.sh:
exec $SPLUNK_HOME/etc/apps/Splunk_TA_nix/bin/cpu_metric.sh 120 1

But this doesn't work; cpu_metric.sh would need to be able to process some input settings and modify its use of the sar command. The same can also be applied to other scripts in this TA.

If you have similar experiences, feel free to share them. If my concerns are justified, it would be right for this TA to be updated to give administrators the opportunity to set better metrics collection parameters. What do you think?
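Until the TA allows such parameters, one mitigation on the alerting side is to smooth the one-second samples over a longer window, so a single spike cannot trip two consecutive datapoints. This is only a sketch: the metric name cpu.idle is an assumption that depends on the TA version and local config, and the index name is taken from the stanza above:

``` average the raw samples into 10-minute windows per host ```
| mstats avg(cpu.idle) as avg_idle WHERE index=unix_perfmon_metrics span=10m BY host
| eval avg_cpu_usage=100-avg_idle
``` alert only when the windowed average, not a point sample, exceeds the threshold ```
| where avg_cpu_usage>90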
Working on supplementing a search we are using to implement conditional access policies. The search identifies successful logins and produces a percentage of compliant logins over a period. What I am trying to add is the last login time, which is identified by the "createdDateTime" field in the logs.

Here is the current search:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
| eval DeviceCompliance='deviceDetail.isCompliant'
| chart count by userPrincipalName DeviceCompliance
| eval total=true + false
| rename true as compliant
| eval percent=((compliant/total)*100)
| table userPrincipalName compliant total percent

I have tried adding / modifying pipes like "stats latest(createdDateTime) by userPrincipalName compliant total percent", but this inserts the time into the true / false fields. I feel that I am modifying the data too much up front and maybe need to change the piping order. All suggestions welcomed.
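One way to sidestep the chart/rename step entirely is eval-based counting inside a single stats, which leaves room for latest() alongside the counts. A sketch, assuming deviceDetail.isCompliant carries the literal values true/false:

index="audit" sourcetype="signin" userPrincipalName="*domain.com" status.errorCode=0
``` count compliant logins and total logins in one pass, keeping the latest login time ```
| stats count(eval('deviceDetail.isCompliant'="true")) as compliant, count as total, latest(createdDateTime) as last_login by userPrincipalName
| eval percent=round((compliant/total)*100, 1)
| table userPrincipalName compliant total percent last_login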
Does anyone know if the GlobalMantics dataset is available in the free version of Splunk, or is it only included in the paid plans? If it is available in the free version, how and where can I access that file?
Hi there, I'm currently working on a search to get the usage of indexes, so I have an overview of which indexes get used in searches and which don't, so I can discuss with the use case owner whether the data is still needed and why it isn't being used. This is the current state of the search:

| rest "/services/data/indexes"
| table title totalEventCount frozenTimePeriodInSecs
| dedup title
| append [search index=_audit sourcetype="audittrail" search_id="*" action=search earliest=-24h latest=now
  ``` Regex extraction ```
  | rex field=search max_match=0 "index\=\s*\"?(?<used_index>\S+)\"?"
  | rex field=search max_match=0 "\`(?<used_macro>\S+)\`"
  | rex field=search max_match=0 "eventtype\=\s*(?<used_evttype>\S+)"
  ``` Eventtype resolving ```
  | mvexpand used_evttype
  | join type=left used_evttype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_eventtype, title as used_evttype]
  | rex field=resolved_eventtype max_match=0 "eventtype\=\s*(?<nested_eventtype>\S+)"
  | mvexpand nested_eventtype
  | join type=left nested_eventtype [| rest "/services/saved/eventtypes" | table title search | stats values(search) as search by title | rename search as resolved_nested_eventtype, title as nested_eventtype]
  ``` Macro resolving ```
  | mvexpand used_macro
  | join type=left used_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_macro, title as used_macro]
  | rex field=resolved_macro max_match=0 "\`(?<nested_macro>[^\`]+)\`"
  | mvexpand nested_macro
  | join type=left nested_macro [| rest "/servicesNS/-/-/admin/macros" count=0 | table title definition | stats values(definition) as definition by title | rename definition as resolved_nested_macro, title as nested_macro]
  | where like(resolved_nested_macro,"%index=%") OR isnull(resolved_nested_macro)
  ``` Merge resolved stuff into one field ```
  | foreach used* nested* [eval datasrc=mvdedup(if(<<FIELD>>!="",mvappend(datasrc, "<<FIELD>>"),datasrc))]
  | eval datasrc=mvfilter(!match(datasrc, "usedData"))
  | eval usedData = mvappend(used_index, if(!isnull(resolved_nested_eventtype),resolved_nested_eventtype, resolved_eventtype), if(!isnull(resolved_nested_macro),resolved_nested_macro, resolved_macro))
  | eval usedData = mvdedup(usedData)
  | table app user action info search_id usedData datasrc
  | mvexpand usedData
  | eval usedData=replace(usedData, "\)","")
  | where !like(usedData, "`%`") AND !isnull(usedData)
  | rex field=usedData "index\=\s*\"?(?<usedData>[^\s\"]+)\"?"
  | eval usedData=replace(usedData, "\"","")
  | eval usedData=replace(usedData,"'","")
  | stats count by usedData
]

The search first gets the indexes via | rest, with their event counts and retention times. Then audit trail data gets appended, and the used indexes, macros, and eventtypes get extracted from the search string and resolved (since some apps in my environment use nested eventtypes/macros, they get resolved twice). It still needs some sanitizing of the extracted used indexes.
That gives me a table like this (limited to the Splunk internal indexes as an example):

title            totalEventCount  frozenTimePeriodInSecs  count  usedData
_audit           771404957        188697600
_configtracker   717              2592000
_dsappevent      240              5184000
_dsclient        232              5184000
_dsphonehome     843820           604800
_internal        7039169453       15552000
_introspection   39100728         1209600
_telemetry       55990            63072000
_thefishbucket   0                2419200
                                                          22309  _*
                                                          1039   _audit
                                                          2      _configtracker
                                                          1340   _dsappevent
                                                          1017   _dsclient
                                                          1      _dsclient]
                                                          709    _dsphonehome
                                                          2089   _internal
                                                          117    _introspection
                                                          2      _metrics
                                                          2      _metrics_rollup
                                                          2      _telemetry
                                                          2      _thefishbucket

But I haven't managed to merge the rows together so that I have one row per index, e.g. count=1039 for _audit plus the 22309 from searches that use all internal indexes.
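One way to collapse those rows, sketched as a tail that could be appended to the search above (field names taken from the table): normalize both result sets onto a common key, then run a final stats. The literal _* row comes from wildcard searches and would still need separate handling, e.g. fanning it out to every matching index before this step.

``` rest rows carry title; audit rows carry usedData; pick whichever is present ```
| eval key=coalesce(title, usedData)
| stats first(totalEventCount) as totalEventCount, first(frozenTimePeriodInSecs) as frozenTimePeriodInSecs, sum(count) as searchCount by key
| rename key as title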
I'm trying to capture the address, city, and state that are on one line, but they contain quotes, colons, and commas, which I would like to exclude from the captured values. See the test example below:

12345 noth test Avenue","city":"test","state":"test",
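Assuming the goal is three separate fields with the quotes, commas, and colons stripped out, a rex along these lines works against the sample above (the field names are hypothetical):

``` capture up to each closing quote, so the delimiters never end up in the values ```
| rex field=_raw "^(?<address>[^\"]+)\",\"city\":\"(?<city>[^\"]+)\",\"state\":\"(?<state>[^\"]+)\""
| table address city state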