All Topics

I want to report a pattern for each day and grab event times from different logs for that pattern. I tried something like this, but it is not working as expected, so I need suggestions.

Current query:

index="blah" source="*blah*" "Incomming"
| rex "Incomming: (?<s_json>.*])"
| timechart span=1d earliest(_time) as a_time by s_json
| join type=outer s_json
    [ search index="blah" source="*blah*" "Internal"
      | rex "Internal: (?<s_json>.*])"
      | timechart span=1d latest(_time) as c_time by s_json ]
| convert ctime(a_time)
| convert ctime(c_time)
| table _time, s_json, a_time, c_time

Expected:

Date1, j1, a_time, c_time
Date1, j2, a_time, c_time
Date2, j3, a_time, c_time
Date3, j4, a_time, c_time
Date4, j1, a_time, c_time
Date4, j2, a_time, c_time

Each day can have its own unique patterns (j1, j2, j3, j4, ...), so I need to pick those dynamically for each day and report a_time and c_time.
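One possible alternative, as a rough sketch: pull both event types in a single search and compute the per-day earliest and latest with stats instead of joining two timecharts. The index, source, and rex patterns are taken from the query above; the kind and day field names are made up for this sketch.

index="blah" source="*blah*" ("Incomming" OR "Internal")
| rex "(?<kind>Incomming|Internal): (?<s_json>.*])"
| eval day=strftime(_time, "%Y-%m-%d")
| stats min(eval(if(kind="Incomming", _time, null()))) as a_time max(eval(if(kind="Internal", _time, null()))) as c_time by day, s_json
| convert ctime(a_time) ctime(c_time)
| table day, s_json, a_time, c_time

This avoids the join (and its subsearch limits) and produces one row per day and pattern; a pattern that appears in only one of the two log lines will simply have an empty a_time or c_time.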
Hi, is it possible to have APM with Splunk, like ELK APM? For reference: Application Performance Monitoring (APM) with Elasticsearch | Elastic, and Monitoring Services with Elasticsearch APM | by suraj kumar | Medium. Thanks.
Hi, First question here - apologies if it's obvious or basic! I am trying to parse a nested list and find specific policies that match a couple of criteria. But I can't seem to get the logic right.  If you look at the JSON below, there is a nested list of "policies". I just want to find the policies with a result of "false" and with a filename starting with "./hard" and I want to print the "print" messages in a table. The JSON output is here:   { "resource": { "action": "hard_failed", "meta": { "result": false, "passed": 1, "total_failed": 2, "hard_failed": 1, "soft_failed": 0, "advisory_failed": 1, "duration_ms": 0, "sentinel": { "schema_version": 1, "data": { "sentinel-policy-networking": { "can_override": false, "error": null, "policies": [ { "allowed_failure": true, "error": null, "policy": "sentinel-policy-networking/advisory-mandatory-tags", "result": false, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to require that\nspecified AWS resources have all mandatory tags", "error": null, "print": "aws_customer_gateway.customer_gateway has tags that is missing, null, or is not a map or a list. It should have had these items: [Name]\naws_vpn_connection.main has tags that is missing, null, or is not a map or a list. It should have had these items: [Name]\n", "result": false, "rules": { "main": { "desc": "Main rule", "ident": "main", "position": { "filename": "./advisory-mandatory-tags.sentinel", "offset": 1244, "line": 38, "column": 1 }, "value": false } } } }, { "allowed_failure": false, "error": null, "policy": "sentinel-policy-networking/soft-mandatory-vpn", "result": false, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to require that\nAWS VPNs only used allowed DH groups", "error": null, "print": "aws_vpn_connection.main has tunnel1_phase1_dh_group_numbers [2] with items [2] that are not in the allowed list: [19, 20, 21]\n", "result": false, "rules": { "main": { "desc": "Main rule", "ident": "main", "position": { "filename": "./soft-mandatory-vpn.sentinel", "offset": 740, "line": 23, "column": 1 }, "value": false } } } }, { "allowed_failure": false, "error": null, "policy": "sentinel-policy-networking/hard-mandatory-policy", "result": true, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to validate that no security group\nrules have the CIDR \"0.0.0.0/0\" for ingress rules. It covers both the\naws_security_group and the aws_security_group_rule resources which can both\ndefine rules.", "error": null, "print": "", "result": true, "rules": { "main": { "desc": "", "ident": "main", "position": { "filename": "./hard-mandatory-policy.sentinel", "offset": 2136, "line": 58, "column": 1 }, "value": true } } } } ], "result": false } } }, "comment": null, "run": { "id": "run-5bY1pzrxAHWMH8Qx", "message": "Update main.tf" }, "workspace": { "id": "ws-LvRrPmVrm4MSnDC9", "name": "aws-networking-sentinel-policed" } }, "type": "policy_check", "id": "polchk-i8KAHhKX7Dqb7T3A" }, "request": { "id": null }, "auth": { "impersonator_id": null, "type": "Client", "accessor_id": "user-pF6Tu2NVN7hgNa7E", "description": "gh-webhooks-nicovibert-org-yuYK0J4bQO", "organization_id": "org-b5PqUHqMpyQ2M86A" }, "timestamp": "2021-11-26T22:08:52.000Z", "version": "0", "type": "Resource", "id": "9890fc46-f913-48d9-b2f7-64f8fc1c4d0e" }   This search below isn't quite working for me. There must be an easier way to do - perhaps with spath ? - but I can't get it to work. 
sourcetype="terraform_cloud" AND resource.meta.sentinel.data.sentinel-policy-networking.policies{}.trace.rules.main.position.filename = "./hard*" AND resource.meta.sentinel.data.sentinel-policy-networking.policies{}.trace.rules.main.value="false" | table auth.description, resource.meta.hard_failed, resource.meta.sentinel.data.sentinel*.policies*.trace.print   Thanks in advance for any pointers, examples or hints.
The LDAP authentication method is configured and users are showing on the user settings page, but sometimes users are not showing in the owner list for assigning notables. Enterprise Security version 6.6.2, build 51. Any advice?
I'm trying to use the dynamic inventory from the splunk-ansible repo for EC2 (non-container) instances; however, it appears that the inventory is not converting my environment variables to groups. It always defaults to localhost or says it can't parse the inventory. I tried to use the example in the wrapper directory, but I still have the same issue. Has anyone had any success using this?
Team - looking for ideas on how to achieve the scenario below.
Query 1 - get the list of unique patterns for each day
Query 2 - for the list of patterns above, get the earliest occurrence for each day
Query 3 - for the list of patterns above, get the latest occurrence for each day
Report - Day, Pattern, earliest time, latest time
Note - Queries 1, 2, and 3 are all different search queries, as they all match different log lines.
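One hedged sketch of how the pieces could be combined with append and a final stats; the index, sourcetypes, search strings, and rex patterns below are all placeholders for the three real queries:

index=foo sourcetype=log_a "first event text"
| rex "first event text: (?<pattern>.+)"
| eval day=strftime(_time, "%Y-%m-%d")
| stats earliest(_time) as earliest_time by day, pattern
| append
    [ search index=foo sourcetype=log_b "last event text"
      | rex "last event text: (?<pattern>.+)"
      | eval day=strftime(_time, "%Y-%m-%d")
      | stats latest(_time) as latest_time by day, pattern ]
| stats values(earliest_time) as earliest_time values(latest_time) as latest_time by day, pattern
| convert ctime(earliest_time) ctime(latest_time)
| table day, pattern, earliest_time, latest_time

The final stats by day and pattern yields the unique pattern list per day (Query 1) as a side effect of the grouping, while the two branches supply the earliest and latest occurrences (Queries 2 and 3).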
Is it possible to do a search that returns the last 4 full hours? Meaning, if it is 5:13 PM it would return results between 1:00 PM and 5:00 PM (filtering out the current hour). Thanks in advance for any responses.
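A minimal sketch using snap-to-hour time modifiers; the index and the stats clause are placeholders:

index=main earliest=-4h@h latest=@h
| stats count by host

earliest=-4h@h goes back four hours and snaps to the start of that hour, and latest=@h snaps to the start of the current hour, so events from the in-progress hour are excluded; at 5:13 PM this covers 1:00 PM to 5:00 PM. The same modifiers can also be entered in the time range picker's advanced fields.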
I experienced the following three issues when collecting Splunk data with the Python splunk-sdk package. The first issue is that during peak hours, from 10 AM to 4 PM, I may get the error below. How do I increase concurrency_limit to avoid this error? Is concurrency_limit to be modified on the Splunk server?

splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- Search not executed: The maximum number of concurrent historical searches on this instance has been reached., concurrency_category="historical", concurrency_context="instance-wide", current_concurrency=52, concurrency_limit=52
Since TAs run in the background and are usually not viewed, how do I check on their health? Any useful SPL is appreciated.
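As a hedged starting point (component names vary by add-on, and not every TA logs through these components), errors from scripted and modular inputs generally show up in splunkd.log in the _internal index:

index=_internal sourcetype=splunkd (component=ExecProcessor OR component=ModularInputs) (log_level=ERROR OR log_level=WARN)
| stats count latest(_time) as last_seen by component, log_level
| convert ctime(last_seen)

Many add-ons also write their own log files under $SPLUNK_HOME/var/log/splunk, which are indexed into _internal by default, so narrowing with source=*<add-on name>* can focus this on a specific TA.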
Hi, I have added a line chart with drilldowns on a Splunk dashboard. When I click on any of the data points on this chart, the other line disappears. How can both lines be kept intact during drilldowns?
Hello Team, I am looking for a list of the most-used JSPs in our application. Is it possible to get this from AppDynamics? TIA
Hi, I am just taking the total count of incidents using the stats command from the JSON, and the query works fine. But when I use the timechart command it does not give me the visualization. Please can anyone help me with this?

index=incident_index source="/mi_data/dc_in_events.json"
| spath path=Incident__Number output=INC
| stats values(*) as * by INC
| stats count(Incident__Number)

Thanks
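The first stats removes _time, so timechart has nothing left to bucket. A possible sketch that keeps one timestamp per incident (assuming deduplication by Incident__Number is still wanted; incident_count is a made-up output name):

index=incident_index source="/mi_data/dc_in_events.json"
| spath path=Incident__Number output=INC
| stats latest(_time) as _time by INC
| timechart span=1d count as incident_count

Alternatively, skip the intermediate stats entirely and use timechart span=1d dc(INC) to count distinct incidents per day.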
My security device cannot set the encoding of the data it transmits. How can I convert this data? It looks something like this: \xB0\xD7\xC3\xFB\xB5\xA5
I have a query that returns multiple times, and I need to step through the results doing a subtraction between one time and the next. Example:

Time = time[0] - time[1]
Time = time[1] - time[2]
etc.

My search looks like this:

<my search>
| eval Time=_time
| stats values(Time) as Time by UserName
| eval reconnection=if(UserName == UserName, tonumber(mvindex(Time,1))-tonumber(mvindex(Time,0)), "falha")
| where reconnection>0 AND reconnection<600
| eval reconnection=tostring(reconnection, "duration")
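One possible direction, as a sketch: streamstats can carry the previous event's _time forward per user, which avoids pulling individual values out of a multivalue field. The sort order and the current=f option are the assumptions made here:

<my search>
| sort 0 UserName _time
| streamstats current=f last(_time) as prev_time by UserName
| eval reconnection = _time - prev_time
| where reconnection > 0 AND reconnection < 600
| eval reconnection = tostring(reconnection, "duration")

With current=f, prev_time is the _time of the immediately preceding event for the same UserName, so reconnection is computed for every pair of consecutive events rather than only between the first two values.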
Hello my friends. How, using regex, can I delete everything in bold, i.e. the outer {"test": " ... "} wrapper and the \n sequences, so that the input below becomes the output shown?

Input:

{"test": "  {   \n \"data\": \"check\",\n \"git_branch\": \"master\",\n \"git_repo_name\": \"reponame\",\n \"id\": 234,\n \"timestamp\": 16378522342,\n } "}

Output:

{ \"data\": \"check\", \"git_branch\": \"master\", \"git_repo_name\": \"reponame\", \"id\": 3413, \"timestamp\": 16378522342 }
I can send events to Splunk Cloud and Splunk Enterprise servers with HttpEventCollectorLogbackAppender, but only with SSL not enabled. I need to do this with SSL enabled for my HEC connection: I need to send events over a secure connection from my Java application to Splunk Enterprise, and I need to configure my HttpEventCollector to verify certificates. How do I configure my HttpEventCollectorLogbackAppender to use a certificate? I only see code examples with disableCertificateValidation="true". How do I specify verification with my certificate? Do you have a code example, please?
| eval new_name=mvindex(split(name, ","), 0)

This splits first and last name, with the first name at index 0 and the last name at index 1. Why are split and the indexes 0 and 1 used in this pipeline?
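For context, a small runnable illustration of what split and mvindex do; the name value is made up:

| makeresults
| eval name="John,Smith"
| eval first_name=mvindex(split(name, ","), 0)
| eval last_name=mvindex(split(name, ","), 1)
| table name, first_name, last_name

split(name, ",") turns the single string into a multivalue field with one value per comma-separated part, and mvindex picks a value by position: index 0 is the first part (the first name) and index 1 is the second part (the last name).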
Hi, sorry for repeating questions that were already answered in https://community.splunk.com/t5/Dashboards-Visualizations/Button-to-run-splunk-query/m-p/576236#M47218

I'm trying to make a simple outputlookup with a user confirmation window, and I'm struggling with the following issues:
1. Every time I refresh the page, the query executes by itself, but when I press the submit button, nothing happens.
2. How can I make a comment, popup, or message display after clicking the submit button ("Thank you for clicking" plus the click time)?

This is what I have for now:

<form script="test_submit.js">
  <row>
    <panel id="test">
      <table>
        <search id="base">
          <query>index=_internal | head 10000 | bin _time span=12h | stats count by sourcetype source name _time</query>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">6</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <div>
          <button type="button" id="buttonId" class="btn btn-primary">Submit</button>
        </div>
      </html>
    </panel>
  </row>
  <row>
    <panel>
      <search base="base">
        <query>| eval time = now() | outputlookup append=t users.csv</query>
      </search>
    </panel>
  </row>
</form>

JS:

require([
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/simplexml/ready!'
], function ($, mvc) {
    function submit_btn() {
        var submittedTokens = mvc.Components.get('submitted');
        var defaultTokens = mvc.Components.get('default');
        if (submittedTokens && defaultTokens) {
            submittedTokens.set(defaultTokens.toJSON());
        }
    }
    $('#buttonId').on("click", function () {
        submit_btn();
    });
});

Thanks a lot
Hi, I've noticed that controller version 21.11.1-778 has introduced a new method of creating users. On older controller versions it was possible to enter a username, email, name, and password for a new user; now this has been changed to only name and e-mail. I'm very curious why this has been implemented. It makes sense that newly created users now receive an e-mail where they can enter their own desired password, but the fact that the e-mail address now functions as their username is really a pain. Is there any way to change this back? I administer a fairly large number of AppDynamics users where everyone has their initials as their current username, and I really dislike this newly enforced "email as username" idea.
I enabled report acceleration for a report. The acceleration summary builds well when the user role has no search filter restrictions, but as soon as I add any search filter restrictions to the role, the acceleration summary never starts to build. On the Report Acceleration Summaries page, the summary status shows that the progress is 0. Can anyone tell me why this happens? Any response will be appreciated.