All Topics

Due to a disaster, the Cluster Master of my indexer cluster is gone. There is no way to recover its data and we do not have a backup of its configuration files. The peers keep working fine so far without the CM, but we need to build a new instance from scratch and configure it. How can I do that?
I have many different machines that move around the country (USA), each with its own GPS latitude and longitude coordinates. I'd like to show the last known location for each item on a map. My current search is as follows:

    source=".../ops.log" | table host, _time, gps_latitude, gps_longitude | where gps_latitude > 0.0 AND gps_longitude < 0.0 | dedup host

(The first line looks into a .log file that has latitude and longitude information; these values are often entered as 0.0 or null, so I need to filter them out before adding them to the map, hence the "where" command.) This correctly gets all of the machines with their GPS coords and displays them in a table (albeit very slowly). Now I would like to translate that information onto a marker map where each marker represents a host with its last known GPS coordinates. I believe I probably need a cluster map for this and have tried adding the following line (placed as the second-to-last line) to generate the map:

    geostats latfield=gps_latitude, longfield=gps_longitude count by host |

Unfortunately, nothing happens when I add this line. I would very much appreciate any help or guidance to set me in the right direction. Thank you for your time.
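A minimal sketch of a corrected search, assuming the same source and field names as above: geostats options are space separated (the comma after latfield likely becomes part of the field name), a trailing pipe with nothing after it makes the whole search invalid, and dedup host should run before geostats so each host keeps only its most recent event (event searches return newest first, so dedup keeps the latest location). The panel itself then needs the Cluster Map visualization.

    source=".../ops.log"
    | where gps_latitude > 0.0 AND gps_longitude < 0.0
    | dedup host
    | geostats latfield=gps_latitude longfield=gps_longitude count by host

If there are more hosts than the default geostats series limit, adding globallimit=0 to the geostats command keeps them all.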
I forgot my username and password and now I am unable to log in. I can't even find the file path $SPLUNK_HOME/etc/passwd, so please help me log in to the account.
The agent was reporting successfully until the following error happened: ERROR NetVizAgentRequest - Fatal transport error while connecting to URL [http://127.0.0.1:3892/api/agentinfo?timestamp=0&agentType=APP_AGENT&agentVersion=0.10.0]: org.apache.http.conn.HttpHostConnectException: Connect to 127.0.0.1:3892 [/127.0.0.1] failed: Connection refused (Connection refused)
Hello, I'm a newbie looking to extract data from a sample set that looks like this (it's ingested as JSON):

{
   level: info
   log: uid="302650",  a_msg="HandlingStatus=Finished, Message=Changed,
   log_type: containerlogs
   stream: stdout
}

I want to extract the uid as well as the Message, which is inside a_msg. I have rex field=log "uid=\"(?<uid>\d{1,}+)" which gives me the uid, but I am REALLY struggling with the Message. Ideally a table would be produced, so from the above data it would look like:

UID, Message
-------------------
302650, PlanChanged

I am reading up on rex and regular expressions etc., but this particular request requires a quick turnaround and I am really struggling. Any help would be appreciated. Many thanks
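A sketch of one way to pull both fields, assuming (as in the sample above) that the Message value inside a_msg runs up to the next comma or quote; the index and sourcetype are placeholders, and the log field is the one shown in the sample event. Adjust the character class if real messages can contain commas.

    index=your_index sourcetype=your_sourcetype
    | rex field=log "uid=\"(?<uid>\d+)\""
    | rex field=log "Message=(?<Message>[^,\"]+)"
    | table uid, Message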
Hi, is it possible to integrate Splunk Enterprise on-premise with SignalFx, or does it only work with Splunk ITSI (APM)? Any ideas? Thanks,
Hey guys! This is my first question here, so I'm sorry if I'm not being clear. I want to enrich the data we have and add a few fields with data that I receive from an external API. For this, I want to create a custom command that receives a field name and runs Python code to send requests to the API with the field values, creating new fields with the additional data for each row. I have no experience with creating new commands in Python, so I'd much appreciate an explanation of how to do it (or a better idea of how to implement this) and some examples to rely on. Thanks!
I enabled report acceleration for a report. The acceleration summary builds fine when the user role has no search filter restrictions, but as soon as I add any search filter restrictions to the role, the acceleration summary never starts to build. On the Report Acceleration Summaries page, the summary status shows that the progress is 0. Can anyone tell me why this happens? Any response will be appreciated.
Hi dear Splunk community, can someone help me convert/translate the following syslog-ng config to the corresponding rsyslog server-side config, please? The standard syslog-ng.conf file simply includes the statements below, which live in a file in the conf.d dir, like so: @include "/etc/syslog-ng/conf.d/*.conf" I'd really appreciate it. It doesn't have to be perfect or exact or even completely converted, as long as most of it can be translated... the main concerns being the audit logs and all the rest of the program logs. Thanks so very much,

source s_remote { syslog(port(514), transport(tcp), flags(), max-connections(100),log-fetch-limit(100),log_iw_size(20000)); };

destination d_kern { file("/var/log/syslog-to-splunk/$HOST/kernel.log" create-dirs(yes)); };
destination d_mail { file("/var/log/syslog-to-splunk/$HOST/mail.log" create-dirs(yes)); };
destination d_daemon { file("/var/log/syslog-to-splunk/$HOST/daemon.log" create-dirs(yes)); };
destination d_auth { file("/var/log/syslog-to-splunk/$HOST/auth.log" create-dirs(yes)); };
destination d_cron { file("/var/log/syslog-to-splunk/$HOST/cron.log" create-dirs(yes)); };
destination d_security { file("/var/log/syslog-to-splunk/$HOST/audit.log" create-dirs(yes)); };
# All else.
destination d_rest { file("/var/log/syslog-to-splunk/$HOST/program/$PROGRAM.log" create-dirs(yes)); };

filter f_kern { facility(kern); };
filter f_mail { facility(mail); };
filter f_daemon { facility(daemon, user, syslog); };
filter f_auth { facility(auth, authpriv, security); };
filter f_cron { facility(cron); };
filter f_security { facility(kern, auth, authpriv, security, local7); };
filter f_rest { not facility(auth, authpriv, cron, kern, mail, user, security, syslog); };

log { source(s_remote); filter(f_kern); destination(d_kern); };
log { source(s_remote); filter(f_mail); destination(d_mail); };
log { source(s_remote); filter(f_daemon); destination(d_daemon); };
log { source(s_remote); filter(f_auth); destination(d_auth); };
log { source(s_remote); filter(f_cron); destination(d_cron); };
log { source(s_remote); filter(f_security); destination(d_security); };
log { source(s_remote); filter(f_rest); destination(d_rest); };
I want to report a pattern for each day and grab event times from different logs for that pattern. I tried something like this, but it's not working as expected; suggestions needed.

Current query:

    index="blah" source="*blah*" "Incomming"
    | rex "Incomming: (?<s_json>.*])"
    | timechart span=1d earliest(_time) as a_time by s_json
    | join type=outer s_json
        [ search index="blah" source="*blah*" "Internal"
          | rex "Internal: (?<s_json>.*])"
          | timechart span=1d latest(_time) as c_time by s_json ]
    | convert ctime(a_time)
    | convert ctime(c_time)
    | table _time, s_json, a_time, c_time

Expected:

Date1, j1, a_time, c_time
Date1, j2, a_time, c_time
Date2, j3, a_time, c_time
Date3, j4, a_time, c_time
Date4, j1, a_time, c_time
Date4, j2, a_time, c_time

Each day can have its own unique patterns (j1, j2, j3, j4 ...), so I need to dynamically pick those patterns for each day and report a_time and c_time.
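One way to avoid the join (which is subject to subsearch limits) and the column pivoting that timechart does is to pull both event types in a single search and let stats compute the two timestamps per day and per pattern. A sketch, reusing the index, source, and rex patterns from the question and assuming the JSON pattern string is identical between the Incomming and Internal log lines:

    index="blah" source="*blah*" ("Incomming" OR "Internal")
    | rex "(?<direction>Incomming|Internal): (?<s_json>.*])"
    | bin span=1d _time AS day
    | stats min(eval(if(direction=="Incomming", _time, null()))) AS a_time
            max(eval(if(direction=="Internal", _time, null()))) AS c_time
            by day s_json
    | convert ctime(day) ctime(a_time) ctime(c_time)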
Hi, is it possible to have APM with Splunk like ELK APM? (For reference: "Application Performance Monitoring (APM) with Elasticsearch | Elastic" and "Monitoring Services with Elasticsearch APM | by suraj kumar | Medium".) Thanks,
Hi, First question here - apologies if it's obvious or basic! I am trying to parse a nested list and find specific policies that match a couple of criteria. But I can't seem to get the logic right.  If you look at the JSON below, there is a nested list of "policies". I just want to find the policies with a result of "false" and with a filename starting with "./hard" and I want to print the "print" messages in a table. The JSON output is here:   { "resource": { "action": "hard_failed", "meta": { "result": false, "passed": 1, "total_failed": 2, "hard_failed": 1, "soft_failed": 0, "advisory_failed": 1, "duration_ms": 0, "sentinel": { "schema_version": 1, "data": { "sentinel-policy-networking": { "can_override": false, "error": null, "policies": [ { "allowed_failure": true, "error": null, "policy": "sentinel-policy-networking/advisory-mandatory-tags", "result": false, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to require that\nspecified AWS resources have all mandatory tags", "error": null, "print": "aws_customer_gateway.customer_gateway has tags that is missing, null, or is not a map or a list. It should have had these items: [Name]\naws_vpn_connection.main has tags that is missing, null, or is not a map or a list. It should have had these items: [Name]\n", "result": false, "rules": { "main": { "desc": "Main rule", "ident": "main", "position": { "filename": "./advisory-mandatory-tags.sentinel", "offset": 1244, "line": 38, "column": 1 }, "value": false } } } }, { "allowed_failure": false, "error": null, "policy": "sentinel-policy-networking/soft-mandatory-vpn", "result": false, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to require that\nAWS VPNs only used allowed DH groups", "error": null, "print": "aws_vpn_connection.main has tunnel1_phase1_dh_group_numbers [2] with items [2] that are not in the allowed list: [19, 20, 21]\n", "result": false, "rules": { "main": { "desc": "Main rule", "ident": "main", "position": { "filename": "./soft-mandatory-vpn.sentinel", "offset": 740, "line": 23, "column": 1 }, "value": false } } } }, { "allowed_failure": false, "error": null, "policy": "sentinel-policy-networking/hard-mandatory-policy", "result": true, "trace": { "description": "This policy uses the Sentinel tfplan/v2 import to validate that no security group\nrules have the CIDR \"0.0.0.0/0\" for ingress rules. It covers both the\naws_security_group and the aws_security_group_rule resources which can both\ndefine rules.", "error": null, "print": "", "result": true, "rules": { "main": { "desc": "", "ident": "main", "position": { "filename": "./hard-mandatory-policy.sentinel", "offset": 2136, "line": 58, "column": 1 }, "value": true } } } } ], "result": false } } }, "comment": null, "run": { "id": "run-5bY1pzrxAHWMH8Qx", "message": "Update main.tf" }, "workspace": { "id": "ws-LvRrPmVrm4MSnDC9", "name": "aws-networking-sentinel-policed" } }, "type": "policy_check", "id": "polchk-i8KAHhKX7Dqb7T3A" }, "request": { "id": null }, "auth": { "impersonator_id": null, "type": "Client", "accessor_id": "user-pF6Tu2NVN7hgNa7E", "description": "gh-webhooks-nicovibert-org-yuYK0J4bQO", "organization_id": "org-b5PqUHqMpyQ2M86A" }, "timestamp": "2021-11-26T22:08:52.000Z", "version": "0", "type": "Resource", "id": "9890fc46-f913-48d9-b2f7-64f8fc1c4d0e" }   This search below isn't quite working for me. There must be an easier way to do - perhaps with spath ? - but I can't get it to work. 
sourcetype="terraform_cloud" AND resource.meta.sentinel.data.sentinel-policy-networking.policies{}.trace.rules.main.position.filename = "./hard*" AND resource.meta.sentinel.data.sentinel-policy-networking.policies{}.trace.rules.main.value="false" | table auth.description, resource.meta.hard_failed, resource.meta.sentinel.data.sentinel*.policies*.trace.print   Thanks in advance for any pointers, examples or hints.
The LDAP authentication method is configured and users are showing on the user settings page, but sometimes users do not show up in the owner list when assigning a notable.
Enterprise Security Version 6.6.2 Build 51. Any advice?
I'm trying to use the dynamic inventory from the splunk-ansible repo for EC2 (non-container) instances; however, it appears that the inventory is not converting my environment variables to groups. It always defaults to localhost or says it can't parse the inventory. I tried to use the example in the wrapper directory, but I still have the same issue. Has anyone had any success using this?
Team - looking for ideas on how to achieve the scenario below:
Query 1 - get the list of unique patterns for each day
Query 2 - for the list of above patterns, get the earliest occurrence for each day
Query 3 - for the list of above patterns, get the latest occurrence for each day
Report - Day, Pattern, earliest time, latest time
Note - Queries 1, 2, and 3 are all different search queries, as they all look at different log lines
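A hedged sketch of one way to combine three different searches into a single per-day report: append the other two searches to the first, normalize the pattern into one field, then let stats compute the earliest and latest occurrence per day. Every index, sourcetype, and rex value below is a hypothetical placeholder for the real searches, and append is subject to subsearch result limits on very large data sets.

    index=main sourcetype=source_a "pattern_marker"
    | rex "pattern_marker=(?<pattern>\S+)"
    | append
        [ search index=main sourcetype=source_b "pattern_marker"
          | rex "pattern_marker=(?<pattern>\S+)" ]
    | append
        [ search index=main sourcetype=source_c "pattern_marker"
          | rex "pattern_marker=(?<pattern>\S+)" ]
    | bin span=1d _time AS day
    | stats min(_time) AS earliest_time max(_time) AS latest_time by day pattern
    | fieldformat day=strftime(day, "%Y-%m-%d")
    | fieldformat earliest_time=strftime(earliest_time, "%Y-%m-%d %H:%M:%S")
    | fieldformat latest_time=strftime(latest_time, "%Y-%m-%d %H:%M:%S")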
Is it possible to do a search that returns the last 4 full hours? Meaning, if it is 5:13 PM it would return results between 1:00 PM and 5:00 PM (filtering out the current, incomplete hour). Thanks in advance for any responses.
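A minimal sketch using snap-to time modifiers: -4h@h means "four hours ago, snapped back to the top of the hour" and latest=@h snaps "now" back to the top of the current hour, so at 5:13 PM the window is 1:00 PM to 5:00 PM. The index and the reporting command are placeholders.

    index=your_index earliest=-4h@h latest=@h
    | timechart span=1h count

The same two strings can also be entered on the Advanced tab of the time range picker or in a saved search's earliest/latest settings.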
I experienced the following 3 issues when collecting Splunk data with the Python splunk-sdk package. The 1st issue is: during peak hours from 10 AM to 4 PM, I may experience the following error. How do I increase concurrency_limit to avoid this error? Is concurrency_limit to be modified on the Splunk server?

splunklib.binding.HTTPError: HTTP 503 Service Unavailable -- Search not executed: The maximum number of concurrent historical searches on this instance has been reached., concurrency_category="historical", concurrency_context="instance-wide", current_concurrency=52, concurrency_limit=52
Since TAs run in the background and are usually not viewed, how do I check on their health? Any useful SPL is appreciated.
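A couple of hedged starting points: errors raised by a TA's scripted and modular inputs end up in the _internal index, and tstats can confirm that each sourcetype is still receiving data. Adjust the index filter to wherever your TA data lands.

    index=_internal sourcetype=splunkd log_level=ERROR (component=ExecProcessor OR component=ModularInputs)
    | stats count AS error_count latest(_time) AS last_error_time by host component
    | convert ctime(last_error_time)

    | tstats latest(_time) AS last_event where index=* by index sourcetype
    | convert ctime(last_event)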
Hi, I have added a line chart with drilldowns on a Splunk dashboard. When I click on any of the data points on this chart, the other line disappears. How can both lines be kept intact during drilldowns?
Hello Team, I am looking for a list of the most-used JSPs in our application. Is it possible to get this from AppDynamics? TIA