All Topics

Hi All, what is the use of move_policy = sinkhole, and in which scenarios would we use a batch input? (Batch will index the file and then delete it, but on which application or server should this be used?)
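For reference, a minimal sketch of a batch input, assuming a staging directory of one-shot files (the path, index, and sourcetype below are placeholders). Because move_policy = sinkhole deletes each file after it is indexed, batch suits drop directories of one-time files rather than live log files, which is why the monitor input is the usual choice everywhere else.

# inputs.conf -- sketch only; path/index/sourcetype are placeholders
[batch:///opt/staging/oneshot/*.csv]
move_policy = sinkhole
index = main
sourcetype = staged_csv
disabled = false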
Good afternoon! I have six heartbeat messages coming from the system. All messages in the chain are linked by one field: "srcMsgId". Messages arrive at a certain interval; if the interval between messages is higher, that is, one message in the chain (out of six) is late by, say, six seconds (the normal interval is five seconds), then an alarm should be triggered. Can you tell me how to do it? I tried something like this, but it's not exactly what I need:

index="bl_logging" sourcetype="testsystem-2" srcMsgId="rwfsdfsfqwe121432gsgsfgd80"
| transaction maxpause=5m srcMsgId Correlation_srcMsgId messageId
| table _time srcMsgId Correlation_srcMsgId messageId duration eventcount
| sort srcMsgId _time
| streamstats current=f window=1 values(_time) as prevTime by subject
| eval timeDiff=_time-prevTime
| delta _time as timediff
| where timediff > 6

This only shows me the lagging message. Messages arrive one after another, so we can see their interval and, in theory, use that to create an alarm when the interval increases. Please tell me how I can do this. Alas, I am not strong in Splunk.
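A minimal sketch of the gap-detection idea, reusing the index, sourcetype, field name, and 6-second threshold from the post (that a single srcMsgId ties together one whole chain is an assumption):

index="bl_logging" sourcetype="testsystem-2"
| sort 0 srcMsgId _time
| streamstats current=f window=1 last(_time) as prevTime by srcMsgId
| eval gap = _time - prevTime
| where gap > 6

Saved as an alert that triggers when the number of results is greater than zero, this flags any consecutive pair of messages within a chain that are more than six seconds apart.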
| regex "message.message"="Total count XXXXXX: |Total rows YYYYYY: " | rex field="message.message" max_match=0 "^(?<msg1>[^:]*)\:(?<msg2>[^:]*)\:(?<msg3>[^:]*)\:(?<msg4>[^:]*)($|\{)" | eval dtonly=st... See more...
| regex "message.message"="Total count XXXXXX: |Total rows YYYYYY: " | rex field="message.message" max_match=0 "^(?<msg1>[^:]*)\:(?<msg2>[^:]*)\:(?<msg3>[^:]*)\:(?<msg4>[^:]*)($|\{)" | eval dtonly=strftime(_time, "%Y%m%d") | chart first(msg4) OVER dtonly BY msg3 I get the stats but not the visualization.   Thanks
Has anyone experienced issues with Splunk Add-on Builder (AOB) on Splunk version 9.0 not showing any outputs?
Hi All, Is there any way to rename Splunk saved searches and dashboards with the REST API? Best Regards
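As far as I know the REST API has no in-place rename, so the usual workaround is create-a-copy-then-delete; a hedged curl sketch (host, credentials, app, and object names below are placeholders):

# create a copy under the new name, then delete the original
curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/search/saved/searches \
  -d name=new_search_name --data-urlencode search='index=main error'
curl -k -u admin:changeme -X DELETE \
  https://localhost:8089/servicesNS/nobody/search/saved/searches/old_search_name

Dashboards live under servicesNS/<user>/<app>/data/ui/views and follow the same pattern.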
Hi all, I am using Splunk Enterprise 8.1. Recently, we configured the "Email notification" alert action and it works fine. Now we would also like to send those alert messages to a SYSLOG server. The manual says the script alert action is deprecated. Is there another way to achieve this? Thanks.
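One hedged option is a custom alert action (the framework that replaced the deprecated script action), whose script can forward to syslog itself. A minimal Python sketch, assuming the syslog host and port below (placeholders) and the standard payload-on-stdin contract for custom alert actions:

import json
import logging
import logging.handlers
import sys

def main():
    # Splunk invokes custom alert action scripts with --execute and
    # passes the alert payload as JSON on stdin
    payload = json.loads(sys.stdin.read())
    # syslog.example.com:514 is a placeholder for your syslog server
    handler = logging.handlers.SysLogHandler(address=("syslog.example.com", 514))
    log = logging.getLogger("splunk_alert")
    log.addHandler(handler)
    log.warning("Splunk alert %s fired: %s",
                payload.get("search_name"), json.dumps(payload.get("result")))

if __name__ == "__main__" and len(sys.argv) > 1 and sys.argv[1] == "--execute":
    main()

The script is packaged in an app with an alert_actions.conf stanza; the email action can stay enabled alongside it.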
Hello, I have a link list that works like a charm, with highlighting of the selected option. However, upon dashboard load the link list selects its default value but does not highlight it. Any chance to get that working? Kind regards, Mike
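A hedged thing to try, since Simple XML treats <initialValue> and <default> differently: declare the preselected choice with <default>, which some users report is what the link input actually renders as highlighted on load (token name and choices below are placeholders):

<input type="link" token="tok_view" searchWhenChanged="true">
  <label>View</label>
  <choice value="a">Option A</choice>
  <choice value="b">Option B</choice>
  <default>a</default>
</input>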
When editing a dashboard in source code, I add comments using <!-- -->, but after saving, the comments move location. It is not consistent: they generally bunch together near the top, but sometimes others are left in place. This has been an issue for some time and is referenced here - https://community.splunk.com/t5/Dashboards-Visualizations/Commented-code-moves-in-splunk-Dashboard/m-p/498902  I have tried moving the comments within different stanzas (<row>, <panel>, etc.), and also played with the tab levels in case that was involved, but have not had luck. It drives me batty. Has this been figured out? My edits:

...
<!-- Some Comment -->
<row>
  <!-- Another Comment -->
  <panel>
    ...
    ...
  </panel>
</row>
...

Becomes:

...
...
<!-- Some Comment -->
<!-- Another Comment -->
<row>
  <panel>
    ...
    ...
  </panel>
</row>
...
I have two log generators sending logs to the same index. How can we trigger an alert when the same type of error is generated by both log generators?
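A hedged SPL sketch, assuming each generator reports as a distinct host and that an error_type field (a placeholder name) identifies the error class:

index=my_index error_type=*
| stats dc(host) as generators values(host) as hosts by error_type
| where generators >= 2

Saved as an alert, this fires whenever the same error class is seen from both sources within the search window.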
How to clear the quiz history to redo the quiz?
Hi There, I have a universal forwarder installed on a syslog server; it reads all the logs received on the syslog server and forwards them onwards to a single indexer.

Network Devices --> Syslog Server (UF deployed) --> single indexer

However, now I want to configure the UF to forward mirror copies of some specific log paths to another indexer group as well.

Network Devices --> Syslog Server (UF deployed) --> (all log sources) single indexer
                                                --> (some log sources) multiple indexers

This has been set up by configuring two output groups in outputs.conf and then adding _TCP_ROUTING to the monitor stanzas to specify which log paths to forward to both indexer groups.

# outputs.conf
# BASE SETTINGS
[tcpout]
defaultGroup = groupA
forceTimebasedAutoLB = true
maxQueueSize = 7MB
useACK = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:groupA]
server = x.x.x.x:9999
sslVersions = TLS1.2

[tcpout:groupB]
server = x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997

# inputs.conf
[monitor:///logs/x_x/.../*.log]
disabled = false
index = x
sourcetype = x
_TCP_ROUTING = groupA, groupB
ignoreOlderThan = 2h

The issue is that the logs are not being received properly by groupB (the multiple indexers). Is there any misconfiguration in the above, or is there any way to check what the issue might be? Kind regards
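A hedged first check is the forwarder's own logs for connection or queue errors toward the groupB servers, for example (the host value is a placeholder):

index=_internal host=<uf_host> source=*splunkd.log* component=TcpOutputProc

One guess worth testing: useACK = true applies to both groups here, and groupA has sslVersions set while groupB does not, so if the groupB indexers expect (or lack) SSL or do not acknowledge, the UF may block or retry; aligning the SSL settings of the two stanzas is a cheap experiment.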
Hello. We have gotten a request from our security team to tighten up access to the logs in our Splunk deployment. Currently we log everything into a limited number of indexes based on the type of log; this means that, for example, all win_event logs are gathered together. Security has expressed an interest in restricting access to logs based on what service a host operates, limiting them to the relevant service operators. That isn't much of an issue in itself, as we have other systems that hold that mapping, but the information is not present within Splunk. I've thought of a few approaches but don't quite know how to implement them. One is to restrict searches by host through the role manager, but this seems messy, as hosts change all the time and the list must be kept up to date manually (afaik). Another would be to tag the hosts, but how would I go about doing that? Could that be done on the forwarder, can the indexer do it, and how would I go about referencing outside systems for this info (I have the code to actually supply the info, I just don't know where to put it)? Finally, is there any way to do this retroactively?
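On the first approach, a hedged sketch of what per-role host restrictions look like in authorize.conf (role names, index, and host patterns are placeholders); srchFilter is appended to every search the role runs, which is also why the manual-upkeep concern above is valid:

# authorize.conf
[role_svc_a_operators]
importRoles = user
srchIndexesAllowed = wineventlog
srchFilter = host=svc-a-*

For the tagging idea, the usual plain-Splunk equivalent is a host-to-service lookup kept in sync from the external system and applied as an automatic lookup; since lookups are applied at search time, that route also works retroactively.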
Hello, please, I would like to know if the best approach to "move" a single search head to a search head cluster is to deploy a deployer first, then a single-member search head cluster (connected to the deployer and elected as captain) with a replication factor of 1, and then add other members to the search head cluster. Info here: Deploy a single-member search head cluster - Splunk Documentation. Thanks and kind regards.
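For reference, a hedged sketch of the single-member bootstrap that doc page describes (hostnames, ports, secret, and label below are placeholders):

splunk init shcluster-config -auth admin:changeme \
  -mgmt_uri https://sh1.example.com:8089 -replication_port 9200 \
  -replication_factor 1 -conf_deploy_fetch_url https://deployer.example.com:8089 \
  -secret mysecret -shcluster_label shc1
splunk bootstrap shcluster-captain \
  -servers_list https://sh1.example.com:8089 -auth admin:changeme

Additional members are then joined with splunk add shcluster-member, after which the replication factor can be raised to match the member count.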
I downloaded splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm and installed it on a fresh CentOS 7 VM. Then I ran the following commands:

# yum install centos-release-scl
# yum install rh-postgresql96-postgresql-libs devtoolset-9 devtoolset-9-gcc openssl-devel net-tools libffi-devel

After that, I opened TCP ports to allow traffic to pass through the local firewall:

# firewall-cmd --add-port=8000/tcp --permanent
# firewall-cmd --add-port=8089/tcp --permanent
# firewall-cmd --reload

and started the Splunk app by running the following command:

# /opt/splunk/bin/splunk start

Then I changed the license group to "free license" and restarted Splunk:

# /opt/splunk/bin/splunk restart

After the restart, I made two modifications:

1. I forced Splunk to use Python 3, as by default it uses Python 2. I edited /opt/splunk/etc/system/local/server.conf and added the following line to the [general] section:

python.version = force_python3

Then I restarted Splunk again:

# /opt/splunk/bin/splunk restart

2. I ran the following command because I needed Splunk to start automatically when the machine boots:

# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

But I got the following error:

"splunk is currently running, please stop it before running enable/disable boot-start"

I stopped Splunk and ran the command a second time:

# /opt/splunk/bin/splunk stop
# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

"Could not find user admin"

Then I ran just the first part of the command, as below:

# /opt/splunk/bin/splunk enable boot-start

The output was:

"Init script installed at /etc/init.d/splunk."
"Init script is configured to run at boot."

I ran the complete command again:

# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

"Initd script /etc/init.d/splunk exists. splunk is currently enabled as init.d bootstart service."

I logged out of the VM and logged back in over SSH as root, but Splunk did not start automatically as I had wished. I would be grateful if you could help me solve it.
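A hedged sketch of a sequence that sidesteps both errors, assuming the OS account that owns /opt/splunk is named splunk (the -user flag expects an operating-system user, not a Splunk web login, which would explain "Could not find user admin"):

# /opt/splunk/bin/splunk stop
# /opt/splunk/bin/splunk disable boot-start    (removes the init.d hook installed earlier)
# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
# systemctl start Splunkd                      (the systemd unit the command creates)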
Hi, I just started a free trial to evaluate the product for monitoring our k8s cluster; however, I can't find the "snippet" mentioned here anywhere in my dashboard. https://docs.appdynamics.com/appd-cloud/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring#InstallKubernetesandAppServiceMonitoring-InstallAppDynamicsCloudUsingHelmCharts I suspect that "AppDynamics Cloud" might not be the product I have a trial for?
Hello there, I am trying to implement some access control with DB Connect. I want to do something basic like:
- users from role_a can only query db_a
- users from role_b can only query db_b

So I have the meta files below:

default.meta:

[]
access = read [ admin, role_a, role_b ]

local.meta:

[db_connections/db_a]
access = read [ role_a ]
[identities/db_a]
access = read [ role_a ]
[db_connections/db_b]
access = read [ role_b ]
[identities/db_b]
access = read [ role_b ]

As a result, when logged in as a user from role_a, as expected, I cannot see the db_b connection/identity. However, I am still able to retrieve data from db_b using dbxquery:

| dbxquery "select ..." connection=db_b

It still works despite the user not having read access to the db_b connection/identity objects. Is there an additional metadata entry to limit dbxquery access to specific connections/identities, or does the dbxquery command not take care of object permissions at all?
Hi All, We are facing issues with AWS CloudTrail logs ingested through the SQS-S3 method using the Splunk Add-on for AWS. The add-on is installed and the inputs are configured on the Splunk Cloud search head. We were getting logs properly in Splunk; however, lately we have observed huge latency, and the logs are arriving in Splunk delayed. We did not observe any internal errors in Splunk. Please help us and suggest how we can mitigate this issue.
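A hedged way to quantify the lag before chasing causes is to compare index time to event time (the index and sourcetype below are assumptions and may need adjusting):

index=aws sourcetype=aws:cloudtrail earliest=-4h
| eval lag_min = round((_indextime - _time) / 60, 1)
| timechart avg(lag_min) max(lag_min)

If the lag grows steadily, common suspects are SQS queue backlog or too few add-on input workers, so checking the queue depth in CloudWatch alongside this chart is a reasonable next step.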
Hello Everyone, I was working on alert creation for license expiry, and I got a search query for the same. Can you please help me understand each line, e.g. why | rest, | join, and the rest of the commands are used? Thanks in advance.

(1):

| rest splunk_server_group=local /services/licenser/licenses
| join type=outer group_id splunk_server
    [ rest splunk_server_group=local /services/licenser/groups
    | where is_active = 1
    | rename title AS group_id
    | fields is_active group_id splunk_server]
| where is_active = 1
| eval days_left = floor((expiration_time - now()) / 86400)
| where NOT (quota = 1048576 OR label == "Splunk Enterprise Reset Warnings" OR label == "Splunk Lite Reset Warnings")
| eventstats max(eval(if(days_left >= 14, 1, 0))) as has_valid_license by splunk_server
| where has_valid_license == 0 AND (status == "EXPIRED" OR days_left < 15)
| eval expiration_status = case(days_left >= 14, days_left." days left", days_left < 14 AND days_left >= 0, "Expires soon: ".days_left." days left", days_left < 0, "Expired")
| eval total_gb=round(quota/1024/1024/1024,3)
| fields splunk_server label license_hash type group_id total_gb expiration_time expiration_status
| convert ctime(expiration_time)
| rename splunk_server AS Instance label AS "Label" license_hash AS "License Hash" type AS Type group_id AS Group total_gb AS Size expiration_time AS "Expires On" expiration_status AS Status

(2):

| rest /services/licenser/licenses/
| eval now=now()
| eval expire_in_days=(expiration_time-now)/86400
| eval expiration_time=strftime(expiration_time, "%Y-%m-%d %H:%M:%S")
| table group_id expiration_time expire_in_days
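As a brief aid, a hedged annotated rendition of the shorter query (2), using SPL's triple-backtick inline comments (supported in recent Splunk versions):

| rest /services/licenser/licenses/ ``` | rest calls the licenser REST endpoint and turns the response into results ```
| eval now=now() ``` capture the current epoch time ```
| eval expire_in_days=(expiration_time-now)/86400 ``` seconds until expiry, converted to days ```
| eval expiration_time=strftime(expiration_time, "%Y-%m-%d %H:%M:%S") ``` render the epoch as a readable date ```
| table group_id expiration_time expire_in_days ``` keep just the columns of interest ```

Query (1) follows the same idea but additionally uses | join against /services/licenser/groups so that only licenses in the currently active group are evaluated.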
Hi, I have integrated Jenkins with Splunk. I can see all the logs, but I don't see the tools reports. How can I get those reports in Splunk?
I am trying to set up the OpenTelemetry Collector for Kubernetes in Splunk Cloud. For this I followed the following article: Splunk OpenTelemetry Collector for Kubernetes. I am using Google Kubernetes Engine. As mentioned in the article, I tried the two helm commands below to install the Splunk OpenTelemetry Collector in the Kubernetes cluster. Neither worked, and I keep getting an error message.

helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

Here is how the error log looks:

2022-09-01T04:43:43.953Z info exporterhelper/queued_retry.go:427 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "interval": "32.444167474s"}
2022-09-01T04:43:52.374Z error exporterhelper/queued_retry.go:176 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "max elapsed time expired Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "dropped_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).onTemporaryFailure

Please advise. Thank you.
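Reading the error, the failure is DNS resolution inside the cluster ("no such host" from the in-cluster resolver at 10.4.0.10), not the HEC token or index, so a hedged first test is resolving the endpoint from a throwaway pod:

kubectl run -it --rm dns-test --image=busybox:1.36 --restart=Never -- \
  nslookup http-inputs-prd-p-19lmt.splunkcloud.com

If that fails too, the usual suspects are a private GKE cluster without outbound DNS/egress or a typo in the stack name; if it succeeds, comparing the two endpoint forms against the HEC URI format documented for your Splunk Cloud stack would be the next step.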