Hello, I have a link list that works like a charm in use, highlighting the selected option. However, upon dashboard load the link list selects its default value but does not highlight it. Any chance to get that working?

Kind regards,

Mike
When editing a dashboard in source code, I add comments using <!-- -->, but after saving, the comments move location. It is not consistent: they generally bunch together near the top, but sometimes others are left in place. This has been an issue for some time and is referenced here: https://community.splunk.com/t5/Dashboards-Visualizations/Commented-code-moves-in-splunk-Dashboard/m-p/498902

I have tried moving the comments within different stanzas (<row>, <panel>, etc.), and also played with the tab levels in case that was involved, but have not had any luck. It drives me batty. Has this been figured out?

My edits:

...
<!-- Some Comment -->
<row>
  <!-- Another Comment -->
  <panel>
    ...
    ...
  </panel>
</row>
...

Becomes:

...
...
<!-- Some Comment -->
<!-- Another Comment -->
<row>
  <panel>
    ...
    ...
  </panel>
</row>
...
I have two log generators sending logs to the same index. How can we trigger an alert when the same type of error is generated by both log generators?
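One common approach is to alert when an error signature is seen from more than one source in the search window. A hedged sketch, assuming the two generators are distinguishable by host and that an error_type field exists (the index, host names, and field names here are all placeholders for your data):

index=main (host=generator_a OR host=generator_b) log_level=ERROR
| stats dc(host) as generators values(host) as hosts by error_type
| where generators >= 2

Saved as an alert that triggers when the result count is greater than zero, this fires only for error types present in both generators' logs during the window.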
How to clear the quiz history to redo the quiz?
Hi there, I have a universal forwarder installed on a syslog server; it reads all the logs received on the syslog server and forwards them onwards to a single indexer.

Network Devices --> Syslog Server (UF deployed) --> Single Indexer

However, now I want to configure the UF to forward mirror copies of some specific log paths to another indexer group as well:

Network Devices --> Syslog Server (UF deployed) --> (all log sources) Single Indexer
                                                --> (some log sources) Multiple Indexers

I set this up by configuring two output groups in outputs.conf and then adding _TCP_ROUTING to the monitor stanzas to specify which log paths should be forwarded to both indexer groups.

# outputs.conf

# BASE SETTINGS
[tcpout]
defaultGroup = groupA
forceTimebasedAutoLB = true
maxQueueSize = 7MB
useACK = true
forwardedindex.2.whitelist = (_audit|_introspection|_internal)

[tcpout:groupA]
server = x.x.x.x:9999
sslVersions = TLS1.2

[tcpout:groupB]
server = x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997

# inputs.conf

[monitor:///logs/x_x/.../*.log]
disabled = false
index = x
sourcetype = x
_TCP_ROUTING = groupA, groupB
ignoreOlderThan = 2h

The issue is that the logs are not being received properly by groupB (the multiple indexers). Is there any misconfiguration in the above, or is there a way to check what the issue might be?

Kind regards
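Before assuming a misconfiguration, it may help to confirm what the forwarder actually loaded and whether it can reach the groupB indexers at all. A hedged diagnostic sketch, assuming a default UF install under /opt/splunkforwarder (adjust the path to yours):

# show the effective merged config for the input and the output groups
/opt/splunkforwarder/bin/splunk btool inputs list --debug | grep -B2 -A6 "_TCP_ROUTING"
/opt/splunkforwarder/bin/splunk btool outputs list --debug

# watch the forwarder's own log for tcpout connection errors toward groupB
grep -i tcpout /opt/splunkforwarder/var/log/splunk/splunkd.log | tail -20

Also worth knowing: with useACK = true and cloned routing, a blocked or unreachable output group can stall the shared queue and delay the other group too, so verifying connectivity from the syslog server to every groupB indexer on port 9997 is a good first step.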
Hello. We have gotten a request from our security team to tighten up access to the logs in our Splunk deployment. Currently we log everything into a limited number of indexes based on the type of log, which means that, for example, all win_event logs are gathered together.

Security has expressed an interest in restricting access to logs based on which service a host belongs to, limiting them to the relevant users who operate that service. Identifying the service isn't much of an issue, as we have other systems that hold that info, but that info is not present within Splunk. So I've thought of a few approaches but don't quite know how to implement them:

- Restrict searches based on hosts through the role manager. But this seems messy, as hosts change all the time and the lists must be kept up to date manually (AFAIK).
- Tag the hosts. But how would I go about doing that? Could that be done on the forwarder? Can the indexer do it? And how would I go about referencing outside systems for this info (I have the code to actually supply the info, I just don't know where to put it)?

Finally, is there any way to do this retroactively?
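If you go the role route, the piece that does the host-level restriction is a search filter on the role. A minimal sketch in authorize.conf (the role name, index, and host patterns here are hypothetical; srchFilter is silently ANDed onto every search the role runs):

[role_svc_web_operators]
importRoles = user
srchIndexesAllowed = wineventlog
srchFilter = host=web-* OR host=lb-*

To avoid hand-maintaining those host lists, one option is a scheduled job (using the code you already have) that regenerates these stanzas from your external source of truth. Since the filter is applied at search time rather than index time, this approach also works retroactively on data that is already indexed.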
Hello, I would like to know whether the best approach to "move" a single search head to a search head cluster is to first deploy a deployer, then a single-member search head cluster (connected to the deployer and elected as captain) with a replication factor of 1, and then add other members to the search head cluster.

Info here: Deploy a single-member search head cluster - Splunk Documentation

Thanks and kind regards.
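For reference, the documented bootstrap of a single-member cluster looks roughly like this. A hedged sketch with placeholder hostnames, credentials, and secret:

# On the single member, point it at the deployer and set the replication factor to 1
/opt/splunk/bin/splunk init shcluster-config -auth admin:changeme \
    -mgmt_uri https://sh1.example.com:8089 \
    -replication_port 9200 -replication_factor 1 \
    -conf_deploy_fetch_url https://deployer.example.com:8089 \
    -secret shcluster_secret -shcluster_label shcluster1
/opt/splunk/bin/splunk restart

# Elect it captain of a one-member cluster
/opt/splunk/bin/splunk bootstrap shcluster-captain \
    -servers_list "https://sh1.example.com:8089" -auth admin:changeme

Each later member then runs the same init shcluster-config followed by splunk add shcluster-member -current_member_uri https://sh1.example.com:8089 from the new member, after which the replication factor can be raised in server.conf across the members.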
I downloaded splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm and installed it on a fresh CentOS 7 VM. Then I ran the following commands:

# yum install centos-release-scl
# yum install rh-postgresql96-postgresql-libs devtoolset-9 devtoolset-9-gcc openssl-devel net-tools libffi-devel

After that, I opened TCP ports to allow traffic through the local firewall:

# firewall-cmd --add-port=8000/tcp --permanent
# firewall-cmd --add-port=8089/tcp --permanent
# firewall-cmd --reload

and started Splunk by running:

# /opt/splunk/bin/splunk start

Then I changed the license group to "free license" and restarted Splunk:

# /opt/splunk/bin/splunk restart

After the restart, I made two modifications:

1. I forced Splunk to use Python 3, as by default it uses Python 2. I opened /opt/splunk/etc/system/local/server.conf with vi and added the following line to the [general] section:

python.version = force_python3

Then I restarted Splunk again:

# /opt/splunk/bin/splunk restart

2. I ran the following command because I needed Splunk to start automatically when the machine boots:

# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

But I got the following error:

"splunk is currently running, please stop it before running enable/disable boot-start"

I stopped Splunk and ran the command a second time:

# /opt/splunk/bin/splunk stop
# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

"Could not find user admin"

Then I ran just the first part of the command, as below:

# /opt/splunk/bin/splunk enable boot-start

The output was:

"Init script installed at /etc/init.d/splunk."
"Init script is configured to run at boot."

I ran the complete command again:

# /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

"Initd script /etc/init.d/splunk exists. splunk is currently enabled as init.d bootstart service."

I logged out of the VM and logged back in via SSH as root, but Splunk did not start automatically as I had hoped. I would be grateful if you could help me solve this.
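For what it's worth, the -user flag on enable boot-start refers to an operating-system account (the one splunkd should run as), not the Splunk admin web login, which would explain "Could not find user admin". Also, the plain enable boot-start run installed an init.d script that then blocks the systemd-managed variant. A hedged sketch of a clean retry, assuming you want Splunk to run as a dedicated OS user named splunk:

# stop Splunk and remove the init.d boot-start installed earlier
/opt/splunk/bin/splunk stop
/opt/splunk/bin/splunk disable boot-start

# create the OS account and give it ownership of the installation
useradd -m splunk
chown -R splunk:splunk /opt/splunk

# re-enable boot-start under systemd, naming the OS user
/opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
systemctl start Splunkd

(The systemd unit this creates is typically named Splunkd. Running as root via -user root would also work without creating an account, but is generally discouraged.)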
Hi, I just started a free trial to evaluate the product for monitoring our k8s cluster; however, I can't find the "snippet" mentioned here anywhere in my dashboard: https://docs.appdynamics.com/appd-cloud/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring#InstallKubernetesandAppServiceMonitoring-InstallAppDynamicsCloudUsingHelmCharts

I suspect that "AppDynamics Cloud" might not be the product I have a trial for?
Hello there, I am trying to implement some access control with DB Connect. I want to do something basic like:

- users from role_a can only query db_a
- users from role_b can only query db_b

So I have the meta files below:

default.meta:

[]
access = read [ admin, role_a, role_b ]

local.meta:

[db_connections/db_a]
access = read [ role_a ]

[identities/db_a]
access = read [ role_a ]

[db_connections/db_b]
access = read [ role_b ]

[identities/db_b]
access = read [ role_b ]

As a result, when logged in as a user from role_a, I cannot see the db_b connection/identity, as expected. However, I am still able to retrieve data from db_b using dbxquery:

| dbxquery "select ..." connection=db_b

It still works despite the user not having read access to the db_b connection/identity objects. Is there an additional metadata entry to limit dbxquery access to specified connections/identities, or does the dbxquery command not take object permissions into account at all?
Hi all,

We are facing issues with AWS CloudTrail logs ingested through the SQS-S3 method using the Splunk Add-on for AWS. The add-on is installed and the inputs are configured on a Splunk Cloud search head. We were getting logs properly in Splunk; however, recently we have observed huge latency, and the logs are arriving delayed in Splunk. We did not observe any internal errors in Splunk. Please help us and suggest how we can mitigate this issue.
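A quick way to quantify the delay is to compare each event's timestamp with the time it was actually indexed. A minimal sketch (the index name is an assumption; adjust it to where CloudTrail lands in your environment):

index=aws sourcetype=aws:cloudtrail earliest=-4h
| eval lag_min = round((_indextime - _time) / 60, 1)
| stats avg(lag_min) AS avg_lag_min max(lag_min) AS max_lag_min count BY source

If the lag is already large before Splunk sees the data, the SQS queue itself may be backing up, so the queue's ApproximateAgeOfOldestMessage metric in CloudWatch is worth checking alongside the add-on's input logs.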
Hello everyone, I was working on creating an alert for license expiry and found a search query for it. Could you please help me understand it line by line, e.g. why | rest, | join, and the other commands are used? Thanks in advance.

(1):

| rest splunk_server_group=local /services/licenser/licenses
| join type=outer group_id splunk_server
    [ rest splunk_server_group=local /services/licenser/groups
    | where is_active = 1
    | rename title AS group_id
    | fields is_active group_id splunk_server ]
| where is_active = 1
| eval days_left = floor((expiration_time - now()) / 86400)
| where NOT (quota = 1048576 OR label == "Splunk Enterprise Reset Warnings" OR label == "Splunk Lite Reset Warnings")
| eventstats max(eval(if(days_left >= 14, 1, 0))) as has_valid_license by splunk_server
| where has_valid_license == 0 AND (status == "EXPIRED" OR days_left < 15)
| eval expiration_status = case(days_left >= 14, days_left." days left", days_left < 14 AND days_left >= 0, "Expires soon: ".days_left." days left", days_left < 0, "Expired")
| eval total_gb = round(quota/1024/1024/1024, 3)
| fields splunk_server label license_hash type group_id total_gb expiration_time expiration_status
| convert ctime(expiration_time)
| rename splunk_server AS Instance label AS "Label" license_hash AS "License Hash" type AS Type group_id AS Group total_gb AS Size expiration_time AS "Expires On" expiration_status AS Status

(2):

| rest /services/licenser/licenses/
| eval now = now()
| eval expire_in_days = (expiration_time - now) / 86400
| eval expiration_time = strftime(expiration_time, "%Y-%m-%d %H:%M:%S")
| table group_id expiration_time expire_in_days
Hi, I have integrated Jenkins with Splunk. I can see all the logs, but I don't see the tool reports. How do I get those reports into Splunk?
I am trying to set up the OpenTelemetry Collector for Kubernetes in Splunk Cloud. For this I followed the article Splunk OpenTelemetry Collector for Kubernetes. I am using Google Kubernetes Engine. As mentioned in the article, I tried the two helm commands below to install the Splunk OpenTelemetry Collector in the Kubernetes cluster. Neither worked, and I keep getting an error message.

helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

Here is what the error log looks like:

2022-09-01T04:43:43.953Z info exporterhelper/queued_retry.go:427 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "interval": "32.444167474s"}
2022-09-01T04:43:52.374Z error exporterhelper/queued_retry.go:176 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "max elapsed time expired Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "dropped_items": 1}
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).onTemporaryFailure

Please advise. Thank you.
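The error in that log is a DNS failure ("no such host" from the cluster resolver at 10.4.0.10:53) rather than a token or endpoint-path problem. A quick way to confirm from inside the cluster (the pod name and image here are arbitrary choices):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- nslookup http-inputs-prd-p-19lmt.splunkcloud.com

If that lookup also fails, the issue is the cluster's DNS or egress configuration in GKE blocking resolution of splunkcloud.com hosts, not the Helm values themselves.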
Case scenario: Dashboard A is clicked, sending a token whose value is a hostname ($hostnameToken$) to Dashboard B. Dashboard B's query receives $hostnameToken$ and uses it in | search host_name; when that search returns "Results not Found", a fallback should run instead.

index=S score>=7.0
| lookup A.csv "IP Address" as ip OUTPUTNEW Squad
| lookup B.csv IP as ip OUTPUTNEW PIC, Email
| lookup C.csv ip as ip OUTPUTNEW host_name

IF (true):
    | search host_name="$hostnameToken$"
THEN DO THIS:
    | stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip
ELSE (false):
    | eval hostToken="$hostnameToken$"
    | lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip
    | search ip=ip
    THEN DO THIS:
    | stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

The fallback search converts the hostname token value to an IP via eval and lookup. If the ELSE branch also fails to match (the value is false), the search should stop.

Question: how can conditional statements be implemented in the above query? What is the right query to use?
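SPL has no IF/THEN/ELSE control flow between commands, so a common workaround is to compute both candidate matches up front and filter once. A hedged sketch of that pattern (the lookup and field names are taken from the post; token_ip is a new helper field, and the OR filter is an assumption about the intent):

index=S score>=7.0
| lookup A.csv "IP Address" as ip OUTPUTNEW Squad
| lookup B.csv IP as ip OUTPUTNEW PIC, Email
| lookup C.csv ip as ip OUTPUTNEW host_name
| eval hostToken="$hostnameToken$"
| lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip as token_ip
| where host_name="$hostnameToken$" OR ip=token_ip
| stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

Rows that match the token by hostname pass the first clause; if none do, rows whose IP maps back to the token still pass the second, which approximates the if/else fallback. If neither clause matches anything, the result set is simply empty and the search effectively stops.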
I have a summary index that is fully rebuilt once a day on a schedule. The report is configured via "Edit summary indexing" and scheduled via "Edit schedule".

savedsearches.conf

[si_summary-detail]
action.summary_index = 1
action.summary_index._name = summary-detail
action.summary_index._type = event
cron_schedule = 30 0 * * *
enableSched = 1
search = ... SPL ...

indexes.conf

[summary-detail]
frozenTimePeriodInSecs = 43200
quarantinePastSecs = 473385600
maxDataSize = auto_high_volume

When the report runs on schedule, the following command is appended to it, and the data that ends up in the summary index is incomplete. As a test, I built the same search query myself and ran it, with the same result as the scheduled run: only 14,099,234 events, from 2015-03-04T02:17:07Z to 2022-06-03T08:57:42Z, make it into the summary index.

| summaryindex spool=t uselb=t addtime=t index="summary-detail" file="summary-detail.stash_new" name="si_summary-detail" marker=""

When I removed file="summary-detail.stash_new" and ran the search query, I got the correct result: 19,972,598 events, from 2015-03-04T02:17:07Z to 2022-08-25T04:01:36Z, go into the summary index.

| summaryindex spool=t uselb=t addtime=t index="summary-detail" name="si_summary-detail" marker=""

Since it works when the file option is removed, could the settings in the [stash_new] stanza be affecting this? The [stash_new] stanza is at its default settings; I have not changed it.

If there are any points I should check in order to get all events into the summary index, please let me know.
I have a CSV file that is created by a shell script on a Linux server, which runs every minute. I am running a forwarder on the server to send the data to Splunk. The CSV file has a header line containing the field names. Some of the data has units along with the number; for example, if the number is a percent it might look like "10%", and if a number is measured in microseconds it might look like "10us". Is that the best way to ingest the data? Should I remove the units? Does Splunk see "10us" as a number?

Thanks!
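Splunk will treat "10us" as a string, not a number (tonumber("10us") returns null), so averages and comparisons on that field won't work directly. The cleanest fix is to have the script emit bare numbers and carry the unit in the header name; failing that, the unit can be stripped at search time. A minimal sketch (latency is a hypothetical stand-in for one of your columns):

| rex field=latency "^(?<latency_num>[0-9.]+)"
| eval latency_num = tonumber(latency_num)
| stats avg(latency_num) as avg_latency_us by host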
Hello, I have Splunk starting up with systemd and running as the user splunk.

I went to run the performance tasks on my indexers, and each of them failed. Under triggered collectors, it reads "collector stack trace failed". I logged into the system in question and looked at the splunk_rapid_diag.log file:

tools_collector ERROR 139880958523200 - Error occurred for collector tcpdump while running `/usr/sbin/tcpdump -i any -w /tmp/tmpbkxib485/tcpdump_All_All.pcap` Process finished with code=1

How do I run the diagnostic tools without root access? I expect this would affect any collectors using strace as well.

--jason
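If the failure is simply tcpdump lacking raw-socket privileges, one general Linux workaround is to grant the binary the needed capabilities so the splunk user can run it without root (this is not RapidDiag-specific, and your security team may have opinions about it):

# allow packet capture without root
sudo setcap cap_net_raw,cap_net_admin+eip /usr/sbin/tcpdump
# verify the capabilities took
getcap /usr/sbin/tcpdump
# sanity check as the splunk user
sudo -u splunk /usr/sbin/tcpdump -i any -c 1

strace is a similar story: tracing processes you don't own needs ptrace privileges, so the analogous (and similarly policy-sensitive) move would be setcap cap_sys_ptrace+eip on /usr/bin/strace.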
Over the past two years, we have been working hard to create the best experience for Splunk Observability Cloud customers. We feel that we have reached an inflection point where the Splunk Observability user experience far surpasses that of the classic SignalFx version (which is still accessible). That is why, starting Sep 30, 2022, the "Switch to Classic" functionality will be removed. Access to the original user interface will no longer be available.

What that means for you, the customer:

- There are no major changes to the experience for users, other than not being able to access the "Classic" user interface.
- There is no action required of admins or users of the product.
- Any URLs containing "../classic/#/home…" will be redirected to the new URL path, which means nothing will break if you have classic URLs bookmarked. You will simply be led to the new user interface.

Thank you for being a customer of Splunk Observability Cloud and for providing the feedback to continue making the product industry-leading. Have questions or need more clarification? Please reach out via the Support Portal.

— Courtney Gannon, Product Marketing Manager, OpenSource & Observability
Hey Splunkers,

I am working on a search but have hit a roadblock. I am attempting to convert a UTC timestamp to CST within the search. I was able to convert the epoch times to CST, but I am struggling to find any documentation on how to convert the UTC timestamps to match my CST results. I need to change this time to match the other time zone:

2022-08-31T21:04:52Z

needs to be converted to the same format as

08/31/2022 16:21:16
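A common pattern is to parse the ISO-8601 string into epoch time with strptime and re-render it with strftime. A sketch, assuming the raw value lives in a field called utc_time and using a fixed UTC-6 offset for CST (use 18000 for CDT; this does not handle daylight-saving transitions automatically):

| eval epoch = strptime(utc_time, "%Y-%m-%dT%H:%M:%SZ")
| eval cst_time = strftime(epoch - 21600, "%m/%d/%Y %H:%M:%S")

One caveat: strptime interprets a string with no zone designator in your user profile's time zone, and strftime renders in it too. So if your Splunk user time zone is already set to US Central, drop the manual offset and just use strftime(epoch, "%m/%d/%Y %H:%M:%S").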