All Topics

Hi there, I have a universal forwarder installed on a syslog server. It reads all the logs received on the syslog server and forwards them on to a single indexer.

    Network Devices --> Syslog Server (UF deployed) --> single indexer

Now I want to configure the UF to also forward mirror copies of some specific log paths to another indexer group:

    Network Devices --> Syslog Server (UF deployed) --> (all log sources)  single indexer
                                                    --> (some log sources) multiple indexers

I set this up by defining two output groups in outputs.conf and adding _TCP_ROUTING to the relevant monitor stanzas to specify which log paths should go to both indexer groups.

    # outputs.conf
    # BASE SETTINGS
    [tcpout]
    defaultGroup = groupA
    forceTimebasedAutoLB = true
    maxQueueSize = 7MB
    useACK = true
    forwardedindex.2.whitelist = (_audit|_introspection|_internal)

    [tcpout:groupA]
    server = x.x.x.x:9999
    sslVersions = TLS1.2

    [tcpout:groupB]
    server = x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997, x.x.x.x:9997

    # inputs.conf
    [monitor:///logs/x_x/.../*.log]
    disabled = false
    index = x
    sourcetype = x
    _TCP_ROUTING = groupA, groupB
    ignoreOlderThan = 2h

The issue is that the logs are not being received properly by groupB (the multiple-indexer group). Is there a misconfiguration in the above, or is there a way to check what the issue might be? Kind regards
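A hedged way to verify the routing from the forwarder itself, assuming CLI access on the UF (the group names come from the post; the host placeholder is illustrative):

    # Show the merged configuration the UF is actually using
    $SPLUNK_HOME/bin/splunk btool outputs list tcpout --debug
    $SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i _TCP_ROUTING

    # From a search head, check the UF's own logs for connection problems toward groupB
    index=_internal host=<your_uf_hostname> sourcetype=splunkd (TcpOutputProc OR "Connection failure")

If btool shows both groups on the monitor stanza but groupB still receives nothing, splunkd.log on the UF usually shows whether the connections to the groupB servers on 9997 are failing; firewall rules or an SSL mismatch are common culprits, since groupA is configured with sslVersions = TLS1.2 and groupB is not.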
Hello. We have gotten a request from our security team to tighten up access to the logs in our Splunk deployment. Currently we log everything into a limited number of indexes based on the type of log, which means that, for example, all Windows event logs are gathered together.

Security has expressed an interest in restricting access to logs based on which service a host belongs to, so that only the relevant service operators can see them. Mapping hosts to services isn't much of an issue, as we have other systems that hold that information, but the information is not present within Splunk. I've thought of a few approaches but don't quite know how to implement them:

- Restrict searches based on hosts through the role manager. This seems messy, as hosts change all the time and the filters would have to be kept up to date manually (as far as I know).
- Tag the hosts instead. But how would I go about doing that? Could it be done on the forwarder? Can the indexer do it? And how would I reference outside systems for this information? (I already have the code to supply the info; I just don't know where to put it.)

Finally, is there any way to do this retroactively?
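Not an answer so much as a hedged sketch of one common pattern: stamp a service name on events at ingest time, then restrict roles with a search filter on that field. The role name, field name, sourcetype, and host patterns below are invented for illustration; INGEST_EVAL runs on the indexer or heavy forwarder and requires Splunk 7.2+.

    # transforms.conf - derive an indexed "service" field from the host name
    [add_service_field]
    INGEST_EVAL = service=case(match(host, "^websrv"), "service_a", match(host, "^dbsrv"), "service_b", true(), "unassigned")

    # props.conf - apply it to the relevant sourcetype(s)
    [<your_sourcetype>]
    TRANSFORMS-service = add_service_field

    # fields.conf - mark the new field as indexed so searches hit it efficiently
    [service]
    INDEXED = true

    # authorize.conf - limit what members of the role can search
    [role_service_a_operators]
    srchFilter = service=service_a

Because the host-to-service mapping lives in one transform (which a scheduled job fed by your external system could regenerate), hosts that move between services only need the mapping updated, not every role. The honest caveat on the retroactive question: index-time fields only exist for events indexed after the transform is in place, so older events would still need a search-time approach such as a lookup.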
Hello, I would like to know whether the best approach to "move" a single search head to a search head cluster is to first deploy a deployer, then a single-member search head cluster (connected to the deployer and elected as captain) with a replication factor of 1, and then add the other members to the search head cluster.

Info here: Deploy a single-member search head cluster - Splunk Documentation

Thanks and kind regards.
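That staged approach matches the referenced documentation. As a hedged sketch of the CLI steps involved (hostnames, credentials, ports, and the secret are placeholders, not values from the post):

    # On the first member - initialise it as an SHC member with replication_factor 1
    splunk init shcluster-config -auth admin:changeme -mgmt_uri https://sh1.example.com:8089 \
        -replication_port 9200 -replication_factor 1 \
        -conf_deploy_fetch_url https://deployer.example.com:8089 -secret shclustersecret
    splunk restart

    # Still on that member - bootstrap it as captain of the one-member cluster
    splunk bootstrap shcluster-captain -servers_list "https://sh1.example.com:8089" -auth admin:changeme

    # Later, on each additional node: run init shcluster-config there, restart, then join
    splunk add shcluster-member -current_member_uri https://sh1.example.com:8089

Raising the replication factor once more members exist is a separate configuration change, and the usual caveat is that knowledge objects from the standalone search head need to be migrated into the deployer's shcluster apps so they survive a configuration bundle push.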
I downloaded splunk-9.0.1-82c987350fde-linux-2.6-x86_64.rpm and installed it on a fresh CentOS 7 VM. Then I ran the following commands:

    # yum install centos-release-scl
    # yum install rh-postgresql96-postgresql-libs devtoolset-9 devtoolset-9-gcc openssl-devel net-tools libffi-devel

After that, I opened TCP ports to allow traffic through the local firewall:

    # firewall-cmd --add-port=8000/tcp --permanent
    # firewall-cmd --add-port=8089/tcp --permanent
    # firewall-cmd --reload

and started Splunk by running:

    # /opt/splunk/bin/splunk start

Then I changed the "license group" to "free license" and restarted Splunk:

    # /opt/splunk/bin/splunk restart

After the restart, I made two modifications:

1. I forced Splunk to use Python 3, as it uses Python 2 by default. I edited /opt/splunk/etc/system/local/server.conf with vi and added the following line to the [general] section:

    python.version = force_python3

then restarted Splunk again:

    # /opt/splunk/bin/splunk restart

2. I ran the following command because I need Splunk to start automatically when the machine boots:

    # /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

But I got the following error:

    splunk is currently running, please stop it before running enable/disable boot-start

I stopped Splunk and ran the command a second time:

    # /opt/splunk/bin/splunk stop
    # /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

    Could not find user admin

Then I ran just the first part of the command:

    # /opt/splunk/bin/splunk enable boot-start

The output was:

    Init script installed at /etc/init.d/splunk.
    Init script is configured to run at boot.

I ran the complete command again:

    # /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user admin

The output was:

    Initd script /etc/init.d/splunk exists. splunk is currently enabled as init.d bootstart service.

I logged out of the VM and logged back in over SSH as root, but Splunk did not start automatically as I had hoped. I would be grateful if you could help me solve this.
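For what it's worth, a hedged reconstruction of the sequence that usually resolves this: -user expects an operating-system account (a fresh CentOS box has no OS user called admin), and the init.d entry created by the bare enable boot-start has to be removed before switching to systemd management. The splunk OS user below is an assumption; substitute whichever account should own the installation.

    # /opt/splunk/bin/splunk stop
    # /opt/splunk/bin/splunk disable boot-start                  # removes the /etc/init.d/splunk entry
    # useradd -m splunk                                          # only if the OS user does not exist yet
    # chown -R splunk:splunk /opt/splunk
    # /opt/splunk/bin/splunk enable boot-start -systemd-managed 1 -user splunk
    # systemctl status Splunkd                                   # the unit created is Splunkd.service

After switching to systemd management, start and stop Splunk with systemctl rather than the splunk binary so systemd keeps tracking the service.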
Hi, I just started a free trial to evaluate the product for monitoring our k8s cluster; however, I can't find the "snippet" mentioned here anywhere in my dashboard: https://docs.appdynamics.com/appd-cloud/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring#InstallKubernetesandAppServiceMonitoring-InstallAppDynamicsCloudUsingHelmCharts I suspect that "AppDynamics Cloud" might not be the product I have a trial for?
Hello there, I am trying to implement some access control with DB Connect. I want to do something basic like:

- users from role_a can only query db_a
- users from role_b can only query db_b

So I have the meta files below:

default.meta:

    []
    access = read [ admin , role_a, role_b ]

local.meta:

    [db_connections/db_a]
    access = read [ role_a ]

    [identities/db_a]
    access = read [ role_a ]

    [db_connections/db_b]
    access = read [ role_b ]

    [identities/db_b]
    access = read [ role_b ]

As a result, when logged in as a user from role_a, I cannot see the db_b connection/identity, as expected. However, I am still able to retrieve data from db_b using dbxquery:

    | dbxquery "select ..." connection=db_b

It still works despite the user not having read access to the db_b connection/identity objects. Is there an additional metadata entry that limits dbxquery access to specific connections/identities, or does the dbxquery command not take object permissions into account at all?
Hi all,

We are facing issues with AWS CloudTrail logs ingested through the SQS-based S3 method using the Splunk Add-on for AWS. The add-on is installed and the inputs are configured on a Splunk Cloud search head. We were getting logs properly in Splunk; however, for some time now we have been observing huge latency, and the logs are arriving delayed in Splunk. We did not observe any internal errors in Splunk. Please help us and suggest how we can mitigate this issue.
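A hedged way to quantify the delay from the Splunk side (the index name is a placeholder; the sourcetype assumes the add-on's default aws:cloudtrail):

    index=<your_aws_index> sourcetype=aws:cloudtrail
    | eval lag_sec = _indextime - _time
    | timechart span=15m median(lag_sec) AS median_lag_sec perc95(lag_sec) AS p95_lag_sec

If the lag grows steadily rather than spiking, the usual suspects are a backlog in the SQS queue (CloudWatch's ApproximateNumberOfMessagesVisible and ApproximateAgeOfOldestMessage metrics show this) or too few input workers for the volume, but that is conjecture without seeing the queue metrics.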
Hello everyone, I was working on creating an alert for license expiry and found a search query for it. Could you please help me understand each line - for example, why | rest, | join, and the other commands are used? Thanks in advance.

(1):

    | rest splunk_server_group=local /services/licenser/licenses
    | join type=outer group_id splunk_server
        [ rest splunk_server_group=local /services/licenser/groups
          | where is_active = 1
          | rename title AS group_id
          | fields is_active group_id splunk_server]
    | where is_active = 1
    | eval days_left = floor((expiration_time - now()) / 86400)
    | where NOT (quota = 1048576 OR label == "Splunk Enterprise Reset Warnings" OR label == "Splunk Lite Reset Warnings")
    | eventstats max(eval(if(days_left >= 14, 1, 0))) as has_valid_license by splunk_server
    | where has_valid_license == 0 AND (status == "EXPIRED" OR days_left < 15)
    | eval expiration_status = case(days_left >= 14, days_left." days left", days_left < 14 AND days_left >= 0, "Expires soon: ".days_left." days left", days_left < 0, "Expired")
    | eval total_gb=round(quota/1024/1024/1024,3)
    | fields splunk_server label license_hash type group_id total_gb expiration_time expiration_status
    | convert ctime(expiration_time)
    | rename splunk_server AS Instance label AS "Label" license_hash AS "License Hash" type AS Type group_id AS Group total_gb AS Size expiration_time AS "Expires On" expiration_status AS Status

(2):

    | rest /services/licenser/licenses/
    | eval now=now()
    | eval expire_in_days=(expiration_time-now)/86400
    | eval expiration_time=strftime(expiration_time, "%Y-%m-%d %H:%M:%S")
    | table group_id expiration_time expire_in_days
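As a hedged thumbnail of what the pieces do (field names taken from the posted queries): | rest pulls rows from the Splunk management API rather than from an index, the subsearch under | join fetches the active license group so only licenses in that group are evaluated, and the eval lines turn the epoch expiration_time into something readable. Query (2) is the same idea stripped to its core:

    | rest /services/licenser/licenses
    | eval days_left = floor((expiration_time - now()) / 86400)
    | table label group_id expiration_time days_left

In query (1), the eventstats max(...) by splunk_server step flags servers that still have at least one license with 14 or more days left, and the final where keeps only servers where nothing valid remains - which is what makes it suitable as an alert rather than just an informational report.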
Hi, I have integrated Jenkins with Splunk. I can see all the logs, but I don't see the tools reports. How can I get those reports into Splunk?
I am trying to set up the OpenTelemetry Collector for Kubernetes with Splunk Cloud. I followed this article: Splunk OpenTelemetry Collector for Kubernetes. I am using Google Kubernetes Engine. As mentioned in the article, I tried the two helm commands below to install the Splunk OpenTelemetry Collector in the Kubernetes cluster. Neither worked, and I keep getting an error message.

    helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

    helm install my-splunk-otel-collector --set="splunkPlatform.endpoint=https://prd-p-19lmt.splunkcloud.com:8088/services/collector/event,splunkPlatform.token=XXXX-XXXX-XXXX-XXXX-XXXX,splunkPlatform.index=main,clusterName=spluk-kub" splunk-otel-collector-chart/splunk-otel-collector

Here is what the error log looks like:

    2022-09-01T04:43:43.953Z info exporterhelper/queued_retry.go:427 Exporting failed. Will retry the request after interval. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "interval": "32.444167474s"}
    2022-09-01T04:43:52.374Z error exporterhelper/queued_retry.go:176 Exporting failed. No more retries left. Dropping data. {"kind": "exporter", "data_type": "logs", "name": "splunk_hec/platform_logs", "error": "max elapsed time expired Post \"https://http-inputs-prd-p-19lmt.splunkcloud.com:8088/services/collector/event\": dial tcp: lookup http-inputs-prd-p-19lmt.splunkcloud.com on 10.4.0.10:53: no such host", "dropped_items": 1}
    go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).onTemporaryFailure

Please advise. Thank you.
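The error is a DNS failure from inside the cluster ("lookup ... no such host"), so one hedged first check, independent of the Helm values, is whether pods can resolve the HEC hostname at all. The hostname comes from the post; the busybox image and pod name are just for illustration.

    kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
      nslookup http-inputs-prd-p-19lmt.splunkcloud.com

If this resolves from your workstation but fails from the pod, the problem is cluster DNS or egress (for example, a private GKE cluster without a route to external DNS), not the chart configuration; the http-inputs- form of the endpoint in the first command is the one Splunk Cloud documents for HEC, so the DNS/egress path is the more likely culprit.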
Case scenario: Dashboard A is clicked, sending a token whose value is a hostname ($hostnameToken$) to Dashboard B. Dashboard B runs the query below with $hostnameToken$, which is used in | search host_name; when that search finds nothing, it returns "Results not Found".

    index=S score>=7.0
    | lookup A.csv "IP Address" as ip OUTPUTNEW Squad
    | lookup B.csv IP as ip OUTPUTNEW PIC, Email
    | lookup C.csv ip as ip OUTPUTNEW host_name

    IF (true):
        | search host_name="$hostnameToken$"
    THEN DO THIS:
        | stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip
    ELSE (false):
        | eval hostToken="$hostnameToken$"
        | lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip
        | search ip=ip
        THEN DO THIS:
            | stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

The fallback search works by converting the hostname token value to an IP via eval and lookup. If the ELSE condition is also not met (value is false), the search stops.

Question: how do I implement these conditional statements in the above query? What is the right query to use?
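Rather than an explicit IF/ELSE, SPL can often cover both branches in one pipeline by resolving the token to an IP once and then matching on either the host_name or the resolved IP. A hedged sketch, reusing the lookup and field names from the post (treat it as a starting point, not a verified query):

    index=S score>=7.0
    | lookup A.csv "IP Address" as ip OUTPUTNEW Squad
    | lookup B.csv IP as ip OUTPUTNEW PIC, Email
    | lookup C.csv ip as ip OUTPUTNEW host_name
    | eval hostToken="$hostnameToken$"
    | lookup CortexHostIp2.csv host_name as hostToken OUTPUTNEW ip as token_ip
    | where host_name=hostToken OR ip=token_ip
    | stats values(plugin) as Plugin values(solution) as Solution values(PIC) as pic values(Email) as email values(Squad) as squad by ip

The where clause keeps a row if either the direct hostname match or the token-resolved IP match succeeds; if neither matches (and the CortexHostIp2.csv lookup returns nothing, leaving token_ip null), no rows survive, which mirrors the "search stops" branch.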
I have a summary index that is completely rebuilt (all records replaced) on a once-a-day schedule. The report is configured via "Edit summary indexing" and is set to run on a schedule via "Edit schedule".

savedsearches.conf:

    [si_summary-detail]
    action.summary_index = 1
    action.summary_index._name = summary-detail
    action.summary_index._type = event
    cron_schedule = 30 0 * * *
    enableSched = 1
    search = ... SPL ...

indexes.conf:

    [summary-detail]
    frozenTimePeriodInSecs = 43200
    quarantinePastSecs = 473385600
    maxDataSize = auto_high_volume

When the search runs on schedule, the following command is appended to it. After it runs, the summary index is missing data. I also tried building the equivalent search query myself and running it, with the same result as the scheduled run: only 14,099,234 events, covering 2015-03-04T02:17:07Z through 2022-06-03T08:57:42Z, make it into the summary index.

    | summaryindex spool=t uselb=t addtime=t index="summary-detail" file="summary-detail.stash_new" name="si_summary-detail" marker=""

If I remove file="summary-detail.stash_new" and run the search query, I get the correct result: 19,972,598 events, covering 2015-03-04T02:17:07Z through 2022-08-25T04:01:36Z, go into the summary index.

    | summaryindex spool=t uselb=t addtime=t index="summary-detail" name="si_summary-detail" marker=""

Since it works once the file option is removed, could the [stash_new] stanza settings be affecting this? The [stash_new] stanza is still at its defaults; I have not changed it.

Could you tell me what I should check in order to get all events into the summary index?
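One hedged place to look while debugging, assuming access to _internal: the scheduler log shows whether the scheduled fill actually completed and how many results it wrote, which helps separate a search-side problem from a stash-file problem. The saved search name comes from the post.

    index=_internal sourcetype=scheduler savedsearch_name="si_summary-detail"
    | table _time status result_count run_time

Separately, frozenTimePeriodInSecs = 43200 is only 12 hours; it is worth double-checking that this is intentional for an index meant to hold events going back to 2015.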
I have a CSV file that is created by a shell script on a Linux server that runs every minute. I am running a forwarder on the server to send the data to Splunk. The CSV file has a header line containing the field names. Some of the data has units along with the number - for example, a percentage might look like "10%", and a value measured in microseconds might look like "10us". Is that the best way to ingest the data? Should I remove the units? Does Splunk see "10us" as a number? Thanks!
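Splunk will extract "10us" as a string, so numeric operations like avg or a numeric sort won't treat it as a number until the unit is stripped - either in the shell script before the CSV is written, or at search time. A hedged search-time sketch (the latency field name is invented for illustration):

    ... your base search ...
    | rex field=latency "^(?<latency_us>\d+(?:\.\d+)?)us$"
    | eval latency_us = tonumber(latency_us)
    | stats avg(latency_us) max(latency_us)

If you control the script, writing plain numbers and putting the unit in the header field name (for example latency_us) is usually simpler: nothing is lost, and every downstream search gets numeric values for free.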
Hello, I have Splunk starting up with systemd and running as user splunk.

I went to run the performance tasks on my indexers. Each of them failed; under triggered collectors, it reads that the collector stack trace failed. I logged into the system in question and looked at the splunk_rapid_diag.log file:

    tools_collector ERROR 139880958523200 - Error occurred for collector tcpdump while running `/usr/sbin/tcpdump -i any -w /tmp/tmpbkxib485/tcpdump_All_All.pcap` Process finished with code=1

How do I run the diagnostic tools without root access? I expect this would affect any collectors using strace as well.

--jason
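This is speculation rather than a documented RapidDiag fix, but tcpdump normally needs CAP_NET_RAW, so one possible approach is to grant the capability on the tcpdump binary so the splunk user can capture without root:

    # as root: grant packet-capture capabilities to the tcpdump binary
    setcap cap_net_raw,cap_net_admin=eip /usr/sbin/tcpdump
    # verify
    getcap /usr/sbin/tcpdump

strace-based collectors would hit a different limit - ptrace restrictions (for example /proc/sys/kernel/yama/ptrace_scope) when not running as root - so they would need their own assessment.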
Hey Splunkers,

I am working on a search but have hit a roadblock: I am attempting to convert a UTC timestamp to CST within the search. I was able to convert the epoch times to CST, but I am struggling to find any documentation on how to convert the UTC string to match my CST results. I need this value to match the format of the other time zone values:

    2022-08-31T21:04:52Z

needs to be converted to the same format as

    08/31/2022 16:21:16
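A hedged sketch: parse the UTC string to epoch, treating the trailing Z as a +0000 offset so the local time zone doesn't skew the result, then re-render it in the target format. The utc_time field name is invented for illustration.

    | eval epoch = strptime(replace(utc_time, "Z$", "+0000"), "%Y-%m-%dT%H:%M:%S%z")
    | eval cst_time = strftime(epoch, "%m/%d/%Y %H:%M:%S")

strftime renders in the time zone of the user running the search (CST/CDT if that's what the user profile is set to). If you need a fixed CST output regardless of user settings, subtract the offset explicitly - strftime(epoch - 6*3600, ...) - keeping in mind that Central time is UTC-5 during daylight saving.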
Hello, I've been tasked with optimizing a former colleague's saved searches and found that the query had a lot of rex commands running against the same field, so I decided to compact them into one regex. According to Regex101, the resulting expression takes a whopping 6.5k steps, which is a bit too much. I've been trying to reduce it as much as I can, but I lack the knowledge in that department to optimize the query further. One thing I do want to keep are the capture groups; the rest I want to ignore altogether. Is there a way of doing that and reducing the steps? https://regex101.com/r/qDy1Lr/4
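The actual expression isn't visible here, so only a generic hedged illustration of the usual step-reducers: anchor the pattern, make every group you don't need non-capturing, and replace greedy .* with explicit character classes so the engine backtracks less. The log layout below is invented, not the colleague's data.

    | rex field=_raw "^(?<ts>\S+) user=(?<user>\S+) (?:INFO|WARN|ERROR) status=(?<status>\d{3})"

Here the severity alternation is matched but thrown away via (?:...), the \S+ classes stop at whitespace instead of scanning the whole line, and the leading ^ lets non-matching events fail fast - the same three changes usually cut Regex101's step count substantially.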
I have a sourcetype that contains raw SNMP data which looks like this (port definitions for network switches):

    timestamp=1661975375 IF-MIB::ifAlias.1 = "ServerA Port 1"
    timestamp=1661975375 IF-MIB::ifDescr.1 = "Unit: 1 Slot: 0 Port: 1 Gigabit - Level"
    timestamp=1661975375 IF-MIB::ifAlias.53 = "ServerA Port 2"
    timestamp=1661975375 IF-MIB::ifDescr.53 = "Unit: 2 Slot: 0 Port: 1 Gigabit - Level"
    timestamp=1661971775 IF-MIB::ifAlias.626 = "ServerA LAG"
    timestamp=1661971775 IF-MIB::ifDescr.626 = " Link Aggregate 1"

I want to generate fields when this data is ingested into Splunk, not at search time (so probably using transforms.conf and regex). I think there are ways to do this with Python as well, but I don't have the experience or time to go down that path. The six example rows above should produce the following fields, respectively:

    Alias=1, Description="ServerA Port 1"
    Alias=1, Unit=1, Port=1
    Alias=53, Description="ServerA Port 2"
    Alias=53, Unit=2, Port=1
    Alias=626, Description="ServerA LAG"
    Alias=626, Lag=1

I can build field extractions or a manual regex to handle one of these lines individually, but not all of them together. I also wonder whether pure regex is the way to go here, as it seems like it would take many "steps" with this many parameters. I would really appreciate help from someone with the knowledge and experience of using transforms to get this done. Thank you in advance for any solutions or recommendations.
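A hedged sketch of the index-time mechanism: several transforms can be chained on the sourcetype, each matching one line shape and writing its own indexed fields via FORMAT and WRITE_META. The sourcetype name is a placeholder, and the regexes below only cover the ifAlias and ifDescr patterns shown above (the LAG variant would need a third stanza of the same shape).

    # props.conf (on the indexer or heavy forwarder)
    [<your_snmp_sourcetype>]
    TRANSFORMS-snmpfields = snmp_alias, snmp_descr_port

    # transforms.conf
    [snmp_alias]
    REGEX = IF-MIB::ifAlias\.(\d+)\s*=\s*"([^"]*)"
    FORMAT = Alias::$1 Description::$2
    WRITE_META = true

    [snmp_descr_port]
    REGEX = IF-MIB::ifDescr\.(\d+)\s*=\s*"Unit:\s*(\d+)\s+Slot:\s*\d+\s+Port:\s*(\d+)
    FORMAT = Alias::$1 Unit::$2 Port::$3
    WRITE_META = true

The usual companions are fields.conf entries marking the new fields as INDEXED = true, plus the caveat that index-time fields cannot be changed after ingestion and add to index size - so search-time extractions remain worth considering if the requirement ever softens.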
Picking up my first project for SOAR detections. Asking if anyone knows groups or sites that helped them when they were new. Thanks in advance!
Hi team,

From the raw JSON event below in Splunk, I am trying to display only the correlationId column in a table. Can someone help with a query on how to achieve this? I would also like to know whether it can be done with a regular expression.

    index=test1 sourcetype=abc

    {
      "eventName": "test",
      "sourceType": "ats",
      "detail": {
        "field": "abctest-1",
        "trackInformation": {
          "correlationId": "12345",
          "components": [
            {
              "publisherTimeLog": "2022-08-31T13:19:18.726",
              "MetaData": "cmd",
              "executionTimeInMscs": "25",
              "receiverTimeLog": "2022-08-31T13:19:18.725"
            }
          ]
        },
        "value": "imdb",
        "timestamp": 1455677
      }
    }

Desired output:

    correlationId
    -------------
    12345
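A hedged sketch of both approaches, reusing the index and sourcetype from the post and assuming the JSON is the event's _raw text. The spath path follows the nesting shown above; the rex variant answers the regular-expression part of the question.

    index=test1 sourcetype=abc
    | spath path=detail.trackInformation.correlationId output=correlationId
    | table correlationId

    index=test1 sourcetype=abc
    | rex field=_raw "\"correlationId\"\s*:\s*\"(?<correlationId>[^\"]+)\""
    | table correlationId

spath is generally the safer choice for JSON because it tolerates field reordering and extra whitespace; the rex version is quicker to write but breaks if another correlationId key ever appears earlier in the event.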
I have two message threads; each thread consists of ten messages. I need a query that displays these two chains as one. The combined thread must consist of ten messages: five messages from one system and five from the other (backup) system. Messages from a given system use the same srcMsgId value, and each system has a unique srcMsgId within the same chain. The message chain from the backup system enters Splunk immediately after the messages from the main system. Messages from the standby system also carry a Mainsys_srcMsgId value - this value is identical to the main system's srcMsgId.

Tell me, how can I display a chain of all ten messages? Perhaps first the messages from the first (main) system, then from the second (backup), showing the time they arrived at the server. Specifically, we want to see all ten messages one after another, in the order in which they arrived at the server: five from the primary, for example ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd71"), and five from the backup ("srcMsgId": "rwfsdfsfqwe121432gsgsfgd72"). The problem is that messages from other systems also arrive at the server and everything is mixed together chaotically, which is why we want to group all messages from one system and its counterpart in the search. Messages from the backup system are associated with the main system only by the "Mainsys_srcMsgId" parameter - using this key we understand that the messages come from the backup system (secondary to the main one).

Examples of messages from the primary and secondary system:

Main system:

    {
      "event": "Sourcetype test please",
      "sourcetype": "testsystem-2",
      "host": "some-host-123",
      "fields": {
        "messageId": "ED280816-E404-444A-A2D9-FFD2D171F32",
        "srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
        "Mainsys_srcMsgId": "",
        "baseSystemId": "abc1",
        "routeInstanceId": "abc2",
        "routepointID": "abc3",
        "eventTime": "1985-04-12T23:20:50Z",
        "messageType": "abc4",
        ..........................................................................................

Message from the backup system:

    {
      "event": "Sourcetype test please",
      "sourcetype": "testsystem-2",
      "host": "some-host-123",
      "fields": {
        "messageId": "ED280816-E404-444A-A2D9-FFD2D171F23",
        "srcMsgId": "rwfsdfsfqwe121432gsgsfgd72",
        "Mainsys_srcMsgId": "rwfsdfsfqwe121432gsgsfgd71",
        "baseSystemId": "abc1",
        "routeInstanceId": "abc2",
        "routepointID": "abc3",
        "eventTime": "1985-04-12T23:20:50Z",
        "messageType": "abc4",
        "GISGMPRequestID": "PS000BA780816-E404-444A-A2D9-FFD2D1712345",
        "GISGMPResponseID": "PS000BA780816-E404-444B-A2D9-FFD2D1712345",
        "resultcode": "abc7",
        "resultdesc": "abc8"
      }
    }

When we want to combine only the five messages from one chain (related by srcMsgId), we make the following request:

    index="bl_logging" sourcetype="testsystem-2"
    | transaction maxpause=5m srcMsgId Mainsys_srcMsgId messageId
    | table _time srcMsgId Mainsys_srcMsgId messageId duration eventcount
    | sort srcMsgId _time
    | streamstats current=f window=1 values(_time) as prevTime by topic
    | eval timeDiff=_time-prevTime
    | delta _time as timediff
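A hedged sketch of one way to unify the two chains: build a single chain key by falling back from Mainsys_srcMsgId (present only on backup-system messages) to srcMsgId, then filter on that key and sort by arrival time. Field names come from the post; whether they are reachable directly or under a fields.* prefix depends on how the HEC payload was indexed, so treat this as a starting point.

    index="bl_logging" sourcetype="testsystem-2"
    | eval chain_id = coalesce(nullif(Mainsys_srcMsgId, ""), srcMsgId)
    | search chain_id="rwfsdfsfqwe121432gsgsfgd71"
    | sort 0 _time
    | table _time srcMsgId Mainsys_srcMsgId messageId

For the primary system chain_id is its own srcMsgId, and for the backup system chain_id resolves to the primary's srcMsgId via Mainsys_srcMsgId, so all ten messages share one key; the sort 0 _time then lists them in the order they arrived, primary and backup interleaved as they actually hit the server.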