All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Here is the original log file:

Host availabilty Hashmap is {HKL20167984SIT_13_8225=true, HKL20167984SIT_7_82FB=true, HKL20167984SIT_2_82F6=true, HKL20167984SIT_16_8228=true, HKL20167984SIT_1_82F5=true, HKL20167984SIT_11_8223=true, HKL20167984SIT_14_8226=true, HKL20167984SIT_4_82F8=true, HKL20167984SIT_12_8224=false, HKL20167984SIT_3_82F7=true, HKL20167984SIT_15_8227=true, HKL20167984SIT_8_8220=true, HKL20167984SIT_9_8221=true, HKL20167984SIT_6_82FA=true, HKL20167984SIT_5_82F9=true, HKL20167984SIT_10_8222=true}

Here's my search command:

index="hkcivr" source="/appvol/wlp/DIVR01HK-AS01/applogs/wrapup.log*" | rex max_match=0 "_(?<port status>\d{4}\=\w+)"

I hope to get a result like the one below:

Time: 2022-02-15 07:02
8225=false, 8228=false, 8223=false, 8226=false, 8224=false, 8220=false, 8227=false, 8221=false, 8222=false, 8225=false, 8228=false, 8223=false, 8226=false, 8224=false, 8220=false, 8227=false, 8221=false, 8222=false
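A possible starting point, kept close to the search above: rex capture-group names cannot contain spaces, so the group is renamed port_status here, and mvjoin is only used to flatten the multivalue result for display. The index and source are taken from the question; the pattern may need adjusting if different port/status pairs are wanted.

index="hkcivr" source="/appvol/wlp/DIVR01HK-AS01/applogs/wrapup.log*"
| rex max_match=0 "_(?<port_status>\d{4}=\w+)"
| eval port_status=mvjoin(port_status, ", ")
| table _time port_status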
Hi. I want to merge data from multiple fields into a single field. If you have a table like the following:

fieldA, fieldB, fieldC
------------------------------
valueA, valueB, valueC

The expected output is as follows; I want to combine them into a single field in the Field = Value format:

merge_data = "fieldA = valueA, fieldB = valueB, fieldC = valueC"

I think it can be done using multivalue functions or foreach, but I don't know how to code it. Thanks in advance!
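One way to sketch this with foreach (the makeresults line just stands in for the real search; <<FIELD>> is foreach's token for the current field name and '<<FIELD>>' its value):

| makeresults
| eval fieldA="valueA", fieldB="valueB", fieldC="valueC"
| foreach fieldA fieldB fieldC
    [ eval merge_data=if(isnull(merge_data), "<<FIELD>> = " . '<<FIELD>>', merge_data . ", <<FIELD>> = " . '<<FIELD>>') ]
| table merge_data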
Hi All, I registered for the Splunk Phantom community edition a few days ago. However, I have yet to receive a confirmation email from the team. Could someone help me out here?
Hi, I am trying to configure the Add-On for Microsoft Defender (https://splunkbase.splunk.com/app/4959/). Can anyone confirm what settings are needed for:

Login URL
Endpoint
Resource?

Whichever values I use, I'm getting 401 errors. I have followed https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/api-hello-world?view=o365-worldwide and confirmed the permissions on the app registration are 100% correct.

Cheers
Hi, I have percentages calculated for Compliance and Non-Compliance based on the data. Now I need to separate them by colour in a stacked bar chart: all Non-Compliance % values between 90 and 95 should be shown in yellow, and the rest (95 to 100) should be in red. Please let me know the search for this. My search:

| savedsearch_Saved_Search
| eval total_count=Compliant+NonCompliant
| eval "Compliance %"=round(100*'Compliant'/total_count,2)
| eval "Non Compliance %"=round(100*'NonCompliant'/total_count,2)
| stats count by msc "Compliance %" "Non Compliance %"
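A column chart colours by series rather than by value range, so one hedged approach is to split the percentage into two fields and pin a colour to each; the saved search name and thresholds below simply mirror the question and may need adjusting:

| savedsearch Saved_Search
| eval total_count=Compliant+NonCompliant
| eval non_comp_pct=round(100*'NonCompliant'/total_count,2)
| eval "Non Comp 90-95"=if(non_comp_pct>=90 AND non_comp_pct<95, non_comp_pct, 0)
| eval "Non Comp 95-100"=if(non_comp_pct>=95, non_comp_pct, 0)
| table msc "Non Comp 90-95" "Non Comp 95-100"

The series colours can then be fixed in the panel, for example with the charting.fieldColors option, so the 90-95 series renders yellow and the 95-100 series red.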
Hi all, I have a query which gives this kind of table:

Name    Date        Status    Task    SubGroup
A       14-02-22    PASS      a       a1
                              b       b1
                                      b2

The data comes back bunched together, but I want separate rows for all the data. Also, some tasks have subgroups, and with this result you cannot differentiate between them. I have tried using mvzip like this:

...............| eval tmp=mvzip(mvzip(Name,Task,","),SubGroup,",")
| mvexpand tmp
| table Name Date Status tmp
| eval Name=mvindex(split(tmp,","),0)
| eval Task=mvindex(split(tmp,","),1)
| eval SubGroup=mvindex(split(tmp,","),2)
| table Name Date Status Task SubGroup

I don't understand why the eval command raises an "Expected )" error. It may be a small mistake, but I have tried a lot and have not been able to solve it.
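As a hedged sketch of the usual mvzip/mvexpand pattern, zipping only the two multivalue fields and using a delimiter that should not appear in the data (field names are taken from the question):

| eval tmp=mvzip(Task, SubGroup, "|")
| mvexpand tmp
| eval Task=mvindex(split(tmp, "|"), 0), SubGroup=mvindex(split(tmp, "|"), 1)
| table Name Date Status Task SubGroup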
I think savedsearches.conf contains information about both alerts and reports. When I run the following btool command and check the result, I can't tell which stanzas are reports and which are alerts:

splunk btool savedsearches list

<Question 1> From the btool results, which parameters can I look at to determine that a stanza is a report?
<Question 2> From the btool results, which parameters can I look at to determine that a stanza is an alert?

@somesoni2
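For orientation only, here is a hypothetical alert stanza as btool might print it (the stanza name and values are made up). Scheduled alerts usually carry trigger-related keys such as counttype, relation, quantity, alert_type and actions, which plain reports typically lack:

[Example error alert]
search = index=_internal log_level=ERROR
cron_schedule = */5 * * * *
is_scheduled = 1
counttype = number of events
relation = greater than
quantity = 0
alert_type = number of events
actions = email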
When Settings > "Search, Report, Alert" is displayed in Splunk Web, ○○○ is shown by default. I want to change the default values of this view as below:

"Type: Alert, App: Search & Reporting, Owner: Administrator, Number of displayed pages: 100 / page"

If possible, please tell me the procedure. Thanks @isoutamo @PickleRick
Hello. Props.conf is not applying on my universal forwarder. I diagnosed with btool and it seems that the settings are being applied, yet when the data gets shipped to my server, it is in raw form.

inputs.conf:

[batch://C:\Data\*\Cloud-Data\to_splunk\(...)?(?i)*_CloudTrail_*]
sourcetype = aws-cloudtrail
move_policy = sinkhole
index = testindex

props.conf:

[aws-cloudtrail]
LINE_BREAKER=((?<=}),(?={"eventVersion"))
NO_BINARY_CHECK=true
CHARSET=UTF-8
KV_MODE=json
SEDCMD-remove_prefix=s/{"Records":\[//g
SEDCMD-remove_suffix=s/\]}//g
TIME_PREFIX=eventTime
TRANSFORMS-index = write-index

transforms.conf:

[write-index]
SOURCE_KEY = MetaData:Source
DEST_KEY = _MetaData:Index
REGEX = .*\\Data\\+(?<yeet>.*)\\.*\\to_splunk.*
FORMAT = $1
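For comparison, a minimal sketch of an index-routing transform, assuming the target index name is the first directory under C:\Data in the source path. Parsing-time settings such as TRANSFORMS- and SEDCMD- are applied on the instance that parses the data (an indexer or heavy forwarder), not on a universal forwarder, so this is where such a stanza would live:

[write-index]
SOURCE_KEY = MetaData:Source
REGEX = \\Data\\([^\\]+)\\
DEST_KEY = _MetaData:Index
FORMAT = $1

Here $1 refers to the single unnamed capture group, and the regex only needs to reach the directory segment it routes on rather than matching the whole path.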
I am running a single-instance Splunk Enterprise deployment (v8.1.3). On the main GUI dashboard, I am getting a red Health Status of Splunkd flag. On closer inspection, further detail is showing as Index Processor > Buckets with root cause "The percentage of small buckets (71%) created over the last hour is high and exceeded the red thresholds (50%) for index=os, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=11, small buckets=0".

What I can't quite figure out is that it is calling this a small-bucket alert and yet the number of small buckets created=0. I came across the following search online to do some further checking on this:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
      | rename title as index
      | eval maxDataSize = case (maxDataSize == "auto",             750,
                                 maxDataSize == "auto_high_volume", 10000,
                                 true(),                            maxDataSize)
      | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket     = if (bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets
        count              as num_total_buckets
        by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if (percentSmallBuckets > 30, "Yes", "No")

A search over the last 24 hours shows 4 buckets created (and no small buckets). A search over the last 7 days shows:

index="os": total buckets=10, number of small buckets=1
index="_internal": total buckets=38, number of small buckets=1

I am a little intrigued as to why I am seeing this alert, as I have had only 2 small buckets created in the last week (and the percentage of small buckets per index is at worst 10%). Are there any other health checks that I should be looking at on my indexer?
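Another way to sanity-check bucket sizes directly is a dbinspect sketch like the one below; the 75 MB cut-off is just an assumption of roughly 10% of the default 750 MB maxDataSize and should be adjusted to your settings:

| dbinspect index=os
| search state=warm OR state=cold
| eval bucketSizeMB=round(sizeOnDiskMB, 2)
| stats count as total_buckets, count(eval(bucketSizeMB < 75)) as small_buckets by index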
Hello. I need help with DB data input. Among the fields of the MariaDB table, the time-related information is split across two fields. Both fields are of type varchar.

1) The date field stores today's date, e.g. 2022215
2) The time field stores the time with leading zeros omitted, e.g. 110203000 (hhmmssSSS). In the case of 00:02:03, it is stored as 203000.

As a side note, I know this layout isn't common; I didn't create it. In the above situation, to get data through the rising column, the query is structured as follows:

select concat(date,lpad(time,'9','0')) as time from ~~~ where time > 1

If I execute the query on the DB Connect setting screen, data is returned normally. However, once the setup is complete and the query runs on its schedule, the data is not indexed. If the rising column is built artificially in the query like this, can DB Connect still collect data?

Thank you.
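For reference, a hedged sketch of how a rising-column query is usually shaped in DB Connect; my_table and event_time are placeholders, and the ? is the checkpoint placeholder that DB Connect substitutes with the last stored rising-column value, with the results ordered on that same expression:

SELECT *, CONCAT(date, LPAD(time, 9, '0')) AS event_time
FROM my_table
WHERE CONCAT(date, LPAD(time, 9, '0')) > ?
ORDER BY event_time ASC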
We are ingesting FlexLM log files in order to aggregate license usage, but some lines are never indexed, and I would appreciate help identifying the cause.

The intent is to capture each "IN" and "OUT" line in the log content below as one event per line and count them as the number of licenses in use. However, only about 60-70% of the log content is being ingested; part of it is missing. There is no completely consistent pattern to what goes missing, but my impression is that a gap starts around the boundary between the header section and the IN/OUT section, and ingestion resumes at a similar point (a gap starts somewhere, ingestion resumes somewhere, and this repeats). The gaps always start and end at line breaks. Logs for other licenses collected with the same settings are ingested correctly, so I suspect the content of this log is somehow involved — could I get your opinions?

◆ Log collection method
The log file on the license server (Linux) is periodically transferred over SSH to a forwarder host (Windows), and the transferred log file is monitored.
inputs.conf is written as below; props.conf is the default.

[monitor://local path]
disabled = 0
index = index
sourcetype = sourcetype
crcSalt = <SOURCE>

◆ Sourcetype settings
Event breaks: every line
Timestamp: automatic
Advanced: CHARSET: UTF-8
   NO_BINARY_CHECK: true
   SHOULD_LINEMERGE: false
   category: custom
   disabled: false
   pulldown_type: true

=============================== Log content ===============================
15:47:36 (license name) (@license name-SLOG@) ===============================================
15:47:36 (license name) (@license name-SLOG@) === Last 10 Client Requests Processing Time (in ms) ===
15:47:36 (license name) (@license name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license name) (@license name-SLOG@) Request processing time, when, #concurrent clients, client info (user, node, FD)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:38:39(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 15:38:39(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000000 ms at 15:38:39(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@)
15:47:36 (license name) (@license name-SLOG@) === Top 10 Peak Client Requests Processing Time (in ms) ===
15:47:36 (license name) (@license name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license name) (@license name-SLOG@) Request processing time, when, #concurrent clients, client info (user, node, FD)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:47:39(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:47:43(4/1/2021),#4,(user name,host name,19)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:47:43(4/1/2021),#4,(user name,host name,19)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:48:30(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:48:53(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:50:05(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:50:49(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:50:51(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:51:09(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@) 00000001 ms at 11:51:11(4/1/2021),#3,(user name,host name,18)
15:47:36 (license name) (@license name-SLOG@)
15:47:36 (license name) (@license name-SLOG@) === Top 10 Peak In-house Operations time (in ms) ===
15:47:36 (license name) (@license name-SLOG@) NOTE: Peak times greater than 4 seconds get recorded.
15:47:36 (license name) (@license name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license name) (@license name-SLOG@) In-house operation time, when, #concurrent clients
15:47:36 (license name) (@license name-SLOG@)
15:47:36 (license name) (@license name-SLOG@) === Active Connections Info ===
15:47:36 (license name) (@license name-SLOG@) Peak active connections #15 attempted at Thu Apr 01 2021 13:48:08 JST
15:47:36 (license name) (@license name-SLOG@)
15:47:36 (license name) (@license name-SLOG@) ===============================================
15:58:21 (license name) OUT: "feature name" user name@host name
15:58:39 (license name) IN: "feature name" user name@host name
16:04:28 (license name) OUT: "feature name" user name@host name
16:04:32 (license name) IN: "feature name" user name@host name
16:04:57 (license name) OUT: "feature name" user name@host name
16:06:36 (license name) IN: "feature name" user name@host name
16:07:39 (license name) OUT: "feature name" user name@host name
16:10:05 (license name) OUT: "feature name" user name@host name
16:10:54 (license name) IN: "feature name" user name@host name
16:11:37 (license name) OUT: "feature name" user name@host name
16:12:00 (license name) IN: "feature name" user name@host name
16:18:23 (license name) IN: "feature name" user name@host name
16:20:27 (license name) OUT: "feature name" user name@host name
16:20:27 (license name) IN: "feature name" user name@host name
16:36:18 (license name) OUT: "feature name" user name@host name

Similar content continues after this (the header section is written at regular intervals; the IN/OUT lines are written each time a license is checked out or in).
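Not a diagnosis, but for reference a minimal props.conf sketch for strict per-line events with an explicit time format; the sourcetype name is a placeholder for the one referenced in inputs.conf:

[flexlm_license_log]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_FORMAT = %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 10
TRUNCATE = 10000

An explicit TIME_FORMAT keeps Splunk from guessing timestamps from the dates embedded in the header lines, which is one variable worth removing while investigating the gaps.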
I am operating in an environment with a standalone Splunk Enterprise instance running v8.1.3 on RHEL. In my environment I have around 350 Universal Forwarders that have been up and running for some time. I am running SSL on port 9997 between my forwarders and my indexer. The certs being used are custom.

I recently have had a problem with two Universal Forwarders: they are not forwarding any information into Splunk.

In the Splunk GUI they appear in Forwarder Management (and if I delete their entries, they reappear again), which looks good. I have two deployment apps pushed down to these forwarders:

App1 – indexer_config: sets outputs.conf to point to the indexer and defines the clientCert and sslRootCAPath certs.
App2 – Splunk_TA_Windows: configures inputs.conf to monitor some basic Windows event logs (e.g. System, Security, Application).

Both of the troublesome forwarders are on machines in a DMZ and were installed by the same person.

I have looked through the logs on one of the forwarders (see attached PDF). From the logs, it would appear:

The connection from the Universal Forwarder to the Deployment Server is working well – I can see it phoning home in the logs and I could also see it downloading the two apps mentioned above.
The connection from the Universal Forwarder to the Indexer seems to be having issues – it appears to connect with the indexer, but then the indexer forcibly closes the connection for some reason.

I can see the error message "TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest=<indexer_ip> inside output group default-autolb-group from host_src=<UF_server_hostname> has been blocked", which appears to be relevant.

What I am trying to figure out is whether this is an issue with:

The config on the Universal Forwarder (possibly the SSL connection being rejected by the indexer?), or
An issue back at the Indexer.
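One hedged check from the indexer side is to look for blocked queues in its own metrics.log, which would also produce the "paused the data flow" message seen on the forwarder (the search below is a sketch; narrow it further with host or queue name as needed):

index=_internal source=*metrics.log* sourcetype=splunkd group=queue blocked=true
| stats count by host name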
What are the minimum requirements to run JMX apps?
Hi Splunkers, I want to send an alert to Slack when someone logs into an AWS account using the root user. I have Splunk add-on for AWS and Slack add-on already installed. How can I achieve this? Thanks
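As a sketch of the search such an alert could be built on — the index name is an assumption, while the sourcetype and fields are those of standard CloudTrail events from the Splunk Add-on for AWS:

index=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin "userIdentity.type"=Root
| table _time userIdentity.arn sourceIPAddress awsRegion userAgent

Saved as an alert, the Slack notification can then be attached as the alert action provided by the Slack add-on.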
Hello, I am trying to join two searches for our AoVPN remote login system that highlight a path from user, machine name, and RAS server to the IP assigned once inside our environment. I have one search that contains nearly all the results I need, with the exception of a user ID and user name (from AD). The addition of user ID and name would be for convenience. I have done some field extractions to pull the desired items out for a better dashboard:

index=windows sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null
| rename host as rasl_server
| table _time, mail, machine, Tunnel_Endpoint, rasl_server, srcip

From here, I join another sourcetype (same index) that contains the user ID and user name. Both sourcetypes have the email addresses, so I am attempting to join with "mail" as the focus. This search seems to work fine, but only returns a minimal number of results.

index=windows sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null
| rename host as rasl_server
| table _time, mail, machine, Tunnel_Endpoint, rasl_server, srcip
| join mail
    [| search index=windows sourcetype=ActiveDirectory mail=*
     | rename sAMAccountName as User_ID
     | table _time, User_ID, name, mail]

Both searches, when run individually, return 2000+ results. However, when I run the joined search, I only see twenty or so results. The results are reliable and seem accurate, just very few of them. I'm learning as I go, so may be missing something simple. Thanks for any help.
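Since join subsearches are subject to result and runtime limits and drop non-matching rows, a common join-free sketch is to pull both sourcetypes in one search and roll them up by mail; field names below are taken from the question:

(index=windows sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null) OR (index=windows sourcetype=ActiveDirectory mail=*)
| eval rasl_server=if(sourcetype=="rasl", host, null())
| rename sAMAccountName as User_ID
| stats values(machine) as machine, values(Tunnel_Endpoint) as Tunnel_Endpoint, values(rasl_server) as rasl_server, values(srcip) as srcip, values(User_ID) as User_ID, values(name) as name by mail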
Why do I keep seeing this type of message in splunkd.log?

WARN ProcessTracker - executable=splunk-optimize failed to start reason='': Operation not permitted

Note: I have already put the ProcessTracker component in DEBUG logging and am still not able to gain any insight into what is causing the messages.
I have two questions for anyone who might be using this add-on.

1) We have a variety of Kusto queries and by and large they have been OK. In one case, though, we find one that seems to run only every so often, when it feels like it. All the rest are fine, and all of them, including this bad one, use the same interval – you can see they are all the same, as it is a column when looking over the inputs area. I'm wondering if anyone has ever encountered this behavior and, if so, whether anything fixed it. It's odd, too, because I was somewhat concerned about us hitting some limit on Log Analytics or some such, but I have set up some more and they are essentially fine.

2) Our queries are set up to grab the last minute every minute. I notice we seem to miss some data, and my theory is that I need to overlap the windows to deal with latencies in the query itself. The absolute worst query I saw took well into the teens of seconds, so I was thinking of grabbing the last 80 seconds every 60 seconds. I'm a little concerned, though, about how Splunk might handle that. Will it recognize input that is exactly the same as received before and deduplicate, or will we be pushing a bunch of duplicates?
I tried this:

.... myField IN ( [search ..| table myField])

where the values passed to the IN operator are calculated dynamically from another search. But that returns:

Unable to parse the search: Right hand side of IN must be a collection of literals. '((myField = "123") OR (myField = "1234")

How can I do this?
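A hedged sketch of the usual workaround: drop the IN operator and let the subsearch itself expand into the OR expression that the error message shows (the index and sourcetype names here are placeholders):

index=main sourcetype=my_data
    [ search index=other sourcetype=my_lookup_source
      | dedup myField
      | fields myField ]

A subsearch used this way returns field=value pairs joined with OR, which is effectively what IN would have produced.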
I have the following query:

... | chart list(time) by request actor

It returns the time values for each actor, such as:

actor1    actor2
10        20
10

How can I add a new column with the averages for each one?
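If "the averages for each one" means an average column per actor, a sketch that stays close to the original query is to ask chart for both aggregations at once (everything else unchanged):

... | chart list(time) avg(time) over request by actor

With a split-by field, chart should then emit paired columns such as "list(time): actor1" and "avg(time): actor1".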