All Topics



Hello. props.conf is not being applied on my universal forwarder. I diagnosed with btool and it seems that the settings are being applied, yet when the data gets shipped to my server, it is still in raw form.

inputs.conf:

[batch://C:\Data\*\Cloud-Data\to_splunk\(...)?(?i)*_CloudTrail_*]
sourcetype = aws-cloudtrail
move_policy = sinkhole
index = testindex

props.conf:

[aws-cloudtrail]
LINE_BREAKER = ((?<=}),(?={"eventVersion"))
NO_BINARY_CHECK = true
CHARSET = UTF-8
KV_MODE = json
SEDCMD-remove_prefix = s/{"Records":\[//g
SEDCMD-remove_suffix = s/\]}//g
TIME_PREFIX = eventTime
TRANSFORMS-index = write-index

transforms.conf:

[write-index]
SOURCE_KEY = MetaData:Source
DEST_KEY = _MetaData:Index
REGEX = .*\\Data\\+(?<yeet>.*)\\.*\\to_splunk.*
FORMAT = $1
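For reference, a btool check along these lines (run from $SPLUNK_HOME/bin on the forwarder; splunk.exe on Windows) shows which .conf file each setting is being read from, since --debug prints the source file next to every line:

splunk btool props list aws-cloudtrail --debug
splunk btool transforms list write-index --debug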
I am running a single instance Splunk Enterprise deployment (v8.1.3). On the main GUI dashboard, I am getting a Red Health Status of Splunkd flag. On closer inspection, further detail is showing as Index Processor > Buckets with root cause "The percentage of small buckets (71%) created over the last hour is high and exceeded the red thresholds (50%) for index=os, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=11, small buckets=0". What I can't quite figure out is that it is calling this a small bucket alert, and yet the number of small buckets created=0. I came across the following search online to do some further checking on this:

index=_internal sourcetype=splunkd component=HotBucketRoller "finished moving hot to warm"
| eval bucketSizeMB = round(size / 1024 / 1024, 2)
| table _time splunk_server idx bid bucketSizeMB
| rename idx as index
| join type=left index
    [ | rest /services/data/indexes count=0
      | rename title as index
      | eval maxDataSize = case(maxDataSize == "auto",             750,
                                maxDataSize == "auto_high_volume", 10000,
                                true(),                            maxDataSize)
      | table index updated currentDBSizeMB homePath.maxDataSizeMB maxDataSize maxHotBuckets maxWarmDBCount ]
| eval bucketSizePercent = round(100*(bucketSizeMB/maxDataSize))
| eval isSmallBucket     = if(bucketSizePercent < 10, 1, 0)
| stats sum(isSmallBucket) as num_small_buckets
        count              as num_total_buckets
        by index splunk_server
| eval percentSmallBuckets = round(100*(num_small_buckets/num_total_buckets))
| sort - percentSmallBuckets
| eval isViolation = if(percentSmallBuckets > 30, "Yes", "No")

A search over the last 24 hours shows 4 buckets created (and no small buckets). A search over the last 7 days shows:

index="os", total buckets=10, number of small buckets=1
index="_internal", total buckets=38, number of small buckets=1

I am a little intrigued as to why I am seeing this alert, as I have had 2 small buckets created in the last week (and the percentage of small buckets per index is at worst 10%). Are there any other health checks that I should be looking at on my indexer?
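As a cross-check, the health framework's own state transitions are logged internally, so the alert's inputs can be inspected directly (a sketch; PeriodicHealthReporter is the component I see logging these on 8.x, and the exact message fields may vary by version):

index=_internal sourcetype=splunkd component=PeriodicHealthReporter
| table _time _raw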
Hello. I need help with a DB data input. In the MariaDB table, the time-related information is split across two fields, both of type varchar.

1) The date field stores today's date, e.g. 2022215.
2) The time field stores the time with leading zeros omitted, e.g. 110203000 (hhmmssSSS). In the case of 00:02:03, it becomes 203000.

As a side note, I know this layout isn't common; I didn't create it. Given the above, to ingest data through a rising column, the query is structured as follows:

select concat(date, lpad(time, '9', '0')) as time from ~~~ where time > 1

If I execute the query on the DB Connect setup screen, the data is returned normally. However, once setup is complete and the query runs on a schedule, the data is not indexed. If the data is derived in the query like this, is DB Connect unable to ingest it?

Thank you.
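For comparison, the rising-column pattern DB Connect expects uses a ? placeholder that it replaces with the stored checkpoint on each run, with the rising-column expression repeated in the WHERE clause (a column alias can't be referenced in WHERE in SQL) and an ascending sort. A sketch, with my_table as a stand-in for the real table name:

SELECT t.*, CONCAT(t.date, LPAD(t.time, 9, '0')) AS event_time
FROM my_table t
WHERE CONCAT(t.date, LPAD(t.time, 9, '0')) > ?
ORDER BY event_time ASC

If the scheduled run indexes nothing while the manual preview works, a stored checkpoint value that never matches the composed string is a plausible culprit, so it may be worth resetting the input's checkpoint after switching to this form.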
To aggregate license usage, I am ingesting FlexLM log files, but some lines are not being ingested, and I would appreciate help identifying the cause.

The intent is to capture each "IN" and "OUT" line of the log shown below as one event per line and count them as the number of licenses in use. However, only 60-70% of the log content is being ingested; part of it is missing. There is no complete regularity to what goes missing, but my impression is that a gap starts at the boundary between the header section and the IN/OUT section, and ingestion resumes around a similar point (a gap begins somewhere, ingestion resumes somewhere, and this repeats). Gaps always begin and end at a line break. Logs for other licenses are ingested correctly with the same input settings, so I suspect the log content itself is the cause. Could you share your thoughts?

◆ Log collection method
The log file on the license server (Linux) is periodically transferred over SSH to a forwarder host (Windows), and the transferred file is monitored. inputs.conf is as below; props.conf is the default.

[monitor://<local path>]
disabled = 0
index = <index>
sourcetype = <sourcetype>
crcSalt = <SOURCE>

◆ Sourcetype settings
Event breaks: every line
Timestamp: automatic
Advanced: CHARSET: UTF-8
NO_BINARY_CHECK: true
SHOULD_LINEMERGE: false
category: custom
disabled: false
pulldown_type: true

=============================== Log content ===============================
15:47:36 (license_name) (@license_name-SLOG@) ===============================================
15:47:36 (license_name) (@license_name-SLOG@) === Last 10 Client Requests Processing Time (in ms) ===
15:47:36 (license_name) (@license_name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license_name) (@license_name-SLOG@) Request processing time, when, #concurrent clients, client info (user, node, FD)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:46:18(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:38:39(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 15:38:39(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000000 ms at 15:38:39(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@)
15:47:36 (license_name) (@license_name-SLOG@) === Top 10 Peak Client Requests Processing Time (in ms) ===
15:47:36 (license_name) (@license_name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license_name) (@license_name-SLOG@) Request processing time, when, #concurrent clients, client info (user, node, FD)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:47:39(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:47:43(4/1/2021),#4,(user_name,host_name,19)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:47:43(4/1/2021),#4,(user_name,host_name,19)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:48:30(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:48:53(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:50:05(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:50:49(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:50:51(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:51:09(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@) 00000001 ms at 11:51:11(4/1/2021),#3,(user_name,host_name,18)
15:47:36 (license_name) (@license_name-SLOG@)
15:47:36 (license_name) (@license_name-SLOG@) === Top 10 Peak In-house Operations time (in ms) ===
15:47:36 (license_name) (@license_name-SLOG@) NOTE: Peak times greater than 4 seconds get recorded.
15:47:36 (license_name) (@license_name-SLOG@) Time: Thu Apr 01 2021 15:47:36 JST
15:47:36 (license_name) (@license_name-SLOG@) In-house operation time, when, #concurrent clients
15:47:36 (license_name) (@license_name-SLOG@)
15:47:36 (license_name) (@license_name-SLOG@) === Active Connections Info ===
15:47:36 (license_name) (@license_name-SLOG@) Peak active connections #15 attempted at Thu Apr 01 2021 13:48:08 JST
15:47:36 (license_name) (@license_name-SLOG@)
15:47:36 (license_name) (@license_name-SLOG@) ===============================================
15:58:21 (license_name) OUT: "feature_name" user_name@host_name
15:58:39 (license_name) IN: "feature_name" user_name@host_name
16:04:28 (license_name) OUT: "feature_name" user_name@host_name
16:04:32 (license_name) IN: "feature_name" user_name@host_name
16:04:57 (license_name) OUT: "feature_name" user_name@host_name
16:06:36 (license_name) IN: "feature_name" user_name@host_name
16:07:39 (license_name) OUT: "feature_name" user_name@host_name
16:10:05 (license_name) OUT: "feature_name" user_name@host_name
16:10:54 (license_name) IN: "feature_name" user_name@host_name
16:11:37 (license_name) OUT: "feature_name" user_name@host_name
16:12:00 (license_name) IN: "feature_name" user_name@host_name
16:18:23 (license_name) IN: "feature_name" user_name@host_name
16:20:27 (license_name) OUT: "feature_name" user_name@host_name
16:20:27 (license_name) IN: "feature_name" user_name@host_name
16:36:18 (license_name) OUT: "feature_name" user_name@host_name

Similar content continues after this (the header section is written at fixed intervals; the IN/OUT lines are written each time a license is checked out or returned).
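To see how the tailing processor treated the file around a gap, the forwarder's internal logs can be checked along these lines (a sketch; the file name is a placeholder):

index=_internal sourcetype=splunkd (component=WatchedFile OR component=TailReader OR component=BatchReader) "<monitored file name>"

WatchedFile entries in particular log the offset at which Splunk decided to begin reading, which is useful when crcSalt and periodic re-transfers of the same file interact.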
I am operating in an environment with a standalone Splunk Enterprise instance running v8.1.3 on RHEL. In my environment I have around 350 Universal Forwarders that have been up and running for some time. I am running SSL on port 9997 between my forwarders and my indexer, and the certs in use are custom.

I recently had a problem with two Universal Forwarders: they are not forwarding any information into Splunk.

In the Splunk GUI, they appear in Forwarder Management (and if I delete their entries, they reappear), which looks good. I have two deployment apps pushed down to these forwarders:

App1 - indexer_config: sets outputs.conf to point at the indexer and defines the clientCert and sslRootCAPath certs.
App2 - Splunk_TA_Windows: configures inputs.conf to monitor some basic Windows event logs (e.g. System, Security, Application).

Both of the troublesome forwarders are on machines in a DMZ and were installed by the same person.

I have looked through the logs on one of the forwarders (see attached PDF). From the logs, it would appear:

- The connection from the Universal Forwarder to the Deployment Server is working well; I can see it phoning home in the logs, and I could also see it downloading the two apps mentioned above.
- The connection from the Universal Forwarder to the Indexer seems to be having issues; it appears to connect to the indexer, but then the indexer forcibly closes the connection for some reason.

I can see the error message "TcpOutputProc - The TCP output processor has paused the data flow. Forwarding to host_dest=<indexer_ip> inside output group default-autolb-group from host_src=<UF_server_hostname> has been blocked", which appears to be relevant.

What I am trying to figure out is whether this is an issue with:
- the config on the Universal Forwarder (possibly the SSL connection being rejected by the indexer?), or
- an issue back at the indexer.
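To rule out the TLS layer, a handshake test can be run from the UF host toward the indexer with openssl (a sketch; the address is a placeholder and the CA path assumes the cert layout pushed by App1):

openssl s_client -connect <indexer_ip>:9997 -CAfile /opt/splunkforwarder/etc/auth/cacert.pem

A completed handshake ends with a verify return code; a connection that is accepted and then immediately closed would point back at the indexer's inputs.conf SSL settings (e.g. requireClientCert, sslRootCAPath) rather than at the forwarder.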
Minimum requirements to run JMX apps
Hi Splunkers, I want to send an alert to Slack when someone logs into an AWS account using the root user. I have the Splunk Add-on for AWS and the Slack add-on already installed. How can I achieve this? Thanks
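A starting point for the detection, assuming CloudTrail data arrives with the add-on's default sourcetype (the index name is a placeholder):

index=aws sourcetype=aws:cloudtrail eventName=ConsoleLogin userIdentity.type=Root
| table _time userIdentity.arn sourceIPAddress awsRegion

Saved as an alert, this can then be wired to the Slack add-on's alert action so that each match posts to a channel.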
Hello, I am trying to join two searches for our AoVPN remote login system that highlight a path from user, machine name, and RAS server to the IP assigned once inside our environment. I have one search that contains nearly all the results I need, with the exception of a user ID and user name (from AD). The addition of user ID and name would be for convenience. I have done some field extractions to pull the desired items out for a better dashboard:

index=windows sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null
| rename host as rasl_server
| table _time, mail, machine, Tunnel_Endpoint, rasl_server, srcip

From here, I join another sourcetype (same index) that contains the user ID and user name. Both sourcetypes have the email addresses, so I am attempting to join with "mail" as the key. This search seems to work fine, but only returns a minimal number of results:

index=windows sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null
| rename host as rasl_server
| table _time, mail, machine, Tunnel_Endpoint, rasl_server, srcip
| join mail
    [| search index=windows sourcetype=ActiveDirectory mail=*
     | rename sAMAccountName as User_ID
     | table _time, User_ID, name, mail]

Both searches, when run individually, return 2000+ results. However, when I run the joined search, I only see twenty or so results. The results are reliable and seem accurate, there are just very few of them. I'm learning as I go, so I may be missing something simple. Thanks for any help.
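Worth knowing: join runs its subsearch under subsearch limits (a results cap and a runtime ceiling), which silently truncates large result sets and commonly produces exactly this "only a handful of rows" symptom. A stats-based combination avoids join entirely (a sketch assembled from the two searches above; field names are assumed to carry over unchanged):

index=windows ((sourcetype=rasl mail=* srcip!=WPDOTRASL0* machine!=null) OR (sourcetype=ActiveDirectory mail=*))
| eval rasl_server=if(sourcetype=="rasl", host, null())
| rename sAMAccountName as User_ID
| stats values(machine) as machine, values(Tunnel_Endpoint) as Tunnel_Endpoint, values(rasl_server) as rasl_server, values(srcip) as srcip, values(User_ID) as User_ID, values(name) as name by mail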
Why do I keep seeing this type of message in splunkd.log?

WARN ProcessTracker - executable=splunk-optimize failed to start reason='': Operation not permitted

Note: I have already put the ProcessTracker component in DEBUG logging and am still not able to gain any insight into what is causing the messages.
I have two questions for anyone who might be using this add-on.

1) We have a variety of Kusto queries, and by and large they have been OK. In one case, though, we have one that seems to activate only every so often, when it feels like it. All the rest are fine, and all of them, including this bad one, use the same interval. You can see they are all the same, as the interval is a column when looking over the inputs area. I'm wondering if anyone has ever encountered this behavior and, if so, whether anything fixed it. It's funny, too, because I was somewhat concerned about us hitting some limit on Log Analytics or some such, but I have set up some more inputs and they are essentially fine.

2) Our queries are set up to grab the last minute every minute. I notice we seem to miss some data, and my theory is that I need to overlap the windows to deal with latency in the query itself. The absolute worst query I saw took into the teens of seconds to complete, so I was thinking of grabbing the last 80 seconds every 60 seconds. I'm a little concerned, though, about how Splunk might handle that. Will it recognize input that is exactly the same as received before and deduplicate, or will we be pushing a bunch of duplicates?
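For what it's worth on question 2: Splunk does not deduplicate at index time, so identical events pulled twice by overlapping windows would be indexed twice. A common search-time mitigation is to dedup on a stable key (a sketch; TimeGenerated plus the raw event is an assumed key, adjust to whatever uniquely identifies your records):

... | dedup TimeGenerated, _raw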
I tried this:

... myField IN ( [search ... | table myField] )

where the values passed to the IN operator would be calculated dynamically by another search. But that returns:

Unable to parse the search: Right hand side of IN must be a collection of literals. '((myField = "123") OR (myField = "1234")

How can I do this?
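The usual workaround: a subsearch placed directly inside search (rather than inside IN) expands into an OR of field=value terms built from the fields it returns. A sketch, with both base searches as placeholders:

index=main sourcetype=foo
    [ search index=main sourcetype=bar
      | dedup myField
      | fields myField ]
| ...

The bracketed subsearch expands to ((myField="123") OR (myField="1234")), which is exactly the filter the IN version was trying to construct.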
I have the following query:

... | chart list(time) by request actor

which returns the times for each actor, such as:

actor1   actor2
10       20
10

How can I add a new column with the averages for each one?
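One option is to compute the average alongside the list in the same chart; with a split-by field, each aggregation gets its own column per actor (a sketch; I'm assuming the field is literally named time):

... | chart list(time) as times avg(time) as avg_time by request actor

The output columns then come out as "times: actor1", "avg_time: actor1", and so on for each actor value.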
I'm trying to combine 2 CSV files, where:
- test-ad has the fields "EmailAddress" and "Enabled";
- "Composição das tribos - Relação funcionários.csv" has the fields "email" and "rh_status".

I can't combine the results:

index="teste-ad" OR (index="tst-compliance" AND source="Composição das tribos - Relação funcionários.csv")
| eval em=coalesce(EmailAddress,email)
| table em Enabled rh_status
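table keeps one row per event, so the AD rows and the CSV rows never merge into a single line. Aggregating on the coalesced key collapses them (a sketch; the lower() is an assumption, to guard against case differences between the two sources):

index="teste-ad" OR (index="tst-compliance" AND source="Composição das tribos - Relação funcionários.csv")
| eval em=lower(coalesce(EmailAddress, email))
| stats values(Enabled) as Enabled, values(rh_status) as rh_status by em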
Hello everyone, hope you are all well this afternoon. I am trying to combine 2 searches, where the outer search passes a value to the inner search and then appends the results. Let me explain:

As of right now, I am searching a set of logs that happens to include people's names and their request type when they call the bank. The one I am focused on is "withdraw inquiry." So we get a list of all people who try to withdraw money based on the following base search:

index=myIndex sourcetype=mySource request_type="withdraw inquiry"
| xmlkv DetailXML
| stats count, values(phone_number), values(activity_summary), values(request_type), values(email) by acct_num name_last name_first
| where count > 1
| sort - count

I have made this into a dashboard and subsequently added a drilldown. You click the panel, and it then searches a lookup table called Previously_Compromised_Accounts.csv. That search is this:

| inputlookup Previously_Compromised_Accounts.csv
| search name=*$clickValue$*
| table date user

How can I combine these searches? Basically, I would like to add another column, "compromisedUser", to the base search. When the base search runs, a secondary search would be performed using the value of "name_last" against the lookup table, appending the results to the base search as "compromisedUser". If the search of Previously_Compromised_Accounts.csv returns no results, that 8th column can remain blank.

accountNumber  name_first  name_last  count  values(phone_number)  values(activity_summary)  values(email)          compromisedUser
123456678      Smith       John       3      1235550987            withdraw inquiry          JohnSmith@company.com  26DEC2021 jsmith001
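The lookup command can append the column without a subsearch, matching each result row against the CSV (a sketch; it assumes the CSV is visible as a lookup file and that its name column holds exact last names; lookup does exact matching, so if it holds full names the key needs adjusting):

index=myIndex sourcetype=mySource request_type="withdraw inquiry"
| xmlkv DetailXML
| stats count, values(phone_number), values(activity_summary), values(request_type), values(email) by acct_num name_last name_first
| where count > 1
| lookup Previously_Compromised_Accounts.csv name as name_last OUTPUT user as compromisedUser, date as compromisedDate
| sort - count

Rows with no match in the CSV simply come back with compromisedUser blank, which is the behavior described above.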
Hi team, I have the following table of results:

ID   processing time  actor
123  20               actor1
123  30               actor2
123  40               actor3

And I'd like to combine them into a single row, like:

ID   actor1  actor2  actor3
123  20      30      40

The list of actors is not known in advance. Is this possible? Thank you
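This is a pivot, which chart (or xyseries) handles without knowing the actor names in advance (a sketch; processing_time stands in for however the field is actually named):

... | chart values(processing_time) over ID by actor

or, equivalently, on an already-tabled result:

... | xyseries ID actor processing_time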
Hello there, I am attempting to install Splunk forwarder version 8.2.3-cd0848707637 on a RHEL 7.8 server using an Ansible script. Here is the Ansible script:

- hosts: all
  vars_prompt:
    - name: splunk_idxcluster
      prompt: What is the indexer password for symmetric key?
    - name: splunk_uf_admin
      prompt: What is the admin password?
  vars:
    deploymentserver: ***************************
    clustermaster: ****************************
  tasks:
    - name: Install Splunk UF package
      become: yes
      become_method: dzdo
      yum:
        name: splunkforwarder
        state: latest
      register: splunkpackage

    - name: Check if ftr file exists
      stat:
        path: /opt/splunkforwarder/ansible_splunk.ftr
      register: splunkftr

    - name: Set ACL to allow splunk user access to log files
      become: yes
      become_method: dzdo
      acl:
        path: "/opt/splunkforwarder"
        recursive: yes
        permissions: rwx
        entity: splunk
        etype: group
        state: present

    - name: Create user-seed file
      become: yes
      become_method: dzdo
      copy:
        dest: "/opt/splunkforwarder/etc/system/local/user-seed.conf"
        owner: splunk
        group: splunk
        content: |
          [user_info]
          USERNAME = admin
          PASSWORD = {{ splunk_uf_admin }}
      register: seedfile
      when: splunkftr.stat.exists == False

    - name: First run to accept license
      become_user: splunk
      become_method: dzdo
      command: /opt/splunkforwarder/bin/splunk start --accept-license --no-prompt
      register: splunklicense
      when: splunkftr.stat.exists == False

    - name: Enable UF to start at boot as user splunk
      shell: /opt/splunkforwarder/bin/splunk enable boot-start -user splunk
      register: splunkbootstart
      when: splunkftr.stat.exists == False

    - name: Build outputs.conf to check-in with indexer cluster master
      copy:
        dest: "/opt/splunkforwarder/etc/system/local/outputs.conf"
        owner: splunk
        group: splunk
        content: |
          [tcpout]
          defaultGroup = cluster

          [tcpout:cluster]
          indexerDiscovery = cluster
          clientCert = /opt/splunkforwarder/etc/auth/server.pem
          sslPassword = password
          sslRootCAPath = /opt/splunkforwarder/etc/auth/cacert.pem
          sslVerifyServerCert = false
          useACK = true

          [indexer_discovery:cluster]
          pass4SymmKey = {{ splunk_idxcluster }}
          master_uri = {{ clustermaster }}
      when: splunkftr.stat.exists == False

    - name: Configure UF to check-in with the deployment server
      shell: "sudo -u splunk /opt/splunkforwarder/bin/splunk set deploy-poll {{ deploymentserver }} -auth admin:{{ splunk_uf_admin }}"
      when: splunkftr.stat.exists == False

    - name: Create Splunk ftr file to convey Splunk has already been setup before
      file:
        path: "/opt/splunkforwarder/ansible_splunk.ftr"
        state: touch
      when: splunklicense is succeeded and splunkbootstart is succeeded

    - name: Set ACL to allow splunk user access to log files
      acl:
        path: "/u/log"
        recursive: yes
        permissions: rwx
        entity: splunk
        etype: group
        state: present

    - name: Add /u/log to watched directories
      shell: /opt/splunkforwarder/bin/splunk add monitor /u/log -index applogs -sourcetype %elasticsearch%
      ignore_errors: yes

When I get to the "First run to accept license" step, I get the error "Please run 'splunk ftr' as boot-start user". I have tried running "splunk ftr" as splunk, but then I get the same error, so I'm not sure it's doing anything. Any help would be greatly appreciated!
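One detail that stands out (an observation about Ansible semantics, not a confirmed fix): become_user only takes effect when become: yes is also set, so as written the "First run to accept license" task likely executes as the connecting user rather than as splunk, which would leave the first-time-run state owned by the wrong user. A sketch of that task with privilege escalation actually enabled:

    - name: First run to accept license
      become: yes
      become_user: splunk
      become_method: dzdo
      command: /opt/splunkforwarder/bin/splunk start --accept-license --no-prompt
      register: splunklicense
      when: splunkftr.stat.exists == False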
Hi all, I have the below table:

service name  Status
/app/test1    Success: 90%  Warning: 8%  Failure: 2%
/app/test2    Success: 80%  Warning: 2%  Failure: 18%

I want to color the cells based on the success values: above 90% green, 80 to 90% amber, etc.
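In a Simple XML dashboard, table cell coloring is driven by a format element inside the table element. Because the color logic needs a number to compare against, this sketch assumes the success percentage has first been extracted into its own numeric field, success_pct (a hypothetical field name):

<format type="color" field="success_pct">
  <colorPalette type="expression">case(value &gt;= 90, "#53A051", value &gt;= 80, "#F8BE34", 1==1, "#DC4E41")</colorPalette>
</format>

The hex values are Splunk's stock green, yellow, and red; the expression follows the eval-style syntax that colorPalette type="expression" accepts.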
Hello, I want to monitor whether a user ran a new search in our environment or created a new alert. I tried to use this query:

|rest /services/saved/searches
| search action.email.to=* OR action.email.to=*
| where disabled=0
| table title, search, updated

The problem is that there is no time field with which to compare the 'updated' value, so I can't tell whether something is new. Is there any other way to check for new entries?
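Two sketches that may help. First, the updated string from the REST endpoint can be parsed into epoch time and compared (the strptime mask is an assumption; adjust it to the exact format your endpoint returns):

|rest /services/saved/searches
| where disabled=0
| eval updated_epoch = strptime(updated, "%Y-%m-%dT%H:%M:%S%z")
| where updated_epoch > relative_time(now(), "-24h")
| table title, search, updated

Second, for searches that users actually run (as opposed to saved-search definitions), the audit index records each one with its search string:

index=_audit action=search info=granted
| table _time user search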
Hi. I'm trying to add a new input with the Tenable add-on: https://splunkbase.splunk.com/app/4060/ When adding a new input, I can enter a "start time" from which the add-on will start collecting data from Tenable (as opposed to "all time", I suppose), but no matter how I format my timestamp, the add-on won't accept it. See screenshot. I've tried all sorts of variations, but all fail. I've also looked at the documentation (https://docs.tenable.com/integrations/Splunk/Content/Splunk2/CreateInput.htm), which suggests a different time format than the add-on itself (probably not updated), but that isn't working either. Has anyone gotten this to work, and if so, what is the correct way of formatting the timestamp?
Hello all, I have a Japanese-language Windows server from which I am testing pushing data to a Tier 1 index. However, although the language setting on the server is Japanese, all the data is pushed as English. Below are the inputs and props files configured on the UF on the Windows server. Please let me know how I can do this.

inputs.conf:

###### OS Logs ######
[WinEventLog://Application]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 0-2000,2001-10000
index = acn_infra360-wineventlog_default_tier1_idx
_TCP_ROUTING = winevent_dev1
renderXml = false

[WinEventLog://Security]
disabled = 0
start_from = oldest
current_only = 0
checkpointInterval = 5
# only index events with these event IDs.
whitelist = 0-2000,2001-10000
index = acn_infra360-wineventlog_default_tier1_idx
_TCP_ROUTING = winevent_dev1
renderXml = false

props.conf:

[WinEventLog://Application]
description = Windows Event Monitoring
CHARSET = SHIFT-JIS
BREAK_ONLY_BEFORE = \d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}
TIME_FORMAT = %m-%d-%Y %T
sourcetype = WinEventLog:Application

[WinEventLog://Security]
description = Windows Event Monitoring
CHARSET = SHIFT-JIS
BREAK_ONLY_BEFORE = \d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}
TIME_FORMAT = %m-%d-%Y %T
sourcetype = WinEventLog:Security

Also attaching a screenshot of Event Viewer from the server.
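One structural point worth flagging (an observation about props.conf conventions, not a confirmed fix for the language issue): props.conf stanzas match on sourcetype, source::, or host::, not on input stanza names, so [WinEventLog://Application] in props.conf will not match events whose sourcetype is WinEventLog:Application. Keyed on the sourcetype, the stanza would look like:

[WinEventLog:Application]
description = Windows Event Monitoring
CHARSET = SHIFT-JIS
TIME_FORMAT = %m-%d-%Y %T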