All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


The events contain a date/time, but at search time I want to use the date/time at which the data was ingested into Splunk. My understanding is that Splunk assigns event timestamps in the following order:

① If the event contains date/time information: when TIME_FORMAT is specified in props.conf, Splunk looks for the time/date in the event using that explicit TIME_FORMAT; when no TIME_FORMAT is set for the incoming data, Splunk tries to recognize a timestamp from the event itself.
② If the event has no time and date: Splunk tries to use the most recent timestamp seen from the same source.
③ If no event in the source carries date information: Splunk tries to extract date information from the source name or file name.
④ If the file name has no date information: Splunk tries to use the file's last-modified time as the timestamp.
⑤ If steps ①–④ still fail to produce a timestamp: Splunk uses the Splunk server's system clock as the timestamp (ingest time = event timestamp).

Even though the events do contain a date/time, is it possible to force the event timestamp to the Splunk server's system clock, as in ⑤ above?
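The usual way to do this is to skip timestamp extraction entirely for that sourcetype in props.conf. A minimal sketch, assuming a hypothetical sourcetype name my_sourcetype:

```ini
# props.conf on the indexer or heavy forwarder
[my_sourcetype]
# CURRENT tells Splunk to ignore any date/time in the event body
# and stamp each event with the system clock at index time instead
DATETIME_CONFIG = CURRENT
```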
I'm adding ~2k rows to a KV store collection with 14 fields and ~2 million rows. The outputlookup command takes nearly 2 hours. The query is like this:

index=myindex earliest=-1d
| fields id,a,b,c,d,e,f,g,h,i,j,k,l,m
| inputlookup append=true kvtable
| dedup id
| outputlookup kvtable

The initial part of the query typically returns a couple thousand rows. From the job inspector:

duration   component             invocations  input count  output count
0.00       command.addinfo       6            2,138        2,138
2.50       command.dedup         49           2,216,938    2,214,898
0.00       command.fields        10           4,276        4,276
17.30      command.inputlookup   1            150,538      2,450,000
6,338.78   command.outputlookup  1            2,414,800    2,414,800

Is this normal? If not, can you suggest some troubleshooting steps?
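Note that the slow step here is not the ~2k new rows: the query reads the entire ~2.4M-row collection back in and then rewrites all of it. One alternative worth trying, assuming id is unique per record, is to upsert only the new rows using outputlookup's key_field option, which updates or inserts records by key instead of replacing the whole collection. A sketch:

```
index=myindex earliest=-1d
| fields id,a,b,c,d,e,f,g,h,i,j,k,l,m
| dedup id
| outputlookup append=true key_field=id kvtable
```

With this form the inputlookup of the full table and the 2.2M-row dedup disappear from the job entirely; only the couple thousand fresh rows are written.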
How can I change the first day of the week in Splunk? For example, I want to create a report that shows this week's errors, but the first day of my week is Saturday, not Sunday. Any recommendation? Thanks.
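One approach that avoids changing any global setting is to snap the search window to Saturday with a relative time modifier: @w0 snaps to Sunday, @w1 to Monday, and so on, so @w6 snaps to the most recent Saturday. A sketch (index and search terms are placeholders):

```
index=my_index error earliest=@w6 latest=now
```

Scheduled reports can use the same earliest=@w6 in their time range, so "this week" always starts on Saturday regardless of the UI's week display.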
Hi All, I am trying to ingest syslog data into Splunk for a test POC. To ingest the syslog data, I followed the steps below:

1) Created an Ubuntu 18.04 instance in Google Cloud and opened firewall port 514.
2) Installed syslog-ng on Ubuntu: $ sudo apt-get install syslog-ng -y
3) Verified the installed version of syslog-ng:
root@syslog-ng:~# syslog-ng --version
syslog-ng 3 (3.13.2)
Config version: 3.13
Installer-Version: 3.13.2
4) Created a network input (UDP) in the Splunk console: Splunk > Settings > Data inputs > UDP > Add New
5) Configured the following in syslog-ng.conf:
source auth_log { file("/var/log/secure"); };
destination splunk-demo { network("35.247.51.122" transport("udp")); }; --> splunk-demo IP address
log { source(auth_log); destination(splunk-demo); };
6) Restarted the syslog-ng service to apply the changes:
root@syslog-ng:/etc/syslog-ng# systemctl restart syslog-ng
7) I am unable to see the syslog-ng data being ingested into the splunk-demo instance (the indexer server) under index=syslog-data.

Details:
Splunk Enterprise 8.0.2 Trial Version
syslog version --> syslog-ng 3 (3.13.2)
Splunk indexer IP address --> 35.247.51.122
Syslog server IP address --> 35.225.250.86
Firewall port --> 514 is opened for communication

Kindly guide me on how to troubleshoot this issue and where I can see error details on why the data is not getting ingested. Thanks in advance.
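Two things stand out in the config above. First, /var/log/secure is the RHEL/CentOS auth-log path; on Ubuntu the equivalent file is /var/log/auth.log, so the source may be reading a file that does not exist. Second, stating the destination port explicitly removes any doubt that syslog-ng is sending to the same port the Splunk UDP input listens on. A hedged sketch of the corrected stanzas:

```
# syslog-ng.conf -- Ubuntu keeps auth logs in /var/log/auth.log
# (/var/log/secure is the RHEL/CentOS path); the port is stated
# explicitly so it matches the Splunk UDP input on 514
source auth_log { file("/var/log/auth.log"); };
destination splunk-demo { network("35.247.51.122" transport("udp") port(514)); };
log { source(auth_log); destination(splunk-demo); };
```

To verify end to end, generate an auth-facility test event on the syslog server with `logger -p auth.info "syslog-ng test"` and confirm packets arrive on the indexer with `sudo tcpdump -ni any udp port 514`.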
I have this query which, when I run:

index=*aws_config* resourceType=TERM("AWS::EC2::Volume") | search ARN="arn:aws:ec2:eu-west-1:848889366260:volume/vol-0ecf419c9cd71857c" | table ARN, "tags.Genie.ArchPath" | dedup ARN

gives the following results. Notice that the value of the field "tags.Genie.ArchPath" is blank. This is what I expect:

+-----------------------------------------------------------------+---------------------+
| ARN                                                             | tags.Genie.ArchPath |
+-----------------------------------------------------------------+---------------------+
| arn:aws:ec2:eu-west-1:848889366260:volume/vol-0ecf419c9cd71857c |                     |
+-----------------------------------------------------------------+---------------------+

However, when I change the query so that dedup is called earlier:

index=*aws_config* resourceType=TERM("AWS::EC2::Volume") | dedup ARN | table ARN, "tags.Genie.ArchPath" | search ARN="arn:aws:ec2:eu-west-1:848889366260:volume/vol-0ecf419c9cd71857c"

I get strange results, shown below. Now the value of the field "tags.Genie.ArchPath" is not blank. Strangely, it is a pipe (|) separated concatenation of source, host and sourcetype:

+----------------------------------+--------------------------------------------------+
| ARN                              | tags.Adobe.ArchPath                              |
+----------------------------------+--------------------------------------------------+
| arn:aws:ec2:eu-west-1:8488893662 |                                                  |
| 60:volume/vol-0ecf419c9cd71857c  | source::mavl://adobe-mavlink-prod-confi          |
|                                  | g/AWSLogs/848889366260/Config/eu-west-           |
|                                  | 1/2020/4/26/ConfigSnapshot/84888936626           |
|                                  | 0_Config_eu-west-1_ConfigSnapshot_2020           |
|                                  | 0426T110637Z_6375f945-8932-4196-ab9f-27          |
|                                  | 1c3333c55a.json.gz|host::840136feca32|aws:config |
+----------------------------------+--------------------------------------------------+

I fail to understand why this is happening. Ideally both queries should give the same results. Would really appreciate it if someone could help here. Thanks, Ashish
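One thing worth checking: dedup keeps whichever event for each ARN it encounters first, along with all of that event's fields. When dedup runs before the ARN filter, it may keep a different underlying event for this ARN than the filtered query does, so the two pipelines can legitimately show different field values. Filtering first and deduping afterward makes the comparison consistent. A sketch of the second query rearranged that way:

```
index=*aws_config* resourceType=TERM("AWS::EC2::Volume")
| search ARN="arn:aws:ec2:eu-west-1:848889366260:volume/vol-0ecf419c9cd71857c"
| dedup ARN
| table ARN, "tags.Genie.ArchPath"
```

If this version matches the first query's output, the discrepancy is event selection by dedup rather than field corruption.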
My environment is single-site with 3 indexers, each with parallelIngestionPipelines = 1 (under /opt/splunk/etc/system/default/server.conf). Does the above imply that I have a TOTAL of three ingestion pipelines into my environment (one from each indexer)? If yes, with the setting maxHotBuckets = 10, would that mean a total limit of 30 hot buckets, with 10 belonging to each ingestion pipeline?
Hello, I want to write an IF statement as part of my query and want it to apply to a time frame of 30 days or more. The query itself runs over a time frame of "all time", but the results I want are from 30 days ago or older. How can I do it? Thanks.
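One way to keep the search running over all time but restrict the results to events 30 days old or older is to compare _time against relative_time() in a where clause. A sketch (the base search is a placeholder):

```
index=my_index
| where _time <= relative_time(now(), "-30d@d")
```

The same expression works inside an if() if you need a flag field instead of a filter, e.g. | eval is_old = if(_time <= relative_time(now(), "-30d@d"), 1, 0).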
Is there any demo video on Splunk ITSI (Service Analyzer, Deep Dives and Glass Tables) creation/setup?
Hi, In my dashboard I update different panels from a dropdown list. The name of my dropdown token is "tok_filtersite". As you can see in the screenshot, I use a parameter in the drilldown editor in order to retrieve the field "Site" in my drilldown:

<drilldown>
<link target="_blank">/app/SA_XXXXX_sh/usage_m?Site=$tok_filtersite|s$</link>
</drilldown>

It works perfectly, but the issue comes when I use this parameter in the same dashboard with different drilldowns. Instead of having Site=$tok_filtersite|s$, I have Site=$tok_filtersite%7Cs$. Why? Is it not possible to use the same token many times in the same dashboard? If it is, how do I correct this, please?
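Note that %7C is simply the URL-encoded form of the | character, so both links may actually carry the same value once the target dashboard decodes them. If the |s filter itself is being mangled, one workaround is to compute the quoted value once when the dropdown changes and reuse that helper token in every drilldown. A Simple XML sketch (tok_filtersite_q is a hypothetical helper token):

```xml
<input type="dropdown" token="tok_filtersite">
  <!-- ... existing label/search/choices ... -->
  <change>
    <!-- quote the selected value once, instead of applying |s in each link -->
    <eval token="tok_filtersite_q">"\"" . $value$ . "\""</eval>
  </change>
</input>

<drilldown>
  <link target="_blank">/app/SA_XXXXX_sh/usage_m?Site=$tok_filtersite_q$</link>
</drilldown>
```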
Trying to add a simple URL table input using a td as separator. The output looks promising in preview; however, in the last step, when I try to save the input, it fails with the error "The input could not be saved: An error occurred". The website is HTTP and doesn't use any auth, and there's also no proxy in the way.
Hi Splunkers, My external lookup is working just fine and the results are proper. As mentioned in the screenshot below, my_test_lookup.py is available at /etc/apps/[my_app]/bin/my_test_lookup.py. I don't have any issue with the external lookup python script. Is there any way to send a custom message from the python script to the Splunk GUI whenever the matched event count reaches a custom limit (1000)? This limit does not come from the Splunk configs; it is defined in my external lookup python script:

r = csv.DictReader(infile)
header = r.fieldnames
w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
w.writeheader()
event_count = 0
search_limit = 1000
for result in r:
    if result[group_field]:
        result[field1] = "test1"
        result[field2] = "testfiedl2"
        w.writerow(result)
        event_count += 1
    if event_count == search_limit:
        reach_limit_msg = "Reached Limit %d" % event_count
        send_message(reach_limit_msg)
        print("Reached Limit %d" % event_count)
        logger.warning("Reached Limit %d" % event_count)
        sys.exit(0)

It may be a kind of popup saying "your search limit of 1000 was exceeded", or something like the screenshot mentioned below. FYI: I have already tried a bulletin message by referencing this URL. It worked like a charm, but it sends a bulletin message to all logged-in users, whereas I would like to send a message only to the person who fired the SPL query. Please help. Thanks in advance.
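An external lookup can only talk back to the search that invoked it through the CSV it returns, so one workaround (a sketch, not the official messaging mechanism) is to emit a synthetic result row carrying the note when the limit is hit; it then appears in that user's search results only. The helper name and CSV layout below are illustrative:

```python
import csv
import io

SEARCH_LIMIT = 1000  # the custom limit from the question


def run_lookup(infile, outfile, limit=SEARCH_LIMIT):
    """Copy lookup rows through and, on hitting the limit, write one
    synthetic row whose first field carries the message, then stop.
    The note surfaces as a result row for the searching user only."""
    r = csv.DictReader(infile)
    w = csv.DictWriter(outfile, fieldnames=r.fieldnames)
    w.writeheader()
    count = 0
    for row in r:
        w.writerow(row)
        count += 1
        if count >= limit:
            note = {f: "" for f in r.fieldnames}
            note[r.fieldnames[0]] = "Reached limit %d" % limit
            w.writerow(note)
            break
```

In a real lookup script, infile/outfile would be sys.stdin and sys.stdout as in the snippet above; a dashboard or alert could then key off the note value.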
Hi Folks, I have been working on creating a dropdown where 1Q should represent Jan, Feb, March data; 2Q = April, May, June data; etc. Also, when I select 1Q in my dropdown, it should display Jan, Feb, March either in the same panel or in different panels. Please guide me on how I can achieve this. Thanks in advance.
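One common pattern is a dropdown with static quarter choices whose change handler sets earliest/latest tokens, which the panel search then consumes. A Simple XML sketch, with a hard-coded example year (q_earliest/q_latest are hypothetical token names; 3Q/4Q follow the same pattern):

```xml
<input type="dropdown" token="quarter">
  <label>Quarter</label>
  <choice value="1">1Q</choice>
  <choice value="2">2Q</choice>
  <change>
    <condition value="1">
      <set token="q_earliest">01/01/2020:00:00:00</set>
      <set token="q_latest">04/01/2020:00:00:00</set>
    </condition>
    <condition value="2">
      <set token="q_earliest">04/01/2020:00:00:00</set>
      <set token="q_latest">07/01/2020:00:00:00</set>
    </condition>
  </change>
</input>
```

The panel's search would then use earliest=$q_earliest$ latest=$q_latest$, and a timechart with span=1mon splits the quarter into its three months.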
Initially, there was no license rule division by application or team. There are many applications in a Controller, but now it is very difficult to count how many licenses each application consumes, especially when there are .NET applications. So I want to know: does AppDynamics have a method to count the number of licenses by application?
Thanks everyone, you have helped me a lot these last few days as I binge-learn Splunk. This question, I'm pretty sure, is an easy one; I'm just missing something obvious. I grab JSON values that form a multivalue field, so usage_info.voltage contains value1,value2. I'm trying to add value1 and value2 to a dashboard as two different single value displays. How can I modify my search to show one value or the other?

Raw JSON data: "voltage":[121.77615356445312,121.19967651367188]
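mvindex() picks one element of a multivalue field by position (0-based). A sketch, assuming the field really is named usage_info.voltage (the single quotes are required for field names containing dots):

```
... your base search ...
| eval volt1 = mvindex('usage_info.voltage', 0),
       volt2 = mvindex('usage_info.voltage', 1)
```

Each single value panel can then point at volt1 or volt2, or you can run the same base search twice, once per panel, appending | table volt1 or | table volt2.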
Hi, I have logs that look like the below, and I want to define where a transaction begins and where it finishes. For example, transaction ID 654321 begins at "654321 start" and finishes at "654321 Message Received".

Expected output 1 (overall report):
number of successful transactions: 1 (654321)
number of failed transactions (start without finish): 1 (123456)

Expected output 2 (calculate transaction delay):
654321 2 (m)

2020-01-19 13:20:15,093 INFO ABC.InIT-AppName-123456 [Performance] start[123456789] time[34] tag[ok]
2020-01-19 13:20:15,093 INFO ABC.InIT-AppName-123456 [Processor] AdministrationProcessor Accomplished: A[xxx] B[yyy] C[1A0000] DE[00] F[0000] GH[ABC.OutIT-AppName] Status[PERSIST-LOGOUT,BACKWARD]
2020-01-19 13:20:15,099 INFO ABC.InIT-AppName-123456 [ProcessorService] Message Processed: M[xxx] T[yyy] C[1A0000] DE[00] F[0000] GH[ABC.OutIT-AppName] Status[EXIST-LOGOUT,BACKWARD]
2020-01-19 13:20:15,099 INFO ABC.InIT-AppName-123456 [Performance] start[123456789] time[313] tag[process]
2020-01-19 13:20:15,110 INFO ABC.InIT-AppName-123456 [Manager] Send Message [123456789A123456789*] to [ABC.app.AppName]
2020-01-19 13:20:00,114 INFO ABC.InIT-AppName-654321 [Performance] start[987654321] time[7] tag[send]
2020-01-19 13:20:08,181 INFO ABC.InIT-AppName-654321 [Listener] Receive Message[987654321B123456789*] from [ABC.AppName.app]
2020-01-19 13:22:00,185 INFO ABC.InIT-AppName-654321 [ProcessorService] Normal Message Received: A[111] B[222] NM[123456789] 

Any recommendation? Thanks.
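A stats-based sketch of both reports, under stated assumptions: the transaction ID can be extracted from the "AppName-<digits>" token, the transaction start is taken as the earliest event for that ID, and "Message Received" marks the finish (index name and regex are placeholders):

```
index=app_logs
| rex "AppName-(?<tid>\d+)\b"
| eval is_end = if(like(_raw, "%Message Received%"), 1, 0)
| stats min(_time) as start_time,
        max(eval(if(is_end=1, _time, null()))) as end_time by tid
| eval status = if(isnull(end_time), "fail", "success"),
       delay_min = round((end_time - start_time) / 60, 0)
```

For the sample above this should mark 654321 as success with a delay of about 2 minutes (13:20:00 to 13:22:00) and 123456 as fail; appending | stats count by status gives the overall report.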
Hi, I have an issue with the transaction command. It works fine most of the time, but sometimes the endswith pattern appears before the startswith pattern in the log, so the transaction command fails to group those events into a transaction. For example, it works fine for the below log:

XXXXXXXXXend
XXXXXXXXXstart

Occasionally the data reverses like below, and the transaction command doesn't find it:

XXXXXXXXXstart
XXXXXXXXXend

Is there a workaround for this?
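Since transaction is sensitive to event order, one order-independent alternative is stats keyed on whatever field correlates the start/end pair. A sketch, assuming the correlating id immediately precedes the start/end literal (as in XXXXXXXXXstart); both the regex and the id shape are assumptions:

```
... your base search ...
| rex "(?<id>\w+)(?<marker>start|end)$"
| stats min(eval(if(marker="start", _time, null()))) as t_start,
        max(eval(if(marker="end",   _time, null()))) as t_end by id
| eval duration = t_end - t_start
```

Because stats aggregates over all matching events regardless of their order in the search results, reversed start/end pairs still produce one row per id.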
We pass messages with rsyslog using the RFC 3339 time format. It has microseconds, and it has a time zone. But I noticed a few issues:

- The time zone is not parsed out of the message. If I remove the microseconds from the timestamp, it works fine.
- The host does not get parsed out. This seems to be a problem with the syslog-host transform, which does not like the time zone. Dropping the time zone fixes this problem, but I would rather keep it.

What would be the best way to proceed? Modify the syslog source type? Create a new source type? Report the issue and hope for a fix?
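For the timestamp half of the problem, an explicit TIME_FORMAT covering fractional seconds and the numeric offset usually helps; Splunk's strptime extensions include %6N for microseconds and %:z for a +hh:mm offset. A sketch using a new sourcetype rather than editing the built-in syslog one (the sourcetype name is a placeholder):

```ini
# props.conf
[rsyslog_rfc3339]
# matches e.g. 2020-04-26T11:06:37.123456+02:00
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
MAX_TIMESTAMP_LOOKAHEAD = 35
```

The syslog-host transform would still need its own adjustment for the host extraction, but a dedicated sourcetype gives you a safe place to override that regex too.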
Hi, I am trying to spin up a UF in Docker with the following:

docker run -d --name uf -e "SPLUNK_DEPLOYMENT_SERVER=xxxxxxxx" -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=XXXXXXX" -e "SPLUNK_ENABLE_LISTEN=9997" -e "SPLUNK_ADD=tcp 1514" splunk/universalforwarder

It starts just fine and pulls apps from the deployment server, which, among other things, disable the management port. But if I restart the container, it fails with:

TASK [splunk_common : Wait for splunkd management port] ************************
fatal: [localhost]: FAILED! => {
    "changed": false,
    "elapsed": 300
}
MSG:
Timeout when waiting for 127.0.0.1:8089

Any solution to get around that? Thanks! Sadek
I am using the below query, where my A component (0012ABC) is alphanumeric and B is a string (ab), but it is treating A as numeric only:

base query | eval details = if(A>1 AND B>1, A, "Alone") | dedup details | table details

What I want: if my result set contains a duplicate occurrence of the combination of A and B, then show me that value of A (duplicate values only). E.g.

Column A    Column B
0001AB      ab
0002AB      pq
0001AB      ab

Expected result set:
details
0001AB
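Note that A>1 on an alphanumeric value like 0012ABC is a lexicographic string comparison in eval, which is why the if() misbehaves. The duplicate requirement can be expressed without numeric comparison at all, by counting occurrences of each A/B pair. A sketch:

```
base query
| eventstats count as pair_count by A, B
| where pair_count > 1
| dedup A
| table A
| rename A as details
```

eventstats attaches the per-pair count to every row, so the where clause keeps only rows whose A/B combination occurs more than once, and dedup collapses them to one row per duplicate value.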
Environment: heavy forwarder -> indexer cluster -> SH

On the HWF side: I fetch logs using a curl command into directory DIR-A, where the following files are created. These files are downloaded every day at 10:00 am, and before that a script cleans up all the old files from both DIR-A and DIR-B:

A1.json
B1.json
C1.json
D1.json

These files have a header and footer which need to be removed before they are indexed as JSON, so another script, scheduled to run 10 minutes after the files are downloaded into DIR-A, removes the header and footer and copies them to a new directory, DIR-B, as:

A2.json
B2.json
C2.json
D2.json

Up to here everything works fine. The issue is that I see only 3 of the 4 files indexed in Splunk, or sometimes 2 of 4. I don't see any error in the internal logs for the files that are not indexed.

Here is my inputs.conf:

[monitor:///home/DIR-B/A2.json]
index = test
crcSalt = 
sourcetype = test1
disabled = false

[monitor:///home/DIR-B/B2.json]
index = test
crcSalt = 
sourcetype = test2
disabled = false

[monitor:///home/DIR-B/C2.json]
index = test
crcSalt = 
sourcetype = test3
disabled = false

[monitor:///home/DIR-B/D2.json]
index = test
crcSalt = 
sourcetype = test4
disabled = false

props.conf (the same for all of sourcetypes test1, test2, test3, test4):

DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = json
KV_MODE = false
AUTO_KV_JSON = false
NO_BINARY_CHECK = true
category = Structured
disabled = false
pulldown_type = true

On the SH side, props.conf for sourcetypes test1, test2, test3, test4:

KV_MODE = false
AUTO_KV_JSON = false

The strange part is that if I edit a file which was not indexed, add something like #test at the beginning, and restart Splunk, it gets indexed fine. Here is the pattern of the files having the issue:

[
{"AAA":"ZZZZ-000","lastSeen":XXXX,"hash":"XXXXXXXXXXXX"},
{"BBB":"MMMM-000","lastSeen":XXXX,"hash":"XXXXXXXXXXXX"},
{"CCC":"yyyy-000","lastSeen":XXXX,"hash":"XXXXXXXXXXXX"}
]

Please suggest: should I use batch instead of monitor, or do you have any other suggestion?
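One likely explanation for the symptoms above: after the header/footer strip, all four files start with the same bytes ("[" plus similarly shaped JSON), so the monitor input's initial CRC check can treat them as files it has already indexed and silently skip them; that would also explain why prepending #test (which changes the file's first bytes) makes a file index. A hedged sketch of the inputs.conf adjustments, shown for one stanza (the others would be adjusted the same way; values are illustrative):

```ini
[monitor:///home/DIR-B/A2.json]
index = test
sourcetype = test1
disabled = false
# seed the CRC with the file's full path so identically-headed files differ
crcSalt = <SOURCE>
# and/or hash more than the default 256 bytes of the file head
initCrcLength = 1024
```

Since the files are replaced wholesale once a day, a batch input with move_policy = sinkhole is the other common pattern here; it consumes each file exactly once and deletes it, sidestepping CRC tracking entirely.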