All Posts


Hi, has there been any update since?
There is a recent question about doing this in a dashboard. Using a time selector token would be the cleanest. If you are not doing it in a dashboard, but this is a singular use case, I can think of an ugly map, like this:

| makeresults
| addinfo
| eval start_24h_earlier = relative_time(info_min_time, "-24h")
| map search="search index=netlte earliest=$start_24h_earlier$ | dedup ClientIP | fields ClientIP IMEI"
| join ClientIP
    [search index=netproxymobility sourcetype="zscalernss-web"
    | fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass]

Here is a proof of concept:

| makeresults
| addinfo
| eval int_start = relative_time(info_min_time, "-24h"), int_end = relative_time(info_max_time, "-4h")
| map search="search index=_audit earliest=$int_start$ latest=$int_end$ | stats min(_time) as in_begin max(_time) as in_end by action"
| join action
    [search index=_audit
    | stats min(_time) as out_begin max(_time) as out_end by action]
| fieldformat in_begin = strftime(in_begin, "%F %T")
| fieldformat in_end = strftime(in_end, "%F %T")
| fieldformat out_begin = strftime(out_begin, "%F %T")
| fieldformat out_end = strftime(out_end, "%F %T")

(Note: map does not take field names as arguments; the field values from the incoming results are referenced with $field$ tokens inside the search string, so the token needs a closing $ as shown above.)

My output looks like:

action                 in_begin             in_end               out_begin            out_end
expired_session_token  2024-03-24 23:23:20  2024-03-25 12:25:33  2024-03-25 22:33:51  2024-03-25 23:19:51
login attempt          2024-03-24 22:23:12  2024-03-25 04:25:11  2024-03-25 21:33:26  2024-03-25 21:33:26
quota                  2024-03-24 22:23:16  2024-03-25 04:25:17  2024-03-25 21:41:52  2024-03-25 23:21:45
read_session_token     2024-03-24 19:21:02  2024-03-25 19:18:58  2024-03-25 20:17:30  2024-03-25 23:21:49
search                 2024-03-24 19:37:09  2024-03-25 11:29:40  2024-03-25 21:15:35  2024-03-25 23:21:05
update                 2024-03-24 19:51:57  2024-03-25 09:15:17  2024-03-25 21:13:23  2024-03-25 23:09:53
validate_token         2024-03-24 19:21:02  2024-03-25 19:18:58  2024-03-25 20:17:30  2024-03-25 23:21:49
Hi @KendallW

Thanks for the reply, but that does not work as I'm plotting this in a line chart. The data is coming to SignalFx from the StatsD agent.
Hi SMEs,

Seeking help on the below field extraction to capture hostname1, hostname2, hostname3 & hostname4:

Mar 22 04:00:01 hostname1 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
Mar 22 04:00:01 hostname2 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
2024-03-21T23:59:31.143161+05:30 hostname3 caam: [INVENTORY|CaaM-14a669917c4a02f5|caam|e0ded6f4f97c17132995|Dummy-5|INFO|caam_inventory_controller] Fetching operationexecutions filtering with vn_id CaaM-3ade67652a6a02f5 and tenant caam
2024-03-23T04:00:17.664082+05:30 hostname4 sudo: root : TTY=unknown ; PWD=/home/caam/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-22 -e Mar\ 22 /var/log/secure.7.gz
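One possible starting point (a sketch only, assuming the hostname is always the whitespace-delimited token immediately after either a syslog-style timestamp or an ISO 8601 timestamp, as in the samples above; the field name extracted_host is illustrative):

| rex field=_raw "^(?:\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}|\d{4}-\d{2}-\d{2}T\S+)\s(?<extracted_host>\S+)"

The alternation covers both timestamp shapes seen in the samples, then captures the next non-whitespace token as the hostname.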
Alright!!! I found the answer to this question. Modified the below query by changing the time formats of the new fields and then pulling out the difference:

index="abc" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd*"
| rex field=_raw "\"Application-ID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats latest(eval(if(steptype="EndNBflow",max(infotime),0))) AS endNBflow,
        latest(eval(if(steptype="Deserialized payload",infotime,0))) AS endPayLoad,
        count(eval(match(_raw,"error"))) as error_count,
        dc(steptype) as unique_steptypes BY appid
| where unique_steptypes >= 16 AND error_count=0
| eval endNBflowtime=strptime(endNBflow, "%Y-%m-%d %H:%M:%S,%3N")
| eval endPayLoadtime=strptime(endPayLoad, "%Y-%m-%d %H:%M:%S,%3N")
| eval time_difference = endNBflowtime - endPayLoadtime
| table appid, endNBflow, endPayLoad, endNBflowtime, endPayLoadtime, time_difference

Results are like this:

appid  endNBflow                endPayLoad               endNBflowtime      endPayLoadtime     time_difference
Abcd1  2024-03-04 16:10:50,007  2024-03-04 16:10:49,886  1709529050.007000  1709529049.886000  0.121000
Thanks a lot yuanli,

That worked, you are a genius. I thought I could never structure it in Splunk. I did speak to the dev team; apparently the JSON is structured that way to feed a particular dashboard system that they use, so it has to be in that structure for that system to consume. However, they have agreed to update the structure in the next release, which could be a few months away (6 months at least). So in the meantime I can work with this bad JSON until then.

Thanks a lot again. I had searched Splunk for something like this before and hadn't seen anything.
Check for permission issues in the log files (splunkd and mongod logs). Also check file ownership and permissions for server.pem and server.key.
Hi @dongwonn a few things to check:

- Check the host field in Splunk matches the host:: stanza in your props.conf.
- Since you are not explicitly specifying a lot of configs, they may be taking default values from other places. Use btool to check the full props settings being applied to this host:

$SPLUNK_HOME/bin/splunk cmd btool props list host::x.x.x.21

- Update your TIME_PREFIX to capture the full string before the timestamp, beginning at the start of the event, so that Splunk will definitely exclude the preceding timestamps. Example:

TIME_PREFIX=^\w{3}\s\d\d\s(\d{2}\:?){3}\s(\d{0,3}\.?){4}\s\w{3}\s\d\d\s(\d{2}\:?){3}\s[\w\s]+\-:\s\[
Hi @sks if you want just the percentage of misses and the percentage of hits, you can do this with eval:

| eval Bperc=('B'/(B+C))*100
| eval Cperc=('C'/(B+C))*100

If you want to show this in a chart (e.g. pie chart) you don't need to calculate the percentage, as Splunk will do this for you, but you will need to get the values of B and C in the same column using transpose. Example:
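A minimal sketch of that transpose step (assuming a single result row holding numeric fields B and C; the names type and count are illustrative):

| table B C
| transpose column_name=type
| rename "row 1" as count

This yields one row per field (B and C) with its value, which a pie chart can consume directly.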
Hi Giuseppe,

I want to view the results in the below format. I also want the diff time in human-readable format like 10 sec, 15 mins etc.

Appid  Responsetime(Diff)

In my use case I have more than 5000 messages, and each successful message has 16 steptypes, so I have put the query this way:

index="abc" sourcetype=openshift_logs openshift_namespace="qaenv" "a9ecdae5-45t6-abcd*"
| rex field=_raw "\"Application-ID\"\:\s\"(?<appid>.*?)\""
| rex field=_raw "\"stepType\"\:\s\"(?<steptype>.*?)\""
| rex field=_raw "\"flowname\"\:\s\"(?<flowname>.*?)\""
| rex field=_raw "INFO ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats latest(eval(if(steptype="EndNBflow",max(infotime),0))) AS endNBflow
        latest(eval(if(steptype="Deserialized payload",infotime,0))) AS endPayLoad
        dc(steptype) as unique_steptypes by appid
| where unique_steptypes >= 16
| eval diff=endNBflow-endPayLoad

My earlier code included:

| rex field=_raw "INFO  ((?<infotime>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2},\d{3}))"
| stats max(infotime) as maxinfotime, min(infotime) as mininfotime, count(eval(match(_raw, "error"))) as error_count, dc(steptype) as unique_steptypes by appid
| where error_count = 0
| eval maxtime=strptime(maxinfotime,"%Y-%m-%d %H:%M:%S,%3N")
| eval mintime=strptime(mininfotime,"%Y-%m-%d %H:%M:%S,%3N")
| eval TimeDiff=maxtime-mintime
| eval TimeDiff_formated = strftime(TimeDiff,"%H:%M:%S,%3N")
| where unique_steptypes >= 16
| sort steptype
| table appid, mininfotime, maxinfotime, mintime, maxtime, TimeDiff_formated, unique_steptypes, flowname

I am unable to club these two and get the expected output.
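For the human-readable part, one option (a sketch only, assuming diff ends up holding a number of seconds, i.e. after converting both timestamps with strptime as in the accepted approach above) is eval's tostring with the "duration" option, which renders seconds as HH:MM:SS:

| eval diff_readable = tostring(diff, "duration")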
Check if this is the same issue: https://community.splunk.com/t5/All-Apps-and-Add-ons/Splunk-TA-New-Relic-Insight-not-ingesting-data/m-p/528756 
That's what I was wondering, but I'm not sure how to reference the earliest values from within the join.
As mentioned by @hrawat, the workaround is to disable the splunk_internal_metrics app on the forwarder:

$SPLUNK_HOME/etc/apps/splunk_internal_metrics/local/app.conf

[install]
state = disabled

The bug (SPL-251434) will be fixed in the following releases: 9.2.2 / 9.1.5 / 9.0.10.

This app clones all metrics.log events into a metrics_log sourcetype; that sourcetype does the necessary conversion from log to metric event and redirects the events to the _metrics index.
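To see what the app is actually writing to the metrics index before and after disabling it, a sketch (not from the original post; assumes read access to _metrics):

| mcatalog values(metric_name) WHERE index=_metrics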
How to identify if you are hitting this bug:

1) Do you have persistentQueueSize set over 20MB in inputs.conf on the forwarder? I.e. etc/system/local/inputs.conf:

[splunktcp-ssl:9996]
persistentQueueSize = 120GB

2) Is Splunk crashing after a restart, where the crashing thread is "Crashing thread: typing_0"?

3) Do you see logs like this in splunkd_stderr.log:

2024-01-11 21:16:19.669 +0000 splunkd started (build 050c9bca8588) pid=2506904
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0
Hi @Bujji2023 , I had experienced similar issues when saving the lookup csv file in MS Excel before uploading. This can be fixed by saving the csv file from a text editor instead, making sure to remove any superfluous characters (e.g. quotation marks) 
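After re-saving, the uploaded file can be sanity-checked with inputlookup to confirm the superfluous characters are gone (a sketch; your_lookup.csv is a placeholder for the actual lookup name):

| inputlookup your_lookup.csv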
Hi @hassan1214, here are a few things to check to begin troubleshooting this issue:

- Are you running the search in Fast Mode? If so, try running it in Smart Mode.
- Are any of the winfw fields being extracted, or only Splunk internal fields?
- Check for any parsing issues in splunkd.log:

index=_internal sourcetype=splunkd log_level!=INFO source=*splunkd.log *winfw*

The TA uses the following transforms.conf stanza to extract fields. Please check that the content of your pfirewall.log matches this format:

DELIMS = " "
FIELDS = date,time,win_action,transport,src,dest,src_port,dest_port,size,tcp_flag,tcpsyn,tcpack,tcpwin,icmptype,icmpcode,info,win_direction
I want to visualize service limitations using a dashboard in Splunk, specifically the limits described in the Splunk Cloud Platform Service Details documentation.

Background:

- There are numerous service limitations in areas such as bundle size and the number of sourcetypes, which are hard to know about without experience.
- Some customers have experienced operational impacts because they were not aware of these limitations; regardless of the environment, Splunk users unknowingly carry the risk of affecting business operations.
- The documentation specifies the limitations for each service, but the upper limits can vary by environment.
- Checking configuration files (such as conf files) for this information is operationally burdensome, so I would like to proactively visualize the configured values and current usage on a dashboard.

At the moment I want to visualize four aspects, and to build the dashboard I need to retrieve the limitation values with SPL.

Questions:

1. Are the limitations stored in configuration files (e.g. conf files) that can be read via REST or SPL commands, or is this information only available to Splunk?
2. Regarding "Knowledge Bundle replication size": if there is an SPL query or other method to see the size state (e.g. over the past few days), please share it.
3. Regarding the "IP Allow List" limitation: is there a way to check the number of IP addresses registered in the IP Allow List (e.g. via REST)?

Thanks in advance.
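On question 1, one possible starting point (a sketch only; Splunk Cloud restricts some endpoints, and this covers only limits that live in limits.conf): the rest command can read conf files your role has access to, returning one row per stanza with its settings as fields:

| rest /services/configs/conf-limits splunk_server=local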
Hello,

Log:

Mar 22 10:50:51 x.x.x.21 Mar 22 11:55:00 Device version -: [2024-03-22 11:54:12] Event : , IP : , MAC : , Desc :

Props:

[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = \-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S

When I check the _time field, the value is still 2021-03-22 10:50:51. The device's IP is x.x.x.21, so it seems that 21 is recognized as the year, which is why I configured props. But props is not working... Help me.

Thank you.
Are you able to check which process is using the inputs.conf file with lsof? You may need to stop Splunk, update the file, then start Splunk again. 
Hiding those elements is a function of each dashboard, not of the navigation menu.

<dashboard hideChrome="true" version="1.1">
...
</dashboard>

See https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form for the available options.