All Posts

As mentioned by @hrawat, the workaround is to disable the splunk_internal_metrics app on the forwarder:

$SPLUNK_HOME/etc/apps/splunk_internal_metrics/local/app.conf

[install]
state = disabled

The bug (SPL-251434) will be fixed in the following releases: 9.2.2 / 9.1.5 / 9.0.10.

For reference, this app clones all metrics.log events into the metrics_log sourcetype; the metrics_log sourcetype does the necessary conversion from log to metric events and redirects them to the _metrics index.
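If it helps, here is a minimal sketch of applying that workaround from the command line. It assumes a Unix-like forwarder and the default app directory name shown above; confirm the path on your own host before running it.

# Create the local override, disable the app, then restart the forwarder
mkdir -p $SPLUNK_HOME/etc/apps/splunk_internal_metrics/local
cat >> $SPLUNK_HOME/etc/apps/splunk_internal_metrics/local/app.conf << 'EOF'
[install]
state = disabled
EOF
$SPLUNK_HOME/bin/splunk restart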
How to identify if you are hitting this bug:

1.) Do you have persistentQueueSize set over 20MB in inputs.conf on the forwarder? For example, in etc/system/local/inputs.conf:

[splunktcp-ssl:9996]
persistentQueueSize = 120GB

2.) Is Splunk crashing after a restart, where the crashing thread is "Crashing thread: typing_0"?

3.) Do you see entries in splunkd_stderr.log like:

2024-01-11 21:16:19.669 +0000 splunkd started (build 050c9bca8588) pid=2506904
Fatal thread error: pthread_mutex_lock: Invalid argument; 117 threads active, in typing_0
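As a quick way to check points 1.) and 3.) on the forwarder, something along these lines should work (paths assume a default layout; adjust to your $SPLUNK_HOME):

# Show the effective persistentQueueSize values and which file they come from
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i persistentQueueSize

# Look for the fatal thread error in the crashing thread
grep -i "typing_0" $SPLUNK_HOME/var/log/splunk/splunkd_stderr.log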
Hi @Bujji2023, I have experienced similar issues when saving the lookup CSV file in MS Excel before uploading. This can be fixed by saving the CSV file from a text editor instead, making sure to remove any superfluous characters (e.g. quotation marks).
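If you want to see what Excel actually wrote before re-saving, a rough sketch (assuming a Unix-like shell and a placeholder file name lookup.csv) would be:

# Show any hidden leading bytes (a UTF-8 BOM shows up as 357 273 277)
head -c 100 lookup.csv | od -c

# Strip Windows line endings and stray double quotes into a cleaned copy
# (only do this if your lookup values never legitimately contain quotes)
sed -e 's/\r$//' -e 's/"//g' lookup.csv > lookup_clean.csv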
Hi @hassan1214, here are a few things to check to begin troubleshooting this issue:

- Are you running the search in Fast Mode? If so, try running it in Smart Mode.
- Are any of the winfw fields being extracted, or only Splunk internal fields?
- Check for any parsing issues in splunkd.log:

index=_internal sourcetype=splunkd log_level!=INFO source=*splunkd.log *winfw*

The TA uses the following transforms.conf stanza to extract fields. Please check that the content of your pfirewall.log matches this format:

DELIMS = " "
FIELDS = date,time,win_action,transport,src,dest,src_port,dest_port,size,tcp_flag,tcpsyn,tcpack,tcpwin,icmptype,icmpcode,info,win_direction
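To see at a glance which fields are actually being extracted, one option is a quick fieldsummary check; this is a sketch, and the index and sourcetype names are placeholders you will need to fill in for your environment:

index=<your_index> sourcetype=<your_winfw_sourcetype>
| fieldsummary
| table field count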
I understand that you want to visualize service limitations using a dashboard in Splunk, specifically related to the service details provided in the Splunk Cloud Platform Service Details documentation.

Some background: there are numerous service limitations in areas such as bundle size and the number of sourcetypes, and some customers have experienced operational impacts because they were not aware of them. Regardless of the environment, Splunk users unknowingly carry the risk of affecting business operations. While the documentation specifies the limitations for each service, the upper limits can vary based on the environment. Checking configuration files (such as .conf files) for this information is operationally burdensome, so you'd like to proactively visualize the settings and their usage on a dashboard.

Currently, you're interested in visualizing four aspects, and to build the dashboard you'd like to retrieve the limitation information using SPL. Your questions:

1. Are limitations set in configuration files? You want to know whether the limitations are configured in files (such as .conf files) that can be read via REST or SPL commands, or whether this is information only Splunk can access.

2. Regarding "Knowledge Bundle replication size": if there is an SPL query or other method to determine the size state (e.g. the size over the past few days), please share it.

3. Regarding the "IP Allow List" limitation: is there a way to check the number of IP addresses registered in the IP Allow List (e.g. via REST)?
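As a starting point for questions 1 and 2 (a sketch only, not a confirmed answer): many, though not all, of the search-related limits live in limits.conf, and the effective values can usually be read from a search via the REST configs endpoint. Some Splunk Cloud limits are enforced outside conf files and are visible only to Splunk.

| rest /services/configs/conf-limits splunk_server=local
| table title max*

For the knowledge bundle replication size, the bundle replication manager writes events to the internal index; inspect those events in your own environment to find the field that carries the bundle size, since the exact field names vary by version:

index=_internal sourcetype=splunkd component=DistributedBundleReplicationManager

For the IP Allow List on Splunk Cloud, the Admin Config Service (ACS) API exposes the allow lists per feature; check the ACS documentation for the exact endpoint for your stack, as I am not quoting it from memory here.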
Hello,

Log:

Mar 22 10:50:51 x.x.x.21 Mar 22 11:55:00 Device version -: [2024-03-22 11:54:12] Event : , IP : , MAC : , Desc :

Props:

[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = \-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S

When I check the _time field, the value is still 2021-03-22 10:50:51. The device's IP is x.x.x.21, so it seems that the 21 is being recognized as the year, which is why I configured props. But the props are not working... Help me. Thank you.
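For comparison, here is a sketch of the kind of stanza that usually resolves this sort of timestamp issue: anchor TIME_PREFIX on an unambiguous part of the line and cap the lookahead, and make sure the stanza is deployed on the instance that actually parses the data (indexer or heavy forwarder, not a universal forwarder). The regex below is only an assumption based on the sample line shown above.

[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = version\s-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19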
Are you able to check which process is using the inputs.conf file with lsof? You may need to stop Splunk, update the file, then start Splunk again. 
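For reference, a quick sketch of that check (assuming a default /opt/splunk install path; adjust to your $SPLUNK_HOME):

# List processes that currently have the file open
lsof /opt/splunk/etc/system/local/inputs.conf

# If nothing obvious shows up, check everything under etc/system/local
lsof +D /opt/splunk/etc/system/local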
Hiding those elements is a function of each dashboard, not of the navigation menu.

<dashboard hideChrome="true" version="1.1">
...
</dashboard>

See https://docs.splunk.com/Documentation/Splunk/9.2.0/Viz/PanelreferenceforSimplifiedXML#dashboard_or_form for the available options.
Are there any sourcetype parsing issues in the splunkd.log on the receiving indexer/forwarder? index=_internal host=<receiving indexer/forwarder> log_level!=INFO "test"
https://community.splunk.com/t5/Security/Certificate-generation-failed-Splunkd-port-communication-will/m-p/318926#M12902
Would adding "earliest=<24 hours prior to the search time window>" in the subsearch fix this?
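One caveat: earliest inside a subsearch is evaluated relative to now, not relative to the outer search window, so the usual approach is to widen the subsearch window enough to cover the outer range plus the extra day. A rough sketch, assuming the outer search runs over the last 24 hours ending now, using the search from the question below:

| join type=left ClientIP
    [ search index=netlte earliest=-48h latest=now | dedup ClientIP | fields ClientIP IMEI ]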
I'm struggling to figure this one out. We have data coming in via an HEC endpoint that is JSON based, with the HEC endpoint setting sourcetype to _json. This is Splunk Cloud.

Minor bit of background on our data: all of the data we send to Splunk has an "event" field, which is a number that indicates a specific type of thing that happened in our system. There's one index where this data goes, with a 45d retention period. Some of this data we want to keep around longer, so we use collect to copy the data over for longer retention. We have a scheduled search that runs regularly that does:

index=ourIndex event IN (1,2,3,4,5,6) | collect index=longTerm output_format=hec

We use output_format=hec because without it the data isn't searchable: "index=longTerm event=3" never shows anything. There's a bunch of _raw, but that's it. Also, for the sake of completeness, this data is being sent by Cribl.

Our application normally logs CSV-style data with the first 15 or so columns fixed in their meaning (everything has those common fields). The 16th column contains a description with parentheses around a semicolon-separated list of additional parameters/fields, and each additional CSV column has a value corresponding to a field name in that list. Sometimes that value is JSON data logged as a string. For the sake of not sending JSON data as a string in an actual JSON payload, we have Cribl detect that, expand that JSON field, and construct it as a native part of the payload. So:

1,2024-03-01 00:00:00,user1,...12 other columns ...,User did something (didClick;details),1,{"where":"submit"%2c"page":"home"}

gets sent to the HEC endpoint as:

{"event":1,"_time":"2024-03-01 00:00:00","userID":"user1",... other stuff ..., "didClick":1,"details":{"where":"submit","page":"home"}}

The data that ends up missing is always the extrapolated JSON data. Anything that seems to be part of the base JSON document always seems to be fine.

Now, here's the weird part. If I run the search query that does the collect to ONLY look for a specific event and do a collect on that, things actually seem fine and data is never lost. When I introduce additional events that I want to do a collect on, some of those fields are missing for some, but not all, of those events. The more events I add into the IN() clause, the more those fields go missing for events that have extrapolated JSON in them. For each event that has missing fields, all extrapolated JSON fields are missing.

When I've tried to use the _raw field, use spath on that, then pipe that to collect, that seems to work reliably, but it also seems like an unnecessary hack. There are dozens of these events, so breaking them out into their own discrete searches isn't something I'm particularly keen on.

Any ideas or suggestions?
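For reference, the spath-based workaround described above would look roughly like this (a sketch; the event numbers and index names are the ones from the description):

index=ourIndex event IN (1,2,3,4,5,6)
| spath input=_raw
| collect index=longTerm output_format=hec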
Hi,

I have two sets of data: one is proxy logs (index=netproxy) and the other is an extract of LTE logs, which logs every time the device joins. I'd like to cross-reference the proxy logs with the LTE data so I can extract the IMEI number, but the IMEI number could exist in logs outside of the search time window. The below search works, but only if the timeframe is big enough that it includes the device in the proxy logs. Is there a way I can maybe extend the earliest time to 24 hours prior to the search time window? I don't want to do "all time" on the subsearch because the IP address allocations will change over time and would then be matched against the wrong IMEI.

index=netproxymobility sourcetype="zscalernss-web"
| fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
| join type=left ClientIP
    [ search index=netlte | dedup ClientIP | fields ClientIP IMEI ]

Thanks
How is this data being input to Splunk?  You might start by checking the splunkd.log for any parsing errors or warnings. You can also check which props settings are applied to the specific sourcetype using btool on the receiving Splunk indexer/forwarder: $SPLUNK_HOME/bin/splunk cmd btool props list <sourcetype>
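For the splunkd.log check, a starting point could be a search like this against the internal index on the parsing instance; the component names listed are the common timestamp and line-breaking ones, not an exhaustive list:

index=_internal source=*splunkd.log log_level!=INFO component IN (DateParserVerbose, LineBreakingProcessor, AggregatorMiningProcessor)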
First, it might be better to just share the KO to global permissions so it can be seen by users of both apps, rather than copying the KO to the other app, depending on your use case. In case that is not feasible, you have a few options to copy a KO to other apps:

If there are just a few KOs to be copied, you can do this from the GUI: click Settings -> Searches, reports and alerts, search for your KO and click Edit -> Clone, then select the target app from the App dropdown list.

In case you need to copy KOs in bulk, it is easier to copy the config from the .conf file of one app to the other. You can also use REST to POST configs.
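As an illustration of the REST option, creating a copy of a saved search in a different app can look roughly like this; this is a sketch with placeholder credentials, app name and search body:

curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/target_app/saved/searches \
  -d name="my_copied_search" \
  --data-urlencode search="index=main | stats count"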
Hi @whitecat001,

If you're working in the GUI, you should clone the Knowledge Object and then move it.

If you're working from the CLI, it depends on the KO: dashboards can be copied directly, while reports, alerts, fields, eventtypes and the other KOs can be copied from the original file (e.g. savedsearches.conf or eventtypes.conf) in the original app to the new one.

Ciao.

Giuseppe
Hi @allidoiswinboom, good for you, see you next time! Ciao and happy splunking. Giuseppe P.S.: Karma Points are appreciated
Still the same result, whether it's eval or fieldformat.

index=MyPCF
| fields server instance cpu_percentage
| eval cpu_percentage=round(cpu_percentage,2)
| eval server_instance = 'server' + "_" + 'instance'
| timechart mAX(cpu_percentage) as CPU_Percentage by server_instance usenull=true limit=0
| foreach * [| eval "<<FILED>>"=round('<<FIELD>>',2)."%"]

Then I changed it to the following and tried again, with the same result:

| foreach * [| eval "<<FILED>>"= "<<FIELD>>" ."%"]
| foreach * [| eval "<<FILED>>"= '<<FIELD>>' ."%"]
| foreach * [| eval "<<FILED>>"= <<FIELD>> ."%"]

_time                          server_1  server_2  server_3  server_4
2024-03-25T16:00:00.000-0400   5.18      3         4.62      3.18
2024-03-25T16:05:00.000-0400   5.46      3.13      3.99      2.94
2024-03-25T16:10:00.000-0400   5.55      54.16     3.93      51.89
2024-03-25T16:15:00.000-0400   4.76      4.59      4.4       2.84
2024-03-25T16:20:00.000-0400   5.54      3.84      4.55      2.95
2024-03-25T16:25:00.000-0400   4.11      3.76      3.52      3.31
2024-03-25T16:30:00.000-0400   4.36      3.92      3.58      2.91
2024-03-25T16:35:00.000-0400   3.88      3.68      3.7       4.08
2024-03-25T16:40:00.000-0400   3.89      3.32      4.33      3.32
2024-03-25T16:45:00.000-0400   4.33      27.56     3.94      39.48
Exactly that way. So you must determine which ones those are and, based on that, choose SEDCMD or transforms.
More words, please. What is your business case? What "security events" do you want to "forward" from Splunk? Do you want the same events ingested in Splunk and Elastic/Kafka/whatever, or maybe you just want to generate an event in case some alert is triggered in Splunk?