All Topics


Hi SMEs, seeking help on the below field extraction to capture hostname1, hostname2, hostname3 and hostname4 from these sample events:

Mar 22 04:00:01 hostname1 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
Mar 22 04:00:01 hostname2 sudo: root : TTY=unknown ; PWD=/home/installer/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-21 -e Mar\ 21 /var/log/secure
2024-03-21T23:59:31.143161+05:30 hostname3 caam: [INVENTORY|CaaM-14a669917c4a02f5|caam|e0ded6f4f97c17132995|Dummy-5|INFO|caam_inventory_controller] Fetching operationexecutions filtering with vn_id CaaM-3ade67652a6a02f5 and tenant caam
2024-03-23T04:00:17.664082+05:30 hostname4 sudo: root : TTY=unknown ; PWD=/home/caam/LOG_Transfer ; USER=root ; COMMAND=/bin/bash -c grep -e 2024-03-22 -e Mar\ 22 /var/log/secure.7.gz
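A minimal rex sketch for this, assuming each sample line is a separate event and the hostname is always the first token after the leading timestamp (both the syslog-style and the ISO 8601 formats shown above); index=your_index is a placeholder:

index=your_index
| rex "^(?:\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}|\d{4}-\d{2}-\d{2}T\S+)\s+(?<hostname>\S+)"
| table _time, hostname

If the host value is already being set correctly at index time, the built-in host field may be enough without any extraction at all.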
I would like to visualize the service limitations described in the Splunk Cloud Platform Service Details documentation on a dashboard.

Background:
- There are numerous service limitations in areas such as bundle size and the number of sourcetypes, which are hard to know about without prior experience.
- Some customers have experienced operational impacts because they were unaware of these limitations.
- Regardless of the environment, Splunk users unknowingly carry the risk of affecting business operations.
- The documentation describes which services have which limitations, but the upper limits can vary by environment.
- Checking the .conf files directly is operationally burdensome, so I would like to proactively visualize the configured limits and current usage on a dashboard.

At the moment I want to visualize four aspects, and to build the dashboard I need to retrieve the limitation values using SPL.

Questions:
1. Are the limitations set in configuration files (such as .conf files)? I would like to know whether they are stored in files that can be read via REST or SPL commands, or whether this information is only available to Splunk itself.
2. Regarding the "Knowledge Bundle replication size": if there is an SPL query or other method to see the size state (e.g. the size over the past few days), please share it.
3. Regarding the "IP Allow List" limitation: is there a way to check the number of IP addresses registered in the IP Allow List (for example via REST)?

Thank you.
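For question 1, many of the platform-enforced limits live in limits.conf, which can usually be read with the rest command. A sketch, assuming your role has the rest capability (in Splunk Cloud some endpoints may be restricted):

| rest splunk_server=local /services/configs/conf-limits
| table title *

For question 2, one hedged starting point is the internal splunkd log, assuming bundle replication activity in your environment is logged by the DistributedBundleReplicationManager component:

index=_internal sourcetype=splunkd component=DistributedBundleReplicationManager
| timechart span=1d count

The exact field that carries the bundle size varies, so inspect the raw events first before charting a size value.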
Hello,

Log:
Mar 22 10:50:51 x.x.x.21 Mar 22 11:55:00 Device version -: [2024-03-22 11:54:12] Event : , IP : , MAC : , Desc :

Props:
[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = \-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S

When I check the _time field, the value is still 2021-03-22 10:50:51. The device's IP is x.x.x.21, so it seems that the trailing 21 is being recognized as the year, which is why I configured props. But the props settings are not working... Help me, thank you.
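A hedged props.conf sketch to try, assuming the text "Device version -:" always precedes the bracketed timestamp you want, and assuming the settings are deployed on the first full Splunk instance that parses the data (indexer or heavy forwarder), since these are index-time settings:

[host::x.x.x.21]
CHARSET = utf8
TIME_PREFIX = Device\sversion\s-:\s\[
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

MAX_TIMESTAMP_LOOKAHEAD limits how far past TIME_PREFIX Splunk looks for the timestamp, which helps stop it from latching onto other numbers in the line. Also note that props changes only affect newly indexed events, not data that is already indexed.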
I'm struggling to figure this one out. We have data coming in via an HEC endpoint that is JSON based, with the HEC endpoint setting sourcetype to _json. This is Splunk Cloud.

Minor bit of background on our data: all of the data we send to Splunk has an "event" field, which is a number that indicates a specific type of thing that happened in our system. There's one index where this data goes, with a 45d retention period. Some of this data we want to keep around longer, so we use collect to copy the data over for longer retention. We have a scheduled search that runs regularly and does:

index=ourIndex event IN (1,2,3,4,5,6) | collect index=longTerm output_format=hec

We use output_format=hec because without it the data isn't searchable: "index=longTerm event=3" never shows anything. There's a bunch of _raw, but that's it. Also, for the sake of completeness, this data is being sent by Cribl.

Our application normally logs CSV-style data with the first 15 or so columns fixed in their meaning (everything has those common fields). The 16th column contains a description with parentheses around a semicolon-separated list of additional parameter/field names, and each additional CSV column has a value corresponding to a field name in that list. Sometimes that value is JSON data logged as a string. For the sake of not sending JSON data as a string in an actual JSON payload, we have Cribl detect that, expand that JSON field, and construct it as a native part of the payload. So:

1,2024-03-01 00:00:00,user1,...12 other columns ...,User did something (didClick;details),1,{"where":"submit"%2c"page":"home"}

gets sent to the HEC endpoint as:

{"event":1,"_time":"2024-03-01 00:00:00","userID":"user1",... other stuff ..., "didClick":1,"details":{"where":"submit","page":"home"}}

The data that ends up missing is always the extrapolated JSON data. Anything that is part of the base JSON document always seems to be fine.

Now, here's the weird part. If I run the search query that does the collect to ONLY look for a specific event and do a collect on that, things actually seem fine and data is never lost. When I introduce additional events that I want to collect, some of those fields are missing for some, but not all, of those events. The more events I add into the IN() clause, the more those fields go missing for events that have extrapolated JSON in them. For each event that has missing fields, all extrapolated JSON fields are missing.

When I've tried to use the _raw field, use spath on that, then pipe that to collect, it seems to work reliably, but it also seems like an unnecessary hack. There are dozens of these events, so breaking them out into their own discrete searches isn't something I'm particularly keen on.

Any ideas or suggestions?
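For reference, a minimal sketch of the spath-before-collect workaround described above, assuming the nested JSON arrives intact in _raw:

index=ourIndex event IN (1,2,3,4,5,6)
| spath input=_raw
| collect index=longTerm output_format=hec

This forces the nested fields to be extracted as explicit search-time fields before collect serializes the results, which is consistent with why it behaves more reliably for you.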
Hi, I have two sets of data: one is proxy logs (index=netproxy) and the other is an extract of LTE logs, which logs every time the device joins. I'd like to cross-reference the proxy logs with the LTE data so I can extract the IMEI number, but the IMEI number could exist in logs outside of the search time window. The search below works, but only if the timeframe is big enough that it includes the device in the proxy logs. Is there a way I can maybe extend the earliest time to 24 hours prior to the search time window? I don't want to do "all time" on the subsearch because the IP address allocations will change over time and would then be matched against the wrong IMEI.

index=netproxymobility sourcetype="zscalernss-web"
| fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
| join type=left ClientIP
    [ search index=netlte
      | dedup ClientIP
      | fields ClientIP IMEI ]

Thanks
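One hedged option, since explicit earliest/latest terms inside a subsearch override the outer time range: give the subsearch its own window that starts 24 hours before the main window. A sketch, assuming the main search runs over the last 7 days:

index=netproxymobility sourcetype="zscalernss-web" earliest=-7d@d latest=now
| fields transactionsize responsesize requestsize urlcategory serverip ClientIP hostname appname appclass urlclass
| join type=left ClientIP
    [ search index=netlte earliest=-8d@d latest=now
      | dedup ClientIP
      | fields ClientIP IMEI ]

The -8d@d value is just the main window plus one extra day; adjust it to whatever range you actually search. dedup keeps the first event in result order, which in a time-descending search means the most recent allocation per ClientIP wins, helping avoid matches against stale IP allocations.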
Dear Splunkers,

My goal is to expose only some dashboards to an external customer. I created a dedicated role and user with minimal access to a single app where these dashboards are placed. However, I'm struggling with hiding the Splunk bar/navigation menu, i.e. the customer can still use the "find" window to search for reports and dashboards they are not supposed to see. Could you please guide me on how to hide it?

The navigation menu looks like below:

<nav search_view="search">
  <view name="search" />
  <view name="datasets" />
  <view hideSplunkBar="true" />
  <view hideAppBar="true" />
  <view hideChrome="true" />
  <view name="reports" />
  <view name="alerts" />
  <view name="dashboards" default='true'/>
</nav>

regards,
Sz
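A couple of hedged pointers, since hideSplunkBar/hideAppBar/hideChrome are not <view> elements: the navigation file can only list the views and dashboards the app exposes, so a stripped-down default.xml would look more like this sketch:

<nav>
  <view name="dashboards" default="true" />
</nav>

The hide* options are normally passed as URL query parameters on a dashboard instead (for example .../app/your_app/your_dashboard?hideSplunkBar=true&hideAppBar=true&hideChrome=true), and the find/search box typically only disappears when the role also has no access to the search app and other views. Treat the parameter names and behaviour here as something to verify against the documentation for your Splunk version.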
How do you copy a knowledge object from one app to another in Splunk?
I have a lookup table that looks like this:

Column 1    Column 2    Column 3    Column 4
Value 1     -           -           15
Value 1     -           -           60
Value 2     -           -           75
Value 2     -           -           N/A
Value 2     -           -           5

I want to calculate the average of all the values in Column 4 (that aren't N/A) that have the same value in Column 1, and then output that as a table:

Column 1    Column 2
Value 1     37.5
Value 2     40
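A minimal SPL sketch, assuming the lookup file is called mylookup.csv (swap in your actual file name):

| inputlookup mylookup.csv
| rename "Column 1" as col1, "Column 4" as col4
| where col4 != "N/A"
| stats avg(col4) as "Column 2" by col1
| rename col1 as "Column 1"

The rename steps just avoid having to quote field names containing spaces in every subsequent command, and the where clause drops the N/A rows before the average is computed.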
Hi, I am creating a dashboard using Dashboard Studio, and I want to run a query with a subsearch. I want to use the time from the global time picker for the subsearch and a different time for the main search. How do I do it? I have configured a time input with the token global_time. My query looks like this:

index=xyz query1 earliest=global_time.earliest latest=now() [search index=xyz query2 earliest=global_time.earliest latest=global_time.latest]

This is not working. Can you suggest how to make this work?
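A hedged sketch of how the tokens are normally referenced, assuming the time input's token really is named global_time (Dashboard Studio tokens are wrapped in $...$ just as in Classic dashboards):

index=xyz query1 earliest=-24h latest=now
    [search index=xyz query2 earliest=$global_time.earliest$ latest=$global_time.latest$]

Here the main search uses its own fixed window (-24h is only an example; change it to whatever the main search should cover), while the subsearch picks up the dashboard's global time range through the $global_time.earliest$ / $global_time.latest$ tokens.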
This is more of an advisory than a question. I hope it helps.

If you are a Splunk Cloud customer I strongly suggest you run this search to ensure that Splunk Cloud is not dropping events. This info is not presented in the Splunk Cloud monitoring console and is an indicator that indexed events are being dropped.

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Error parsing events from message content"
| eval bytesRemaining=trim(bytesRemaining,":")
| stats sum(bytesRemaining) as bytesNotIndexed

What these errors are telling us is that some SQSSmartbusInputWorker process is parsing events and that there is some type of invalid field or value in the data, in our case _subsecond. When this process hits the invalid value, it appears to drop everything else in the stream (i.e. bytesRemaining). So this is also to say that bytesRemaining contains events that were sent to Splunk Cloud but not indexed.

When this error occurs, Splunk Cloud writes the failed info to an SQS DLQ in S3, which can be observed using:

index=_internal host=idx* sourcetype=splunkd log_level IN(ERROR,WARN) component=SQSSmartbusInputWorker "Successfully sent a SQS DLQ message to S3 with location"

Curious if anyone else out there is experiencing the same issue. SQSSmartbusInputWorker doesn't appear in any of the indexing documents, but does appear to be very important to the ingest process.
Hi everyone, I need help with the following problem: during the analysis of some logs, we found that for a specific index the sourcetype only had the value Unknown. Our first thought was that some app or add-on might not be matching the data correctly, but none was present. We then checked whether some value might be missing at the .conf file level, but we found no problems there either. So what could be the reason why, for that specific index, the sourcetype only has that value?
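A small diagnostic sketch that may help narrow it down, assuming the index is called your_index: it shows which hosts and sources are delivering the events that end up with sourcetype Unknown, so you can trace the data back to the input (inputs.conf or HEC token) that is sending it without an explicit sourcetype.

| tstats count where index=your_index sourcetype=Unknown by host, source
| sort - count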
I seem to be close on trying to find the statistics to be able to pull unique users per day, but I know I'm missing something.

Goal: have a stat/chart/search that shows the unique user count per day for a span of 1 week / 1 month / 1 year.

Search queries trialed:

EventCode=4624 user=* | stats count by user | stats dc(user)
EventCode=4624 user=* | timechart span=1d count as count_user by user | stats count by user

So the login event 4624 would be a successful log-in code, and I'm trying to get a stat of the number of unique user names seen each day over a time span. Am I close? Any help would be appreciated!
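A minimal sketch of the usual pattern for this, assuming EventCode and user are already extracted: dc() (distinct count) inside timechart gives the number of unique users per day directly, for whatever time range is selected.

EventCode=4624 user=*
| timechart span=1d dc(user) as unique_users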
I'm using the Cisco FireAMP app to return the trajectory of an endpoint, and the data includes a list of all running tasks/files.  For my test there are 500 items returned, with 9 marked as 'Malicious'.  I'm trying to filter for those and write the details to a note.  But the note always contains all 500 items, not just the 9. My filter block (filter_2) is this:   if get_device_trajectory_2:action_result.data.*.events.*.file.disposition == Malicious     My format block (format_3) is this:   %% File Name: {0} - File Path: {1} - Hash: {2} - Category: {4} - Parent: {3} %%   where each of the variables refer to the filter block e.g.:   0: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name 1: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path 2: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256 3: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name 4: filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection     Finally, I use a Utility block to add the note.  The Utility block contents reference the format block:   format_3:formatted_data.*     The debugger shows this when running the filter block:   Mar 25, 13:52:54 : filter_2() called Mar 25, 13:52:54 : phantom.condition(): called with 1 condition(s) '[['get_device_trajectory_2:action_result.data.*.events.*.file.disposition', '==', 'Malicious']]', operator : 'or', scope: 'new' Mar 25, 13:52:54 : phantom.get_action_results() called for action name: get_device_trajectory_2 action run id: 0 app_run_id: 0 Mar 25, 13:52:54 : phantom.condition(): condition 1 to evaluate: LHS: get_device_trajectory_2:action_result.data.*.events.*.file.disposition OPERATOR: == RHS: Malicious Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:54 : phantom.condition(): condition loop: condition 1, 'Clean' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 
'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Malicious' '==' 'Malicious' => result:True Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'None' '==' 'Malicious' => result:False Mar 25, 13:52:55 : phantom.condition(): condition loop: condition 1, 'Unknown' '==' 'Malicious' => result:False   so it looks like it's correctly identifying the malicious files.  The debugger shows this when running the format block:   Mar 25, 13:52:55 : format_3() called Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_name'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.file_path'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.identity.sha256'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.file.parent.file_name'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:55 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:55 : phantom.collect2(): called for datapath['filtered-data:filter_2:condition_1:get_device_trajectory_2:action_result.data.*.events.*.detection'], scope: new and filter_artifacts: [] Mar 25, 13:52:55 : phantom.get_run_data() called for key filtered-data:filter_2:condition_1 Mar 25, 13:52:56 : phantom.collect2(): Classified datapaths as [<DatapathClassification.NAMED_FILTERED_ACTION_RESULT: 9>] Mar 25, 13:52:56 : save_run_data() saving 136.29 KB with key format_3:formatted_data_ Mar 25, 13:52:56 : save_run_data() saving 140.23 KB with key format_3__as_list:formatted_data_   there are 9 malicious files and it looks like that's what it's saying in the debugger, so again it seems like it's using the filtered data correctly.   But my note always has 500 items in it.  
I'm not sure what I'm doing wrong. Can anyone offer any help? I'm stuck. Thanks.
Hi all, I was wondering if anyone could help with what is hopefully a simple question. I have a dashboard that is used to power a report that sends a PDF to a number of individuals via email. We're looking to extract some further data, and I was wondering: if I simply edit the existing dashboard and add a few more searches, will that be reflected in the report?

Cheers,
Good morning, I hope you can help me. We maintain an infrastructure with Splunk Enterprise used as a SIEM, and we must forward the security events to Elastic and Kafka. I would like to know how I could forward the events, and whether this will consume license.
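A hedged outputs.conf sketch for sending a copy of events to a third-party system over raw TCP (the receiver host and port below are placeholders, and routing only a subset of events additionally requires props/transforms with _TCP_ROUTING):

[tcpout:third_party]
server = elastic-ingest.example.com:5514
sendCookedData = false

sendCookedData = false makes Splunk send plain raw data instead of its cooked format, which is what most non-Splunk receivers (Logstash, a Kafka TCP/syslog bridge, etc.) expect. As for licensing: Splunk license usage is measured on data ingested and indexed, so forwarding a copy of already-indexed events to another system does not by itself consume additional Splunk license, but verify this against your license terms.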
Hello Splunk community,

I have this query, but I would also like to retrieve the index to which the sourcetype belongs:

index=_internal splunk_server=* source=*splunkd.log* sourcetype=splunkd (component=AggregatorMiningProcessor OR component=LineBreakingProcessor OR component=DateParserVerbose OR component=MetricSchemaProcessor OR component=MetricsProcessor) (log_level=WARN OR log_level=ERROR OR log_level=FATAL)
| rex field=event_message "\d*\|(?<st>[\w\d:-]*)\|\d*"
| eval data_sourcetype=coalesce(data_sourcetype, st)
| rename data_sourcetype as sourcetype
| table sourcetype event_message component thread_name _time _raw
| stats first(event_message) as event_message by sourcetype component

Any ideas? Thanks in advance.
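One hedged option is to map each sourcetype to the index(es) it appears in with a tstats subsearch and join it on at the end; note this is ambiguous if the same sourcetype is written to more than one index. A sketch to append after your stats:

| join type=left sourcetype
    [| tstats count where index=* by index, sourcetype
     | fields index, sourcetype]
| table index, sourcetype, component, event_message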
Hello, I'm facing a problem with my lookup command. Here is the context. I have one CSV:

pattern    type
*ABC*      1
*DEF*      2
*xxx*      3

And logs with a "url" field, e.g. "xxxxabcxxxxx.google.com". I need to check whether, for the url field of my log, any of the patterns in my lookup are present, and if so, how many of them match. My expected result is:

url                        type    count(type)
xxxxabcxxxxx.google.com    1 3     2

How can I do this?
- The "| lookup" command doesn't take the "*" symbol into account (only space or comma with the WILDCARD config).
- The "| inputlookup" command works but can't display the "type" field because it only exists in my CSV, so I can't count either.

Thanks for your answers
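A hedged sketch using a wildcard lookup definition, assuming the CSV is saved as patterns.csv and the lookup is defined (or edited) in transforms.conf so that the pattern field is matched as a wildcard and multiple matches can be returned:

[pattern_lookup]
filename = patterns.csv
match_type = WILDCARD(pattern)
max_matches = 10

Then in the search:

index=your_index
| lookup pattern_lookup pattern as url OUTPUT type
| eval match_count=mvcount(type)
| table url, type, match_count

max_matches controls how many matching rows can be returned per event, and mvcount(type) gives the number of patterns that matched. Since your sample url is lower case while the pattern is *ABC*, you may also need case_sensitive_match = false in the same stanza. If you manage lookups through the UI, the equivalent match_type setting lives in the lookup definition's advanced options.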
Hello expert Splunk community, I am struggling with a JSON extraction and need help/advice on how to do this operation.

Data sample:

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Sell", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 0, "totalTransactions": 0 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ] } ]

[ { "orderTypesTotal": [ { "orderType": "Purchase", "totalFailedTransactions": 10, "totalSuccessfulTransactions": 2, "totalTransactions": 12 }, { "orderType": "Sell", "totalFailedTransactions": 1, "totalSuccessfulTransactions": 2, "totalTransactions": 3 }, { "orderType": "Cancel", "totalFailedTransactions": 0, "totalSuccessfulTransactions": 1, "totalTransactions": 1 } ], "totalTransactions": [ { "totalFailedTransactions": 11, "totalSuccessfulTransactions": 5, "totalTransactions": 16 } ] } ]

I have the above events coming inside a field in the _raw events. Using json(field) I have validated that the above is valid JSON.

Use case: I need a table with the totals for each of the different orderTypes, summing the totalFailedTransactions, totalSuccessfulTransactions and totalTransactions numbers:

             totalFailedTransactions    totalSuccessfulTransactions    totalTransactions
Purchase     10                         2                              12
Sell         1                          2                              3
Cancel       0                          2                              2

Thanks in advance!
Sam
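A minimal SPL sketch, assuming the JSON array sits in a field named payload (swap in your actual field name). It explodes the orderTypesTotal array into one row per order type and then sums across all events:

index=your_index
| spath input=payload path={}.orderTypesTotal{} output=order_totals
| mvexpand order_totals
| spath input=order_totals
| stats sum(totalFailedTransactions) as totalFailedTransactions, sum(totalSuccessfulTransactions) as totalSuccessfulTransactions, sum(totalTransactions) as totalTransactions by orderType

The {} in the spath path accounts for the top level of the document being an array; mvexpand turns the multivalue result into separate rows so the second spath can extract orderType and the three counters for each row.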
Hi, I need to find errors/exceptions that have been raised within a time range, and for the request_id field mentioned in the logs (present on every row), I need to fetch the relevant logs in Splunk for that request_id and send the link to a Slack channel. I am able to fetch all the errors/exceptions within the time range and send them to Slack, but I am not able to generate the link to the relevant logs for the request_id attached to each error/exception, as it is dynamic in nature. I am new to Splunk, so I would like to understand: is this possible? If yes, could you please share the relevant documentation so that I can understand it better?

Thank you so much.
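It should be possible to build the drilldown link dynamically in the search itself with eval string concatenation and include that field in the Slack message. A rough sketch, where the host name and index are placeholders and special characters in request_id may still need URL encoding:

index=your_app_index ("ERROR" OR "Exception")
| eval request_link="https://your-splunk-host:8000/en-US/app/search/search?q=search index=your_app_index request_id=" . request_id
| table _time, request_id, request_link

Slack alert actions typically let you reference result fields (for example $result.request_link$) in the message template; check the Slack add-on's documentation for the exact token syntax.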
Hi! I'm filtering data from a number of hosts looking for downtime durations. I get a "forensic" view with this search string:

index=myindex host=*
| rex "to state\s(?<mystate>.*)"
| search mystate="DOWN " OR mystate="UP "
| transaction by host startswith=mystate="DOWN " endswith=mystate="*UP "
| table host,duration,_time
| sort by duration
| reverse

...where I rex for the specific pattern "to state " (host transition into another state, in this example "DOWN" or "UP"). I had to do another "search" to get only the specific ones, as there are more states than DOWN/UP (due to my anonymization of the data). I can then retrieve the duration between transitions using "duration" and sort it as I please.

My question: I'd like to look into ongoing, "at-this-moment-active" hosts in state "DOWN", i.e. replace "endswith" with a nominal time value ("now"). Where there has not yet been any "endswith" match, I just want to count the duration from "startswith" to the present moment. Any tips on how I can formulate that properly?
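One hedged way to get the currently-down hosts without transaction: take the latest state change per host and measure from it to now. A sketch:

index=myindex host=*
| rex "to state\s(?<mystate>\w+)"
| stats latest(mystate) as last_state, latest(_time) as last_change by host
| where last_state="DOWN"
| eval ongoing_duration=now()-last_change
| sort - ongoing_duration

This gives, for every host whose most recent transition was to DOWN, the number of seconds it has been down so far. If your extracted state values carry trailing spaces (as in your original search), either trim them with trim() or match them with the same trailing space in the where clause.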