Hi, I have a scenario where I want to calculate the duration between the first and last event. The thing is, these events can happen multiple times for the same session. The first event can happen multiple times, and every time it is the exact same thing, but I only want the transaction to start from the very first event so that we know the exact duration. Sample events below - see the last two events, where one says MatchPending and the other says MatchCompleted. What I want is to calculate the duration between the first event and the last event that says MatchCompleted.

2024-08-16 13:43:34,232|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:38,630|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchPending"
2024-08-16 13:43:50,516|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:57,630|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchPending"
2024-08-16 13:44:15,516|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Sending GET request to https://myapi.com/test
2024-08-16 13:43:50,510|catalina-exec-192|INFO|LoggingClientHttpRequestInterceptor|Response Received in 114 milliseconds "200 OK" response for GET request to https://myapi.com/test: "status":"MatchCompleted"

Any help is appreciated. Best Regards, Shashanlk
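A minimal SPL sketch of one approach, assuming the events carry a session identifier (called session_id below - it does not appear in the sample, and index=myindex is also a placeholder) and that the status strings are searchable in the raw event:

index=myindex ("Sending GET request" OR "MatchPending" OR "MatchCompleted")
| stats min(_time) as first_time
        max(eval(if(match(_raw, "MatchCompleted"), _time, null()))) as completed_time
        by session_id
| eval duration_sec = completed_time - first_time

Taking min(_time) makes the repeated first events harmless: only the earliest one anchors the duration, and only events containing MatchCompleted can set the end time.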
In 9.2.1, how do I change the width of multiselect inputs in Dashboard Studio?
Hi, I am looking to get the number of users per VLAN. For example, vlan=xxx is used by username=A, B, C, so I would have a table with VLAN = xxx and a user count of 3. Thanks
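A sketch, assuming the events already have vlan and username fields extracted (index=network_data is a placeholder; dc() counts each user once even if they appear in several events):

index=network_data vlan=* username=*
| stats dc(username) as user_count by vlan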
Hi, I need some help with the following JSON data:

ModifiedProperties: [
  {
    Name: Group.ObjectID
    NewValue: 111111-2222222-333333-444444
    OldValue:
  }
  {
    Name: Group.DisplayName
    NewValue: Group A
    OldValue:
  }
  {
    Name: Group.WellKnownObjectName
    NewValue:
    OldValue:
  }
]

I want to extract the second set of values for each event such that Group.DisplayName can become a field in itself, e.g. Group.DisplayName.NewValue=A, Group.DisplayName.OldValue=B. But right now the default extraction is doing something like this. How can I create KV pairs for Group.DisplayName within this JSON array? I tried a few combinations using spath but was not successful. Thank you
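One hedged way to sketch this, assuming the array sits at ModifiedProperties{} in the event JSON: expand each array element into its own row, extract Name/NewValue/OldValue from that element, then build the field names dynamically with eval's {field} syntax:

| spath path=ModifiedProperties{} output=prop
| mvexpand prop
| spath input=prop
| eval new_name = Name . ".NewValue", old_name = Name . ".OldValue"
| eval {new_name} = NewValue, {old_name} = OldValue
| fields - prop, Name, NewValue, OldValue, new_name, old_name

Because mvexpand turns one event into one row per array element, you may need a trailing stats values(*) as * by <some event id> to stitch the rows back into a single result per event.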
Hi Team, we have 30 panels in a dashboard and I want to add pagination, say 5 panels per page. Please help with how to proceed and what query to use. For example, under the 1st dot (page) I should view 5 panels, the 2nd page should have the next 5 panels, and so on.
Dear Splunkers, I would like to ask your advice on completing the following search result. My table checks for consecutive level-breach events in a window of 3 counts:

ACC  CR  count
0    0   1
0    0   2
0    0   3
1    1   1
1    0   2
1    0   3
2    1   1
3    1   2
4    1   3

If there is a level breach, the CR column changes to 1 and the ACC column increments to the next number. Now I would like to create an alert if 3 consecutive levels are breached, as shown in the bolded example (the last three rows). Can you suggest how to complete the query and display only the 3 consecutive results so that I can create an alert? Thank you
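Assuming the rows are in time order and CR is 1 on a breach and 0 otherwise, a 3-row sliding sum flags three breaches in a row; a minimal sketch to append to the existing search:

| streamstats window=3 sum(CR) as breaches_in_window
| where breaches_in_window = 3

The alert condition can then simply be "number of results > 0"; adding something like streamstats window=3 list(ACC) as breached_accs before the where would also carry along the ACC values of the three rows involved.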
Hi, I can't log in to my Splunk Enterprise account; I am getting this error: connection failure. And there is no way to recover the account. I need help, please.
Hello to everyone! I am in the process of trying to fetch vulnerability information from the National Vulnerability Database. I found an app that can do this task via API - the NVD-CVE-Fetcher-App. The app link is here: https://splunkbase.splunk.com/app/7121?ref=hub.metronlabs.com The problem is that using NAT isn't allowed in our organization, so I was forced to use a proxy. I tried to use a system proxy, but the application ignored the system setting and tried to access the API URL directly. So, two questions: 1. Has anyone tried to use the NVD-CVE-Fetcher-App in a proxy-access scenario? 2. Has anyone solved a similar task using another approach - for example, another app or a hand-written script?
Hello, I ran the following code -

from __future__ import print_function
import urllib.request, urllib.parse, urllib.error
import httplib2
from xml.dom import minidom

baseurl = '<url>'
userName = '<username>'
password = '<password>'
searchQuery = <query>

# Authenticate with server.
# Disable SSL cert validation. Splunk certs are self-signed.
serverContent = httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/auth/login', 'POST', headers={},
    body=urllib.parse.urlencode({'username': userName, 'password': password}))[1]
sessionKey = minidom.parseString(serverContent).getElementsByTagName('sessionKey')[0].childNodes[0].nodeValue

# Remove leading and trailing whitespace from the search
searchQuery = searchQuery.strip()

# If the query doesn't already start with the 'search' operator or another
# generating command (e.g. "| inputcsv"), then prepend "search " to it.
if not (searchQuery.startswith('search') or searchQuery.startswith("|")):
    searchQuery = 'search ' + searchQuery
print(searchQuery)

# Run the search.
# Again, disable SSL cert validation.
print(httplib2.Http(disable_ssl_certificate_validation=True).request(
    baseurl + '/services/search/jobs', 'POST',
    headers={'Authorization': 'Splunk %s' % sessionKey},
    body=urllib.parse.urlencode({'search': searchQuery}))[1])

I get this error - "TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond". Is my URL format wrong? Thanks
Hello, how can I get my eval case/like to match all values except a specific value? I have the below values for a field called rule_name:

MMT01_windows_brute_force
MMT02_linux_root_login
MMT03_Aws_guardduty_alert

How do I get eval to match everything except anything with Aws in the name? I need to use the wildcard % for the matching part, because there are many matches, but I want to exclude the AWS ones. I found a similar post here where the answer was to use AND with ! to exclude, but that syntax no longer seems to be supported.

| eval rule_type= case(like(rule_name,"MHE0%"),onprem,cloud)

Expected result: rule_type should cover MMT01 and MMT02 via a wildcard, and MMT03 should be considered cloud.
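One way to avoid NOT altogether is to put the exclusion case first, so everything with aws in the name is classified before the wildcard match runs; a sketch using the values from the post:

| eval rule_type = case(like(lower(rule_name), "%aws%"), "cloud",
                        like(rule_name, "MMT%"), "onprem")

case() evaluates its condition/value pairs in order and stops at the first match, which is what makes the "everything except AWS" logic work without negation.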
Hello, I send a GET request from Postman as follows -

curl -u <username> -k https://<url>.net:8089/services/jobs/export -d search="<query>"

Why does it fail with "Cloud Agent Error: Couldn't resolve host. Make sure the domain is publicly accessible or select a different agent."? A variation passes, but although I add "-d output_mode csv" at the end, I do not get any CSV. Where can I see the same result as I see inside Splunk (Enterprise), i.e. tabular output? Thanks
Hi, I have a table with dynamic fields, and some of these fields contain no value or NULL. How do I remove these fields when I don't know the field names beforehand? The field names are never the same, so I cannot simply do | fields - name1, name2, etc. Is there a way to remove every field containing no value from a table?
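A sketch of one common workaround, assuming the "no value" cells are empty strings or the literal text NULL: convert them to true nulls with foreach, after which table * only shows fields that still exist somewhere in the results:

| foreach * [ eval <<FIELD>> = if('<<FIELD>>' = "" OR '<<FIELD>>' = "NULL", null(), '<<FIELD>>') ]
| table *

A field that ends up null in every row is dropped from table * because it no longer exists; a field with at least one remaining value keeps its column.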
I am trying to ingest data from Cortex via API. The API works 100%, but I am getting the following script errors in splunkd.log. Also attached is the log from my partner's environment, where we need to complete the integration.

08-14-2024 10:30:27.459 +0200 ERROR ScriptRunner [12760 TcpChannelThread] - stderr from 'C:\Program Files\Splunk\bin\Python3.exe C:\Program Files\Splunk\bin\runScript.py execute':    return func(*args, **kwargs)
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\utils.py", line 153, in wrapper
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     return func(*args, **kwargs)
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\credentials.py", line 137, in get_password
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     f"Failed to get password of realm={self._realm}, user={user}."
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_paloalto#configs/conf-splunk_ta_paloalto_settings, user=proxy.
08-14-2024 10:30:28.269 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: .
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: WARNING:root:Run function: get_password failed: Traceback (most recent call last):
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\utils.py", line 153, in wrapper
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     return func(*args, **kwargs)
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:   File "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\splunk_ta_paloalto\aob_py3\solnlib\credentials.py", line 137, in get_password
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}:     f"Failed to get password of realm={self._realm}, user={user}."
08-14-2024 10:30:28.361 +0200 ERROR PersistentScript [20724 PersistentScriptIo] - From {"C:\Program Files\Splunk\bin\Python3.exe" "C:\Program Files\Splunk\etc\apps\Splunk_TA_paloalto\bin\Splunk_TA_paloalto_rh_settings.py" persistent}: solnlib.credentials.CredentialNotExistException: Failed to get password of realm=__REST_CREDENTIAL__#Splunk_TA_paloalto#configs/conf-splunk_ta_paloalto_settings, user=additional_parameters.

Please advise.

Palo Alto Cortex XDR
Palo Alto Networks Add-on for Splunk
Hi, we successfully configured a dashboard for UPS monitoring, and it was working fine with no issues. Since 01/08/2024, no data has been showing up in the tiles.

Checked the UF & services - all working with no issues; restarted the service, but the issue is not resolved. Checked the Splunk index and found the latest event is 17 days old; not sure what the problem is.

Could you please advise on the issue?
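As a first triage step, a sketch like this (the index name is a placeholder) shows when each host last sent data, which separates "forwarder stopped sending" from "dashboard search is wrong":

| tstats latest(_time) as last_event where index=ups_index by host, sourcetype
| eval last_event = strftime(last_event, "%Y-%m-%d %H:%M:%S")

If the latest event really is 17 days old, the next place to look is usually the forwarder side (splunkd.log on the UF) and the inputs/outputs configuration, rather than the dashboard itself.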
Hi all,

index=sky sourcetype=sky_trade_wss_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id"
| rex field=_raw "mx_status=\"(?<status>\X+)\", operation"
| rex field=_raw "operation=\"(?<operation>\X+)\", action"
| rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp"
| rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq"
| rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)"
| join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event"
    | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})"
    | rex field=_raw "Successfully processed event: (?<event_id>\X+), action"
    | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")]
| join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*"
    | rex "Trade Events (?<trades>.*)"
    | rex max_match=0 field=trades "(?<both_id>\d+:\d+)"
    | mvexpand both_id
    | rex field=both_id ":(?<sky_id>\d+)"
    | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"]
| rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})"
| rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})"
| eval booking_pnl_timestamp = booking_pnl_timestamp."+0800"
| eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z")
| eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z")
| search trade_id = "*"
| search sky_id = "*"
| search event_id = "*"
| search action = "*"
| search mx_status = "live"
| search operation = "*"
| table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity
| sort ep_timestamp
| join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Done incremental update"
    | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)"
    | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})"
    | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N")
    | table sky_id, catchup_updated_time, _raw, ]
| eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S")
| eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N")
| eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix
| eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N")
| eval distributor_latency = distributor_timestamp_unix - booking_timestamp_unix
| eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N")
| eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix
| eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix
| eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N")
| eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N")
| eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N")
| eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N")
| eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix
| table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time
| dedup sky_id
| sort booking_timestamp
| rex field=trade_id "^\w+ (?<dealnumber>\d+)$"
| join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log"
    ```Exclude Far Legs of Swap Trades for first Iteration of Dash``` NOT "<swap_leg>2</swap_leg>"
    ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated``` NOT "<status>"
    ```Exclude MM Deals``` NOT "<WSSMMTRADE>"
    | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>"
    | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>"
    | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>"
    | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>"
    | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China")
    | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
    | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time
    | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received"
        | rex "transactionId\=(?P<tid>.*?)\,"
        | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q")
        | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ]
    | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime
    | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q")
    | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q")
    | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix
    | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time
    | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent"
        | rex "nearTransactionId\=(?P<tid>.*?)\,"
        | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q")
        | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q")
        | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ]
    | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ]
| eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q")
| eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix
| eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix
| eval distributor_to_sky_latency = catchup_unix_time - CIMsendingTime_unix
| where len(CIMsendingTime) > 0
| eval cim_latency = round(cim_latency * 1000,0)
| eval distributor_latency = round(distributor_latency * 1000,0)
| eval ep_latency = round(ep_latency * 1000,0)
| eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0)
| eval mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0)
| eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0)
| table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, wss_to_sky_latency, cim_latency, distributor_latency, ep_latency, mq_to_sky_update_latency, distributor_to_sky_latency, mx_status, operation, action

This is my current search query, but I get more events and fewer statistics results over the last 24 hours than over the last 4 hours.
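Not a fix for the whole query, but one common cause of "fewer results over a longer time range" is join: each join subsearch is silently truncated (by default at around 50,000 events and a runtime limit), so a 24-hour window loses matches that a 4-hour window keeps. The usual remedy is to combine the sources in one search and stitch with stats instead of join; a minimal sketch of the pattern for the first join, assuming event_id is extractable on both sides:

(index=sky sourcetype=sky_trade_wss_timestamp) OR (index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event")
| rex field=_raw "Successfully processed event: (?<event_id>[^,]+), action"
| stats values(*) as * by event_id

stats has no subsearch row limit, so the results stay complete as the time range grows.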
Hi, let's say I have the sample data below, all being ingested to index="characters". How do I create two separate indexes, "superheroes" and "villains", such that "superheroes" contains only the events with archetype="superhero" (id=superman, batman) and "villains" contains only the event with archetype="villain" (id="joker")? The reasoning is that I want to set permissions on these indexes so only specific users can see each one (e.g. only people with the role "good guys" can see superhero data). I have tried summary indexing with the following query - scheduled the search and enabled summary indexing - but it doesn't capture the original fields in the data.

index=characters
| fields id, strengths, archetype
| where archetype="superhero"
| eventstats count as total_superheroes
| table id, strengths, archetype

Sample JSON data:

[
  {
    "id": "superman",
    "strengths": "super strength, flight, and heat vision",
    "archetype": "superhero"
  },
  {
    "id": "batman",
    "strengths": "exceptional martial arts skills, detective abilities, and psychic abilities",
    "archetype": "superhero"
  },
  {
    "id": "joker",
    "strengths": "cunning and unpredictable personality",
    "archetype": "villain"
  }
]
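A hedged sketch if the split must be done with scheduled searches: collect keeps the whole raw event (unlike table, which keeps only the listed columns), assuming the superheroes/villains indexes already exist and archetype is extracted at search time:

index=characters archetype="superhero"
| collect index=superheroes

(and a second scheduled search with archetype="villain" | collect index=villains). Note that the more usual solution for this requirement is index-time routing with props.conf/transforms.conf on the indexers, which sends each event to its index as it is ingested instead of copying it afterwards.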
Hi all, I'm a rookie user asking for help. I want to extract all values from one _raw event (a CLI command log, shown in the photo below). I want to get all values for these fields: Location, Card, Type, Mnemonic, Part Number, Serial Number, CLEI, Pmax(W), Imax(A). Can someone help me, please? Thank you very much.
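Only a guess, since the screenshot is not visible here, but if the CLI output is an aligned table with a header row, multikv may do most of the work - it splits each table row into its own result and names fields from the header (typically lowercased, with spaces turned into underscores):

<your search>
| multikv
| table location card type mnemonic part_number serial_number clei pmax_w imax_a

The field names in the table command are guesses at how multikv would normalize those headers; check the actual extracted names in the fields sidebar.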
Does anyone know how the Cluster Manager populates the dmc_forwarder_assets lookup CSV? I have an issue where my UF forwarder reports show hosts whose os value contains repeated entries of Windows, hundreds or even thousands of times. I'd like to check how this data table is being populated by the CM.
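For reference, a sketch of roughly where that data comes from (names may differ by version, so treat this as an assumption to verify): the Monitoring Console maintains the lookup with a scheduled search, typically named something like "DMC Forwarder - Build Asset Table", which aggregates forwarder connection metrics along these lines and writes the result with outputlookup:

index=_internal source=*metrics.log* group=tcpin_connections
| stats values(os) as os latest(version) as version by guid, hostname

If that build search appends to or merges with the previous lookup contents rather than deduplicating, repeated os values can accumulate over time - worth inspecting the scheduled search's outputlookup step on the CM.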
I can see the statuses below for scheduled savedsearches:

status="deferred"
status="continued"

What is the difference between the two, and which one will later end up skipped (status="skipped")? Is there a "failed" status as well?
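Both statuses are visible in the scheduler's own logs, so you can count and compare them directly:

index=_internal sourcetype=scheduler
| stats count by status

Running this over a day shows every status value your version actually emits (success, skipped, deferred, continued, and so on), and filtering on savedsearch_name narrows it to a single search.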
Hi, are there plans to upgrade the HTML dashboards to be compatible with Splunk 9.1? https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Updating_deprecated_HTML_dashboards