All Posts


Hi all, index=sky sourcetype=sky_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | sort ep_timestamp | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Done incremental update" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - booking_timestamp_unix | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp_unix = strptime(distributor_timestamp, 
"%Y/%m/%d %H:%M:%S.%4N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | dedup sky_id | sort booking_timestamp | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_to_sky_latency = catchup_unix_time - CIMsendingTime_unix | where len(CIMsendingTime) > 0 | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_latency = round(ep_latency * 1000,0) | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval 
mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, wss_to_sky_latency, cim_latency, distributor_latency, ep_latency, mq_to_sky_update_latency, distributor_to_sky_latency, mx_status, operation, action

That is my current search query, but when I run it over the last 24 hours I get more events yet fewer statistics results than when I run it over the last 4 hours.
I am getting the same error. Is there a known fix for this issue? Regards
Hi, let's say I have the sample data below, all being ingested to index="characters". How do I create two separate sub-indexes, "superheroes" and "villains", such that the "superheroes" index contains only the events where archetype="superhero" (id=superman, batman) and the "villains" index contains only the event where archetype="villain" (id=joker)? The reason is that I want to set permissions on the sub-indexes so only specific users can see each index (e.g. only people with the role "good guys" can see superhero data).

I have tried summary indexing with the following query, scheduled the search, and enabled summary indexing, but it doesn't capture the original fields in the data.

index=characters | fields id, strengths, archetype | where archetype="superhero" | eventstats count as total_superheroes | table id, strengths, archetype

Sample JSON data:

[
  { "id": "superman", "strengths": "super strength, flight, and heat vision", "archetype": "superhero" },
  { "id": "batman", "strengths": "exceptional martial arts skills, detective abilities, and psychic abilities", "archetype": "superhero" },
  { "id": "joker", "strengths": "cunning and unpredictable personality", "archetype": "villain" }
]
See the transaction command.  Because the sample dataset is small, and the events do not start at the top of a cycle, I wanted to show results from incomplete transactions.  You need to analyze real data to see which options are right for your use case.
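For illustration only (the index, sourcetype, and start/end markers below are placeholders, not from your data): keepevicted=true keeps those incomplete (evicted) transactions in the output, and the closed_txn field tells you which ones they are.

index=your_index sourcetype=your_sourcetype
| transaction host startswith="cycle start" endswith="cycle end" keepevicted=true
| eval txn_state=if(closed_txn=1, "complete", "incomplete (evicted)")
| table _time host duration eventcount txn_state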
Hi all: I'm a rookie user asking for help. I want to extract all the values from one _raw event (a CLI command log, as in the photo below). I want to get all the values for these fields: Location, Card, Type, Mnemonic, Part Number, Serial Number, CLEI, Pmax(W), Imax(A). Can someone help me please? Thank you very much.
Hi @yuanliu , may I know what keepevicted=t does and what happens if we don't use it?
Does anyone know how the Cluster Manager populates the dmc_forwarder_assets input lookup CSV table? I have an issue where my UF forwarder reports show hosts whose os field contains the value Windows repeated hundreds or even thousands of times. I'd like to check how this data table is being populated by the CM.
Amazing! Thank you. Yes I misunderstood macros.
Thank you @kiran_panchavat for your response. However, this may not be useful, as we cannot install Splunk inside the container. We are not monitoring the container itself or the Docker logs; the logs that need to be monitored are from applications installed inside the container. As mentioned, we have around 5-6 containers.
I can see the statuses below for my scheduled savedsearches:

status="deferred"
status="continued"

What is the difference between the two, and which one will get skipped later on (status="skipped")? Is there a "failed" status as well?
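For context, I am seeing these statuses with a search along these lines (assuming the standard scheduler logs in _internal):

index=_internal sourcetype=scheduler savedsearch_name=*
| stats count by status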
Oh, this is not a question; this is a solution, I see. Thanks for sharing.
8/2024: I get this message with Linux Splunk v9.3.0. It started appearing after I relocated $SPLUNK_DB and freed up the space under $SPLUNK_HOME/var/lib/splunk/.

Update: the message stopped after splunkd re-created all the 2-byte index .dat files under the old location, $SPLUNK_HOME/var/lib/splunk/. Maybe I should have used a symbolic link to relocate the index DB instead of defining a new DB location in splunk-launch.conf.
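For reference, this is roughly what I mean by the two approaches (the paths here are examples, not my actual layout):

# what I did: point SPLUNK_DB at the new volume in $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/data/splunk_indexes

# what I'm wondering about: keep the default path and symlink it instead
# (with splunkd stopped and the data already moved)
ln -s /data/splunk_indexes /opt/splunk/var/lib/splunk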
Hi, are there plans to upgrade the HTML dashboards to be compatible with Splunk 9.1?

https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Updating_deprecated_HTML_dashboards
@ITWhisperer This is what I imagine it should look like, but I'm not sure if there is a way to add a condition like "reset_on_change= if (status="UP", 1, 0)" to streamstats for this command, or a workaround?

| bucket span=1m _time
| eval status_change=if(status="DOWN",1,0)
| streamstats sum(status_change) as down_count reset_on_change= if (status="UP", 1, 0)
| eval is_alert=if(down_count >=5 AND status="DOWN",1,0)
| where is_alert=1
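I also noticed that streamstats documents reset_before/reset_after options, so maybe something like this sketch could work instead (I have not verified the exact quoting of the reset expression):

| bucket span=1m _time
| eval is_down=if(status="DOWN",1,0)
| streamstats reset_after="("is_down=0")" sum(is_down) as down_count
| where down_count>=5 AND is_down=1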
@ITWhisperer I want an alert if there has been a period of at least 5 minutes of Status being "Down", checking every 1 minute; if it is interrupted by a Status = Up, then the count resets and it will not alert, regardless of the event counts.
Your data does not match your description - the Status field appears to be either "up" or "Down", not "true". Because of this, it is not clear whether you want an alert if there has been a period of at least 5 minutes of Status being "Down", or of Status being "up", anywhere within the time period of the search. Please clarify your requirement.
I get this too.  In splunkd.log, we see the shutdown process, but then it just... doesn't shut down... until it times out. Looks like the shutdown process completes, but the HttpPubSubConnection keeps going.

Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_Tailing"
08-15-2024 22:27:57.171 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="TailingProcessor"
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182482 Shutdown] - Will reconfigure input.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182482 Shutdown] - Calling addFromAnywhere in TailWatcher=0x7f4e53dfda10.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Shutting down with TailingShutdownActor=0x7f4e77429300 and TailWatcher=0x7f4e53dfda10.
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Pausing TailReader module...
08-15-2024 22:27:57.171 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 0 to 1 (pseudoPause).
08-15-2024 22:27:57.171 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 0 to 1 (pseudoPause).
08-15-2024 22:27:57.171 +0000 INFO TailingProcessor [182712 MainTailingThread] - Removing TailWatcher from eventloop...
08-15-2024 22:27:57.176 +0000 INFO TailingProcessor [182712 MainTailingThread] - ...removed.
08-15-2024 22:27:57.176 +0000 INFO TailingProcessor [182712 MainTailingThread] - Eventloop terminated successfully.
08-15-2024 22:27:57.177 +0000 INFO TailingProcessor [182712 MainTailingThread] - Signaling shutdown complete.
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 1 to 2 (signalShutdown).
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - Shutting down batch-reader
08-15-2024 22:27:57.177 +0000 INFO TailReader [182712 MainTailingThread] - State transitioning from 1 to 2 (signalShutdown).
08-15-2024 22:27:57.177 +0000 INFO Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_IdataDO_Collector"
08-15-2024 22:27:57.177 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="IdataDO_Collector"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down level="ShutdownLevel_PeerManager"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="BundleStatusManager"
08-15-2024 22:27:57.178 +0000 INFO Shutdown [182482 Shutdown] - shutting down name="DistributedPeerManager"
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182624 TcpPQReaderThread] - TcpInput queue shut down cleanly.
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182624 TcpPQReaderThread] - Reader thread stopped.
08-15-2024 22:27:57.692 +0000 INFO TcpInputProc [182623 TcpListener] - TCP connection cleanup complete
08-15-2024 22:28:52.001 +0000 INFO HttpPubSubConnection .....
... ... INFO IndexProcessor [199494 MainThread] - handleSignal : Disabling streaming searches.

Splunk continues to write log lines from HttpPubSubConnection - Running phone... After the Shutdown, nothing else shows up in the logs.  I re-ran "./splunk stop" in another session, and it finally logged one more line and actually stopped.
I have a search query. If the Status field is true for more than 5 minutes, I need to trigger an alert, no matter the event count result; if it is within the timeframe, then fire. Maybe even have it search every 1 minute.

For example, this should not fire an alert because it recovered within the 5 minutes:
1:00 Status = Down (event result count X5)
1:03 Status = up
1:07 Status = Down (event count X3)
1:10 Status = up
1:13 Status = up
1:16 Status = up

For example, this should fire an alert:
1:00 Status = Down (event result count X1)
1:03 Status = Down (event result count X1)
1:07 Status = Down (event result count X1)
1:10 Status = up
1:13 Status = up
1:16 Status = up
Not a search head limit, but an ingestion limit.  If you look at raw events, you'll probably see one JSON document broken into multiple "events".  The solution is in props.conf (or use Splunk Web to set MAX_EVENTS).  Good thing you noticed line numbers.  It took me like 2 years.  See my post in Getting Data In.
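A minimal sketch of what that props.conf change can look like (the sourcetype name is a placeholder for yours, and the values depend on how large your JSON documents get):

# props.conf on the indexer / heavy forwarder that parses this sourcetype
[your_json_sourcetype]
# maximum number of lines merged into one event (default 256);
# a ~959-line JSON document gets split unless this is raised
MAX_EVENTS = 2000
# maximum bytes per event (default 10000); very large events may also need this raised
TRUNCATE = 100000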
@yuanliu , I am not running any complex query. With the basic search, when I hover my mouse over the field of interest, "LogController_LogMerticsAsync_request.loggerData{}.adType", I am only getting the top 3 values instead of the 5 values you provided in the table. The JSON event I provided is truncated; the actual event is around 959 lines of JSON. So is there any limit setting on the search head that prevents it from analyzing the whole event?