
All Posts

Rather than using the subsearch syntax with | append [ | inputlookup ... ], use the native | inputlookup append=t, which has no subsearch limitations. You also don't need the redundant fields command, since count is dropped by the later stats anyway, so:

index=EDR
| stats count
| eval Status=if((count > "0"),"Compliant","Not Compliant"), Solution="EDR"
| inputlookup append=t compliance.csv
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv
It depends on the lookup type. If your lookup is a CSV-based one, you can't update it in place. The only thing you can do, as @gcusello showed, is to overwrite the whole lookup with the updated contents.
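For illustration, a minimal sketch of that overwrite pattern, assuming a hypothetical lookup named mylookup.csv with a last_seen field (neither comes from this thread):

| inputlookup mylookup.csv       ``` read the current contents of the lookup ```
| eval last_seen=now()           ``` apply whatever changes are needed ```
| outputlookup mylookup.csv      ``` write everything back, replacing the file ```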
Hi @whrg, good for you, see you next time! Let us know if we can help you further, or please accept one answer for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @MoeTaher, yes, correct (I'm sorry!):

index=EDR
| stats count
| eval Status=if((count > "0"),"Compliant","Not Compliant"), Solution="EDR"
| fields - count
| append [ | inputlookup compliance.csv | fields Solution Status ]
| stats first(Status) AS Status BY Solution
| outputlookup compliance.csv

Ciao. Giuseppe
Thanks @gcusello. How do I replace join with stats, given that I am taking data from other tables?
Hi @eherbst63, good for you, see you next time! Let us know if we can help you further, or please accept one answer (also your own) for the other people of the Community. Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated by all the contributors.
Hi @wm, probably because you're using many join commands, and this command uses subsearches: subsearches are limited to 50,000 results, so the matches between the subsearches are probably fewer than they should be because results are being dropped. Splunk isn't a database, so you cannot use the approach you would normally use in a SQL query; in other words, avoid the join command and correlate searches using the stats command instead. In addition, with join you surely have a very slow search. Search the Community and you'll find many examples of replacing join with stats. Ciao. Giuseppe
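For illustration, a minimal sketch of that join-to-stats pattern, using hypothetical indexes orders and shipments that share an order_id field (none of these names come from this thread). A correlation written with a subsearch, such as:

index=orders
| join type=left order_id [ search index=shipments | fields order_id ship_time ]

can usually be rewritten without any subsearch limits as:

(index=orders) OR (index=shipments)
| stats values(order_time) AS order_time values(ship_time) AS ship_time BY order_id

If you only want rows that exist in the first index, filter afterwards, e.g. | where isnotnull(order_time).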
Hi all, index=sky sourcetype=sky_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | sort ep_timestamp | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Done incremental update" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - booking_timestamp_unix | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp_unix = strptime(distributor_timestamp, 
"%Y/%m/%d %H:%M:%S.%4N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | dedup sky_id | sort booking_timestamp | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_to_sky_latency = catchup_unix_time - CIMsendingTime_unix | where len(CIMsendingTime) > 0 | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_latency = round(ep_latency * 1000,0) | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval 
mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, wss_to_sky_latency, cim_latency, distributor_latency, ep_latency, mq_to_sky_update_latency, distributor_to_sky_latency, mx_status, operation, action This is my current search query, but over the last 24 hours I get more events yet fewer statistics results than over the last 4 hours.
I am getting the same error. Is there a known fix for this issue? Regards
Hi, let's say I have the sample data below, all being ingested into index="characters". How do I create two separate sub-indexes, "superheroes" and "villains", such that for events where archetype="superhero" the index "superheroes" contains only the events with id=superman, batman, and the index "villains" contains only the event with id="joker" (archetype="villain")? The reasoning is that I want to set permissions on the sub-indexes so only specific users can see each index (e.g. only people with the role "good guys" can see superhero data). I have tried summary indexing with the following query, scheduled the search, and enabled summary indexing, but it doesn't capture the original fields in the data.

index=characters
| fields id, strengths, archetype
| where archetype="superhero"
| eventstats count as total_superheroes
| table id, strengths, archetype

Sample JSON data:

[
  {
    "id": "superman",
    "strengths": "super strength, flight, and heat vision",
    "archetype": "superhero"
  },
  {
    "id": "batman",
    "strengths": "exceptional martial arts skills, detective abilities, and psychic abilities",
    "archetype": "superhero"
  },
  {
    "id": "joker",
    "strengths": "cunning and unpredictable personality",
    "archetype": "villain"
  }
]
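For reference, a minimal sketch of writing matching events into a separate index from a scheduled search with the collect command, under the assumption that an index named superheroes has already been created; this is only one possible approach, not the poster's working setup:

index=characters archetype="superhero"
| fields id, strengths, archetype
| collect index=superheroes    ``` writes the results, with their fields, as events in the target index ```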
See transaction. Because the sample dataset is small, and the events do not start at the top of a cycle, I wanted to show results from incomplete transactions. You need to analyze real data to see which options are right for your use case.
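For context, a minimal sketch of that kind of search; the index, the session_id field, and the boundary strings are placeholders rather than anything from this thread. keepevicted=true keeps transactions that were closed without meeting the end condition, which is how incomplete transactions show up in the results, and closed_txn marks whether each one completed:

index=app_logs
| transaction session_id startswith="login" endswith="logout" keepevicted=true
| eval state=if(closed_txn=1, "complete", "incomplete")   ``` closed_txn=1 means the end condition was met ```
| stats count BY state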
Hi all: I'm a rookie user asking for help. I want to extract all the values from one _raw event (a CLI command log, as in the photo below). I want all the values for these fields: Location, Card, Type, Mnemonic, Part Number, Serial Number, CLEI, Pmax(W), Imax(A). Can someone help me, please? Thank you very much.
Hi @yuanliu, may I know what keepevicted=t does, and what happens if we don't use it?
Does anyone know how the Cluster Manager populates the dmc_forwarder_assets input lookup CSV table? I have an issue where my UF forwarder reports show hosts whose os field contains repeated entries of Windows, hundreds and even thousands of times. I'd like to check how this table is being populated by the CM.
Amazing! Thank you. Yes I misunderstood macros.
Thank you @kiran_panchavat for your response. However, this may not be useful, as we cannot install Splunk inside the container. We are not monitoring the container itself or the Docker logs. The logs that need to be monitored are from some applications installed inside the container. As mentioned, we have around 5-6 containers.
I can see the statuses below for the scheduled savedsearches:
status="deferred"
status="continued"
What is the difference between the two, and which one will get skipped later on (status="skipped")? Is there a "failed" status as well?
Oh, this is not a question, this is a solution, I see. Thanks for sharing.
8/2024: I get this message with Linux Splunk v9.3.0. It started appearing after I relocated $SPLUNK_DB and freed up the space under $SPLUNK_HOME/var/lib/splunk/. Update: the message stopped after splunkd re-created all the 2-byte index .dat files under the old location $SPLUNK_HOME/var/lib/splunk/. Maybe I should have used a symbolic link to relocate the index DB instead of defining a new DB location in splunk-launch.conf.
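For reference, a minimal sketch of the two relocation approaches mentioned; the target path /data/splunkdb is illustrative, not the poster's actual layout, and Splunk should be stopped before making either change:

# Option 1: point SPLUNK_DB at the new location in $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_DB=/data/splunkdb

# Option 2: keep the default configuration and relocate via a symbolic link
mv $SPLUNK_HOME/var/lib/splunk /data/splunkdb
ln -s /data/splunkdb $SPLUNK_HOME/var/lib/splunk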
Hi, are there plans to upgrade the HTML to be compatible with Splunk 9.1? https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Updating_deprecated_HTML_dashboards