All Posts


It's quite random, but here is one:

2024-12-27 23:05:09.917, system="murex", id="645437844", sky_id="645437844", trade_id="31791027", event_id="100038914", mx_status="live", operation="nooperation", action="fixing", tradebooking_sgp="2024/12/27 08:42:39.0000", eventtime_sgp="2024/12/27 08:42:33.6400", sky_to_mq_latency="-5.-360", portfolio_name="A ZZ AUD LTR", portfolio_entity="ANZBG MELB", trade_type="BasisSwap"

31791027;LIVE;17500000.00000000;AUD;IRD;IRS;;A ZZ AUD LTR;X_GCM_IRD_AUCNZ

And the search:

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| eval trade_id = replace(trade_id, "\"", "")
| rename trade_id as NB
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>[^;]+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

For that NB I get all the columns, but TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO are empty.
Ahhhh... The "table" command is a transforming command. So you can't use it in either search. Use "fields" instead.
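For what it's worth, here is a sketch of how that substitution could look on the multisearch from this thread - each branch keeps only streaming commands (rex, eval, rename, fields), with the closing table swapped for fields. The field lists come from the original searches; the dedup NB has been left out here, since the final stats by NB still returns one row per NB (with multivalue fields if duplicate events disagree). Treat it as an untested sketch, not a verified fix:

| multisearch
    [ search index=sky sourcetype=sky_trade_murex_timestamp
    | rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
    | rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
    | rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
    | rex field=_raw "event_id=\"(?<event_id>\d+)\""
    | rex field=_raw "operation=\"(?<operation>[^\"]+)\""
    | rex field=_raw "action=\"(?<action>[^\"]+)\""
    | rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
    | rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
    | rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
    | rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
    | rename trade_id as NB
    | eval NB = tostring(trim(NB))
    | fields sky_id NB event_id mx_status operation action tradebooking_sgp portfolio_name portfolio_entity trade_type ]
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | fields NB TRN_STATUS NOMINAL CURRENCY TRN_FMLY TRN_GRP TRN_TYPE BPFOLIO SPFOLIO ]
| stats values(*) as * by NB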
If there are results for the trade_id/NB in both searches, then it is possible that the rex has not extracted the fields as you expect. Please share the two events which don't match
Thank you for your quick responses. I have increased the frozen time period on my indexer, and I am able to adjust it according to my requirements.
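For anyone landing here later: the frozen time period is set per index with frozenTimePeriodInSecs in indexes.conf on the indexer. A minimal sketch - the index name and the retention value below are placeholders, not taken from this thread:

# indexes.conf on the indexer ("my_index" and the value are illustrative only)
[my_index]
# Keep data for roughly one year (in seconds) before buckets are frozen
frozenTimePeriodInSecs = 31536000

A restart (or a rolling restart of the indexer tier) is needed for the change to take effect.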
Thanks!! I get this error though: Error in 'multisearch' command: Multisearch subsearches might only contain purely streaming operations (subsearch 1 contains a non-streaming command).
It doesn't seem to include any "start" or "end" time - it's just one timestamp. So you need to think through the exact logic behind your request.
No problem. But why is it that some rows are populated and some are not, i.e. some match and some don't? I renamed trade_id as NB and then used join type=left NB, so it should join the two together - why does it not work for some rows or columns even though the value clearly matches in both searches?
Assuming both of your joined searches produce proper results (it's up to you to check that - we don't know), the easiest and most straightforward way to avoid join altogether is to use multisearch to run them in parallel and then stats the results. This way you're not prone to hitting join's limits, and since your searches are streaming you can use multisearch instead of append, which has subsearch limits you might otherwise hit.

| multisearch
    [ search index=sky sourcetype=sky_trade_murex_timestamp
    | rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
    | rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
    | rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
    | rex field=_raw "event_id=\"(?<event_id>\d+)\""
    | rex field=_raw "operation=\"(?<operation>[^\"]+)\""
    | rex field=_raw "action=\"(?<action>[^\"]+)\""
    | rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
    | rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
    | rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
    | rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
    | rename trade_id as NB
    | dedup NB
    | eval NB = tostring(trim(NB))
    | table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type ]
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO ]
| stats values(*) as * by NB
Positive lookahead doesn't perform well in Splunk and, generally, is unnecessary.
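An illustrative pair of equivalent extractions (not taken from this thread, just a sketch of the point):

With a positive lookahead:

| rex field=_raw "trade_id=\"(?<trade_id>\d+)(?=\")"

Without it - the closing quote is simply matched outside the capture group:

| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""

Both return the same trade_id; the second avoids the extra assertion and is easier to read.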
Sorry I miscounted, it does look right - the issue here is that the trade_id does not match the first field in the mx_to_sky event
Sorry, how do I rectify it? 32265376;DEAD;3887.00000000;XAU;CURR;FXD;FXD;CM TR GLD AUS;X_CMTR XAU SWAP Did you mean this?
Your regex assumes (insists!) that the event has 9 fields separated by (8) semi-colons - your sample data has only 8 fields separated by 7 semi-colons.
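If it's ever unclear how many delimited fields the events actually contain, a quick diagnostic (just a sketch, run over the mx_to_sky data from this thread) is to split the raw event on the delimiter and count the pieces:

index=sky sourcetype=mx_to_sky
| eval field_count = mvcount(split(_raw, ";"))
| stats count by field_count

Events with fewer pieces than the rex pattern requires will simply get no fields extracted from that rex.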
Thank you for your reply, and my apologies for how I described the problem! Our sample data is as follows:

2024-12-12 00:30:12, "0699075634", "刘志强", "物流部", "是"
2024-12-12 08:30:14, "0699075634", "刘志强", "物流部", "是"
2024-12-12 11:30:12, "0699075634", "刘志强", "物流部", "是"
2024-12-13 15:30:55, "0699075634", "刘志强", "物流部", "是"
2024-12-13 00:30:12, "0699075634", "刘志强", "物流部", "是"
2024-12-14 19:30:30, "0699075634", "刘志强", "物流部", "是"
2024-12-14 22:30:12, "0699075634", "刘志强", "物流部", "是"

The field headers are: opr_time oprt_user_acct oprt_user_name blng_dept_name is_cont_sens_acct
Since the first part is just determining values for earliest and latest, you might be able to avoid map, like this:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
    [search index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
    | stats earliest(_time) as earliest_time latest(_time) as latest_time
    | addinfo
    | table info_min_time info_max_time earliest_time latest_time
    | eval earliest_time=strftime(earliest_time,"%F 00:00:00")
    | eval earliest_time=strptime(earliest_time,"%F %T")
    | eval earliest_time=round(earliest_time)
    | eval searchEarliestTime2=if(info_min_time == "0.000", earliest_time, info_min_time)
    | eval searchLatestTime2=if(info_max_time="+Infinity", relative_time(latest_time,"+1d"), info_max_time)
    | eval earliest=mvrange(searchEarliestTime2,searchLatestTime2, "1d")
    | mvexpand earliest
    | eval latest=relative_time(earliest,"+7d")
    | where latest <= searchLatestTime2
    | eval latest=round(latest)
    | fields earliest latest]
| dedup day oprt_user_name blng_dept_name oprt_user_acct
| stats count as "fwcishu" by day oprt_user_name blng_dept_name oprt_user_acct
| eval a=$a$
| eval b=$b$
| stats count as "day_count", values(day) as "qdate", max(day) as "alert_date" by a b oprt_user_name, oprt_user_acct
| where day_count > 2
| eval alert_date=strptime(alert_date,"%F")
| eval alert_date=relative_time(alert_date,"+1d")
| eval alert_date=strftime(alert_date, "%F")
| table a b oprt_user_name oprt_user_acct day_count qdate alert_date
And the problem is that these columns are empty for some rows and populated for others. For the empty ones, I clearly checked that the NB matches in both searches: TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO
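One way to rule out invisible differences in NB (stray whitespace, a different length, a lookalike character) is to pull both raw events for a single failing NB and compare the extracted values side by side. A rough diagnostic sketch, using 31791027 purely as an example value from this thread:

index=sky (sourcetype=sky_trade_murex_timestamp OR sourcetype=mx_to_sky) 31791027
| rex field=_raw "trade_id=\"(?<NB>\d+)\""
| rex field=_raw "^(?<NB_csv>[^;]+);"
| eval NB=coalesce(NB, NB_csv)
| eval NB_len=len(NB)
| table sourcetype NB NB_len _raw

If NB_len differs between the two sourcetypes, the join key isn't really identical even though it looks the same on screen.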
Hi, I have a query joining 2 searches with a left join. It's matching some rows and not matching others, although the column I join on is clearly present in both searches.

index=sky sourcetype=sky_trade_murex_timestamp
| rex field=_raw "trade_id=\"(?<trade_id>\d+)\""
| rex field=_raw "mx_status=\"(?<mx_status>[^\"]+)\""
| rex field=_raw "sky_id=\"(?<sky_id>\d+)\""
| rex field=_raw "event_id=\"(?<event_id>\d+)\""
| rex field=_raw "operation=\"(?<operation>[^\"]+)\""
| rex field=_raw "action=\"(?<action>[^\"]+)\""
| rex field=_raw "tradebooking_sgp=\"(?<tradebooking_sgp>[^\"]+)\""
| rex field=_raw "portfolio_name=\"(?<portfolio_name>[^\"]+)\""
| rex field=_raw "portfolio_entity=\"(?<portfolio_entity>[^\"]+)\""
| rex field=_raw "trade_type=\"(?<trade_type>[^\"]+)\""
| rename trade_id as NB
| dedup NB
| eval NB = tostring(trim(NB))
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type
| join type=left NB
    [ search index=sky sourcetype=mx_to_sky
    | rex field=_raw "(?<NB>\d+);(?<TRN_STATUS>[^;]+);(?<NOMINAL>[^;]+);(?<CURRENCY>[^;]+);(?<TRN_FMLY>[^;]+);(?<TRN_GRP>[^;]+);(?<TRN_TYPE>[^;]*);(?<BPFOLIO>[^;]*);(?<SPFOLIO>[^;]*)"
    | eval NB = tostring(trim(NB))
    | table TRN_STATUS, NB, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO]
| table sky_id, NB, event_id, mx_status, operation, action, tradebooking_sgp, portfolio_name, portfolio_entity, trade_type, TRN_STATUS, NOMINAL, CURRENCY, TRN_FMLY, TRN_GRP, TRN_TYPE, BPFOLIO, SPFOLIO

The above is my search, and the raw data is:

Time: 27/12/2024 17:05:39.000
Event: 32265376;DEAD;3887.00000000;XAU;CURR;FXD;FXD;CM TR GLD AUS;X_CMTR XAU SWAP
host = APPSG002SIN0117
source = D:\SkyNet\data\mx_trade_report\MX2_TRADE_STATUS_20241227_200037.csv
sourcetype = mx_to_sky

Time: 27/12/2024 18:05:36.651
Event: 2024-12-27 18:05:36.651, system="murex", id="645131777", sky_id="645131777", trade_id="32265483", event_id="100023788", mx_status="DEAD", operation="NETTING", action="insertion", tradebooking_sgp="2024/12/26 01:02:01.0000", eventtime_sgp="2024/12/26 01:01:51.7630", sky_to_mq_latency="-9.-237", portfolio_name="I CREDIT INC", portfolio_entity="ANZSEC INC", trade_type="BondTrade"
host = APPSG002SIN0032
source = sky_trade_murex_timestamp
sourcetype = sky_trade_murex_timestamp
It appears you have multiple stats for the same transaction in the event. Try using mvdedup:

| spath
| eval date=strftime(_time,"%m-%d %k:%M")
| table date *.pct2ResTime
| foreach *.pct2ResTime
    [| eval <<FIELD>> = mvdedup('<<FIELD>>')]
| untable date transaction pct2ResTime
| eval "Transaction Name"=mvindex(split(transaction,"."),0)
| xyseries "Transaction Name" date pct2ResTime
This seems to be different from your previous description. Counting is one thing, listing sessions is another. Furthermore, we don't know your data.
Thank you, but the client wants to obtain the dimensions every 7 days, with approximately 1,200 result sets. The output needs to include: start time, end time, username, department, number of days visited, multi-value query time, and alarm time.
You're overcomplicating your search. If you want to calculate how many days during a week your users connected to a service, there are probably several ways to go about it. The easiest and most straightforward would probably be to

| bin _time span=1d

so that all visits during the same day get the same timestamp (the alternative would be to use strftime). Then you need to count the distinct days in each week for each user, along the lines of

| stats dc(_time) by user _time span=1d

(the alternative is the timechart command).
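If it helps, here is a rough sketch combining those two steps into a per-week distinct-day count. The index, sourcetype, field names, and the "> 2" threshold are borrowed from earlier posts in this thread, so they're assumptions about the real data rather than a tested search:

index=edwapp sourcetype=ygttest is_cont_sens_acct="是"
| bin _time span=1d
| eval visit_day=_time
| bin _time span=1w
| stats dc(visit_day) as days_visited by _time oprt_user_acct oprt_user_name blng_dept_name
| where days_visited > 2

Note that span=1w snaps to Splunk's default week boundary; if the 7-day windows need to start from an arbitrary date instead, the mvrange/mvexpand approach from the other reply is the closer fit.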