index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF | rename msg.OM_Body.header.transactionId as transactionId | eval lenth=len(transactionId) |sort 1000000 _time | dedup transactionId _time | search lenth=40 | rename _time as Time1 | eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S") | stats count by Time1 transactionId Request_time | appendcols [| search index=hum_stg_app earliest=-30d fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}" | rename "msg.TransactionId" as transactionId_request|sort 1000000 _time | dedup transactionId_request _time |eval Time=case(like(_raw,"%fcr-np-sms-gateway%"),_time) | eval lenth=len(transactionId_request) | search lenth=40 | dedup transactionId_request | stats count by transactionId_request Time ] | eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S") | eval Time_dif=Time-Time1 | eval Time_diff=(Time_dif)/3600 | fields transactionId transactionId_request Request_time Transaction_Completed_time count Time_diff Request_time Time Time1
# Getting the wrong value in Transaction_Completed_time.
Let this be a lesson for all who ask questions: illustrate/explain your data (anonymized as needed) and your desired result, and explain the logic connecting the two in plain language without SPL. The SPL should come after all the explanations, followed by the actual result from that SPL, then an explanation of why that result differs from the desired one if it is not painfully obvious.
Second, posting SPL without formatting discourages volunteers. Third, SPL (and raw data) is best presented in a code box. Let me help so other volunteers do not have to do the hard work.
index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification"
"msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| rename _time as Time1
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
| stats count by Time1 transactionId Request_time
| appendcols
[| search index=hum_stg_app earliest=-30d fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"
| rename "msg.TransactionId" as transactionId_request
| sort 1000000 _time
| dedup transactionId_request _time
| eval Time=case(like(_raw,"%fcr-np-sms-gateway%"),_time)
| eval lenth=len(transactionId_request)
| search lenth=40
| dedup transactionId_request
| stats count by transactionId_request Time ]
| eval Transaction_Completed_time=strftime(Time,"%y-%m-%d %H:%M:%S")
| eval Time_dif=Time-Time1
| eval Time_diff=(Time_dif)/3600
| fields transactionId transactionId_request Request_time Transaction_Completed_time count Time_diff Request_time Time Time1
I took the pain to reverse engineer your intentions. One thing I cannot understand is why you expect appendcols not to misalign transactionId between request and response. (I am quite convinced that Time_diff is only meaningful when transactionId and transactionId_request refer to the same transaction.) Additionally, a subsearch over the same dataset should be used only as a last resort; this type of transaction-based calculation does not warrant it.
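To see why positional column-binding goes wrong, here is a small illustration in plain Python (not Splunk code; the transaction ids and times are made up). appendcols pairs rows by position, like zip(), so any difference in row count or ordering between the two result sets pairs unrelated transactions together, while a key-based lookup pairs them correctly:

```python
# Hypothetical data: (transactionId, epoch time) pairs.
requests = [("tx_a", 100), ("tx_b", 110), ("tx_c", 120)]  # request events
completions = [("tx_c", 125), ("tx_a", 104)]              # different order, one id missing

# Positional pairing (what appendcols does): zip rows by index only.
paired = list(zip(requests, completions))
# tx_a's request ends up paired with tx_c's completion -- the ids do not match.
mismatched = [(req[0], comp[0]) for req, comp in paired if req[0] != comp[0]]

# Key-based pairing (what a stats by transactionId achieves): match on the id.
by_id = dict(completions)
matched = [(tid, by_id[tid] - t) for tid, t in requests if tid in by_id]
```

With the positional pairing every row is mismatched, whereas the key-based version computes a correct per-transaction time difference for the ids present in both sets.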
Let me try mind-reading a bit and state the goal of your search: calculate the difference between the time the request is sent and the time the response indicates completion, for the same transactionId. To do this, simply search for both the request event and the completion event in one search, then use stats to find the time range, i.e. the earliest (request) time and the latest (completion) time. Like this:
index=hum_stg_app (("msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification"
"msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF) OR (fcr-np-sms-gateway "msg.service_name"="fcr-np-sms-gateway" "msg.TransactionId"=* "msg.NowSMSResponse"="{*Success\"}"))
| eval transactionId = coalesce('msg.OM_Body.header.transactionId', 'msg.TransactionId')
| eval lenth=len(transactionId)
| sort 1000000 _time
| dedup transactionId _time
| search lenth=40
| stats range(_time) as Time_diff min(_time) as Request_time max(_time) as Transaction_Completed_time by transactionId
| eval Time_diff=Time_diff/3600
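For intuition, the stats step above can be sketched in plain Python (not SPL; the ids and times are invented for illustration): group all events by transactionId, take the earliest time as the request and the latest as the completion, and convert their range from seconds to hours:

```python
# Hypothetical events: (transactionId, _time in epoch seconds).
events = [
    ("tx_1", 1000.0),  # request event
    ("tx_1", 8200.0),  # completion event
    ("tx_2", 2000.0),
    ("tx_2", 5600.0),
]

# Track (min, max) time per transaction id, like min(_time)/max(_time) by transactionId.
durations = {}
for tid, t in events:
    lo, hi = durations.get(tid, (t, t))
    durations[tid] = (min(lo, t), max(hi, t))

# range(_time) divided by 3600 -> hours between request and completion.
hours = {tid: (hi - lo) / 3600 for tid, (lo, hi) in durations.items()}
```

Because the grouping key is the transaction id itself, ordering and row counts cannot cause misalignment the way appendcols can.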
A few notes:
There is a lot to unpack here, but whenever you post SPL, please put it in a code block using the </> icon in the UI.
Firstly, you are using a number of challenging commands: appendcols, dedup, and sort.
Your use of sort 1000000 implies you have a reasonable volume of data.
If your first search returns 3 results and your appendcols subsearch returns 2 or 4 (anything other than 3), or returns the rows in a different order, then the columns will not align.
Using sort early on is a bad choice: it will cause performance issues, and if you have more than 1000000 items they will be truncated, which can also cause problems with your appendcols if truncation occurs.
Your first search could be made more efficient with
index=hum_stg_app "msg.OM_MsgType"=REQUEST msg.OM_Body.header.transactionId=* "msg.service_name"="fai-np-notification" "msg.OM_Body.header.templateType"=vsf_device_auth_otp_template "msg.OM_Body.header.channelType{}"=sms "msg.OM_Body.header.organization"=VSF
| rename msg.OM_Body.header.transactionId as transactionId
| stats earliest(_time) as Time1 count by transactionId
| eval lenth=len(transactionId)
| where lenth=40
| eval Request_time=strftime(Time1,"%y-%m-%d %H:%M:%S")
which I believe is doing what you are trying to do. The same principle applies to the second search.
Is your time range in the appendcols search the same as the outer search?
Is transactionId from the first search supposed to be the same as transactionId_request?
You can probably combine these to a single search, but if these two transaction Ids are the same, you would be safer using append rather than appendcols and then doing a final stats by common_tx_id to join the two data sets together.
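The append-then-stats pattern can be sketched in plain Python to show why it is safer (this is an illustration, not Splunk code; field names like common_tx_id and the sample values are made up): concatenate both result sets, then group by the shared transaction id, so each request is matched to its own completion regardless of row order or missing rows:

```python
# Hypothetical result sets: (common_tx_id, epoch time).
requests = [("tx_a", 100.0), ("tx_b", 110.0)]
completions = [("tx_b", 170.0), ("tx_a", 130.0)]  # note: different order

# "append": stack both result sets into one event list, tagged by type.
appended = [(tid, t, "request") for tid, t in requests] + \
           [(tid, t, "completion") for tid, t in completions]

# "stats ... by common_tx_id": collect each id's request and completion times.
rows = {}
for tid, t, kind in appended:
    rows.setdefault(tid, {})[kind] = t

# Only ids that have both a request and a completion get a time difference.
diffs = {tid: r["completion"] - r["request"]
         for tid, r in rows.items() if len(r) == 2}
```

Unlike appendcols, an id with no matching partner simply drops out of the result rather than silently borrowing another transaction's timestamp.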
Can you give more detail on how they are different - and when diagnosing these, find a small data set where you can reproduce the issue.