Splunk Search

Clarification about appendcols needed

damucka
Builder

Hello,

My alert looks as follows:

|inputcsv anomalies_ls5923.txt | where like(ANOMALY_ID, "iA%")| tail 1
|rename comment AS "    **************************** Start:   Check if this is a new anomaly **************************************************** "
| rename ALERT_TYPE      as   ALERT_TYPE_OLD
         ANOMALY_ID      as   ANOMALY_ID_OLD 
         HOST            as   HOST_OLD
         RCA_TO_REPORT   as   RCA_TO_REPORT_OLD
         TIMESTAMP       as   TIMESTAMP_OLD
         TIME_SINCE      as   TIME_SINCE_OLD

| appendcols
[
| noop search_optimization=false
| dbxquery query="call \"ML\".\"ML.PROCEDURES::PR_ALERT_TYPE_ANOMALY_ID_SCORE_TO_REPORT_ALL_HOST\"('BWP', to_timestamp(to_nvarchar(now(), 'YYYY-MM-DD HH24:MI'),'YYYY-MM-DD HH24:MI'), ?)" connection="HANA_MLBSO" 

| where HOST="ls5923/28"
| tail 1
]

| eval alert_host = "ls5923/28"
| where HOST=alert_host
| dedup HOST
| eval NEW_ANOMALY = if (ANOMALY_ID = ANOMALY_ID_OLD OR ALERT_TYPE="NORMAL",0,1)
| eval splunkTime = if (isnotnull(ALERT_TYPE),strftime(now(),"%Y-%m-%d %H:%M:%S"),"")

| outputtext usexml=false | fields splunkTime ALERT_TYPE ANOMALY_ID HOST RCA_TO_REPORT TIMESTAMP TIME_SINCE NEW_ANOMALY| fields - _raw  
   | outputcsv anomalies_ls5923.txt append=true 

|rename comment AS "    **************************** Start:  Check for HSR switch or crashdump **************************************************** "
|eval HSR_OR_CRASH = [search index=_internal sourcetype=scheduler 
    savedsearch_name="HSR-switch - BWP" OR
    savedsearch_name="Crash Dump Alert"
    alert_actions="*email*"
    earliest=-5m latest=now
    | stats count as resultcount
    | eval HSR_OR_CRASH=case(resultcount>0, "1",1<2,"0") | return $HSR_OR_CRASH]
|rename comment AS "    **************************** End:  Check for HSR switch or crashdump ****************************************************** "    


|rename comment AS "    **************************** Start:  Set the decision trigger to fire an alert ******************************************** "
| eval decision=case(NEW_ANOMALY=1 AND HSR_OR_CRASH=0,"1",1<2,0)
|rename comment AS "    **************************** End:  Set the decision trigger to fire an alert ********************************************** "


|rename comment AS " ****************************  Start:     set the e-mail recipients **********************************************************  "
| eval SYSID="BWP"
| eval DBSID="BWP"
| eval PRIO = "P1"
| eval reason=case(ALERT_TYPE="NORMAL","",1<2," - reason -")
| eval RESOLUTION_TEXT=case(isnotnull(RCA_TO_REPORT),"The possible cause of anomaly are :",1<2,"")
| lookup email_groups.csv DBSID OUTPUT email_recipients_DBSID AS email_recipients_DBSID
| lookup email_groups_critical_alerts.csv "PRIO" OUTPUT email_recipients_critical_alerts AS email_recipients_critical_alerts
| strcat email_recipients_critical_alerts email_recipients_DBSID  email_recipients
|rename comment AS " ****************************  End:     set the e-mail recipients ************************************************************  "

| rename comment AS " ****************************  RCA SQL *********************************  "

|appendcols [
|inputcsv anomalies_ls5923.txt | where NEW_ANOMALY=1 | tail 1

| eval SID = "BWP"
| eval ML_HOST = "ls5923"
| eval SQL = "CPU_HEADLINE"
| lookup datafetch_sql_texts2.csv SQL output SQL_TEXT as CPU_HEADLINE
| eval SQL = "CPU_HEADLINE_BWP"
| lookup datafetch_sql_texts2.csv SQL output SQL_TEXT as CPU_HEADLINE_BWP
| eval SQL = "CPU_DEATILS_BWP"
| lookup datafetch_sql_texts2.csv SQL output SQL_TEXT as CPU_DEATILS_BWP
| eval SQL = "MEM_HEADLINE"
| lookup datafetch_sql_texts2.csv SQL output SQL_TEXT as MEM_HEADLINE
| eval SQL = "MEM_BWP"
| lookup datafetch_sql_texts2.csv SQL output SQL_TEXT as MEM_BWP

| eval CPU_HEADLINE_BWP = replace(CPU_HEADLINE_BWP,"x22", "\"")
| eval CPU_HEADLINE_BWP = replace(CPU_HEADLINE_BWP,"ML_HOST", ML_HOST)
| eval CPU_HEADLINE_BWP = replace(CPU_HEADLINE_BWP,"ML_SID", SID)

| eval CPU_DEATILS_BWP = replace(CPU_DEATILS_BWP,"x22", "\"")
| eval CPU_DEATILS_BWP = replace(CPU_DEATILS_BWP,"ML_HOST", ML_HOST)
| eval CPU_DEATILS_BWP = replace(CPU_DEATILS_BWP,"ML_SID", SID)

| eval MEM_BWP = replace(MEM_BWP,"x22", "\"")
| eval MEM_BWP = replace(MEM_BWP,"ML_HOST", ML_HOST)
| eval MEM_BWP = replace(MEM_BWP,"ML_SID", SID)

 |rename comment AS " *********************************** Evaluate the RCA ***********************************************  "
| eval CPU_HEADLINE_BWP = if(like(RCA_TO_REPORT, "%CPU%"), CPU_HEADLINE_BWP, "")
| eval CPU_HEADLINE = if(like(RCA_TO_REPORT, "%CPU%"), CPU_HEADLINE, "")
| eval CPU_DEATILS_BWP = if(like(RCA_TO_REPORT, "%CPU%"), CPU_DEATILS_BWP, "")
| eval MEM_HEADLINE = if(like(RCA_TO_REPORT, "%MEMORY_USED%"), MEM_HEADLINE, "")
| eval MEM_BWP = if(like(RCA_TO_REPORT, "%MEMORY_USED%"), MEM_BWP, "")


 |rename comment AS " *********************************** Set table with SQL Queries to Execute ***********************************************  "

 | table [|makeresults |  eval search ="CPU_HEADLINE CPU_HEADLINE_BWP CPU_DEATILS_BWP MEM_HEADLINE MEM_BWP" | table search ] 
 | transpose 
 | rename "row 1" AS RCA_SQL_TEXT 

]

| table RCA_SQL_TEXT DBSID PRIO reason RESOLUTION_TEXT decision ALERT_TYPE HOST TIME_SINCE RCA_TO_REPORT
| filldown DBSID PRIO reason RESOLUTION_TEXT decision ALERT_TYPE HOST TIME_SINCE RCA_TO_REPORT
| where isnotnull(RCA_SQL_TEXT) and 
        isnotnull(DBSID) and
        isnotnull(PRIO) and
        isnotnull(reason) and
        isnotnull(RESOLUTION_TEXT) and
        isnotnull(decision) and
        isnotnull(ALERT_TYPE) and
        isnotnull(HOST) and
        isnotnull(TIME_SINCE) and
        isnotnull(RCA_TO_REPORT)
| map maxsearches=20 search="dbxquery query=\"$RCA_SQL_TEXT$\" connection=\"HANA_MLBSO\" 
       | appendcols[|makeresults
       | eval DBSID=\"$DBSID$\" 
       | eval PRIO=\"$PRIO$\" 
       | eval reason=\"$reason$\" 
       | eval RESOLUTION_TEXT = \"$RESOLUTION_TEXT$\" 
       | eval decision = \"$decision$\"
       | eval ALERT_TYPE = \"$ALERT_TYPE$\"
       | eval HOST = \"$HOST$\"
       | eval TIME_SINCE = \"$TIME_SINCE$\"
       | eval RCA_TO_REPORT = \"$RCA_TO_REPORT$\"
       ] "

| mvcombine LINE
| rex field=LINE mode=sed "s/(CPU_HEADER)/\n\n****************************************  CPU  ****************************************\n/"
| rex field=LINE mode=sed "s/(MEMORY_HEADER)/\n\n****************************************  MEMORY  ****************************************\n/"

|rename comment AS " ****************************  Start:     set the e-mail recipients **********************************************************  "
| lookup email_groups.csv DBSID OUTPUT email_recipients_DBSID AS email_recipients_DBSID
| lookup email_groups_critical_alerts.csv "PRIO" OUTPUT email_recipients_critical_alerts AS email_recipients_critical_alerts
| strcat email_recipients_critical_alerts email_recipients_DBSID  email_recipients
|rename comment AS " ****************************  End:     set the e-mail recipients ************************************************************  "

Now, without going into much detail: the issue is that in the RCA SQL section I read the anomalies_ls5923.txt file again, hoping to see the results that were written there with outputcsv a few lines earlier. Unfortunately this is not the case - I always read the previous anomaly, not the one just written. This leads me to my question:
- As the second read of anomalies_ls5923.txt takes place inside an appendcols subsearch, I suspect that this appendcols gets executed independently of / in parallel with the rest of the SPL. Could you confirm? Is it the case that appendcols does not run serially after the SPL code that precedes it, but independently?

Thank you,
Kind Regards,
Kamil


Richfez
SplunkTrust

Appendcols, append, subsearches...

I don't think they work like I think you think they work. (Lol, what a sentence).

Here's a simple run anywhere example:

| makeresults | eval outside_appendcols=time()
| appendcols [| makeresults | eval inside_appendcols=time()]
| eval inside_appendcols_ran_first_by = outside_appendcols - inside_appendcols

Depending on how fast your system is, you'll see that the appendcols subsearch actually runs first: inside_appendcols_ran_first_by comes out as a small positive number, because inside_appendcols was evaluated before outside_appendcols.

In any case, I think - both from the question and from the code you wrote - that you are approaching SPL as if it were a regular, procedural language. It is fairly step-by-step as long as you don't use subsearches or anything else in [] brackets, but once you do, it's all backwards: subsearches and bracketed things run first, not last.

And in any event, I'd never rely on the order in which they run - they're not linear. I could probably construct a run-anywhere example that checks the order in which half a dozen subsearches scattered through some SPL would run, and you'd see what a mess it would be, but I think you get the idea.
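To make that a little more concrete, here's a hedged run-anywhere sketch along those lines (all field names here are made up): each makeresults inside a subsearch records time() when it is evaluated, so you can compare when the subsearches ran relative to the main pipeline.

```
| makeresults
| eval main_pipeline=time()
| appendcols [| makeresults | eval sub1=time()]
| appendcols [| makeresults | eval sub2=time()]
| table sub1 sub2 main_pipeline
```

On most systems both sub1 and sub2 will be earlier than main_pipeline, since subsearches are expanded before the outer pipeline runs - but note the order between the two subsearches themselves is not something I'd ever count on.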

So where does that leave us?

In my opinion, too much is being done in one big fell swoop there. This has the feel of being three or four discrete steps - maybe a summary index, maybe another lookup or two, I'm not really sure at this point - but it certainly feels like it's trying to do too much in one piece of SPL.
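As one hedged sketch of that kind of split (the index and sourcetype names here are hypothetical): the anomaly-detection step could run as its own scheduled search that persists its verdict with collect, and the RCA step could run a few minutes later as a second scheduled search that reads that verdict, instead of re-reading the CSV inside an appendcols.

```
Scheduled search 1 - detect, then persist the verdict:
    ... anomaly detection logic ...
    | collect index=anomaly_summary sourcetype=anomaly_verdict

Scheduled search 2 - runs a few minutes later, reads the verdict, does the RCA:
    index=anomaly_summary sourcetype=anomaly_verdict earliest=-15m
    | tail 1
    | ... RCA map/dbxquery steps ...
```

Because the two searches are separate scheduled jobs, the write is guaranteed to finish before the read starts - which is exactly the ordering you can't get from a subsearch.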

I think we'd be happy to help, but we'd need more information about what you are actually trying to accomplish and what data you are starting from.

(Also, the search itself is pretty impressive, I don't think I've seen such an amount of programmatic work done inside SPL ever!)
