All Posts


Hi @Rahul-Sri , this is a different question and it's always better to open a new case, even if it's the next step of your original request; that way you'll surely get faster, and probably better, answers. Anyway, the approach is to use the eval command (not format) and round the number: | eval count=round(count/1000000,2)."M" Please accept one answer for the other people of the Community. Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated
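If you want to try that rounding/suffix eval on its own before putting it in the dashboard, here is a throwaway emulation; the count values below are made up purely for illustration:

| makeresults format=csv data="count
6000000
12500000"
| eval count=round(count/1000000,2)."M"

Note that once the value is concatenated with "M" it becomes a string and will sort lexically, so keep this as the last formatting step of the search.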
Hi @Ryan.Paredez , Thank you for this. Actually, I was having this concern for another account. Regards Fadil
Hi, the above query in my dashboard is displaying large numbers. I want to convert those to shorter numbers with a million suffix added. For example, if the value is 6,000,000 then the result should display 6mil. How can I achieve this? I tried using --> | eval status=case(like(status, "2%"),"200|201",like(status, "5%"),"503") | timechart span=1d@d usenull=false useother=f count(status) by status | fieldformat count = count/1000000 But this does not work. Any help is appreciated.
My mistake - I neglected groupby. I know this has come up before (because some veterans here helped me:-)) But I can't find the old answer. (In fact, this delta with groupby question comes up regularly because it's a common use case.) So, here is a shot:

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

Here is my full simulation

| mstats max(_value) as Trans where index=_metrics metric_name = spl.mlog.bucket_metrics.* earliest=-8h@h latest=-4h@h by metric_name span=1h
| rename metric_name as application
``` the above simulates
|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application ```
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max max(_time) as _time by application
| sort application _time
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

My output is

_time  application  Trans  delta  pct_delta
2024-03-28 12:00  spl.mlog.bucket_metrics.created  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created  0.000000  0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.created_replicas  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.created_replicas  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.created_replicas  0.000000  0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.created_replicas  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot  12.000000  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot  12.000000  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot  12.000000  0.000000  0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot  12.000000  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000  0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.current_hot_replicas  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.current_total  215.000000  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.current_total  215.000000  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.current_total  214.000000  -1.000000  -0.4672897
2024-03-28 15:00  spl.mlog.bucket_metrics.current_total  214.000000  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.frozen  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.frozen  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.frozen  1.000000  1.000000  100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.frozen  0.000000  -1.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.rolled  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.rolled  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.rolled  0.000000  0.000000
2024-03-28 15:00  spl.mlog.bucket_metrics.rolled  0.000000  0.000000
2024-03-28 12:00  spl.mlog.bucket_metrics.total_removed  0.000000  0.000000
2024-03-28 13:00  spl.mlog.bucket_metrics.total_removed  0.000000  0.000000
2024-03-28 14:00  spl.mlog.bucket_metrics.total_removed  1.000000  1.000000  100.0000
2024-03-28 15:00  spl.mlog.bucket_metrics.total_removed  0.000000  -1.000000

Obviously my results have lots of nulls because lots of my "Trans" values are zero. But you get the idea.
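If the _metrics index is not available to you, the same streamstats pattern also runs on a throwaway dataset, which makes the sign handling easy to inspect; the hour/Trans values below are invented and only the technique matters:

| makeresults format=csv data="hour, application, Trans
12:00, app1, 100
13:00, app1, 80
12:00, app2, 50
13:00, app2, 75"
| sort application hour
| streamstats window=2 global=false range(Trans) as delta max(Trans) as Trans_max by application
| eval delta = if(Trans_max == Trans, delta, "-" . delta)
| eval pct_delta = delta / Trans * 100
| fields - Trans_max

Here app1 drops from 100 to 80, so delta becomes -20 and pct_delta -25, while app2 rises from 50 to 75, giving +25 and roughly +33.3.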
Reached out to their support team at education@splunk.com and they resolved it for me.
@yuanliu There was a misunderstanding from my end about the query. Your suggested query works great. Thanks again 
Hi @yuanliu , Thanks a lot, your query works
Hi, we are looking for a way to integrate Checkmarx with Splunk. What would be the best way?
To clarify about your query - 'given that list is an array, selecting only the first element for matching may not be what the use case demands' - I understand that it sounds weird, but our use case is about selecting events where the first object in an array/list should have type == "code"

What I was trying to say is: do you select this one, when type == "code" is the second element?

{ list: [ {"name": "Hello", "type": "document"}, {"name": "Hello", "type": "code"} ] }

If you want to select this kind of event as well as the other kind, only the second search will work. If you want to select an event only if its first element contains type == "code", use the first search.

the first query, as you have mentioned it 'Select events in which list{}.name has one unique value "Hello"' - is there a way to select events in which all the objects contain name == "Hello" instead of just one unique value?

This gets confusing. My rephrasing "has one unique value 'Hello'" is based on your OP statement: all the items in the list array should have "name": "Hello". Did I misunderstand this? Anyway, my searches do retrieve Event 1 as expected. Is there any problem with them?
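These are not the two searches referenced above, but as a sketch, both conditions can also be expressed with spath and multivalue functions; the event below is fabricated to match the example in this thread:

| makeresults
| eval _raw = "{\"list\": [{\"name\": \"Hello\", \"type\": \"document\"}, {\"name\": \"Hello\", \"type\": \"code\"}]}"
| spath path=list{}.name output=names
| spath path=list{}.type output=types
| eval first_is_code = if(mvindex(types, 0) == "code", "yes", "no")
| eval all_names_hello = if(mvcount(names) == mvcount(mvfilter(names == "Hello")), "yes", "no")

For this fabricated event, first_is_code is "no" (the first element is a document) while all_names_hello is "yes", which is the distinction being discussed here.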
Hi @yuanliu , when I run the below query, the Trans values are fine, but I am getting negative values and an empty row for delta_Trans, and the pct_delta_Trans field values are not correct.

_time  application  Trans  delta_Trans  pct_delta_Trans
2022-01-22 02:00  app1  3456.000000
2022-01-22 02:00  app2  5632.000000  -1839.000000  -5438.786543
2022-01-22 02:00  app3  5643.000000  36758.000000  99.76435678
2022-01-22 02:00  app4  16543.00000  -8796.908678  -8607.065438
Hi @yuanliu , Thanks for the response. Regarding the first query, as you have described it - 'Select events in which list{}.name has one unique value "Hello"' - is there a way to select events in which all the objects contain name == "Hello", instead of just one unique value? To clarify about your query - 'given that list is an array, selecting only the first element for matching may not be what the use case demands' - I understand that it sounds weird, but our use case is about selecting events where the first object in an array/list should have type == "code"
Something like this?

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
| delta Trans as delta_Trans
| eval pct_delta_Trans = delta_Trans / Trans * 100
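If you want to try the delta/percentage part without the metrics index, here is a throwaway emulation with invented hour/Trans values:

| makeresults format=csv data="hour, Trans
01:00, 1200
02:00, 1500
03:00, 900"
| delta Trans as delta_Trans
| eval pct_delta_Trans = round(delta_Trans / Trans * 100, 2)

Keep in mind the delta command has no by clause, so with multiple applications interleaved it compares adjacent rows regardless of application; that is why the per-application case later in this thread moves to streamstats.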
I don't quite get why you want a sparse corner for Total_* but it is hackable

| appendpipe [ eval Total_A = null() ]
| eval Total_B = if(isnull(Total_A), Total_B, null())
| eval Unixtime_AB = if(isnull(Total_B), Unixtime_A, Unixtime_B)
| fields Total_* Unixtime_AB

(Note this hack works for a small number of Unixtime_* but not particularly scalable.) Just in case you want a dense matrix, I'm offering an obvious result set:

Total_AB  Unixtime_AB
1  imaginary_unix_3
2  imaginary_unix_1
3  imaginary_unix_4
4  imaginary_unix_3
5  imaginary_unix_1
6  imaginary_unix_4

To get this, do

| appendpipe [ eval Total_A = null() ]
| eval Total_AB = if(isnull(Total_A), Total_B, Total_A)
| eval Unixtime_AB = if(isnull(Total_B), Unixtime_A, Unixtime_B)
| fields - *_A *_B

Here is an emulation you can play with and compare with real data.

| makeresults format=csv data="Unixtime_A, Total_A, Unixtime_B, Total_B
imaginary_unix_1, 1, imaginary_unix_3, 4
imaginary_unix_2, 2, imaginary_unix_1, 5
imaginary_unix_3, 3, imaginary_unix_4, 6"
``` data emulation above ```

Hope this helps.
I want to compare the previous hour's data with the present hour's data and get the percentage, using the below query.

|mstats sum(transaction) as Trans where index=host-metrics service=login application IN(app1, app2, app3, app4) span=1h by application
From the Subject Title, what I mean is that it will increase the row count and decrease the column count - that is my intention. After a series of mathematical computations, I ended up with the following table:

Unixtime_A  Total_A  Unixtime_B  Total_B
imaginary_unix_1  1  imaginary_unix_3  4
imaginary_unix_2  2  imaginary_unix_1  5
imaginary_unix_3  3  imaginary_unix_4  6

Notes: Unixtime_A may not equal Unixtime_B, but they are formatted the same, that is, snapped to the month with @mon (unixtime). Total_A and Total_B were the result of various conditional counts, so they need to be separate fields.

The desired table is:

Unixtime_AB  Total_A  Total_B
imaginary_unix_1  1
imaginary_unix_2  2
imaginary_unix_3  3
imaginary_unix_3     4
imaginary_unix_1     5
imaginary_unix_4     6

which I can then | fillnull and use a simple stats to sum both totals by Unixtime_AB. Like so:

| stats sum(Total_A), sum(Total_B) by Unixtime_AB

I'm not 100% sure if transpose, untable, or xyseries could do this - or if I was misusing them somehow.
Hi, What are the options to integrate AppDynamics with Zabbix, or the other way around, to send data from Zabbix to AppDynamics? Thanks Akhila
I've been struggling to decide the best method to instrument a Java web app running on Azure App Service. There's plenty of documentation for AKS services, ECS services and so on. There's even documentation for .NET services running as an Azure App Service but nothing for my use case.  Is there any documentation available for this specific scenario? I've read and re-read the Java APM documentation but I still feel a bit lost.  Thank you for any help and suggestions!
I want to search if FailureMsg field (fail_msg1 OR fail_msg2) is found in _raw of my splunk query search results and return only those matching lines. If they (fail_msg1 OR fail_msg2) are not found, return nothing

I think this sentence is confusing everybody:-). Is it correct to say that FailureMsg already exists in the raw event search, and you only want events matching one of the FailureMsg values in your lookup? If the above is true, you have a simple formula

index="demo1" source="demo2" [inputlookup sample.csv | fields FailureMsg]

Put back into your sample code and incorporating the correction from @isoutamo, you get

index="demo1" source="demo2" [inputlookup timelookup.csv | fields FailureMsg]
| rex field=_raw "id_num \{ data: (?P<id_num>\d+) \}"
| rex field=_raw "test_field_name=(?P<test_field_name>.+)]:"
| search test_field_name="test_field_name_1"
| table _raw id_num
| reverse
| filldown id_num
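If it helps to see what that subsearch actually expands to, you can run the lookup through format on its own; sample.csv here is just the lookup name used above, and the exact output depends on your lookup's contents:

| inputlookup sample.csv
| fields FailureMsg
| format

The search field in the single result shows the OR-ed FailureMsg terms that get spliced into the outer search.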
When I try running a search on my Splunk Enterprise in the Search and Reporting app, I get the "insufficient permission to access this resource" message. When I click on the things under Settings, I get the "500 Internal Server Error" message, and there's a severe warning message in my search scheduler that says "searches skipped in the last 24 hours". How do I troubleshoot these and get my Splunk Enterprise running normally again?
(cont.)

| eval c0_key = json_keys(c0)
| foreach c0_key mode=json_array [eval c0_job = mvappend(c0_job, json_object("key", <<ITEM>>, "job", json_extract(c0, <<ITEM>>)))]
| mvexpand c0_job
| fields c0_job

You now get

c0_job
{"key":0,"job":{"jobname":"A001_GVE_ADHOC_AUDIT","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":1,"job":{"jobname":"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":2,"job":{"jobname":"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":3,"job":{"jobname":"D001_GVE_SOFT_MATCHING_GDH_CA","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":4,"job":{"jobname":"D100_AKS_CDWH_SQOOP_TRX_ORG","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":5,"job":{"jobname":"D100_AKS_CDWH_SQOOP_TYP_123","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":6,"job":{"jobname":"D100_AKS_CDWH_SQOOP_TYP_45","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":7,"job":{"jobname":"D100_AKS_CDWH_SQOOP_TYP_ENPW","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":8,"job":{"jobname":"D100_AKS_CDWH_SQOOP_TYP_T","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":9,"job":{"jobname":"DREAMPC_CALC_ML_NAMESAPCE","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":10,"job":{"jobname":"DREAMPC_MEMORY_AlERT_SIT","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":11,"job":{"jobname":"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":12,"job":{"jobname":"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":13,"job":{"jobname":"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":14,"job":{"jobname":"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":15,"job":{"jobname":"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":16,"job":{"jobname":"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":17,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":18,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":19,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":20,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":21,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":22,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":23,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":24,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY","status":"ENDED OK","Timestamp":"20240317 13:25:23"}}
{"key":25,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}
{"key":26,"job":{"jobname":"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY","status":"ENDED NOTOK","Timestamp":"20240317 13:25:23"}}

(I could have placed "jobname", etc., directly into root with more SPL magic but it is not worth it.) Then, you just extract all data using standard spath. Put everything together,

| rex mode=sed "s/^([^_]+)_/\1row_/"
| rex "^[^:]+\s*:\s*(?<json_frame>.+)"
```| eval good = if(json_valid(json_frame), "yes", "no")```
| spath input=json_frame path=row_c0
| eval row_key = json_keys(row_c0)
```| eval r_c0 = json_extract(row_c0, "0") . json_extract(row_c0, "1")```
| eval c0 = ""
| foreach row_key mode=json_array [eval c0 = c0 . json_extract(row_c0, <<ITEM>>)]
| fields - _* json_frame row_*
| rex field=c0 mode=sed "s/} *\"/}, \"/g s/\" *\"/\", \"/g s/$/}/"
```| eval good = if(json_valid(c0), "yes", "no")```
| eval c0_key = json_keys(c0)
| foreach c0_key mode=json_array [eval c0_job = mvappend(c0_job, json_object("key", <<ITEM>>, "job", json_extract(c0, <<ITEM>>)))]
| mvexpand c0_job
| spath input=c0_job
| fields - c0*

You then get

job.Timestamp  job.jobname  job.status  key
20240317 13:25:23  A001_GVE_ADHOC_AUDIT  ENDED NOTOK  0
20240317 13:25:23  BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS  ENDED NOTOK  1
20240317 13:25:23  BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY  ENDED NOTOK  2
20240317 13:25:23  D001_GVE_SOFT_MATCHING_GDH_CA  ENDED NOTOK  3
20240317 13:25:23  D100_AKS_CDWH_SQOOP_TRX_ORG  ENDED NOTOK  4
20240317 13:25:23  D100_AKS_CDWH_SQOOP_TYP_123  ENDED NOTOK  5
20240317 13:25:23  D100_AKS_CDWH_SQOOP_TYP_45  ENDED OK  6
20240317 13:25:23  D100_AKS_CDWH_SQOOP_TYP_ENPW  ENDED NOTOK  7
20240317 13:25:23  D100_AKS_CDWH_SQOOP_TYP_T  ENDED NOTOK  8
20240317 13:25:23  DREAMPC_CALC_ML_NAMESAPCE  ENDED NOTOK  9
20240317 13:25:23  DREAMPC_MEMORY_AlERT_SIT  ENDED NOTOK  10
20240317 13:25:23  DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS  ENDED NOTOK  11
20240317 13:25:23  DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY  ENDED NOTOK  12
20240317 13:25:23  DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS  ENDED OK  13
20240317 13:25:23  DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY  ENDED OK  14
20240317 13:25:23  DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS  ENDED OK  15
20240317 13:25:23  DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY  ENDED OK  16
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH  ENDED OK  17
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY  ENDED OK  18
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT  ENDED NOTOK  19
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN  ENDED NOTOK  20
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR  ENDED OK  21
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY  ENDED OK  22
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON  ENDED NOTOK  23
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY  ENDED OK  24
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI  ENDED NOTOK  25
20240317 13:25:23  DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY  ENDED NOTOK  26

Only this way, I can be confident that this is what the app/equipment/device is trying to tell me.
Here is a data emulation you can play with and compare with real data     | makeresults | eval _raw = "Dataframe row : {\"_c0\":{\"0\":\"{\",\"1\":\" \\\"0\\\": {\",\"2\":\" \\\"jobname\\\": \\\"A001_GVE_ADHOC_AUDIT\\\"\",\"3\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"4\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"5\":\" }\",\"6\":\" \\\"1\\\": {\",\"7\":\" \\\"jobname\\\": \\\"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS\\\"\",\"8\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"9\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"10\":\" }\",\"11\":\" \\\"2\\\": {\",\"12\":\" \\\"jobname\\\": \\\"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY\\\"\",\"13\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"14\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"15\":\" }\",\"16\":\" \\\"3\\\": {\",\"17\":\" \\\"jobname\\\": \\\"D001_GVE_SOFT_MATCHING_GDH_CA\\\"\",\"18\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"19\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"20\":\" }\",\"21\":\" \\\"4\\\": {\",\"22\":\" \\\"jobname\\\": \\\"D100_AKS_CDWH_SQOOP_TRX_ORG\\\"\",\"23\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"24\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"25\":\" }\",\"26\":\" \\\"5\\\": {\",\"27\":\" \\\"jobname\\\": \\\"D100_AKS_CDWH_SQOOP_TYP_123\\\"\",\"28\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"29\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"30\":\" }\",\"31\":\" \\\"6\\\": {\",\"32\":\" \\\"jobname\\\": \\\"D100_AKS_CDWH_SQOOP_TYP_45\\\"\",\"33\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"34\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"35\":\" }\",\"36\":\" \\\"7\\\": {\",\"37\":\" \\\"jobname\\\": \\\"D100_AKS_CDWH_SQOOP_TYP_ENPW\\\"\",\"38\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"39\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"40\":\" }\",\"41\":\" \\\"8\\\": {\",\"42\":\" \\\"jobname\\\": \\\"D100_AKS_CDWH_SQOOP_TYP_T\\\"\",\"43\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"44\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"45\":\" }\",\"46\":\" \\\"9\\\": {\",\"47\":\" \\\"jobname\\\": \\\"DREAMPC_CALC_ML_NAMESAPCE\\\"\",\"48\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"49\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"50\":\" }\",\"51\":\" \\\"10\\\": {\",\"52\":\" \\\"jobname\\\": \\\"DREAMPC_MEMORY_AlERT_SIT\\\"\",\"53\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"54\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"55\":\" }\",\"56\":\" \\\"11\\\": {\",\"57\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS\\\"\",\"58\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"59\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"60\":\" }\",\"61\":\" \\\"12\\\": {\",\"62\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\\\"\",\"63\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"64\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"65\":\" }\",\"66\":\" \\\"13\\\": {\",\"67\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS\\\"\",\"68\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"69\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"70\":\" }\",\"71\":\" \\\"14\\\": {\",\"72\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\\\"\",\"73\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"74\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"75\":\" }\",\"76\":\" \\\"15\\\": {\",\"77\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS\\\"\",\"78\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"79\":\" \\\"Timestamp\\\": 
\\\"20240317 13:25:23\\\"\",\"80\":\" }\",\"81\":\" \\\"16\\\": {\",\"82\":\" \\\"jobname\\\": \\\"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\\\"\",\"83\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"84\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"85\":\" }\",\"86\":\" \\\"17\\\": {\",\"87\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH\\\"\",\"88\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"89\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"90\":\" }\",\"91\":\" \\\"18\\\": {\",\"92\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY\\\"\",\"93\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"94\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"95\":\" }\",\"96\":\" \\\"19\\\": {\",\"97\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT\\\"\",\"98\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"99\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"100\":\" }\",\"101\":\" \\\"20\\\": {\",\"102\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN\\\"\",\"103\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"104\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"105\":\" }\",\"106\":\" \\\"21\\\": {\",\"107\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR\\\"\",\"108\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"109\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"110\":\" }\",\"111\":\" \\\"22\\\": {\",\"112\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY\\\"\",\"113\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"114\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"115\":\" }\",\"116\":\" \\\"23\\\": {\",\"117\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON\\\"\",\"118\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"119\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"120\":\" }\",\"121\":\" \\\"24\\\": {\",\"122\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY\\\"\",\"123\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"124\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"125\":\" }\",\"126\":\" \\\"25\\\": {\",\"127\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI\\\"\",\"128\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"129\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"130\":\" }\",\"131\":\" \\\"26\\\": {\",\"132\":\" \\\"jobname\\\": \\\"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY\\\"\",\"133\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"134\":\" \\\"Timestamp\\\": \\\"20240317 13:25:23\\\"\",\"135\":\" }\" } }" ``` data emulation above ```     As noted above, I artificially inserted two closing curly brackets into _raw.  If the app/equipment/device willfully drops them, you can insert them back with something simple as     | eval _raw = _raw . "}}"     Hope this helps.