Thanks in advance.
I have a JSON object "content.List of Batches Processed{}", and Splunk already extracts the field "content.List of Batches Processed{}.BatchID", which shows a count of 26. But "content.List of Batches Processed{}.BatchID" actually has 134 records. So I want to extract the multiple JSON values as fields. From the logs below, I want to extract all the values of P_REQUEST_ID, P_BATCH_ID, and P_TEMPLATE.
The query I tried to fetch the data:
| eval BatchID=spath("content.List of Batches Processed{}*", "content.List of Batches Processed{}.P_BATCH_ID"), Request=spath(_raw, "content.List of Batches Processed{}.P_REQUEST_ID")
| table BatchID Request
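As a side note on why this eval fails: the eval function spath(X, Y) expects the input field as X and the path as Y, so the first argument in the BatchID expression is a literal string rather than the event data. A sketch of the intended extraction using the spath command instead (assuming the JSON is in _raw):

```
| spath input=_raw output=BatchID path="content.List of Batches Processed{}.P_BATCH_ID"
| spath input=_raw output=Request path="content.List of Batches Processed{}.P_REQUEST_ID"
| table BatchID Request
```

Each spath call here returns a multivalue field containing one value per array element, which is the behavior the rest of this thread builds on.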
"content" : {
"List of Batches Processed" : [ {
"P_REQUEST_ID" : "177",
"P_BATCH_ID" : "1",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1r7",
"P_BATCH_ID" : "2",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1577",
"P_BATCH_ID" : "3",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "16577",
"P_BATCH_ID" : "4",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306102852.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}
.
Having a more accurate representation of the data is definitely an improvement. But you still need to answer the questions about data characteristics (there are many possibilities regarding the triplets P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS) and those about the desired results (why multiple columns with ordered lists, should triplets be deduplicated, in what order, etc.), because each combination requires a different solution and can give you very different results. Other people's mind-reading is more often wrong than correct.
Let me try two mind-readings to illustrate.
First: you want to preserve every triplet even if they repeat, and you want to present them in the order of event arrival as well as the order they appear inside each event, grouped by correlationId. Absolutely no dedup. (Although this looks like it has the fewest commands, this "solution" is the most demanding in memory.)
| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #1 ```
Using a composite emulation based on samples you provided (see end of this post), you will get
correlationId | BATCH_ID | REQUEST_ID | RETURN_STATUS |
490cfba0e9f3c770b40 | 1 2 3 4 1 2 3 4 5 6 | 177 1r7 1577 16577 1005377 1005177 1005377 1005377 1005377 100532177 | SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS |
Does this look like something you need?
Or, mind-reading #2: you don't want any duplicate triplets; neither the order in which these triplets arrive with events nor the order they appear within individual events matters. You want maximum dedup, just grouped by correlationId.
| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| stats count by P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats list(P_*) as * by correlationId
``` mind-reading #2 ```
The same emulation will give
correlationId | BATCH_ID | REQUEST_ID | RETURN_STATUS |
490cfba0e9f3c770b40 | 1 1 2 2 3 3 4 4 5 6 | 1005377 177 1005177 1r7 1005377 1577 1005377 16577 1005377 100532177 | SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS SUCCESS |
Note: Even though this mock result is only superficially different from the previous one, the two can be materially very different if real data contain lots of duplicate triplets.
As a bonus, I want to throw in a third mind-reading: you don't care about triplets at all; you only want to know which values are present in each of P_BATCH_ID, P_REQUEST_ID, and P_RETURN_STATUS. (This one is the least demanding in memory, and computationally light.)
| rename content."List of Batches Processed"{}.* as *
| fields P_BATCH_ID P_REQUEST_ID P_RETURN_STATUS correlationId
| stats values(P_*) as * by correlationId
``` mind-reading extremo ```
The emulated data will give
correlationId | BATCH_ID | REQUEST_ID | RETURN_STATUS |
490cfba0e9f3c770b40 | 1 2 3 4 5 6 | 1005177 100532177 1005377 1577 16577 177 1r7 | SUCCESS |
Is this closer to what you want?
The above three very different results are derived by assuming that the two sample/mock JSON events contain an identical correlationId, as emulated below. They all more or less fit the mock result table you showed. How can volunteers tell which one you want?
| makeresults
| fields - _*
| eval data = mvappend("{ \"correlationId\" : \"490cfba0e9f3c770b40\",
\"content\" : {
\"List of Batches Processed\" : [ {
\"P_REQUEST_ID\" : \"177\",
\"P_BATCH_ID\" : \"1\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1r7\",
\"P_BATCH_ID\" : \"2\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1577\",
\"P_BATCH_ID\" : \"3\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"16577\",
\"P_BATCH_ID\" : \"4\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}]
}
}", "{
\"correlationId\" : \"490cfba0e9f3c770b40\",
\"message\" : \"Processed all revenueData\",
\"tracePoint\" : \"FLOW\",
\"priority\" : \"INFO\",
\"category\" : \"prc-api\",
\"elapsed\" : 472,
\"locationInfo\" : {
\"lineInFile\" : \"205\",
\"component\" : \"json-logger:logger\",
\"fileName\" : \"G.xml\",
\"rootContainer\" : \"syncFlow\"
},
\"timestamp\" : \"2024-03-06T20:57:17.119Z\",
\"content\" : {
\"List of Batches Processed\" : [ {
\"P_REQUEST_ID\" : \"1005377\",
\"P_BATCH_ID\" : \"1\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1005177\",
\"P_BATCH_ID\" : \"2\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_FILE_NAME\" : \"Template20240306102959.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1005377\",
\"P_BATCH_ID\" : \"3\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306103103.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1005377\",
\"P_BATCH_ID\" : \"4\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306103205.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1005377\",
\"P_BATCH_ID\" : \"5\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_FILE_NAME\" : \"Template20240306103306.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"100532177\",
\"P_BATCH_ID\" : \"6\",
\"P_TEMPLATE\" : \"ATVI_Transaction_Template\",
\"P_PERIOD\" : \"MAR-24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"rev_ATVI_Transaction_Template20240306103407.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}]
}
}")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation above ```
Use the path argument of spath to lock onto a JSON array.
| spath path=content."List of Batches Processed"{}
| mvexpand content."List of Batches Processed"{}
| spath input=content."List of Batches Processed"{}
| fields - _* content.*
Note that your sample data is not compliant JSON. After correcting the syntax, the search gives
P_BATCH_ID | P_MESSAGE | P_MORE_BATCHES_EXISTS | P_PERIOD | P_REQUEST_ID | P_RETURN_STATUS | P_TEMPLATE | P_ZUORA_FILE_NAME |
1 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 177 | SUCCESS | Template | Template20240306102852.csv |
2 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 1r7 | SUCCESS | Template | Template20240306102852.csv |
3 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 1577 | SUCCESS | Template | Template20240306102852.csv |
4 | Data loaded in RevPro Successfully - Success: 10000 Failed: 0 | Y | 24 | 16577 | SUCCESS | Template | Template20240306102852.csv |
Here is an emulation of the compliant JSON.
| makeresults
| eval _raw = "{\"content\" : {
\"List of Batches Processed\" : [ {
\"P_REQUEST_ID\" : \"177\",
\"P_BATCH_ID\" : \"1\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1r7\",
\"P_BATCH_ID\" : \"2\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1577\",
\"P_BATCH_ID\" : \"3\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"16577\",
\"P_BATCH_ID\" : \"4\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}]
}
}"
``` data emulation above ```
The values we are seeing are within a single correlationId, so I want to display them like:
correlationID | BatchId | RequestID | Status |
125dfe5 | 1 2 3 | 117 112 1156 | Success Success Success |
32435sf53 | 1 2 | 324 643 | Success |
I don't see a correlationID in your sample data. Is it a root-level node in the JSON, or is it inside content as well? I will assume it is at the root. (See my new emulation below.)
Importantly, I am speculating that you want to group all values of these three fields by correlationID. Is this the requirement? I will assume yes. Normally, data people would want to go the route in my previous comment, because you don't want to mix and match BatchId, RequestID, and Status. Do you care whether the orders get mixed up? The result display you illustrated doesn't answer this question.
I will assume that you do care about the order of appearance. (But I do want to warn you that three ordered lists of >100 values each are not good for users; I, for one, would hate to look at such a table.) If so, you must answer yet more important questions: Do your data contain identical triplets (BatchId, RequestID, Status) within any given correlationID? If they do, do you want to preserve all triplets? If you want to preserve all triplets, do you care about the order of the events that carry them? Or do you want to filter out all duplicates? If you want to remove duplicate triplets, do you care about the order of the events that carry them? If you care about order, what are the criteria for ordering?
See, volunteers on this board have no intimate knowledge of your dataset or your use case. Illustrating a subset of data in text is an excellent start. But you still need to define the problem to the extent that another person who doesn't have your knowledge can sift through your data and arrive at the same conclusion you would - all without SPL. If the other person has to read your mind, 9 times out of 10 the mind-reader will be wrong.
Below I give an emulation that includes a correlationID as a root-level node:
| makeresults
| eval _raw = "{\"content\" : {
\"List of Batches Processed\" : [ {
\"P_REQUEST_ID\" : \"177\",
\"P_BATCH_ID\" : \"1\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1r7\",
\"P_BATCH_ID\" : \"2\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"1577\",
\"P_BATCH_ID\" : \"3\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}, {
\"P_REQUEST_ID\" : \"16577\",
\"P_BATCH_ID\" : \"4\",
\"P_TEMPLATE\" : \"Template\",
\"P_PERIOD\" : \"24\",
\"P_MORE_BATCHES_EXISTS\" : \"Y\",
\"P_ZUORA_FILE_NAME\" : \"Template20240306102852.csv\",
\"P_MESSAGE\" : \"Data loaded in RevPro Successfully - Success: 10000 Failed: 0\",
\"P_RETURN_STATUS\" : \"SUCCESS\"
}]
},
\"correlationID\": \"125dfe5\"
}"
| spath
``` data emulation above ```
Hi @yuanliu,
I have attached a sample with correlationId. So if we extract the result, it goes beyond the table's pagination as well, right?
{
"correlationId" : "490cfba0e9f3c770b40",
"message" : "Processed all revenueData",
"tracePoint" : "FLOW",
"priority" : "INFO",
"category" : "prc-api",
"elapsed" : 472,
"locationInfo" : {
"lineInFile" : "205",
"component" : "json-logger:logger",
"fileName" : "G.xml",
"rootContainer" : "syncFlow"
},
"timestamp" : "2024-03-06T20:57:17.119Z",
"content" : {
"List of Batches Processed" : [ {
"P_REQUEST_ID" : "1005377",
"P_BATCH_ID" : "1",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_FILE_NAME" : "Template20240306102852.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1005177",
"P_BATCH_ID" : "2",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_FILE_NAME" : "Template20240306102959.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1005377",
"P_BATCH_ID" : "3",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306103103.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1005377",
"P_BATCH_ID" : "4",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "Template20240306103205.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "1005377",
"P_BATCH_ID" : "5",
"P_TEMPLATE" : "Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_FILE_NAME" : "Template20240306103306.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}, {
"P_REQUEST_ID" : "100532177",
"P_BATCH_ID" : "6",
"P_TEMPLATE" : "ATVI_Transaction_Template",
"P_PERIOD" : "MAR-24",
"P_MORE_BATCHES_EXISTS" : "Y",
"P_ZUORA_FILE_NAME" : "rev_ATVI_Transaction_Template20240306103407.csv",
"P_MESSAGE" : "Data loaded in RevPro Successfully - Success: 10000 Failed: 0",
"P_RETURN_STATUS" : "SUCCESS"
}
Thank you, you made my day. I need to show this in a dashboard table - how do I use table after stats? I am getting a warning message.
Also, your mind-readings #1 and #3 are working as expected. But I have a question: my content list has 134 BATCH_IDs, yet Splunk extracts and shows a count of 26; the rest are not showing. How can I handle this? I need to fix that issue. Does this need to be extracted via props.conf and transforms.conf while indexing the data? Please help me fix it.
If you get "Limit of '100'", you must be using mind-reading #1, since you didn't pick #2. That is just the limitation of list(). You can increase this limit somewhat (see the [stats|sistats] stanza in limits.conf), but be very careful.
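For reference, the list() cap is controlled by list_maxsize in limits.conf. The stanza name and the default of 100 come from Splunk's limits.conf spec; the value 500 and the file location below are only illustrations - raising this limit increases memory consumption per search:

```
# limits.conf - e.g. in $SPLUNK_HOME/etc/system/local/ (illustrative location)
[stats|sistats]
# Maximum number of values list() retains per field (default: 100).
# Raise cautiously: larger lists consume more memory on the search head.
list_maxsize = 500
```

Changes to limits.conf generally take effect only after the setting is reloaded, so verify with btool or a restart before relying on it.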
As to BATCH_ID, I still don't know what 134 and 26 refer to. One correlationId? All events? Is it because of the 100-value limit of list_maxsize? You should probably post a new question with a proper setup and a detailed explanation.