All Posts

(The steps are a bit long, so this post is split into two.) Part 1.

Even if you accidentally extract the data you wanted, your code will not be robust. Instead of trying to rex the piece of info you are seeking, try to restore the underlying data structure first, i.e., try to rex and restore compliant JSON. Is it correct that the data you illustrated is just one part in a stream of data that makes up a larger frame? Is it possible to illustrate an entire frame, however many events there may be? If my speculation has any merit, I suspect that this data stream is formulated such that once you string together _c0.1, _c0.2, _c0.100, etc., you get a valid JSON object, or a fragment of valid JSON, for the key _c0. Let's test this out step by step.

Note: the data you illustrated seems to be missing two closing curly brackets (}), so I add them in. There is another problem: Splunk treats a leading underscore (_) specially. For some reason even fromjson does not handle _c0 correctly. So I also add a prefix to this key name. It doesn't change the semantics; you can change back to _c0 in the end.

| rex mode=sed "s/^([^_]+)_/\1row_/" ``` prefix key name with "row" ```
| rex "^[^:]+\s*:\s*(?<json_frame>.+)" ``` extract JSON format "row_c0" ```
```| eval good = if(json_valid(json_frame), "yes", "no")```
| spath input=json_frame path=row_c0
| fields - _* json_frame
| eval row_key = json_keys(row_c0)
| eval c0 = ""
| foreach row_key mode=json_array
    [eval c0 = c0 . json_extract(row_c0, <<ITEM>>)]

Using the modified sample data (see below), I get the two fields c0 and row_c0.

c0 (concatenated from the fragments, but still not valid JSON):

{ "0": { "jobname": "A001_GVE_ADHOC_AUDIT" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"1": { "jobname": "BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"2": { "jobname": "BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"3": { "jobname": "D001_GVE_SOFT_MATCHING_GDH_CA" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"4": { "jobname": "D100_AKS_CDWH_SQOOP_TRX_ORG" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"5": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_123" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"6": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_45" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"7": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_ENPW" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"8": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_T" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"9": { "jobname": "DREAMPC_CALC_ML_NAMESAPCE" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"10": { "jobname": "DREAMPC_MEMORY_AlERT_SIT" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"11": { "jobname": "DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"12": { "jobname": "DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"13": { "jobname": "DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"14": { "jobname": "DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"15": { "jobname": "DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"16": { "jobname": "DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"17": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"18": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"19": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"20": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"21": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"22": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"23": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"24": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY" "status": "ENDED OK" "Timestamp": "20240317 13:25:23" }
"25": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }
"26": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY" "status": "ENDED NOTOK" "Timestamp": "20240317 13:25:23" }

row_c0 (the extracted fragment map it came from):

{"0":"{",
"1":" \"0\": {","2":" \"jobname\": \"A001_GVE_ADHOC_AUDIT\"","3":" \"status\": \"ENDED NOTOK\"","4":" \"Timestamp\": \"20240317 13:25:23\"","5":" }",
"6":" \"1\": {","7":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS\"","8":" \"status\": \"ENDED NOTOK\"","9":" \"Timestamp\": \"20240317 13:25:23\"","10":" }",
"11":" \"2\": {","12":" \"jobname\": \"BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY\"","13":" \"status\": \"ENDED NOTOK\"","14":" \"Timestamp\": \"20240317 13:25:23\"","15":" }",
"16":" \"3\": {","17":" \"jobname\": \"D001_GVE_SOFT_MATCHING_GDH_CA\"","18":" \"status\": \"ENDED NOTOK\"","19":" \"Timestamp\": \"20240317 13:25:23\"","20":" }",
"21":" \"4\": {","22":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TRX_ORG\"","23":" \"status\": \"ENDED NOTOK\"","24":" \"Timestamp\": \"20240317 13:25:23\"","25":" }",
"26":" \"5\": {","27":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_123\"","28":" \"status\": \"ENDED NOTOK\"","29":" \"Timestamp\": \"20240317 13:25:23\"","30":" }",
"31":" \"6\": {","32":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_45\"","33":" \"status\": \"ENDED OK\"","34":" \"Timestamp\": \"20240317 13:25:23\"","35":" }",
"36":" \"7\": {","37":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_ENPW\"","38":" \"status\": \"ENDED NOTOK\"","39":" \"Timestamp\": \"20240317 13:25:23\"","40":" }",
"41":" \"8\": {","42":" \"jobname\": \"D100_AKS_CDWH_SQOOP_TYP_T\"","43":" \"status\": \"ENDED NOTOK\"","44":" \"Timestamp\": \"20240317 13:25:23\"","45":" }",
"46":" \"9\": {","47":" \"jobname\": \"DREAMPC_CALC_ML_NAMESAPCE\"","48":" \"status\": \"ENDED NOTOK\"","49":" \"Timestamp\": \"20240317 13:25:23\"","50":" }",
"51":" \"10\": {","52":" \"jobname\": \"DREAMPC_MEMORY_AlERT_SIT\"","53":" \"status\": \"ENDED NOTOK\"","54":" \"Timestamp\": \"20240317 13:25:23\"","55":" }",
"56":" \"11\": {","57":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS\"","58":" \"status\": \"ENDED NOTOK\"","59":" \"Timestamp\": \"20240317 13:25:23\"","60":" }",
"61":" \"12\": {","62":" \"jobname\": \"DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","63":" \"status\": \"ENDED NOTOK\"","64":" \"Timestamp\": \"20240317 13:25:23\"","65":" }",
"66":" \"13\": {","67":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS\"","68":" \"status\": \"ENDED OK\"","69":" \"Timestamp\": \"20240317 13:25:23\"","70":" }",
"71":" \"14\": {","72":" \"jobname\": \"DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","73":" \"status\": \"ENDED OK\"","74":" \"Timestamp\": \"20240317 13:25:23\"","75":" }",
"76":" \"15\": {","77":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS\"","78":" \"status\": \"ENDED OK\"","79":" \"Timestamp\": \"20240317 13:25:23\"","80":" }",
"81":" \"16\": {","82":" \"jobname\": \"DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY\"","83":" \"status\": \"ENDED OK\"","84":" \"Timestamp\": \"20240317 13:25:23\"","85":" }",
"86":" \"17\": {","87":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH\"","88":" \"status\": \"ENDED OK\"","89":" \"Timestamp\": \"20240317 13:25:23\"","90":" }",
"91":" \"18\": {","92":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY\"","93":" \"status\": \"ENDED OK\"","94":" \"Timestamp\": \"20240317 13:25:23\"","95":" }",
"96":" \"19\": {","97":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT\"","98":" \"status\": \"ENDED NOTOK\"","99":" \"Timestamp\": \"20240317 13:25:23\"","100":" }",
"101":" \"20\": {","102":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN\"","103":" \"status\": \"ENDED NOTOK\"","104":" \"Timestamp\": \"20240317 13:25:23\"","105":" }",
"106":" \"21\": {","107":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR\"","108":" \"status\": \"ENDED OK\"","109":" \"Timestamp\": \"20240317 13:25:23\"","110":" }",
"111":" \"22\": {","112":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY\"","113":" \"status\": \"ENDED OK\"","114":" \"Timestamp\": \"20240317 13:25:23\"","115":" }",
"116":" \"23\": {","117":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON\"","118":" \"status\": \"ENDED NOTOK\"","119":" \"Timestamp\": \"20240317 13:25:23\"","120":" }",
"121":" \"24\": {","122":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY\"","123":" \"status\": \"ENDED OK\"","124":" \"Timestamp\": \"20240317 13:25:23\"","125":" }",
"126":" \"25\": {","127":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI\"","128":" \"status\": \"ENDED NOTOK\"","129":" \"Timestamp\": \"20240317 13:25:23\"","130":" }",
"131":" \"26\": {","132":" \"jobname\": \"DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY\"","133":" \"status\": \"ENDED NOTOK\"","134":" \"Timestamp\": \"20240317 13:25:23\"","135":" }" }

So, my hypothesis is only partially correct. Obviously c0 resembles a JSON object, but without proper comma separation; it also doesn't have the closing curly bracket. The intention of c0 appears to be an ordered list (as opposed to an array). So, I will rectify the format to match my interpretation.

| rex field=c0 mode=sed "s/} *\"/}, \"/g s/\" *\"/\", \"/g s/$/}/"
```| eval good = if(json_valid(c0), "yes", "no")```

You now get the real c0:

{ "0": { "jobname": "A001_GVE_ADHOC_AUDIT", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"1": { "jobname": "BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"2": { "jobname": "BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TSYS_WEEKLY", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"3": { "jobname": "D001_GVE_SOFT_MATCHING_GDH_CA", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"4": { "jobname": "D100_AKS_CDWH_SQOOP_TRX_ORG", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"5": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_123", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"6": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_45", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"7": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_ENPW", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"8": { "jobname": "D100_AKS_CDWH_SQOOP_TYP_T", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"9": { "jobname": "DREAMPC_CALC_ML_NAMESAPCE", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"10": { "jobname": "DREAMPC_MEMORY_AlERT_SIT", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"11": { "jobname": "DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"12": { "jobname": "DREAM_BDV_NBR_PRE_REQUISITE_TLX_LSP_3RD_PARTY_TRNS_WEEKLY", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"13": { "jobname": "DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"14": { "jobname": "DREAM_BDV_NBR_STG_TLX_LSP_3RD_PARTY_TRNS_WEEKLY", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"15": { "jobname": "DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"16": { "jobname": "DREAM_BDV_NBR_TLX_LSP_3RD_PARTY_TRNS_WEEKLY", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"17": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"18": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_GDH_WEEKLY", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"19": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_SAMCONTDEPOT", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"20": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TLXLSP_TRXN", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"21": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"22": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADEABR_WEEKLY", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"23": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"24": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_TRADESON_WEEKLY", "status": "ENDED OK", "Timestamp": "20240317 13:25:23" },
"25": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" },
"26": { "jobname": "DREAM_BDV_NEW_BUSINESS_REPORTING_PRE_REQUISITE_ZCI_WEEKLY", "status": "ENDED NOTOK", "Timestamp": "20240317 13:25:23" }}

From here, I will assume that the order of this list has some semantics and apply the same tricks. (You really need to talk to the developers or read the manual of the application/equipment/device that sends these data frames.) (to continue)
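For anyone who wants to try the mechanics without the full sample, here is a self-contained emulation you can paste into a search bar. The two-job frame (JOB_A, JOB_B) is made up and Timestamp is omitted for brevity, but the restore-and-repair steps are identical to the ones above:

| makeresults
| fields - _*
| eval row_c0 = "{\"0\":\"{\",\"1\":\" \\\"0\\\": {\",\"2\":\" \\\"jobname\\\": \\\"JOB_A\\\"\",\"3\":\" \\\"status\\\": \\\"ENDED OK\\\"\",\"4\":\" }\",\"5\":\" \\\"1\\\": {\",\"6\":\" \\\"jobname\\\": \\\"JOB_B\\\"\",\"7\":\" \\\"status\\\": \\\"ENDED NOTOK\\\"\",\"8\":\" }\"}"
``` mock fragment map standing in for the extracted row_c0 ```
| eval row_key = json_keys(row_c0)
| eval c0 = ""
| foreach row_key mode=json_array
    [eval c0 = c0 . json_extract(row_c0, <<ITEM>>)]
``` c0 is now the concatenated, comma-less pseudo-JSON ```
| rex field=c0 mode=sed "s/} *\"/}, \"/g s/\" *\"/\", \"/g s/$/}/"
``` re-insert commas and the final closing bracket ```
| eval good = if(json_valid(c0), "yes", "no")
| spath input=c0

With the mock data, good comes out as "yes" and spath expands the two job objects.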
You could normalise the hostname using a lookup such that the primary and secondary of a pair resolve to the same name. Then you can look to see when either member of a pair last reported an event.
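A minimal sketch of that idea, assuming a lookup file host_pairs.csv with columns host, pair_name, and expected_index (all names hypothetical) and a 24-hour silence threshold:

| tstats latest(_time) as last_seen where index=* by host
``` map each host to its pair, dropping hosts not in the lookup ```
| lookup host_pairs.csv host OUTPUT pair_name expected_index
| where isnotnull(pair_name)
``` a pair is healthy if either member reported; keep the most recent of the two ```
| stats max(last_seen) as last_seen, values(expected_index) as expected_index by pair_name
| where last_seen < relative_time(now(), "-24h")

One caveat: a pair whose members have never reported at all will not appear in the tstats output, so it cannot trigger this alert; catching those would need an extra append from | inputlookup host_pairs.csv.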
Ideally, you should avoid join if possible. It looks like the first part of your search could be replaced by this:

index=default-va6* sourcetype="myengine-stage" "API call is True for MyEngine" OR "Target language count"
| rex field=_raw "request_id=(?<reqID>.+?) - "
| rex field=_raw "Target language count (?<num_target>\d+)"
| stats first(num_target) as num_target by reqID

For the second join, x-request-id is not returned by the subsearch, so the join will fail anyway. Perhaps there is another way to approach this, but for that we would need some (anonymised) sample events from your data sources, and perhaps a non-SPL definition of what it is you are trying to achieve.
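If it helps, here is an untested sketch of one join-free way the platform outcome could be folded in with append. It assumes x-request-id on the platform events carries the same ID as reqID and that each request is marked succeed or failed at most once (verify both against your data); append's subsearch result limits still apply:

index=default-va6* sourcetype="myengine-stage" ("API call is True for MyEngine" OR "Target language count")
| rex field=_raw "request_id=(?<reqID>.+?) - "
| rex field=_raw "Target language count (?<num_target>\d+)"
| stats first(num_target) as num_target by reqID
``` pull in the per-request outcome from the platform index ```
| append
    [ search index=platform-va6 sourcetype="platform-ue*" "Marked request as"
      | eval outcome = if(like(message, "Marked request as succeed%"), "succeed", "failed")
      | rename "x-request-id" as reqID
      | fields reqID outcome ]
``` collapse the engine row and the platform row for each request ```
| stats first(num_target) as num_target, first(outcome) as outcome by reqID
| stats sum(num_target) as Total_Requests_Received,
        sum(eval(if(outcome=="succeed", num_target, 0))) as total_succeed_calls,
        sum(eval(if(outcome=="failed", num_target, 0))) as total_failed_calls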
All, I am looking for a solution to identify the hosts that have stopped reporting to Splunk using a lookup table. However, the condition is that there are Primary and Secondary hosts for some data types. I do not want to get alerted if either of the hosts (Primary or Secondary) is reporting. At the same time, I would like to map these hosts to their respective index, so if a host (both primary and secondary in some cases) from a particular index stops reporting, an alert should trigger (I will probably have another column mapping the hosts to their index). Any solution would be highly appreciated!
I'm trying to calculate the data throughput for a cloud computing solution that will be charging based on outgoing data throughput. We're collecting on the link using Security Onion and forwarding those Zeek logs to our Splunk instance.

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
| fields orig_bytes, resp_bytes
| eval terabytes=(((((resp_bytes+orig_bytes)/1024)/1024)/1024)/1024)
| stats sum(terabytes)

This gives me traffic throughput in and out of the network for external connections. However, what I need is to count orig_bytes only when id.orig_h is in my `frontend`, and resp_bytes only when id.resp_h is in `frontend`. I can get them separately by doing two different searches and then adding the results up by hand, but I'm sure there's a way to do what I want in one search using some sort of conditional. I've tried using where and eval if, but I'm just not skilled enough, it seems.
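For what it's worth, here is a hedged sketch of the conditional eval being described. It assumes your `front end` macro expands to a plain ("ip1","ip2",...) value list, so it can be reused with eval's IN operator (macros expand textually before parsing); if it expands to something else, replace the IN tests with whatever defines front-end membership, e.g. cidrmatch. Note the single quotes around id.orig_h and id.resp_h, required in eval because of the dots:

index="zeek" source="conn.log" ((id.orig_h IN `front end`) AND NOT (id.resp_h IN `backend`)) OR ((id.resp_h IN `front end`) AND NOT (id.orig_h IN `backend`))
``` count orig_bytes only for connections the front end originated,
    and resp_bytes only for connections it answered ```
| eval egress_bytes = if('id.orig_h' IN `front end`, orig_bytes, 0)
                    + if('id.resp_h' IN `front end`, resp_bytes, 0)
| eval terabytes = egress_bytes / pow(1024, 4)
| stats sum(terabytes) as egress_TB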
I'm attempting to compute the total number of API calls from our backend engine. Initially, I process API identification text logs as events in the engine's index, enabling me to filter the respective request IDs. Simultaneously, I process the target_num count within the same index/source. By merging these two logs through a join operation, I filter out all relevant requests to compute the total API calls accurately, achieving the desired outcome.

Subsequently, I aim to enhance this by joining the filtered request IDs with another platform's index/source. Here, I intend to determine the success or failure status of each request at the platform level and then multiply it by the original value of target_num. However, upon combining these queries, I'm experiencing discrepancies in the execution results. I'm uncertain about the missing piece causing this issue.

My final query (x-request-id is an existing field on the platform index; there is no rex I am using):

index=default-va6* sourcetype="myengine-stage" "API call is True for MyEngine"
| rex field=_raw "request_id=(?<reqID>.+?) - "
| dedup reqID
| join reqID
    [ search index=default-va6* sourcetype="myengine-stage" "Target language count"
      | rex field=_raw "request_id=(?<reqID>.+?) - "
      | rex field=_raw "Target language count (?<num_target>\d+)"
      | dedup reqID
      | fields reqID, num_target ]
| fields reqID, num_target
| stats count("reqID") as total_calls by num_target
| eval total_api_calls = total_calls * num_target
| stats sum(total_api_calls) as Total_Requests_Received
| rename reqID AS "x-request-id"
| join "x-request-id"
    [ search index=platform-va6 sourcetype="platform-ue*" "Marked request as"
      | eval num_succeed = if(like(message, "Marked request as succeed%"), 1, 0)
      | eval num_failed = if(like(message, "Marked request as failed%"), 1, 0)
      | fields num_succeed, num_failed ]
| fields num_succeed, num_failed
| stats sum(num_succeed) as num_succeed, sum(num_failed) as num_failed
| eval total_succeed_calls = num_succeed * num_target, total_failed_calls = num_failed * num_target
<search>
  <query>index="ourIndex" sourcetype=$stype$ABC AND Is_Service_Account="True" OR Is_Service_Account="False" earliest=-48h
| eval DC=upper(DC)
| eval env1=case(DC like "%Q%","QA", DC like "%DEV%","DEV", true(), "PROD")
| search env1=$envPure$ AND $domainPure$
| rename DC AS domainPure
| stats count</query>
  <earliest>0</earliest>
  <latest></latest>
</search>

If earliest=-48h is set in the query while the source code contains <earliest>0</earliest>, what would happen if we enable an admission rule that disables All Time searches?
Here is an idea: select events in which list{}.name has one unique value, "Hello", and has the value "code" as the first element of list{}.type.

| where mvindex('list{}.type', 0) == "code" AND 'list{}.name' == "Hello" AND mvcount(mvdedup('list{}.name')) == 1

However, given that list is an array, selecting only the first element for matching may not be what the use case demands. (Work with developers to figure out what semantics array order may convey.) Here is one to select any element with value "code".

| where 'list{}.type' == "code" AND 'list{}.name' == "Hello" AND mvcount(mvdedup('list{}.name')) == 1

Here is an emulation of your mock data for you to play with and compare with real data:

| makeresults
| fields - _*
| eval data = mvappend("{ \"list\": [ {\"name\": \"Hello\", \"type\": \"code\"}, {\"name\": \"Hello\", \"type\": \"document\"} ] }", "{ \"list\": [ {\"name\": \"Hello\", \"type\": \"code\"}, {\"name\": \"World\", \"type\": \"document\"} ] }", "{ \"list\": [ {\"name\": \"Hello\", \"type\": \"document\"}, {\"name\": \"Hello\", \"type\": \"document\"} ] }")
| mvexpand data
| rename data AS _raw
| spath
``` data emulation above ```

With this data, the output is the same for both variants:

_raw: { "list": [ {"name": "Hello", "type": "code"}, {"name": "Hello", "type": "document"} ] }
list{}.name: Hello, Hello
list{}.type: code, document
Interesting, thanks for taking the time and replying to my queries. @PaulPanther
Thanks for commenting on my scenario; that is the same conclusion I came to, but I was hoping to find a way around it.
Hi,

Where are the checkpoint values for enabled DB Connect inputs stored? I checked the folder /opt/splunk/var/lib/splunk/modinputs/server/splunk_app_db_connect, but there are only files with the names of our disabled DB inputs, not the ones of our enabled DB inputs.

Splunk Enterprise version: 9.0.4.1
Splunk DB Connect version: 3.6.0

P.S. Our three enabled DB inputs do work correctly, and I can see the checkpoint values from the web UI; I just cannot find where they are stored on the OS.

Best regards, Altin
Hello,

I have a panel with a search query, e.g.

<row><panel><table>
  <search>
    <query>_some_query_ | table A B C D</query>
  </search>
</table></panel></row>

and it displays multiple rows on a dashboard. I am trying to create a button that will send all of the column C data to a different site, so I want to store the column C data as a token. Is there a way to do that?
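One common pattern (a hedged sketch; the token name all_C is a placeholder) is to add a second, hidden search that collapses column C into a single value and saves it with a <done> handler:

<search>
  <query>_some_query_ | stats values(C) as C | eval C=mvjoin(C, ",")</query>
  <done>
    <set token="all_C">$result.C$</set>
  </done>
</search>

$result.C$ only reads the first result row, which is why the query first collapses all C values into one comma-separated field; the $all_C$ token can then be embedded in a <link> or in the target URL behind your button.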
Hello! As a newcomer to the world of IT and cyber security, I am having some trouble. I am trying to set up a Splunk home-lab environment to get some hands-on experience with the application. My hopeful goal is to be able to import or stream some data to a Splunk dashboard so I can mess around and learn for starters, but eventually to set up my own home network monitoring system. I've been able to statically import some local logs and read them over, which is fine. I'd like to set up a better environment for detecting intrusions and analyzing for IOCs. If anyone has some helpful links or advice, I would very much appreciate it!
Hi @richgalloway @isoutamo , thank you for the information and help.
Hi @hfaz, when you say that you enabled forwarding to the Indexers, I suppose that you're speaking of logs. Check that you don't have a deploymentclient.conf file on the HF, possibly distributed using an add-on. Ciao. Giuseppe
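A quick way to verify is btool (standard CLI; the path assumes a default install), run on the HF:

$SPLUNK_HOME/bin/splunk btool deploymentclient list --debug

The --debug flag prefixes every line with the .conf file it was read from, so an add-on shipping deploymentclient.conf shows up immediately by path.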
Hi @Roopashree, Splunk isn't Excel, so you cannot merge two cells, but you could have the NOT_OK value in both rows:

<your_search>
| rex 1
| rex 2
| stats count BY Status Reasons

Please, next time, also add the sample in text mode. Ciao. Giuseppe
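For illustration only — the field values here are hypothetical stand-ins for whatever the two rex commands extract — this runnable emulation shows the repeated NOT_OK rows:

| makeresults count=4
| streamstats count as n
| eval Status = if(n<=2, "NOT_OK", "OK"), Reasons = case(n==1, "timeout", n==2, "auth failure", true(), "-")
| stats count BY Status Reasons

This yields one row per Status/Reasons combination, with NOT_OK appearing in two rows, one per reason.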
Perfect! Thanks for the tip.
I need help with a Splunk query to return events where an array of objects contains a certain value for a key in all the objects of the array.

Event 1: { list: [ {"name": "Hello", "type": "code"}, {"name": "Hello", "type": "document"} ] }
Event 2: { list: [ {"name": "Hello", "type": "code"}, {"name": "World", "type": "document"} ] }
Event 3: { list: [ {"name": "Hello", "type": "document"}, {"name": "Hello", "type": "document"} ] }

Filters:
1. The first object in the list array should have "type": "code".
2. All the items in the list array should have "name": "Hello".

Expected output: of the events above, the query should return Event 1, where the first item has list[0].type = code and every item in list has "name": "Hello".

I tried multiple ways, like:

search list{}.name="Hello"

This was returning the events which had at least one element with name: Hello. However, I was able to achieve the first filter as below:

| eval conflict = mvindex(list, 0)
| spath input=conflict
| search type=code

If someone can help in achieving both the filters in one query, that will be helpful. Thanks in advance.
Hello Splunkers, I'm encountering an issue with data model acceleration in my ES instance. A few weeks ago, I enabled several data models to support correlation searches. However, I recently noticed that there hasn't been any increase in SVC usage, and upon checking today, I found that the acceleration status for these models was disabled. I'm puzzled by this and would appreciate any insights into why this occurred and how to identify the root cause. Thank you.
Hey DK,

Build the PKG, then open Terminal and run the command:

sudo xattr -rd com.apple.quarantine /path/to/the.pkg

This will remove the com.apple.quarantine attribute and stop the computer from checking it for malicious software. The -d option deletes the noted attribute and the -r option acts recursively. If you would like to check which attributes the .pkg has on it, then run the command:

xattr -r /path/to/the.pkg

Hope this helps