All Topics


Hi, I am trying to use the Splunk SDK for JavaScript. I have searched online for this but came up with no answers. I followed the Splunk tutorial for setting up the SDK, and I am trying to log in with var service = new splunkjs.Service(...); however, this gives me the error "a web instance is required". I found a suggested solution, which is to add var http = new splunkjs.ProxyHttp("/proxy"), but this gives me the error "splunkjs.ProxyHttp is not a constructor". Why is this happening?
Hi all, I am attempting to convert data extracted as a field containing a combination of hex and ASCII data. I was wondering whether it is possible to convert the hex data into ASCII without affecting the existing ASCII data? Thanks in advance.
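One possible approach, sketched below on the assumption that the hex portion can first be isolated into its own field (hex_part is a hypothetical field name): rewrite each hex byte pair as a %XX escape with replace, then decode it with urldecode.

    | makeresults
    | eval hex_part="48656c6c6f"
    | eval ascii_part=urldecode(replace(hex_part, "([0-9A-Fa-f]{2})", "%\1"))

Running this yields ascii_part="Hello"; the same eval could then be applied to the extracted hex-only portion of the real field.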
I have installed the available SignalFx plug-in on my on-premises Splunk Enterprise server. I tried searching for tutorials/documentation on how to set up SignalFx with Splunk Enterprise on-premises, but what I learned is that to collect data, the Splunk Distribution of the OpenTelemetry Collector should be used, and the steps for it mention Splunk Observability Cloud. Please help me understand whether SignalFx can be used with Splunk Enterprise, and if yes, how.
I have two saved search reports with the outputs below: saved search 1 (totalCountByClient) outputs client_name and totalCount; saved search 2 (monitoringCountByClient) outputs client_name and monitoringCount. I want to show (monitoringCount/totalCount)*100 by client_name as a timechart. Any help would be appreciated.
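A minimal sketch of one way to combine the two reports, assuming both saved searches also emit a _time field (without one, there is nothing to chart over time):

    | savedsearch monitoringCountByClient
    | append [| savedsearch totalCountByClient]
    | stats sum(monitoringCount) as monitoringCount sum(totalCount) as totalCount by _time client_name
    | eval monitoringPct=round(monitoringCount/totalCount*100, 2)
    | xyseries _time client_name monitoringPct

The stats step merges the appended rows on _time and client_name, and xyseries pivots the result into one series per client for charting.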
I need to monitor privileged-access employees who can transfer files from the internal network to external networks. Privileged-access employees include local admins, users with unlimited internet access, and employees who can use USB flash drives or send emails externally. I need to do this from DLP data. Please help me understand how to do it with Splunk.
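One common pattern is to keep the privileged-user list in a lookup and filter DLP events against it. The sketch below is purely illustrative: privileged_users.csv, index=dlp, and the field names user, action, and dest_zone are all hypothetical placeholders for whatever your DLP add-on actually provides.

    index=dlp action="file_transfer" dest_zone="external"
    | lookup privileged_users.csv user OUTPUT user as is_privileged
    | where isnotnull(is_privileged)
    | stats count by user action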
Hi, I've been banging my head against this brick wall for a while, so I'm reaching out for some expertise. It seems pretty straightforward, and regex101 says my expression should work, but I am not getting any data returned in the new field. The original data is (one event per line):
18 dB, 16 dB, 12 dB, 12 dB, 12 dB, 13 dB, 4 dB, 8 dB, 9 dB, 9 dB
9 dB, 9 dB, 9 dB, 9 dB
9 dB
9 dB, 9 dB, 9 dB, 9 dB, 9 dB
7 dB, 9 dB
I'm trying to remove the space and the text "dB" after every number, so the results for the 4th event would read 9, 9, 9, 9, 9 and the 5th event would be 7, 9. My search returns the events, but no values for the new field:
|rex field=Value "\ dB(?<MicGainText>)" |table Value MicGainText
If anyone could assist, it would be greatly appreciated. Thanks in advance, John
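The empty capture group (?<MicGainText>) matches zero characters, which is why the field always comes back empty; rex in its default mode extracts text, it does not rewrite it. Two minimal sketches that strip the units instead, either editing Value in place with rex in sed mode or building the new field with replace:

    | rex field=Value mode=sed "s/ dB//g"

    | eval MicGainText=replace(Value, "\s+dB", "")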
Hi all, here is the use case I'm dealing with. We have a large virtual environment in which a lot of teams like to just clone one VM to another, meaning that the forwarder hostname and GUID get cloned, which messes with our reporting. I am trying to write a simple script that does the following:
1. Detect whether a UF's hostname is correct.
2. Run a simple scripted input to clear out any cloned configs.
3. Restart the forwarder so that the new configs are picked up.
#3 is causing me trouble. If I put a "splunk restart" command in the main body of the script, then Splunk stops, kills the scripted input, and never restarts. I've also tried creating a "wrapper" script that invokes a separate script to do the restart, but with no success: Splunk stops but does not start back up. Is there a better way to do this? All hosts are AWS Linux.
When trying to help with "Working with SHA1 value.", I encountered a fundamental SPL limitation with large numbers, starting around 10000000000000000 (ten quadrillion, 10^16). But Splunk gives no error, it just behaves erratically. Take this example:

    | makeresults
    | eval i = mvrange(1, 20)
    | eval a = mvmap(i, 1000000000000000 + i)
    | eval b = mvmap(i, 10000000000000000 - i)
    | eval c = mvmap(i, 10000000000000000 + i)
    | eval d = mvmap(i, 100000000000000000 + i)
    | eval e = mvmap(i, 1000000000000000000 + i)

i | a (10^15+i) | b (10^16-i) | c (10^16+i) | d (10^17+i) | e (10^18+i)
1 | 1000000000000001 | 10000000000000000 | 10000000000000000 | 100000000000000000 | 1000000000000000000
2 | 1000000000000002 | 9999999999999998 | 10000000000000002 | 100000000000000000 | 1000000000000000000
3 | 1000000000000003 | 9999999999999996 | 10000000000000004 | 100000000000000000 | 1000000000000000000
4 | 1000000000000004 | 9999999999999996 | 10000000000000004 | 100000000000000000 | 1000000000000000000
5 | 1000000000000005 | 9999999999999996 | 10000000000000004 | 100000000000000000 | 1000000000000000000
6 | 1000000000000006 | 9999999999999994 | 10000000000000006 | 100000000000000000 | 1000000000000000000
7 | 1000000000000007 | 9999999999999992 | 10000000000000008 | 100000000000000000 | 1000000000000000000
8 | 1000000000000008 | 9999999999999992 | 10000000000000008 | 100000000000000000 | 1000000000000000000
9 | 1000000000000009 | 9999999999999992 | 10000000000000008 | 100000000000000020 | 1000000000000000000
10 | 1000000000000010 | 9999999999999990 | 10000000000000010 | 100000000000000020 | 1000000000000000000
11 | 1000000000000011 | 9999999999999988 | 10000000000000012 | 100000000000000020 | 1000000000000000000
12 | 1000000000000012 | 9999999999999988 | 10000000000000012 | 100000000000000020 | 1000000000000000000
13 | 1000000000000013 | 9999999999999988 | 10000000000000012 | 100000000000000020 | 1000000000000000000
14 | 1000000000000014 | 9999999999999986 | 10000000000000014 | 100000000000000020 | 1000000000000000000
15 | 1000000000000015 | 9999999999999984 | 10000000000000016 | 100000000000000020 | 1000000000000000000
16 | 1000000000000016 | 9999999999999984 | 10000000000000016 | 100000000000000020 | 1000000000000000000
17 | 1000000000000017 | 9999999999999984 | 10000000000000016 | 100000000000000020 | 1000000000000000000
18 | 1000000000000018 | 9999999999999982 | 10000000000000018 | 100000000000000020 | 1000000000000000000
19 | 1000000000000019 | 9999999999999980 | 10000000000000020 | 100000000000000020 | 1000000000000000000

In other words, once the number reaches about 10^16, SPL can no longer maintain the sequence. Certain round numbers (10,000,000,000,000,000; 100,000,000,000,000,000; 1,000,000,000,000,000,000; etc.) are still presented, but most numbers in between are not. Is there documentation about this? Is this configurable in limits.conf? Not raising errors makes this a dangerous condition, so I definitely consider this a bug. In modern computing, ten quadrillion is not terribly large; even the shell can handle it:

    $ for i in `seq 20`; do expr 10000000000000000 + $i; done
    10000000000000001
    10000000000000002
    10000000000000003
    10000000000000004
    10000000000000005
    10000000000000006
    10000000000000007
    10000000000000008
    10000000000000009
    10000000000000010
    10000000000000011
    10000000000000012
    10000000000000013
    10000000000000014
    10000000000000015
    10000000000000016
    10000000000000017
    10000000000000018
    10000000000000019
    10000000000000020

(You can try it in any shell.)
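For what it's worth, this looks like standard IEEE 754 double-precision behavior rather than a tunable limit: SPL evaluates numbers as 64-bit floats, which can represent every integer exactly only up to 2^53 = 9,007,199,254,740,992 (about 9 x 10^15), which is exactly where the sequences above start to collapse. A minimal sketch demonstrating the boundary:

    | makeresults
    | eval max_exact = pow(2, 53)
    | eval next = max_exact + 1
    | eval lost = if(next == max_exact, "yes", "no")
    | table max_exact next lost

Here lost comes back "yes": adding 1 beyond 2^53 produces the same double, which is the same silent rounding seen in the b, c, d, and e columns.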
Hi, I want to find systems where the DLP agent is not installed. My data is index=mcafee_epo, sourcetype=mcafee:epo:syslog, source=mcafee:epo:syslog. How do I search for systems on which the DLP agent is not installed?
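One common pattern is to inventory which products each host reports and keep the hosts that never mention DLP. A minimal sketch, assuming hypothetical field names (dest_nt_host for the endpoint and product for the reported McAfee product; check your actual ePO field extractions):

    index=mcafee_epo sourcetype=mcafee:epo:syslog
    | stats values(product) as products by dest_nt_host
    | where isnull(mvfind(products, "(?i)dlp"))

mvfind returns null when no multivalue entry matches the regex, so the where clause keeps only hosts with no DLP product anywhere in their event history.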
I have a log like this; how do I parse it into fields? Is there a way to use Splunk to parse this and extract one value? If so, how? Thank you in advance. Regards, Imam
Hello, I would like to extract specific values from a log and display them in my dashboard. For example, the value is: ?QuoteId=CA10118&AgentId=12345&state=MN&Category=RetailSales. Is it possible to extract "AgentId=12345" and "state=MN"? The "AgentId" and "state" keys will always be the same; the values that follow will always change. I would also like to display each value in a separate column, i.e.:
Agent ID | State
12345 | MN
Any help would be appreciated.
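A minimal sketch using rex, assuming the query string appears in the raw event text:

    | rex field=_raw "AgentId=(?<AgentId>[^&\s]+)"
    | rex field=_raw "state=(?<State>[^&\s]+)"
    | table AgentId State

The [^&\s]+ pattern captures everything up to the next & delimiter (or whitespace), so changing values of any length are picked up.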
I have a lookup containing IPs and I want to compare it to an event field that contains CIDR notation. I created a lookup definition and added the match type under the advanced options: CIDR(ip).
My lookup:
ip | exist
1.1.1.1 | "yes"
2.2.2.2 |
My event: cidr = 1.1.1.254/32
My query:
|makeresults | lookup ip ip as cidr output exist
What did I do wrong?
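Note that CIDR matching works the other way around: with match_type CIDR(ip), the lookup's ip column must contain the CIDR ranges, and the event supplies a plain IP to test against them (a lookup row of 1.1.1.1 will not match the input 1.1.1.254/32). A minimal sketch of the underlying semantics using cidrmatch, where the first argument is the range and the second the address:

    | makeresults
    | eval range="1.1.1.0/24", ip="1.1.1.254"
    | eval exist=if(cidrmatch(range, ip), "yes", "no")

So either store ranges (e.g. 1.1.1.0/24) in the lookup's ip column, or reduce the event's CIDR field to an IP before the lookup (for a /32, stripping the suffix is enough).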
Hi, I am looking for a way to convert a SHA-1 value to an integer with base 16, to do further arithmetic operations on it using eval. I tried:
..... |eval sha=sha1("xyz123") |eval dec_number=tonumber(sha,16)
but dec_number returns nothing. Could anyone suggest something?
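A SHA-1 digest is 160 bits, far larger than eval's numbers can hold (they are 64-bit doubles, exact only up to 2^53), so tonumber on the full 40-digit hex string returns null. If a partial value is acceptable for the arithmetic, a minimal sketch that converts only the first 13 hex digits (52 bits, safely below 2^53):

    | makeresults
    | eval sha=sha1("xyz123")
    | eval dec_number=tonumber(substr(sha, 1, 13), 16)

Anything that needs the full 160-bit value (e.g. exact comparisons) is better done on the hex string itself rather than as a number.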
Hello Splunkers! We are using Splunk App for Infrastructure version 2.2.2 and have upgraded to the latest version, 2.2.5. However, after upgrading, the app got stuck while migrating, at "SAI is currently migrating to a newer version, page will automatically reload when migration finishes", and it has been in this state ever since. Has anyone encountered such a thing in the past and solved the issue?
      index="dummy" url="https://www.dummy.com" status="200 OK" | stats count by id | where count > 10     If I apply this above query for 1 day, I would get this result, for example ... See more...
      index="dummy" url="https://www.dummy.com" status="200 OK" | stats count by id | where count > 10     If I apply this above query for 1 day, I would get this result, for example     id count ABC 50 XYZ 60     This would mean ABC called `https://www.dummy.com` 50 times in 1 day, and XYZ called that 60 times. Now I want to check this for 1 day but with every two hours interval Suppose, ABC called that request 25 times at 12:00 AM, and then calls it 25 times at 3:AM, and XYZ called all the 60 requests between 12 AM and 2 AM I want the output to look like this (time format doesn't matter)     id count time XYZ 60 12:00 AM ABC 25 12:00 AM ABC 25 2:00 AM     Also, If I modify the query like this,  count > 30, instead of count > 10, then it should only show the XYZ field, since ABC has 25 counts for both of them.  How do I modify my query? 
Hi All, I am trying to combine 2 queries to get the result; I am getting a result, but not the one I expected.

Query1:

index=app-map-idx shname=niht_map_*
| append [| inputlookup customerdata.csv | addinfo | where _time>=info_min_time AND _time<=info_max_time | fields - info* | eval done=1, shname=device."_name", source=device."_name"]
| table _time sys operation web pass fail fvb quantity done shname
| eventstats sum(done) as done by shname sys
| bucket _time span=w
| stats sum(*) as * values(done) as done dc(web) as webb by _time shname sys operation
| appendpipe [| search sys="wi lapcdmb" operation="login page" OR operation="app page" OR operation="userpage" OR operation="custpage" | foreach pass fail fvb quantity [| eval <<FIELD>>=<<FIELD>>*0.09] | eval sys="pim"]
| eval "pass percentage"=pass/quantity*100
| eval iel=fail/quantity*100
| fillnull fvb value=0
| rename pass as Pass, fail as Fail, sys as Sys, fvb as Fvb, quantity as Quantity, operation as Operation, webb as Webb
| lookup application.csv Sys OUTPUT App mtr
| search mtr=given
| table _time "pass percentage" iel App Sys Operation Webb Pass Fail Quantity Fvb
| sort 0 Sys

Output:

_time | App | Sys | Operation | Webb | Pass | Fail | Quantity | Fvb | pass percentage | iel
2022-11-10 | Custom Customer | sys | sigin app - customer | 1 | 20 | 0 | 20 | 1 | 100 | 0
2022-11-10 | Cli app | CA | sigin app - application | 2 | 30 | 10 | 40 | 10 | 99 | 0.01
2022-11-10 | sigin app | SG | sigin app - client | 2 | 10 | 1 | 11 | 1 | 100 | 0
2022-11-10 | user app | UA | sigin app - app1 | 1 | 60 | 0 | 60 | 16 | 80 | 0.05
2022-11-10 | user app | UA | sigin app - app2 | 4 | 50 | 0 | 50 | 0 | 20 | 0.9
2022-11-10 | user app | UA | sigin app - app3 | 5 | 100 | 9 | 109 | 0 | 0 | 0
2022-11-10 | user app | UA | sigin app - app4 | 8 | 21 | 0 | 21 | 0 | 0 | 0
2022-11-10 | user app | UA | sigin app - app5 | 9 | 23 | 0 | 23 | 0 | 0 | 0
2022-11-10 | suzk app | SA | sigin app - app6 | 2 | 16 | 8 | 24 | 1 | 99 | 0
2022-11-10 | qualis app | QA | sigin app - app7 | 1 | 100 | 10 | 110 | 8 | 10 | 0
2022-11-10 | benz app | BA | sigin app - app8 | 2 | 500 | 11 | 511 | 6 | 20 | 0
2022-11-10 | client app | CA | sigin app - app1 | 4 | 200 | 0 | 200 | 4 | 100 | 0
2022-11-10 | client app | CA | sigin app - app2 | 1 | 800 | 1 | 801 | 1 | 100 | 0.11
2022-11-10 | client app | CA | sigin app - app3 | 3 | 220 | 1 | 221 | 0 | 90 | 0.22
2022-11-10 | client app | CA | sigin app - app4 | 2 | 1000 | 11 | 1011 | 0 | 99 | 0.45
2022-11-10 | client app | CA | sigin app - app5 | 1 | 1089 | 12 | 2178 | 9 | 99 | 0.56

Query2:

index=blrm-app-idx sourcetype=blrm_appl ("QA: ab" OR "QA: cd" OR "QA: ef" OR "QA: gh")
    ((procedure=POST web="/cate/surface*") OR (procedure=GET web="/choc/plain/otp*") OR (procedure=POST web="/may/duration") OR (procedure=GET web="/year/days") OR web="/nam/power/error" OR web="/move/cloud" OR web="/mont/days")
    NOT [| inputlookup angryer.csv | rename PI as result | fields result]
| eval type=case(
    faildigit=978 OR faildigit=435 OR faildigit=543 OR faildigit=987, "error",
    (faildigit>98763 AND faildigit<98765) OR faildigit=123 OR faildigit=456 OR faildigit=789, fvb,
    isnull(faildigit), "pass",
    isnotnull(faildigit), "error")
| search type="error"
| eval operation=case(
    match(web, "/cate/surface*"), "sigin app - customer",
    procedure="GET" AND web="/choc/plain/otp*", "sigin app - application",
    procedure="POST" AND web="/may/duration", "sigin app - client",
    procedure="GET" AND web="/year/days", "sigin app - app1",
    web="/nam/power/error", "sigin app - app2",
    web="/move/cloud", "sigin app - app3",
    web="/mont/days", "sigin app - app4",
    1=1, "unknown")
| eval web=procedure." ".web
| eval sys=case(
    cliapp="ab", "applibase",
    cliapp="cd", "cusbase",
    cliapp="ef", "efffilm",
    cliapp="gh", "gohome",
    1=1, null())
| bin _time span=1h
| stats dc(cust_ip) as effect_apps by _time sys operation web

OUTPUT:

_time | sys | operation | web | effect_apps
2022-11-10 | UA | sigin app - app1 | 1 | 10
2022-11-10 | UA | sigin app - app2 | 2 | 12
2022-11-10 | UA | sigin app - app3 | 2 | 10
2022-11-10 | UA | sigin app - app4 | 1 | 40
2022-11-10 | UA | sigin app - app5 | 4 | 2
2022-11-10 | CA | sigin app - app1 | 5 | 6
2022-11-10 | CA | sigin app - app2 | 8 | 1
2022-11-10 | CA | sigin app - app3 | 9 | 3
2022-11-10 | CA | sigin app - app4 | 2 | 4
2022-11-10 | CA | sigin app - app5 | 1 | 8
We have a rather huge environment with 2000+ servers. Our company needs us to monitor the IIS logs. The problem is that not all servers have IIS, and some have logs on the C: drive while others have them on the E: drive. We therefore set up monitors for "E:\inetpub\logs\LogFiles" and "C:\inetpub\logs\LogFiles" on all servers; it would be impossible to select just the servers with IIS running. The result is that we get lots of warnings, since some servers have logs on E:, some on C:, and some have no IIS logs at all, so the logs fill with messages like this:
11-19-2022 16:22:04.199 +0100 WARN FilesystemChangeWatcher - error getting attributes of path "E:\inetpub\logs\LogFiles": The device is not ready.
1. Does this cause any problems?
2. This clogs up the logs, making it harder to find real problems. If these messages are not causing any problems, can they be turned off for some monitored files?
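As a first step, the warning volume per host can be measured from Splunk's own internal logs; a minimal sketch:

    index=_internal sourcetype=splunkd log_level=WARN component=FilesystemChangeWatcher
    | stats count by host
    | sort - count

This at least shows which forwarders are producing the noise and how much of the internal log it accounts for.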
Hi everyone, I am using the collect command to write data to a summary index. The search returns the correct values when I run it manually without the collect command, but when I add collect, the values become 0. Attached are screenshots from before collect and of the job results of the saved search. If anyone has an idea about this, please let me know. Thanks
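For comparison, a minimal collect pattern that normally preserves computed values (my_summary is a hypothetical summary index name, and the base search and field names are placeholders):

    index=main sourcetype=my_data
    | stats count as event_count by host
    | collect index=my_summary

If a pattern like this also writes zeros, the difference between the manual run and the scheduled run (time range, permissions, or fields dropped before collect) is the usual place to look.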
I have below json data:  { "source": "Mule", "sourcetype": "_json", "index": "metrics", "event": [ { "date": "2022-11-19T13:57:01.427+05:30", "businessGroup": "testgroup", "businessGroupId": "1234", "coreServicesMetrics": { "users": { "total": 38 }, "environments": { "total": 3 } }, "designCenterMetrics": { "total": 5 }, "exchangeMetrics": { "total": 1, "reuse": { "fragments": 0, "implementedApis": 0 } }, "apiManagerMetrics": { "clients": 0, "apis": { "production": { "total": 0, "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "PROD", "environmentType": "Production", "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] }, "sandbox": { "total": 0, "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 }, { "environmentName": "TEST", "environmentType": "Sandbox", "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] } } }, "runtimeManagerMetrics": { "cloudhub": { "networking": { "vpcsTotal": 9, }, "applications": { "production": { "vcoresTotal": 31, "runtimesUsed": [ ], "runtimesUsedTotal": 0, "details": [ { "environmentName": "X", "environmentType": "X", "runtimesUsed": [ ], "runtimesUsedTotal": 0 } ] }, "sandbox": { "vcoresTotal": 14, "runtimesUsed": [ ], "runtimesUsedTotal": 0, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "vcoresUsed": 0, "runtimesUsed": [ ], "runtimesUsedTotal": 0 }, { "environmentName": "TEST", "environmentType": "Sandbox", "runtimesUsed": [ ], "runtimesUsedTotal": 0 } ] } } } } }, { "date": "2022-11-19T13:57:01.427+05:30", "businessGroup": "MainGroup", "businessGroupId": "5678", "coreServicesMetrics": { "users": { "total": 16 }, "environments": { "total": 3 } }, "designCenterMetrics": { "total": 11 }, "exchangeMetrics": { "total": 11, "reuse": { "fragments": 10, } }, "apiManagerMetrics": { "clients": 0, "apis": { "production": { "total": 0 "contracts": 0, "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0, "details": [ { "environmentName": "X", "environmentType": "X", "policiesUsed": [ ], "policiesUsedTotal": 0, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 0 } ] }, "sandbox": { "total": 20, "contracts": 24, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 1499, "details": [ { "environmentName": "DEV", "environmentType": "Sandbox", "contracts": 11, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 585 }, { "environmentName": "TEST", "environmentType": "Sandbox", "contracts": 13, "policiesUsed": [ "client-id-enforcement" ], "policiesUsedTotal": 1, "automatedPoliciesUsed": [ ], "automatedPoliciesUsedTotal": 0, "transactions": 914 } ] } } } } ] }   I want to extract below data : event{where businessGroup="MainGroup"}.apiManagerMetrics.apis.sandbox.details{where 
environmentName="DEV"}.transactions using splunk query. Can you please help me with that