All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, I am looking into using Splunk to monitor SQL Server backups. Can the Splunk App for SQL do this? I can't see the source type in the release notes. Thanks, Joe
Hello, I'm pretty new to Splunk and I'm looking for help trying to find ASA open connections between two endpoints. Most connections I search for have a 'Built' action and then, some time later, a corresponding 'Teardown' action. I'm looking for those connections that have the 'Built' action but not the 'Teardown' action. The basic search I have pulls down all of the connections between the two: index="cisco" src_ip="10.55.45.12" dest_ip="10.65.45.20" dest_port=445 Is there a way to expand this search to find these open connections based on the absence of a teardown?
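One possible approach, sketched here on the assumption that the ASA events carry an `action` field with values like `built`/`teardown` and a `src_port` field (as the Cisco add-ons typically extract), is to group events per connection tuple and keep only the groups that never saw a teardown:

```
index="cisco" src_ip="10.55.45.12" dest_ip="10.65.45.20" dest_port=445
| stats values(action) as actions earliest(_time) as started by src_ip src_port dest_ip dest_port
| where isnull(mvfind(actions, "(?i)teardown"))
| convert ctime(started)
```

`mvfind` returns null when no value of the multivalue `actions` field matches, so only connections without a teardown survive the `where`.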
Hi, will Splunk use a more explicit monitor stanza over a wildcard stanza? Since the stanzas are not identical, I do not believe Splunk merges them and applies lexicographical order, so which stanza wins for the monitored file? My assumption is the more explicit stanza, but I can't find documentation to back that up. Example:

[monitor:///var/log/]
index = linux

vs.

[monitor:///var/log/secure.log]
index = main
Hi all, we are able to extract data from Splunk by running a Unix shell script that uses the 'curl' utility: /opt/bin/curl -s -S -m 60 -k ${url} -u "${userid}:${passwd}" -o $file_tmp -d output_mode=csv -d search="${src}" We now need to migrate away from curl. Is there a replacement utility we can use to do this from a Unix shell?
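If a scripting language is available, the same export call can be made without curl. Below is a minimal Python sketch using only the standard library; the endpoint URL, credentials, and search string are placeholders standing in for whatever the original script passed in ${url}, ${userid}/${passwd}, and ${src}:

```python
import base64
import urllib.parse
import urllib.request

def build_export_request(url, search, userid, passwd, output_mode="csv"):
    """Build an authenticated POST request equivalent to:
    curl -k $url -u "$userid:$passwd" -d output_mode=csv -d search="$src"
    (curl's -k, i.e. skipping certificate checks, would need an
    unverified ssl context passed to urlopen if still required)."""
    body = urllib.parse.urlencode({"output_mode": output_mode,
                                   "search": search}).encode()
    req = urllib.request.Request(url, data=body, method="POST")
    token = base64.b64encode(f"{userid}:{passwd}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    return req

def export_to_file(url, search, userid, passwd, out_path, timeout=60):
    """Run the search and write the CSV response to out_path."""
    req = build_export_request(url, search, userid, passwd)
    with urllib.request.urlopen(req, timeout=timeout) as resp, \
         open(out_path, "wb") as out:
        out.write(resp.read())
```

The same pattern works with any HTTP-capable tool (wget, Python, etc.); the essential parts are basic auth and the two form fields.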
Hi all, I have been trying to monitor a directory of CSV files. Let me explain: I have multiple PowerShell scripts running that export their results to CSV files in a directory. I have configured a data input on the corresponding directory and whitelisted the CSV files, which gives me the following in inputs.conf:

[monitor://C:\Program Files\Splunk\etc\apps\search\bin\Powershell\Results]
disabled = false
index = powershell_scripts
whitelist = \.csv$

Every time I run a PowerShell script to test whether the input works, the script creates or updates the CSV file, but it isn't ingested into Splunk. Does someone know why this could be? Thank you, Sasquatchatmars
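One common cause (an assumption here, since the files themselves aren't shown) is Splunk's file-tracking CRC: it is computed over the first bytes of a file, so CSVs that all start with the same header row can look like files Splunk has already read. A sketch of the same stanza with a per-source CRC salt:

```
[monitor://C:\Program Files\Splunk\etc\apps\search\bin\Powershell\Results]
disabled = false
index = powershell_scripts
whitelist = \.csv$
# Include the full path in the CRC so identically-headed CSVs are tracked separately
crcSalt = <SOURCE>
```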
Hi there, I'd like to create a search to look for group membership changes in Active Directory. So far I've created this search: | tstats dc(All_Changes.user) as Useraccounts from datamodel=Change where All_Changes.result_id="4732" OR All_Changes.result_id="4733" by All_Changes.dest All_Changes.action All_Changes.result which provides me results such as: user account blabla added to group, user account blabla removed from group, etc. However, I'd like to refine this search to determine whether a user has been added to a particular privileged group and then removed from that same group within a specific time frame, for instance within an hour. Thanks in advance, Erik
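One way to sketch this, assuming the Change data model exposes the group name in a field such as `All_Changes.object` (an assumption; the actual field depends on the add-on's mapping), is to pull both event IDs per user/group pair and compare the add and remove timestamps:

```
| tstats min(_time) as add_time max(_time) as remove_time
    values(All_Changes.result_id) as result_ids
    from datamodel=Change
    where All_Changes.result_id IN ("4732", "4733")
    by All_Changes.user All_Changes.object
| where mvcount(result_ids) = 2 AND remove_time - add_time <= 3600
```

This is only a rough sketch: it assumes the add (4732) precedes the remove (4733) within the search window, so min/max line up with add/remove.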
Hello, I would like to understand the exact processing of a subsearch. As far as I know, subsearches execute first and their results become part of the main search. So in the example below, does that mean that the search after the join command (`wire`) runs first, that those results are crossed with the host list in "host.csv", and finally that those results become part of the main search (`CPU`)? Does "their results become part of the main search" mean that the results of the `wire` search are added to the corresponding events of the `CPU` search, or that the results of the `wire` search are crossed with the corresponding events of the `CPU` search? By results I mean the results by host, because "host" is the only common field.

[| inputlookup host.csv | table host ] `CPU` | lookup fo_all HOSTNAME as host output SITE DESCRIPTION_MODEL BUILDING_CODE | search SITE=XYZ | stats last(DESCRIPTION_MODEL) as Model, count(process_cpu_used_percent) as "Number of CPU alerts", last(SITE) as Site, last(BUILDING_CODE) as Building by host | join host type=outer [| search `wire` | rename USERNAME as host | lookup aps.csv NAME as AP_NAME OUTPUT Building | stats last(AP_NAME) as "Access point", last(Building) as "Geolocation building" by host ] | rename host as Hostname | table Hostname Model Site Building Room "Access point" "Geolocation building" "Number of CPU alerts" | sort -"Number of CPU alerts"

Other question: if I look at the "Number of CPU alerts" for the host "RE2345", I have 34 CPU alerts. But if I swap the main search and the subsearch, as below, I have 6 CPU alerts:

[| inputlookup host.csv | table host ] `wire` | rename USERNAME as host | lookup aps.csv NAME as AP_NAME OUTPUT Building | stats last(AP_NAME) as "Access point", last(Building) as "Geolocation building" by host | join host type=outer [| search `CPU` | lookup fo_all HOSTNAME as host output SITE DESCRIPTION_MODEL BUILDING_CODE | search SITE=XYZ | stats last(DESCRIPTION_MODEL) as Model, count(process_cpu_used_percent) as "Number of CPU alerts", last(SITE) as Site, last(BUILDING_CODE) as Building by host ] | rename host as Hostname | table Hostname Model Site Building Room "Access point" "Geolocation building" "Number of CPU alerts" | sort -"Number of CPU alerts"

How to explain the difference, please? Thanks in advance.
I am using a search query across indexes with the join operator and get the result below. Search query:

index=case_management AND cef_name="Case inserted" | where fname LIKE "%%CMI - IPS%%" | dedup fileId | join fname [ search index=case_management AND cef_name="Case updated" ] | rex field=fname "CMI - IPS - \((?<customer_id>[\d]+)\) - CMI (?<Env>[^\s]+) - " | where Env ="Prod" | timechart span=1mon count by flexString2 fixedrange=false cont=false | where _time>=relative_time(now(),"-3mon@mon") AND _time<relative_time(now(),"-0mon@mon")

Result:
_time      Closed   Follow-Up   Queued
2020-09    113      4           1
2020-10    26       0           0

I want to get the same result by writing a query using a data model. @elrich11
I have a lot of different alerts on our Splunk. After every upgrade or change on Splunk, we want to check whether our alerts still work. How can we ensure the quality of the alerts? How can we report whether our alerts work properly as planned? Thanks.
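As a rough health check, the run status of each scheduled alert can be summarized from Splunk's internal scheduler logs. The sketch below assumes the usual `scheduler` sourcetype fields (`savedsearch_name`, `status`); adjust to what your deployment actually logs:

```
index=_internal sourcetype=scheduler status=*
| stats count(eval(status=="success")) as successes
        count(eval(status!="success")) as failures
        latest(status) as last_status
  by savedsearch_name
| where failures > 0
```

Note this only shows whether the searches ran and completed; verifying that an alert still fires on the right conditions after an upgrade still requires test data or a known-bad event.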
Hi, I have written the following query, where a field can contain 2 actions, as below.

Query: sourcetype="my_sourcetype" session_id="1011" | eval indicator=mvappend(src,dest) | mvexpand indicator | stats count values(action) by indicator, session_id

Result:
indicator    session_id   count   values(action)
23.45.6.78   1011         2       allowed teared
23.45.6.79   1045         2       allowed

Now I want to exclude the rows whose action field contains both allowed and teared. Please suggest any ideas.
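One sketch, assuming the action values are exactly "allowed" and "teared", is to name the multivalue result and drop rows where both values are present:

```
sourcetype="my_sourcetype" session_id="1011"
| eval indicator=mvappend(src,dest)
| mvexpand indicator
| stats count values(action) as actions by indicator session_id
| where NOT (isnotnull(mvfind(actions, "^allowed$")) AND isnotnull(mvfind(actions, "^teared$")))
```

`mvfind` returns the index of the first matching value, or null when none matches, so the `where` keeps only rows missing at least one of the two actions.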
Hi all, I'm trying to delete the description that comes at the end of some Windows events. From the CM I deployed the following configuration in props.conf:

[host::my.windows.host]
SEDCMD-strip_detail_msg = s/(?ims)\s+^This\sevent\sis\generated\s.+//g

After looking into the events I can see that no SEDCMD has been applied. I'm receiving these events from a UF that collects the logs via WMI with the Splunk_TA_windows. This TA is also installed on the indexers. Thanks in advance. Best regards.
Hi, please let me know how to set this up for the first time.
1. I have installed the TA from https://splunkbase.splunk.com/app/3991/
2. I have created a HEC token
3. I have created 2 new indexes
4. What should I do next from the Splunk end, and what should the Kubernetes team do?
Hi team, I have created a dashboard with 8 panels, but it is running extremely slowly. How can I improve the performance? Here is the XML source:   <form> <label>CAL Template Configuration</label> <fieldset submitButton="true" autoRun="true"> <input type="time" token="field1"> <label>Please select a time range</label> <default> <earliest>@d</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="DC"> <label>Please select a data center</label> <choice value="*">All</choice> <choice value="DC02">DC02</choice> <choice value="DC04">DC04</choice> <choice value="DC08">DC08</choice> <choice value="DC10">DC10</choice> <choice value="DC12">DC12</choice> <choice value="DC15">DC15</choice> <choice value="DC16">DC16</choice> <choice value="DC17">DC17</choice> <choice value="DC18">DC18</choice> <choice value="DC19">DC19</choice> <choice value="DC22">DC22</choice> <choice value="DC23">DC23</choice> <choice value="DC41">DC41</choice> <choice value="DC42">DC42</choice> <choice value="DC44">DC44</choice> <choice value="DC48">DC48</choice> <default>*</default> </input> <input type="dropdown" token="ENV"> <label>Please select an environment</label> <choice value="pc">Production</choice> <choice value="sc">Preview</choice> <default>pc</default> </input> </fieldset> <row> <panel> <title># of Data Source Usage</title> <chart> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_DS |dedup C_CTID |eval dataSource=replace(replace(C_DS,"\[",""),"\]","") |makemv delim="," dataSource | where dataSource!=others |stats count by dataSource</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> <option name="charting.chart.showLabels">true</option> <option
name="charting.chart.showPercent">true</option> </chart> </panel> <panel> <title># of Rating Type Usage</title> <chart> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_RT |dedup C_CTID |eval ratingType=replace(replace(C_RT,"\[",""),"\]","") |makemv delim="," ratingType |stats count by ratingType</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="charting.chart">pie</option> <option name="charting.drilldown">none</option> <option name="refresh.display">progressbar</option> <option name="charting.chart.showLabels">true</option> <option name="charting.chart.showPercent">true</option> </chart> </panel> </row> <row> <panel> <title># of Decimal Rating Usage</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_RT |dedup C_CTID |rex max_match=0 field=C_RT "(?P&lt;ratingEnabled&gt;[^\[,\]]+)" | mvexpand ratingEnabled |rex max_match=0 field=C_RTD "(?P&lt;decimal&gt;[^\[,]+)Decimal" |eval decimaled=if(in(ratingEnabled,decimal), 1,0 ) |stats count sum(decimaled) as decimaledCount by ratingEnabled |eval ratio%=round(decimaledCount*100/count, 2) |sort - ratio%</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title># of Guideline Rating Usage</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_RT |dedup C_CTID |rex max_match=0 field=C_RT "(?P&lt;ratingEnabled&gt;[^\[,\]]+)" |mvexpand ratingEnabled |rex max_match=0 field=C_RTG "(?P&lt;guideline&gt;[^\[,]+)Guideline" |eval 
guided=if(in(ratingEnabled,guideline), 1,0 ) |stats count sum(guided) as guidedCount by ratingEnabled |eval ratio%=round(guidedCount*100/count, 2) | sort - ratio%</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title># of Advanced Option Usage</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_OA |dedup C_CTID |eval advancedOptions=replace(replace(C_OA,"\[",""),"\]","") |makemv delim="," advancedOptions |stats count by advancedOptions</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title># of enabled user fields on List View</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_FL |dedup C_CTID |eval userFields=replace(replace(C_FL,"\[",""),"\]","") |makemv delim="," userFields |stats count by userFields</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> <row> <panel> <title>Buckets Number of Bin View</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_BN | dedup C_CTID | rex field=C_BN max_match=0 "(?P&lt;bin&gt;\w+):(?P&lt;buckets&gt;\d+)" | eval zip=mvzip(bin, buckets) | mvexpand zip | eval split=split(zip,",") | 
eval bin=mvindex(split,0), buckets=mvindex(split,1) | stats count by bin, buckets</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> <panel> <title>Scale Type of Matrix View</title> <table> <search> <query>index=*bizx_application AND (servername=$ENV$* OR host=$ENV$*) AND SFDC=$DC$ AND sourcetype=perf_log_bizx AND ACT=SAVE_CALIBRATION_TEMPLATE AND C_BM | dedup C_CTID | rex field=C_BM max_match=0 "(?P&lt;matrix&gt;\w+\*\w+):(?P&lt;buckets&gt;\d+\*\d+)" | eval zip=mvzip(matrix, buckets) | mvexpand zip | eval split=split(zip,",") | eval matrix=mvindex(split,0), buckets=mvindex(split,1) | stats count by matrix, buckets</query> <earliest>$earliest$</earliest> <latest>$latest$</latest> </search> <option name="dataOverlayMode">heatmap</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> </table> </panel> </row> </form>      
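All eight panels repeat the same base filter plus `dedup C_CTID`, so the same events are scanned eight times. One standard way to speed this up is a Simple XML base search that runs once and is post-processed by each panel. The sketch below is illustrative only; the `fields` list is inferred from the panel queries above, and each panel's `<query>` keeps only its own tail:

```xml
<search id="base_cal">
  <query>index=*bizx_application (servername=$ENV$* OR host=$ENV$*) SFDC=$DC$ sourcetype=perf_log_bizx ACT=SAVE_CALIBRATION_TEMPLATE
    | dedup C_CTID
    | fields C_DS C_RT C_RTD C_RTG C_OA C_FL C_BN C_BM</query>
  <earliest>$earliest$</earliest>
  <latest>$latest$</latest>
</search>

<!-- Each panel then references the base search: -->
<chart>
  <search base="base_cal">
    <query>| eval dataSource=replace(replace(C_DS,"\[",""),"\]","") | makemv delim="," dataSource | where dataSource!="others" | stats count by dataSource</query>
  </search>
</chart>
```

Note that post-process searches have result-count limits and work best when the base search restricts fields and rows as much as possible, as above.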
Hi, I have JSON logs which contain 2 time fields: Horodate and Timestamp.

{"ObjectLevelN3": "T", "ObjectIdN2": "R", "ObjectLevelN2": "TAG", "Level": "C3", "Version": "1", "ObjectIdN3": "00:00:03", "Statut": "OK", "Horodate": "2020-12-01T08:14:25.026Z", "ObjectType": "B", "Category": "CU", "Protocol": "PULL", "ObjectLevelN1": "F", "Environment": "DEV", "ObjectIdN1": "WI", "Id": "03d6a3ef", "IdCorrelation": "17103d7f", "Timestamp": "2020-12-01T08:14:00.442Z"}

My props.conf sets the _time field from the Horodate field:

[test_json]
SHOULD_LINEMERGE = 0
pulldown_type = 1
INDEXED_EXTRACTIONS = json
TIME_PREFIX = Horodate

We are in GMT+1, so as you can see in the attached file, the _time field looks like: 12/1/20 9:14:25.026 AM. My issue is: how can I display the Horodate and Timestamp fields in GMT+1?
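One sketch at search time, assuming the ISO-8601 values always end in Z (UTC) and that a fixed +1 hour offset is acceptable (it would not track DST changes), is to parse each field and re-render it shifted by 3600 seconds; the `%3N` subsecond directive may need adjusting to your Splunk version:

```
| eval Horodate_local=strftime(strptime(Horodate, "%Y-%m-%dT%H:%M:%S.%3NZ") + 3600, "%Y-%m-%d %H:%M:%S.%3N")
| eval Timestamp_local=strftime(strptime(Timestamp, "%Y-%m-%dT%H:%M:%S.%3NZ") + 3600, "%Y-%m-%d %H:%M:%S.%3N")
```

For _time itself, the display timezone normally follows the searching user's timezone preference rather than props.conf.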
I'm running Splunk 8.1.0.1 and Cisco eStreamer eNcore 4.0.9, and I have configured the Cisco FMC for eStreamer integration, but it doesn't show any logs. I have some errors in splunkd.log and estreamer.log. I don't receive any results when I search for sourcetype="cisco:estreamer:data".

splunkd.log:
12-01-2020 10:55:45.104 +0330 INFO DatabaseDirectoryManager - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/_telemetry/db duration=0.000
12-01-2020 10:56:16.088 +0330 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf
12-01-2020 10:56:35.888 +0330 WARN CalcFieldProcessor - Invalid eval expression for 'EVAL-first_pkt_sec' in stanza [cisco:estreamer:data]: The expression is malformed. Expected AND.
12-01-2020 10:56:43.574 +0330 WARN CalcFieldProcessor - Invalid eval expression for 'EVAL-first_pkt_sec' in stanza [cisco:estreamer:data]: The expression is malformed. Expected AND.
12-01-2020 11:00:00.002 +0330 INFO ExecProcessor - setting reschedule_ms=3599998, for command=/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/splunk_instrumentation/bin/instrumentation.py
12-01-2020 11:00:45.541 +0330 ERROR ExecProcessor - message from "/opt/splunk/etc/apps/TA-eStreamer/bin/splencore.sh clean" find: ‘../../data’: No such file or directory
12-01-2020 11:04:45.710 +0330 WARN LocalAppsAdminHandler - Using deprecated capabilities for write: admin_all_objects or edit_local_apps. See enable_install_apps in limits.conf
12-01-2020 11:09:16.851 +0330 WARN CalcFieldProcessor - Invalid eval expression for 'EVAL-first_pkt_sec' in stanza [cisco:estreamer:data]: The expression is malformed. Expected AND.
12-01-2020 11:09:47.042 +0330 WARN CalcFieldProcessor - Invalid eval expression for 'EVAL-first_pkt_sec' in stanza [cisco:estreamer:data]: The expression is malformed. Expected AND.
estreamer.log:
2020-12-01 10:57:47,097 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 10:58:58,905 Monitor INFO Running. 3465700 handled; average rate 1604.32 ev/sec;
2020-12-01 10:59:47,105 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:00:58,856 Monitor INFO Running. 3642600 handled; average rate 1597.5 ev/sec;
2020-12-01 11:01:47,003 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:02:59,543 Monitor INFO Running. 3729700 handled; average rate 1553.92 ev/sec;
2020-12-01 11:03:46,998 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:04:59,259 Monitor INFO Running. 3744100 handled; average rate 1485.59 ev/sec;
2020-12-01 11:05:47,086 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:06:59,648 Monitor INFO Running. 3759600 handled; average rate 1423.95 ev/sec;
2020-12-01 11:07:47,049 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:08:59,299 Monitor INFO Running. 3773900 handled; average rate 1367.29 ev/sec;
2020-12-01 11:09:47,126 Service ERROR [no message or attrs]: PID file already exists
2020-12-01 11:10:59,220 Monitor INFO Running. 3788200 handled; average rate 1315.21 ev/sec;
Hi, I have 2 different events; they can be correlated by "Id". I am trying to display them in a table in the format below, where the records should be in a single row: api_name, Id, OpName, Response, Current, System_Service_Response

Event 1: api_name=apple||Id=12345||OpName=Update||Response_Code=200||Response_Status=COMPLETED||Response=[{"number":"99999","status":"Welcome back"}]||
Event 2: api_name=apple||Id=12345||System_Name=Oracle||Service_Name=Oracle||Operation_Name=test||System_Status_Code=200||System_Service_Status=COMPLETED||System_Service_Response={"number":"99999","status":"Welcome back"}||Current=99999

My search query displays 2 rows; is it possible to group the events and display them in 1 row?
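A sketch of one way to do this, assuming the fields are already extracted and "Id" uniquely pairs the two events (the index name here is a placeholder):

```
index=your_index api_name=* Id=*
| stats values(OpName) as OpName values(Response) as Response
        values(Current) as Current
        values(System_Service_Response) as System_Service_Response
  by api_name Id
| table api_name Id OpName Response Current System_Service_Response
```

Because each field appears in only one of the two events, `stats values(...) by Id` collapses the pair into a single row without the cost of a `join`.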
Hi guys, in my project environment, every splunkd is installed under the splunk user. So I need to create an alert if any splunkd on any Splunk server (Enterprise or UF) gets started as root or any other user after boot, or if anyone starts it as a user other than splunk. Please suggest. Thanks.
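One way to sketch this is a scripted input on each host that reports which user owns the splunkd process, which an alert can then search for values other than "splunk". The script name, index, and sourcetype below are all hypothetical placeholders:

```
# bin/splunkd_user.sh (hypothetical helper) would contain something like:
#   ps -o user= -C splunkd | sort -u

[script://./bin/splunkd_user.sh]
interval = 300
index = os
sourcetype = splunkd_user
disabled = false
```

The alert search could then be as simple as `index=os sourcetype=splunkd_user NOT splunk`, triggered when the number of results is greater than zero.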
Good day, I would like to create an alert for the error below. Can I get a regex for the highlighted part, and how would I go about creating the alert? Or should I just look out for the event and set the alert to trigger whenever its count is > 0?

2020-11-14 23:04:24 [http-nio-127.0.0.1-7080-exec-7] LdapHealthChecker [ERROR] Error loading the user groups from LDAP server. Please check the ldap.server.url, ldap.bind.dn, ldap.bind.password secure connection properties. Refer to "Manage Configuration Properties Guide". org.springframework.ldap.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090453, comment: AcceptSecurityContext error, data 52e, v3839\x00]; nested exception is javax.naming.AuthenticationException: [LDAP: error code 49 - 80090308: LdapErr: DSID-0C090453, comment: AcceptSecurityContext error, data 52e, v3839\x00]
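A sketch of a search that could back such an alert. The index is a placeholder, and the regex assumes the error text keeps the "LDAP: error code NN ... data XXX," shape shown above:

```
index=your_index "LdapHealthChecker" "Error loading the user groups from LDAP server"
| rex "LDAP: error code (?<ldap_error_code>\d+) - (?<ldap_hex>[0-9A-Fa-f]+):.*?data (?<ldap_data>[0-9a-f]+),"
| stats count by ldap_error_code ldap_data
```

Saving this as an alert with trigger condition "number of results > 0" over a short scheduled window is the usual pattern; extracting `ldap_data` (e.g. 52e = invalid credentials) lets the alert message say why the bind failed.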
Hi, we have integrated an S3 bucket with Splunk. Log paths:
aaa\folder\out.log
aaa\folder\error.log
aaa\folder\audit.log
aaa\folder\security.log
Out of all these, I want to ingest only out.log and error.log. How do I design a key name for this?
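Assuming the Splunk Add-on for AWS generic S3 input is in use (its `key_name` setting is an object-key prefix and `whitelist` is a regex matched against the full key), a sketch might look like the following; the stanza and bucket names are placeholders:

```
[aws_s3://my_s3_input]
bucket_name = my-bucket
key_name = aaa/folder/
whitelist = .*(out|error)\.log$
```

The prefix alone cannot select just two of the four files, so the regex whitelist does the narrowing.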
Hello friends, I am trying to fetch the value of "F5_device" from one search and use it as input to another search to find the result of "syslog_severity". Both fields belong to the same index and sourcetype. I tried the following subsearch query but noticed a vast difference in output compared with running the queries separately:

index="f5" sourcetype="f5:enterprise" [search index="f5" sourcetype="f5:enterprise" AND (F5_URL=*abc.com* OR F5_vip=*abc.com* ) | fields + F5_device] | stats count by syslog_severity

Also, from another thread I understand that subsearches have limitations, so please suggest whether there is an alternative to the subsearch. I would like to use this query within a dashboard. Thank you in advance.
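Since both searches hit the same index and sourcetype, one subsearch-free sketch (assuming F5_device ties the matching events to the rest) is to flag the devices of interest in a single pass with eventstats:

```
index="f5" sourcetype="f5:enterprise"
| eventstats max(eval(if(like(F5_URL,"%abc.com%") OR like(F5_vip,"%abc.com%"), 1, 0))) as device_matched by F5_device
| where device_matched=1
| stats count by syslog_severity
```

This avoids the subsearch's result-count and runtime limits, which are the usual cause of the discrepancy you observed, at the cost of scanning the full event set once.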