All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, is there a way to integrate Trend Micro SaaS with Splunk Cloud? Kindly advise the best way to do this. Thanks in advance.
I have been trying to make a heatmap in a Splunk dashboard. I want to replace "0" with "-" in the cells of a chart (count by two fields) when the cell has no data. How do I accomplish this?

Example data (FIELD1,FIELD2,FIELD3):
a,A,x
a,A,x
b,B,x
a,B,

Query:
| chart count(isnotnull(FIELD3)) AS countA by FIELD2,FIELD1

Results I want:
   a  b
A  2  -
B  0  1

Results I get now:
   a  b
A  2  0
B  0  1
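One possible approach (a sketch, untested against your data): build the matrix with stats + xyseries instead of chart, so that field combinations which never occur at all stay null and can be filled with "-", while combinations that exist but have no FIELD3 keep a genuine 0:

```
| stats sum(eval(if(isnotnull(FIELD3), 1, 0))) AS countA by FIELD2, FIELD1
| xyseries FIELD2 FIELD1 countA
| fillnull value="-"
```

Here xyseries leaves truly missing cells null, and fillnull turns only those into "-".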
Situation: The data I need resides in the following search:

index=X (sourcetypeA=X fieldA=X) OR (sourcetypeB=X fieldB=X) | rename fieldA as fieldB | stats count by fieldC, fieldD, fieldE, fieldB

Problem: "fieldD" only has a value when I modify the search like this:

index=X (sourcetypeA=X NOT fieldA=X) OR (sourcetypeB=X NOT fieldB=X) | rename fieldA as fieldB | stats count by fieldC, fieldD, fieldE, fieldB

Based on my research I presume I'm going about this the wrong way; I've been trying to use join with no success. I suspect the answer is to use a subsearch, but I can't figure out how to construct it so that I always get a value for "fieldD". Any help would be greatly appreciated.
Hi guys, I need some help. I'm trying to make a timechart comparing how many times my system gets restarted today versus 7 days ago. I have a healthcheck log where the first event is the user logging in for the first time, and each subsequent event is the user restarting my app. The following query works, but the result includes (initialization + restarts) and I want only the restarts:

index=myIndex Title=Healthcheck earliest=-10d@d latest=@d | timechart span=1h count | timewrap d series=short | fields _time s0 s7 | rename s0 as Today, s7 as "7 days ago"

With this other query I get exactly the restarts per user, but I can't make it work with timechart:

index=myIndex Title=Healthcheck | stats count by Data.Ip | eval count = count - 1

In case that was confusing, I posted another question explaining my scenario: https://community.splunk.com/t5/Splunk-Search/How-to-change-the-result-of-my-stats-count/td-p/600364
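If the first Healthcheck event per Data.Ip in the search window is always the initialization, one hedged way to drop it before charting is streamstats (a sketch built from the two queries above; field names are copied from them):

```
index=myIndex Title=Healthcheck earliest=-10d@d latest=@d
| streamstats count AS event_seq by Data.Ip
| where event_seq > 1
| timechart span=1h count
| timewrap d series=short
| fields _time s0 s7
| rename s0 as Today, s7 as "7 days ago"
```

streamstats numbers each user's events in time order, so `where event_seq > 1` keeps only the restarts and the rest of the pipeline stays unchanged.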
8.2.5 Enterprise. The _internal index has 5 buckets with this error:

ClusterSlaveBucketHandler [xxxxxx TcpChannelThread] - Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=_internal~xx~xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx until it's uploaded

Restarting the CM dropped the buckets in fixup from 25 to 5, but these 5 remain. Is anyone else facing this, or does anyone have resolution tips? Thanks!
Hello, I'm using Splunk version 8.1.3. Would you know why there's a Server Error when we input the search expression below in Splunk? Is it some bug I'm running into?

From my research, it only happens when java.lang. is present in the search string. Splunk can run any variation of the string with wildcards, but it throws an error whenever java.lang. is present. It won't even let me post the word RunTime after java.lang for some reason, even in this web forum. I searched the internal logs but did not find anything.
We are trying to ingest data from our Microsoft GCCH Azure cloud with the "Microsoft Azure Add-on for Splunk", with mixed results that usually end in a brick wall. In that add-on you can specify whether you are connecting to an Azure Government or Azure Public cloud, but in the new Data Manager app that option is not mentioned anywhere. Should I interpret that to mean Data Manager does not support the Azure Government Cloud yet?
I'm using searches which are relatively noisy and difficult to simply write exclusions for, so one way I've been writing the search syntax is to use time-based self-suppression, so results are only generated if they haven't been seen before. This works in the final search; however, it seems that even with the suppression, the initial results still get written to the index before the search has had a chance to look back far enough in time to discover that it needs to exclude them. Visually, a result will appear, then disappear as the search works back in time. But if I look in the risk index, I can see that an entry was already written before the final search completed and excluded it. Ultimately the question is: is there a way to prevent the correlation search from writing to the index until the search fully completes?

| tstats `summariesonly` count earliest(_time) AS first_seen latest(_time) AS last_seen values(Processes.src_user) AS src_user values(Processes.process) AS process values(Processes.parent_process_name) AS parent_process_name values(Processes.process_name) AS process_name from datamodel=Endpoint.Processes where Processes.process="<some filter>" by Processes.dest | where first_seen > relative_time(now(),"-1h")
Splunk recently announced a critical vulnerability for the Splunk deployment server.

Advisory ID: SVD-2022-0608
Published: 2022-06-14
CVSSv3.1 Score: 9.0, Critical
CWE: CWE-284
CSAF: 2022-06-14-svd-2022-0608
CVE ID: CVE-2022-32158
Last Update: 2022-06-14
CVSSv3.1 Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H
Bug ID: SPL-176829
Security Content: Splunk Process Injection Forwarder Bundle Downloads

What can you do to take action right away? My first recommendation is to shut down your deployment servers: they only need to be online when you are making changes to apps/add-ons deployed via the deployment server, and taking them offline won't disrupt forwarding from the Universal or Heavy Forwarders that phone home to them. Shutting down the deployment server will NOT undeploy apps/add-ons on client forwarders. The only impact is that you won't be able to push app/add-on updates to new or existing forwarders while the deployment server is offline. This will block the threat and give you time to make a plan.

At present, the only fix is to upgrade to Splunk 9.0, which has only been out for a few days. If you take this course of action, I'd highly recommend taking a full backup of your SPLUNK_HOME directory (often /opt/splunk) so you can roll back if you encounter problems with the upgrade. Deployment servers on a higher version usually don't have issues working with forwarders a few versions lower.

Technically, deployment server functionality is packaged with all versions of Splunk Enterprise. My understanding is you shouldn't have to patch Splunk if you don't use this functionality, i.e. you haven't configured deploymentclient.conf on your Universal or Heavy Forwarders to phone home to a deployment server. An alternative to stopping your deployment server is to disable the deployment server functionality from the command line.
$ /opt/splunk/bin/splunk disable deploy-server
$ /opt/splunk/bin/splunk restart

How can you check whether you are using the deployment server functionality if you are unsure? There are multiple ways.

1. Run this query on your deployment server, or on your search heads if the deployment server's splunkd logs are forwarded to your indexers:

index=_internal sourcetype=splunkd_access "phonehome"

This shows clients phoning home to the deployment server. The host field should contain your deployment server's name.

2. Check the UI of your deployment server under Settings > Forwarder Management. Under the Clients tab, look at the count of clients phoning home. If you see zero, this instance is not actively being used as a deployment server, i.e. nothing is phoning home to it. If you see 1 or more, it is an active deployment server.

3. Run btool on the command line of a forwarder you want to check:

$ /opt/splunkforwarder/bin/splunk btool deploymentclient list
[default]
phoneHomeIntervalInSecs = 60
[target-broker:deploymentServer]
targetUri = 1.1.123.123:8089

If a targetUri is returned, that's the host/IP of the deployment server this forwarder is trying to use. If no targetUri is returned, the forwarder is not using a deployment server.

Here's a query you can use to see which server classes/apps are pushed out to your clients via the deployment server, so you can review for anything suspicious:

index=_internal sourcetype=splunkd component="PackageDownloadRestHandler" | stats values(host) as deployment_server dc(peer) as clients by serverclass app | sort -clients

Here's a dashboard you can drop on either your deployment server or search heads which uses the data found in the deployment server's splunkd.log and shows deployment server names and the hosts checking into your deployment server.
<form theme="dark" version="1.1"> <label>Forwarder Phone Home</label> <fieldset submitButton="false"> <input type="time" token="time" searchWhenChanged="true"> <label>Time Range</label> <default> <earliest>-4h@m</earliest> <latest>now</latest> </default> </input> <input type="dropdown" token="deployment_server" searchWhenChanged="true"> <label>Deployment Server</label> <choice value="*">All</choice> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | dedup host | table host | sort host</query> <earliest>-60m@m</earliest> <latest>now</latest> </search> <default>*</default> </input> <input type="text" token="forwarder_host_pattern" searchWhenChanged="true"> <label>Forwarder Host Pattern</label> <default>*</default> </input> <input type="text" token="forwarder_fqdn_pattern" searchWhenChanged="true"> <label>Forwarder FQDN Pattern</label> <default>*</default> </input> <input type="text" token="forwarder_ip_pattern" searchWhenChanged="true"> <label>Forwarder IP Pattern</label> <default>*</default> </input> <input type="text" token="forwarder_id_pattern"> <label>Forwarder ID Pattern</label> <default>*</default> </input> </fieldset> <row> <panel> <title>Unique Forwarders</title> <single> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> 
<option name="colorMode">block</option> <option name="drilldown">none</option> <option name="rangeColors">["0x006d9c","0x006d9c"]</option> <option name="rangeValues">[0]</option> <option name="refresh.display">progressbar</option> <option name="useColors">1</option> </single> </panel> </row> <row> <panel> <title>Phone Home Timeline</title> <chart> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" host="$deployment_server$" | eval device=forwarder_host+"-"+forwarder_fqdn+"-"+forwarder_ip+"-"+forwarder_id | timechart partial=true span=10m dc(device) as unqiue_forwarders by host | rename host as deployment_server</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="charting.axisTitleX.visibility">collapsed</option> <option name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.axisY.scale">linear</option> <option name="charting.chart">column</option> <option name="charting.chart.showDataLabels">all</option> <option name="charting.chart.stackMode">stacked</option> <option name="charting.drilldown">none</option> <option name="charting.layout.splitSeries">1</option> <option name="charting.layout.splitSeries.allowIndependentYRanges">1</option> <option name="charting.legend.placement">bottom</option> <option name="refresh.display">progressbar</option> </chart> </panel> </row> <row> <panel> <title>Deployment Server Summary</title> <table> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex 
"phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | top host | rename host as deployment_server count as unqiue_forwarders</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="count">10</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <format type="color" field="deployment_server"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> </table> </panel> <panel> <title>Duplicate Hosts</title> <table> <title>(hosts expected to be unique)</title> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count by forwarder_host | search count&gt;1 | sort -count | append [| makeresults | eval forwarder_host="add_zero" | eval count=0 | table forwarder_host count ] | search forwarder_host!="add_zero"</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="count">10</option> <option name="drilldown">none</option> <option 
name="refresh.display">progressbar</option> <option name="totalsRow">true</option> </table> </panel> <panel> <title>Duplicate Forwarder IDs</title> <table> <title>(indicates cloning post install)</title> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count by forwarder_id | search count&gt;1 | sort -count | append [| makeresults | eval forwarder_id="add_zero" | eval count=0 | table forwarder_id count ] | search forwarder_id!="add_zero"</query> <earliest>$time.earliest$</earliest> <latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="count">10</option> <option name="drilldown">none</option> <option name="refresh.display">progressbar</option> <option name="totalsRow">true</option> </table> </panel> </row> <row> <panel> <title>Forwarder Summary</title> <table> <search> <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" host="$deployment_server$" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | table _time host forwarder_host forwarder_fqdn forwarder_ip forwarder_id | rename host as deployment_server</query> <earliest>$time.earliest$</earliest> 
<latest>$time.latest$</latest> <sampleRatio>1</sampleRatio> <refresh>5m</refresh> <refreshType>delay</refreshType> </search> <option name="count">40</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">true</option> <option name="totalsRow">false</option> <option name="wrap">true</option> <format type="color" field="deployment_server"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> </table> </panel> </row> </form>                              
We would like to detect and display fluctuations in data volume. For example, if the count of network events for a certain sourcetype this month is double that of the previous month, we would like to show it in a panel and potentially create an alert for it. How can we do that?
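One way to sketch this (index and sourcetype names are placeholders, and the 2x threshold is just an example): bucket the last two whole months into "current" and "previous", then compare the counts:

```
index=your_index sourcetype=your_sourcetype earliest=-2mon@mon latest=@mon
| eval period=if(_time >= relative_time(now(), "-1mon@mon"), "current", "previous")
| stats count(eval(period="current")) AS current count(eval(period="previous")) AS previous
| eval ratio=round(current / previous, 2)
| where ratio >= 2
```

Saved as an alert that triggers when results exist, this fires only when the month-over-month ratio crosses the threshold; dropping the final `where` gives a panel that always shows the ratio.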
Hello, I want all the filter values to reset back to their defaults when a Reset button is clicked. I have tried a lot using JS but it's not working. Thanks in advance for your help @ITWhisperer
Hello, I see that there is a new vulnerability that affects Splunk and I have a couple of doubts: https://www.splunk.com/en_us/product-security/announcements/svd-2022-0608.html Excuse me if the question is silly, but what is not clear to me is whether I should update the Splunk Enterprise (SIEM) version, update only the agents on the endpoints, or both. Thank you for your answers.

"Description: Splunk Enterprise deployment servers in versions before 9.0 let clients deploy forwarder bundles to other deployment clients through the deployment server. An attacker that compromised a Universal Forwarder endpoint could use the vulnerability to execute arbitrary code on all other Universal Forwarder endpoints subscribed to the deployment server. The Splunk Cloud Platform (SCP) does not offer or use deployment servers and is not affected by the vulnerability. For SCP customers that run deployment servers, upgrade to version 9.0 or higher. At the time of publishing, we have no evidence of exploitation of this vulnerability by external parties.

Solution: Upgrade Splunk Enterprise deployment servers to version 9.0 or higher"
Hello All, I am new to Splunk. My Splunk index is already getting data from a Kafka source:

index=k_index sourcetype=k_message

The query result is something like {Field1=abc,Field2=sdfs,Field3=wertw,Field4=123,Field6=87089R....}

I have a use case where I have a list of fields and associated datatypes. I want to compare these predefined fields (fields only, no values) against the Splunk search results, count each mismatch, and report the mismatches as a percentage of the total. In short, produce a score for how good the incoming events of the last 15 minutes are (100%, 90%, etc.). Thanks, Alwyn
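A minimal sketch of a field-presence score (the field list Field1..Field6 is assumed from the sample event above; a real datatype check would need a match() expression per field rather than just isnotnull):

```
index=k_index sourcetype=k_message earliest=-15m
| eval present=0
| foreach Field1 Field2 Field3 Field4 Field6
    [ eval present=present + if(isnotnull('<<FIELD>>'), 1, 0) ]
| eval score=round(100 * present / 5, 1)
| stats avg(score) AS overall_score_pct
```

foreach runs the bracketed eval template once per listed field, so each event gets a per-event percentage and the final stats averages it into one overall score.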
I have integrations with UDP/TCP data inputs that index data correctly, but after a while they stop working. In Splunk we have different types of data inputs configured, and only the UDP/TCP ones stop working. When this happens, we perform the following validations:

- Validate iptables and firewall configurations on the server.
- Validate with tcpdump that the data arrives at the server.
- Validate that there is no data queuing by reviewing the indexing queues.

After different tests, data ingestion recovers after specifying disabled=0 in inputs.conf and restarting Splunk. We didn't reach anything conclusive about what causes this, and we would like to understand the cause so we know how to act if the situation repeats. Do you know what could cause this problem? Could you guide me or share ideas about what to investigate?
Hi Team, how can I check indexer status details for the last month from the search head using an SPL query?
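One hedged way to approximate this from a search head, assuming each indexer forwards its own _internal logs: count _internal events per indexer per day, so gaps show when an indexer stopped reporting (this is a sketch of availability, not a full health check):

```
| tstats count where index=_internal earliest=-30d@d by splunk_server _time span=1d
| timechart span=1d sum(count) AS events by splunk_server
```

An indexer whose column drops to zero for a day was either down or not indexing its own internal logs during that period.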
This is my query:

earliest=-15m latest=now index=** host="*" LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error | rex field=MESSAGE "(?<message>.{35})" | search NOT [ search earliest=-3d@d latest=-d@d index=wiweb host="*" LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error | rex field=MESSAGE "(?<message>.{35})" | dedup message | fields message ] | stats count by message appname | search count>50 | sort appname , -count

Almost all recurring 'message' values get ignored, but a few of them still come back in the results even though they occurred in the last 2 days (which means the subsearch should have excluded them). Is there anything else I can do to make this query work 100% of the time?
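Two things worth noting about the query above. First, subsearches are capped (by default around 10,000 results plus a runtime limit), so the deduplicated message list may be silently truncated, which would explain a few recurring messages slipping through. Second, the outer search uses index=** while the subsearch uses index=wiweb; messages from other indexes would never be suppressed. A stats-based alternative that avoids the subsearch entirely might look like this (index and field names copied from the query above; treat it as a sketch):

```
index=wiweb host="*" (LOG_LEVEL=ERROR OR LOG_LEVEL=FATAL OR logLevel=ERROR OR level=error) earliest=-3d@d latest=now
| rex field=MESSAGE "(?<message>.{35})"
| eval window=if(_time >= relative_time(now(), "-15m"), "recent", "baseline")
| stats count(eval(window="recent")) AS recent_count count(eval(window="baseline")) AS baseline_count values(appname) AS appname by message
| where recent_count > 50 AND baseline_count = 0
| sort appname, -recent_count
```

Searching the whole window once and splitting it with eval means the suppression logic sees every message, with no subsearch result cap to truncate it.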
We have upgraded the AppDynamics Java agent to version 22.3.0.33637 on an SAP Java system. After updating the javaagent, when we try to start the SAP Java system we see a couple of issues:

1. The SAP system does not start.
2. When the system does get started somehow, our SAP NWA URL is not reachable at all.

However, after removing the parameter "-javaagent:/usr/sap/<SID>/appdyanmics/javaagent.jar" from SAP Configtool, followed by a full system restart, we can access the NWA URL again. But after doing so, no data gets populated in the AppDynamics dashboard.
Below is my Splunk raw event data:

{ "additional": { "method": "POST", "url": "/api/resource/getContentEditorData", "headers": { "cloudfront-viewer-country": "US", "origin": "https://www.site1.com", "sec-ch-ua-platform": "\"Android\"", } }, "level": "notice", "message": "INCOMING REQUEST: POST /api/resource/getContentEditorData" }

I need the count of cloudfront-viewer-country and sec-ch-ua-platform for each origin. Please help.

Expected result:

Origin                  Platform  Platform Count  Country  Country Count
https://www.site1.com   Android   10              US       22
                        macOS     12              UK       3
                        Windows   6               AU       1
https://www.site2.com   Android   4               US       8
                        macOS     4               UK       1
                        Windows   2               AU       1
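A sketch of extracting those headers with spath (index and sourcetype are placeholders, and the hyphenated key names may need the quoting shown here):

```
index=your_index sourcetype=your_sourcetype "INCOMING REQUEST"
| spath path="additional.headers.origin" output=Origin
| spath path="additional.headers.sec-ch-ua-platform" output=Platform
| spath path="additional.headers.cloudfront-viewer-country" output=Country
| stats count AS "Platform Count" by Origin, Platform
```

A second stats by Origin, Country gives the country counts; combining both breakdowns into a single table like the one above usually takes an append or appendcols, since platform and country are independent groupings.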
I want to use a lookup file to drive the search for an alert. This seems a bit unique, as I don't want to use event data from the results to drive the lookup, but rather have all the lookup entries dynamically added to the search itself. Below is an example use case.

CSV file example:

Index,ErrorKey
"index1","Error string 1"
"index1","Error string 2"
"index2","Error string 3"

I'm looking to use it to scale a search like this:

index=index1 OR index=index2 ("Error string 1" OR "Error string 2" OR "Error string 3")

Basically, the index/error-string combinations could be managed in the CSV file instead of in the alert search itself, making the search criteria easier to add to, scale, and maintain. Is this possible?
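This is generally possible with inputlookup plus format inside a subsearch. A sketch (the lookup filename is assumed; renaming ErrorKey to the special field name `search` makes its values expand as raw search terms rather than field=value pairs):

```
[ | inputlookup error_keys.csv
  | rename Index AS index, ErrorKey AS search
  | format ]
```

This expands to something like ( index="index1" AND "Error string 1" ) OR ( index="index1" AND "Error string 2" ) OR ..., which pairs each index with its own error string, a slightly tighter filter than the loose OR form in the example.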
Hi, I'm able to get the response in tabular format using this command:

table clientName, apiMethod, sourceSystem, httpStatus, version, timeTaken

What I want is to do some aggregation on these fields: group by clientName, apiMethod, sourceSystem, httpStatus, and version to get the total calls and the average time. The command below clearly isn't giving me what I want:

stats count(clientName) as TotalCalls, avg(timeTaken) as avgTimeTakenS by clientName, apiMethod, sourceSystem, httpStatus, version

Please help. Thanks, Arun
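That stats is close; a sketch of the usual form uses a plain count per group (and assumes timeTaken is extracted as a number rather than a string with units):

```
| stats count AS TotalCalls avg(timeTaken) AS avgTimeTakenS by clientName, apiMethod, sourceSystem, httpStatus, version
| eval avgTimeTakenS=round(avgTimeTakenS, 2)
```

If the averages still look wrong, check whether timeTaken is numeric; avg() silently ignores values it cannot interpret as numbers.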