All Topics



Hi All, I have multiple inputs in my Dashboard Studio dashboard, and their tokens are used across multiple searches in the same dashboard. I am seeing this issue: I have to click the Submit button multiple times. My multiselect inputs are interdependent on each other, like this:

Values of B depend on A
Values of C depend on A, B
Values of D depend on A, B, C
Values of E depend on A, B, C, D

and so on. So if I have selected A, B, and C, D will not update its values until I click the Submit button; similarly, E will not update its values until, after selecting D, I press Submit again. Is there any option for a refresh button? If I remove the Submit button, the whole dashboard starts refreshing with each input selection, and this eats up a lot of time.
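For context, a dependent dropdown in a setup like this is typically populated by a search that references the earlier tokens, along the lines of the following sketch (index, field, and token names are illustrative, not from the question):

```
index=your_index A=$tok_a$ B=$tok_b$ C=$tok_c$
| stats count by D
| fields D
```

Because the population search only sees the submitted token values, D's choices cannot refresh until the tokens for A, B, and C have actually been submitted — which is the behavior described above.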
I have fields called URN, ControlFlowID, RequestID, and SpanID. The requirement is to get, for each URN, how many ControlFlowIDs; for each ControlFlowID, how many RequestIDs; and for each RequestID, how many SpanIDs. The data needs to be populated in a table view by merging multivalues into a single row. Can anyone help me with this?

Eg:

URN     ControlFlowID   RequestID   SpanID
URN1    CTRLFLOW1       REQ1        SpanID1
URN1    CTRLFLOW1       REQ2        SpanID2
URN1    CTRLFLOW1       REQ3        SpanID3

Requirement as below:

URN     ControlFlowID   RequestID   SpanID
        CTRLFLOW1       REQ1        SpanID1
URN1                    REQ2        SpanID2
        CTRLFLOW2       REQ3        SpanID3
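A hedged starting point for the merge (untested; `your_index` is a placeholder, field names are taken from the question) is to let stats collect the values into multivalue cells and count the distinct IDs at each level:

```
index=your_index
| stats list(RequestID) AS RequestID list(SpanID) AS SpanID
        dc(RequestID) AS requestid_count dc(SpanID) AS spanid_count
        by URN ControlFlowID
```

list() keeps one row per URN/ControlFlowID combination with the RequestIDs and SpanIDs stacked in a single multivalue cell, which is close to the merged-row layout shown above.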
I am trying to set up an alert for one event; I am running a query at a specific time. If there are 8 records, the email should be sent as Success; otherwise it should be sent as Fail. Currently I have set up a cron schedule successfully and am receiving the alert properly. Now, in case there are fewer than 8 rows, I need to get a failure email (i.e. "missing"), but I am unable to find the settings for this.
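One hedged way to do this with a single alert (the base search is a placeholder): compute the status in SPL and reference it in the email via the result token:

```
index=your_index your_search_terms
| stats count
| eval status=if(count >= 8, "Success", "Fail")
```

With `stats count` the search always returns exactly one row, so the alert can be set to trigger on every result; the email always goes out and carries either Success or Fail in the status field.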
I would like to know the main difference between the lantern.splunk use case library and the research.splunk detections/analytic stories. I am quite new to Enterprise Security and not sure which one I should start with.
Hello everyone, I'm fairly familiar with routing data based on the logs themselves; however, I was wondering if there is a way to call an external mapping table in the transforms.conf file. Logs contain one identifiable serial number:

Firewall 1 with serial number xxxxxxxxxxxx
Firewall 2 with serial number yyyyyyyyyyyy
Firewall 3 with serial number zzzzzzzzzzzz

We would like to send each log to a different indexer depending on that serial number. The serial numbers are included in the logs, and we have a mapping table that looks like this:

serial number    indexer
xxxxxxxxxxxx     indexer 1
yyyyyyyyyyyy     indexer 2
zzzzzzzzzzzz     indexer 3

and so on. The only way I see right now is to create one manual entry in the props and transforms files, and I was wondering if there is a way to call an external mapping table. That way, whenever a new firewall comes into play, we would only need to update the table and not the props and transforms files. Thank you
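For reference, the manual per-serial approach the question mentions would look roughly like this (sketch only; stanza names, output group names, and ports are illustrative, and the configuration must live on the tier that parses the data):

```
# props.conf
[your_sourcetype]
TRANSFORMS-route_by_serial = route_serial_xxxx, route_serial_yyyy

# transforms.conf
[route_serial_xxxx]
REGEX = xxxxxxxxxxxx
DEST_KEY = _TCP_ROUTING
FORMAT = indexer1_group

[route_serial_yyyy]
REGEX = yyyyyyyyyyyy
DEST_KEY = _TCP_ROUTING
FORMAT = indexer2_group

# outputs.conf
[tcpout:indexer1_group]
server = indexer1:9997

[tcpout:indexer2_group]
server = indexer2:9997
```

Each new firewall would indeed need a new transforms stanza under this scheme, which is exactly the maintenance burden the question is trying to avoid.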
Hi everyone, I am a new user to Splunk. Recently I have had some trouble extracting a certain message from a field. I have a field called Message, which logs the message sent to a web server. However, I only want to retrieve a specific value when the message contains the field I am after.

Example: I want to retrieve the user's name when a service is invoked.

Time                        Message
2021-05-15T01:51:52.321Z    Session ID 1234 has been created
2021-05-15T01:51:52.321Z    Invoked by user David from IP 127.256.25.16
2021-05-15T01:51:52.321Z    Configuration Reading - Start

Hence, I only want to extract the name David when that specific message log containing the name appears. Does anyone have any clue how I can extract that field specifically when it appears? Thanks in advance.

EDITED: Hey Splunk users, if you have the same problem I did, where the message logs change constantly, make sure to search for the message you are looking for first, before drilling down for the specific field. In my case:

| search Message="Invoked by user *"
| rex field=Message "Invoked by user (?<user>\w+)"
Hello everyone! I want to combine two searches or find another solution. Here is my problem: I need a timechart where I can show the occurrences of some IDs (example ID: 345FsdEE344FED-354235werfDF2) and put an average line over it.

Graph idea:
Orange: timechart with a distinct count of the IDs
Green: stats with the average of the ID counts

index=example_dev
| bin span=1m _time
| stats dc(TEST_ID) as count_of_testid by _time

For the timeframe I want to be flexible, but a span of 15 minutes is OK. Thank you all a lot and have a nice day.
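One hedged way to get both the orange and the green series from a single search (untested; reuses the TEST_ID field from the question) is to timechart first and let eventstats append the overall average as a second column:

```
index=example_dev
| timechart span=15m dc(TEST_ID) AS count_of_testid
| eventstats avg(count_of_testid) AS average
```

Setting the average column to a line overlay in the chart formatting then draws the flat average line over the columns.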
After following the JBoss setup tutorial https://docs.splunk.com/Documentation/AddOns/released/JBoss/Setup I am able to search the Wildfly 23 server JMX logs with index="main" sourcetype="jboss:jmx", but when I search for the server log with index="main" sourcetype="jboss:server:log" it does not show any results. Below is the inputs.conf file:

[jboss://dumpAllThreads]
disabled = 0
account = wildfly
duration = 10
index = main
sourcetype = jboss:jmx

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/server.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:server:log

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/*gc.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:gc:log

[monitor://D:/Wildfly_9/wildfly-23.0.2.Final/standalone/log/access.log*]
disabled = false
followTail = false
index = main
sourcetype = jboss:access:log
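A hedged first debugging step (untested; TailReader and WatchedFile are the usual splunkd components for file monitoring) is to ask the forwarder's own internal logs whether it is reading the file at all:

```
index=_internal sourcetype=splunkd (component=TailReader OR component=WatchedFile) "server.log"
```

If nothing comes back for the monitored path, the problem is on the input side (path, permissions, already-read checkpoint) rather than in search.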
Hi, is there a way to integrate Trend Micro SaaS with Splunk Cloud? Kindly advise the best way to do this. Thanks in advance.
I have been trying to make a heatmap in a Splunk dashboard. I want to replace "0" with "-" in the cells of a chart count by two fields when the cell has no data. How do I accomplish this?

EX) DATA:
FIELD1,FIELD2,FIELD3
a,A,x
a,A,x
b,B,x
a,B,

| chart count(isnotnull(FIELD3)) AS countA by FIELD2,FIELD1

Results I want:
    a  b
A   2  -
B   0  1

Current results:
    a  b
A   2  0
B   0  1
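A hedged workaround (untested): build the matrix with stats plus xyseries instead of chart, so that combinations with no events at all stay null and can be filled with "-", while real zero counts remain 0:

```
index=your_index
| stats count(eval(isnotnull(FIELD3))) AS countA by FIELD2 FIELD1
| xyseries FIELD2 FIELD1 countA
| fillnull value="-"
```

stats only emits rows for FIELD2/FIELD1 combinations that actually occur, so after xyseries the missing combinations are null and get the "-". Note that this turns the cells into strings, which may interfere with numeric color ranges in a heatmap.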
Situation: the data I need resides in the below:

index=X (sourcetypeA=X fieldA=X) OR (sourcetypeB=X fieldB=X)
| rename fieldA as fieldB
| stats count by fieldC, fieldD, fieldE, fieldB

Problem: "fieldD" only has a value when I modify the search like this:

index=X (sourcetypeA=X NOT fieldA=X) OR (sourcetypeB=X NOT fieldB=X)
| rename fieldA as fieldB
| stats count by fieldC, fieldD, fieldE, fieldB

Based on my research I presume I am 100% incorrect, but I've been trying to use join with no success. I suspect the answer is to use a subsearch; however, I can't figure out how to construct it so that I always get a value for "fieldD". Any help would be greatly appreciated.
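A hedged guess at the cause (untested): `stats ... by` silently drops any event in which one of the by-fields is null, so if fieldD only exists in one of the two sourcetypes, those rows vanish from the results. Filling the gaps before the stats may be all that is needed:

```
index=X (sourcetypeA=X fieldA=X) OR (sourcetypeB=X fieldB=X)
| eval fieldB=coalesce(fieldA, fieldB)
| fillnull value="unknown" fieldD
| stats count by fieldC, fieldD, fieldE, fieldB
```

coalesce keeps whichever of fieldA/fieldB is present, which is safer than rename when both sourcetypes are mixed in one search.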
Hi guys, I need some help. I'm trying to make a timechart comparing how many times my system gets restarted today versus 7 days ago. I have this healthcheck log: the first event is when the user logs in for the first time, and the following events are the times the user restarts my app.

The following query works just fine; the problem is that I get results from (initialization + restart), but I want the result from the restarts only:

index=myIndex Title=Healthcheck earliest=-10d@d latest=@d
| timechart span=1h count
| timewrap d series=short
| fields _time s0 s7
| rename s0 as Today, s7 as "7 days ago"

With this other query I have exactly the restarts for each user, but I can't make it work with timechart:

index=myIndex Title=Healthcheck
| stats count by Data.Ip
| eval count = count - 1

If that was confusing, I posted this other question explaining my scenario: https://community.splunk.com/t5/Splunk-Search/How-to-change-the-result-of-my-stats-count/td-p/600364
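A hedged sketch (untested; assumes Data.Ip identifies the user, as in the second query): number each user's events chronologically with streamstats and drop the first one, i.e. the initial login, before charting:

```
index=myIndex Title=Healthcheck earliest=-10d@d latest=@d
| sort 0 _time
| streamstats count AS event_num by Data.Ip
| where event_num > 1
| timechart span=1h count
| timewrap d series=short
| fields _time s0 s7
| rename s0 as Today, s7 as "7 days ago"
```

Caveat: this only discards the first event inside the search window, so a user whose true first login happened before the 10-day window would have a restart dropped instead.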
Splunk Enterprise 8.2.5. The _internal index has 5 buckets with this error:

ClusterSlaveBucketHandler [xxxxxx TcpChannelThread] - Failed to trigger replication (err='Cannot replicate remote storage enabled warm bucket, bid=_internal~xx~xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx until it's uploaded

Restarting the CM dropped the buckets in fixup from 25 to 5, but these 5 remain. Is anyone else facing this, or does anyone have resolution tips? Thanks!
Hello, I am using Splunk version 8.1.3. Would you know why there's a Server Error when we input the below search expression in Splunk? Is it some bug I am running into? Per my research, it only happens if java.lang. is present in the search string. I can run any variation of the string with wildcards, but it throws an error whenever java.lang. is present. It won't let me post the word RunTime after java.lang for some reason, even in this web forum. I searched the internal logs but did not find anything.
We are trying to ingest data from our Microsoft GCCH Azure cloud with the "Microsoft Azure Add-on for Splunk" with mixed results that usually end up being a brick wall. In that Add-on you could specify if you were connecting to an Azure Government or Azure Public cloud, but in the new Data Manager app that is not mentioned anywhere.  Should I interpret that to mean the Data Manager does not support the Azure Government Cloud yet? 
I'm using searches which are relatively noisy and difficult to simply write exclusions for, so one way I've been writing the search syntax is to use time-based self-suppression, to only generate results if they haven't been seen before. This works in the final search; however, it seems that even with the suppression, the initial results still get written to the index before the search has had a chance to look back far enough in time to discover that it needs to exclude them. Visually, a result will appear, and then, as the search works back in time, the result will disappear. However, if I look in the risk index, I will see that an entry was already written to the index before the final search completed, which should have excluded that entry. Ultimately, the question is: is there a way to prevent the correlation search from writing to the index until the search fully completes?

| tstats `summariesonly` count earliest(_time) AS first_seen latest(_time) AS last_seen values(Processes.src_user) AS src_user values(Processes.process) AS process values(Processes.parent_process_name) AS parent_process_name values(Processes.process_name) AS process_name from datamodel=Endpoint.Processes where Processes.process="<some filter>" by Processes.dest
| where first_seen > relative_time(now(),"-1h")
Splunk recently announced a critical vulnerability for the Splunk deployment server.

Advisory ID: SVD-2022-0608
Published: 2022-06-14
CVSSv3.1 Score: 9.0, Critical
CWE: CWE-284
CSAF: 2022-06-14-svd-2022-0608
CVE ID: CVE-2022-32158
Last Update: 2022-06-14
CVSSv3.1 Vector: CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H
Bug ID: SPL-176829
Security Content: Splunk Process Injection Forwarder Bundle Downloads

What can you do to take action right away? My first recommendation would be to shut down your deployment servers, as they really only need to be online for changes to apps/add-ons deployed via the deployment server, and shutting them down won't disrupt forwarding by the Universal or Heavy Forwarders which subscribe/phone home to them. Shutting down the deployment server will NOT undeploy apps/add-ons on client forwarders. The only impact should be that you won't be able to push updates to forwarder apps/add-ons for new or existing forwarders while the deployment server is offline. This blocks the threat and gives you time to make a plan.

At present, the only fix is to upgrade to Splunk 9.0, which has only been out for a few days. If you take this course of action, I'd highly recommend taking a full backup of your SPLUNK_HOME directory (often /opt/splunk) so you can roll back if you encounter problems with the upgrade. Deployment servers on a higher version usually don't have issues working with forwarders a few versions lower.

Technically, the deployment server functionality is packaged with all versions of Splunk Enterprise. My understanding is that you shouldn't have to patch Splunk if you don't use this functionality, i.e. you haven't configured deploymentclient.conf on your Universal or Heavy Forwarders to phone home to a deployment server. An alternative to stopping your deployment server is to disable the deployment server functionality from the command line.
$ /opt/splunk/bin/splunk disable deploy-server
$ /opt/splunk/bin/splunk restart

How can you check whether you are using the deployment server functionality if you are unsure? There are multiple ways.

1. Run this query on your deployment server, or on your search heads if you have the deployment server's splunkd logs forwarding to your indexers:

index=_internal sourcetype=splunkd_access "phonehome"

This shows clients phoning home to the deployment server. The host name in the host field should be your deployment server.

2. Check the UI of your deployment server under Settings > Forwarder Management. Under the Clients tab, look at the count of clients phoning home. If you see zero, this instance is not actively being used as a deployment server, i.e. nothing is phoning home to it. If you see 1 or more, this instance is an active deployment server.

3. Run btool on the command line of a forwarder you want to check:

$ /opt/splunkforwarder/bin/splunk btool deploymentclient list
[default]
phoneHomeIntervalInSecs = 60
[target-broker:deploymentServer]
targetUri = 1.1.123.123:8089

If a targetUri is returned, that's the host/IP of the deployment server this forwarder is trying to use. If no targetUri is returned, this forwarder is not using a deployment server.

Here's a query you can use to see what server classes/apps are pushed out to your clients via the deployment server, so you can review for anything suspicious:

index=_internal sourcetype=splunkd component="PackageDownloadRestHandler"
| stats values(host) as deployment_server dc(peer) as clients by serverclass app
| sort -clients

Here's a dashboard you can drop on either your deployment server or search heads which uses the data found in the deployment server's splunkd logs and will show you deployment server names and hosts checking into your deployment server.
<form theme="dark" version="1.1">
  <label>Forwarder Phone Home</label>
  <fieldset submitButton="false">
    <input type="time" token="time" searchWhenChanged="true">
      <label>Time Range</label>
      <default>
        <earliest>-4h@m</earliest>
        <latest>now</latest>
      </default>
    </input>
    <input type="dropdown" token="deployment_server" searchWhenChanged="true">
      <label>Deployment Server</label>
      <choice value="*">All</choice>
      <fieldForLabel>host</fieldForLabel>
      <fieldForValue>host</fieldForValue>
      <search>
        <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | dedup host | table host | sort host</query>
        <earliest>-60m@m</earliest>
        <latest>now</latest>
      </search>
      <default>*</default>
    </input>
    <input type="text" token="forwarder_host_pattern" searchWhenChanged="true">
      <label>Forwarder Host Pattern</label>
      <default>*</default>
    </input>
    <input type="text" token="forwarder_fqdn_pattern" searchWhenChanged="true">
      <label>Forwarder FQDN Pattern</label>
      <default>*</default>
    </input>
    <input type="text" token="forwarder_ip_pattern" searchWhenChanged="true">
      <label>Forwarder IP Pattern</label>
      <default>*</default>
    </input>
    <input type="text" token="forwarder_id_pattern">
      <label>Forwarder ID Pattern</label>
      <default>*</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Unique Forwarders</title>
      <single>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="rangeColors">["0x006d9c","0x006d9c"]</option>
        <option name="rangeValues">[0]</option>
        <option name="refresh.display">progressbar</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Phone Home Timeline</title>
      <chart>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" host="$deployment_server$" | eval device=forwarder_host+"-"+forwarder_fqdn+"-"+forwarder_ip+"-"+forwarder_id | timechart partial=true span=10m dc(device) as unique_forwarders by host | rename host as deployment_server</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="charting.axisTitleX.visibility">collapsed</option>
        <option name="charting.axisTitleY.visibility">collapsed</option>
        <option name="charting.axisY.scale">linear</option>
        <option name="charting.chart">column</option>
        <option name="charting.chart.showDataLabels">all</option>
        <option name="charting.chart.stackMode">stacked</option>
        <option name="charting.drilldown">none</option>
        <option name="charting.layout.splitSeries">1</option>
        <option name="charting.layout.splitSeries.allowIndependentYRanges">1</option>
        <option name="charting.legend.placement">bottom</option>
        <option name="refresh.display">progressbar</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <title>Deployment Server Summary</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | top host | rename host as deployment_server count as unique_forwarders</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <format type="color" field="deployment_server">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
    <panel>
      <title>Duplicate Hosts</title>
      <table>
        <title>(hosts expected to be unique)</title>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count by forwarder_host | search count&gt;1 | sort -count | append [| makeresults | eval forwarder_host="add_zero" | eval count=0 | table forwarder_host count ] | search forwarder_host!="add_zero"</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="totalsRow">true</option>
      </table>
    </panel>
    <panel>
      <title>Duplicate Forwarder IDs</title>
      <table>
        <title>(indicates cloning post install)</title>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" host="$deployment_server$" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | stats count by forwarder_id | search count&gt;1 | sort -count | append [| makeresults | eval forwarder_id="add_zero" | eval count=0 | table forwarder_id count ] | search forwarder_id!="add_zero"</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
        <option name="totalsRow">true</option>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>Forwarder Summary</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd_access "phonehome/connection" | rex "phonehome/connection_(?&lt;forwarder_ip&gt;[^\_]+)_80\d\d_(?&lt;forwarder_fqdn&gt;[^\_]+)_(?&lt;forwarder_host&gt;[^\_]+)_(?&lt;forwarder_id&gt;[^\s]+)" | search forwarder_host="*$forwarder_host_pattern$*" forwarder_fqdn="*$forwarder_fqdn_pattern$*" forwarder_ip="*$forwarder_ip_pattern$*" forwarder_id="*$forwarder_id_pattern$*" host="$deployment_server$" | dedup forwarder_host forwarder_fqdn forwarder_ip forwarder_id | table _time host forwarder_host forwarder_fqdn forwarder_ip forwarder_id | rename host as deployment_server</query>
          <earliest>$time.earliest$</earliest>
          <latest>$time.latest$</latest>
          <sampleRatio>1</sampleRatio>
          <refresh>5m</refresh>
          <refreshType>delay</refreshType>
        </search>
        <option name="count">40</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="deployment_server">
          <colorPalette type="sharedList"></colorPalette>
          <scale type="sharedCategory"></scale>
        </format>
      </table>
    </panel>
  </row>
</form>
We would like to detect and display fluctuations in data volume. Say the count of network events for a certain sourcetype this month is double that of the previous month; we would like to show it in a panel and potentially create an alert for it. How can we do this?
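A hedged sketch for the month-over-month comparison (untested; the index, sourcetype, and the factor 2 are placeholders): bucket the events into the current and previous calendar months, then compute the ratio:

```
index=your_index sourcetype=your_sourcetype earliest=-1mon@mon latest=now
| eval month=if(_time >= relative_time(now(), "@mon"), "current", "previous")
| stats count(eval(month=="current")) AS current count(eval(month=="previous")) AS previous
| eval ratio=round(current / previous, 2)
| where ratio >= 2
```

Saved as an alert that triggers when results exist, this fires whenever the current month has at least double the previous month's event count; for a dashboard panel, drop the final where clause to always show the ratio.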
Hello, I want all the filter values to reset back to their default values when the Reset button is clicked. I have tried a lot using JS, but it's not working. Thanks in advance for your help @ITWhisperer