All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi community, We currently have 82 active rules/use cases in Splunk, and a few of them are disabled. I was trying to pull a report of all 82 rules but was unable to. I would request your help on this. Thanks in advance, Kishore.
Hi, Can we integrate Windows AD with Splunk administration? Or can we integrate with TACACS+ and RADIUS for centralized user administration? Our Splunk Enterprise is installed on a Linux platform. Aside: under Splunk Settings > Access controls I can see LDAP as an option, but not AD. Please help me with this.
Hi, I have uploaded a customized app, but the App origin is showing as "Uploaded"; it is supposed to show "Splunk". How do I change this?
Hi, Logs are going to the _internal index instead of the customized index. host = xxxx index = _internal source = C:\Program Files\SplunkUniversalForwarder\var\log\splunk\splunkd.log sourcetype = splunkd. inputs.conf and props.conf are set properly on the deployment server, and I have also verified them on the Windows server running the SplunkForwarder. I am still getting the above; events are not going to the customized index. What could be the reason?
Hi all, I need to count the unique values in each row of my search (maybe this sounds abstract); the data in my cells are separated by ",". Here is an example:

ID | A     | B
1  | a,b,c | 3    ("a", "b", "c" are unique here)
2  | a,d   | 1    ("a" has already been found, so the only unique value is "d")
3  | b,k,e | 2    ("b" has already been found; the only unique values are "k" and "e")

I tried to run dedup and rex field, but cannot get the search right. Could anyone help? One more question: is it possible to duplicate each row so that each line holds only one value? For example:

ID | A     | A_mod
1  | a,b,c | a
1  | a,b,c | b
1  | a,b,c | c
2  | a,d   | a
2  | a,d   | d
3  | b,k,e | b
3  | b,k,e | k
3  | b,k,e | e

Sorry if I posted two questions; I am just asking whether this is possible and how to achieve it. I thought of this example because it might then be easier to run dedup.
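A sketch of one possible approach (untested; field names A and ID taken from the example above, and it assumes results arrive in ID order so dedup keeps the earliest occurrence): split each cell into a multivalue field, expand it to one value per row (which also produces the A_mod layout from the second question), then keep only the first occurrence of each value before counting:

```
... your base search ...
| makemv delim="," A
| mvexpand A
| dedup A
| stats count as B by ID
```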
Hello, I have a field and I want to generate a new field that is the old field with its mean subtracted. Example: [1,2,3] -> [-1, 0, 1]

This is my attempt:

| makeresults count=10
| eval dice_toss = random()%6+1
| table dice_toss
| stats avg(dice_toss) as tmp
| eval dice_toss_centered = dice_toss - tmp
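The stats command in the attempt above discards the original dice_toss values, leaving the final eval nothing to subtract from. eventstats adds the aggregate to every event while keeping them; a sketch (untested):

```
| makeresults count=10
| eval dice_toss = random()%6+1
| eventstats avg(dice_toss) as tmp
| eval dice_toss_centered = dice_toss - tmp
| table dice_toss dice_toss_centered
```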
Hi All, I hope someone could help, search is waiting for inputs when I try to first load the dashboard and even search doesn't work. <form hideAppBar="false" hideEdit="false" hideFooter="true" hideSplunkBar="false" hideTitle="false"> <label>Activity by ID or IP ADDRESS</label> <fieldset submitButton="true" autoRun="false"> <input type="text" token="id" searchWhenChanged="false"> <label>ID</label> <default></default> </input> <input type="text" token="ip_address" searchWhenChanged="true"> <label>IP Address</label> </input> <input type="dropdown" token="timespan" searchWhenChanged="true"> <label>Previous Days</label> <choice value="7">7</choice> <choice value="14">14</choice> <choice value="30">30</choice> <choice value="60">60</choice> <choice value="90">90</choice> <choice value="120">120</choice> <choice value="180">180</choice> <choice value="9999">All</choice> <default>30</default> <initialValue>30</initialValue> </input> </fieldset> <search id="baseSearch"> <query>| dbxquery connection=XXX maxrows=2000 query="select \"timeLoRes\" as ACTIVITY_TIMESTAMP, \"category\",\"applicationId\",\"userId\",\"action\",\"action2\",\"action3\",\"policyId\",\"policyVersionId\",\"deviceId\",\"deviceHardwareId\",\"deviceOsType\",\"deviceOsVersion\",\"deviceModel\",\"sessionId\",\"deviceSessionId\",\"clientIp\",\"host\",\"errorCode\",\"errorMessage\",\"failure\" from REPORTS.REPORTS WHERE (\"userId\" = '$id$' OR \"clientIp\" = '$ip_address$') AND \"category\" = 'User' AND \"applicationId\" ='sso' AND \"timeLoRes\" &gt; (sysdate - $timespan$)" shortnames=true</query> </search> <row> <panel> <chart> <title>SAC Successful</title> <search base="baseSearch"> <query>| search action = assert_start AND action2 = token_response | eval _time=strptime( ACTIVITY_TIMESTAMP, "%Y-%m-%d %H:%M:%S" ) | timechart span=1d count by action2</query> </search> <option name="charting.axisLabelsY.majorUnit">1</option> <option name="charting.axisTitleX.visibility">collapsed</option> <option 
name="charting.axisTitleY.visibility">collapsed</option> <option name="charting.chart">column</option> <option name="charting.drilldown">all</option> <option name="charting.legend.placement">bottom</option> </chart> </panel> <panel> <table> <title>SAC Unsuccessful</title> <search base="baseSearch"> <query>| search action = assertion_start AND action2 = reject | chart count by action2</query> </search> <option name="drilldown">cell</option> </table> </panel> </row> <row> <panel> <title>SAC bind successful</title> <table> <title>Results</title> <search base="baseSearch"> <query>| search action = add_device_group | table ACTIVITY_TIMESTAMP, category,applicationId,userId,action,action2,action3,policyId,policyVersionId,deviceId,deviceHardwareId,deviceOsType,deviceOsVersion,deviceModel,sessionId,deviceSessionId,clientIp,host,errorCode,errorMessage,failure</query> </search> <option name="count">30</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="rowNumbers">false</option> <option name="wrap">true</option> </table> </panel> </row> <row> <panel> <title>SAC form </title> <table> <title>Results</title> <search base="baseSearch"> <query>| search action = assertion_start AND action2 = form AND action3 = action | table ACTIVITY_TIMESTAMP, category,applicationId,userId,action,action2,action3,policyId,policyVersionId,deviceId,deviceHardwareId,deviceOsType,deviceOsVersion,deviceModel,sessionId,deviceSessionId,clientIp,host,errorCode,errorMessage,failure</query> </search> <option name="count">30</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="rowNumbers">false</option> <option name="wrap">true</option> </table> </panel> </row> </form>   I was not able to format the code part, apologies for pasting it as it is.
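One thing worth checking in the XML above: a dashboard search stays in "waiting for input" until every token it references has a value, and the ip_address input has no <default>, so $ip_address$ is undefined on first load. A sketch of a fix (untested; "none" is an illustrative sentinel only, pick a value that behaves sensibly inside the SQL WHERE clause):

```xml
<input type="text" token="ip_address" searchWhenChanged="true">
  <label>IP Address</label>
  <default>none</default>
</input>
```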
I have some PowerShell scripts scheduled on a Windows server and want to track their memory and CPU utilization. I do have a working solution, but I wondered what the best practice is here. I created a WMI input for Win32_Process, which has the "CommandLine" for each powershell.exe instance, and that includes the script name. But out of the box, Splunk cuts off that CommandLine field at the first space. My question is whether there are any props.conf options (or any other configuration) to let it read that full "line" and not stop at the spaces. It seems to be clever enough to read the WMI feed in a way where it puts each field=value onto one line in the _raw data. The server in question has a universal forwarder installed, with an app containing this wmi.conf:

[WMI:powershell.exe]
index = perfmon
disabled = 0
interval = 60
wql = SELECT CommandLine, ProcessId, WorkingSetSize, KernelModeTime, UserModeTime from Win32_process WHERE Name = 'powershell.exe'

And on the indexer, the sourcetype is configured in props.conf like this:

[WMI:powershell.exe]
FIELDALIAS-dest_for_perfmon = host AS dest
FIELDALIAS-src_for_perfmon = host AS src
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
disabled = false

In the end, my goal is to get that script name, so I just added a field extraction for that:

EXTRACT-ScriptName = CommandLine=powershell.exe+\s+\-command+\s+"&\s+'*(?P<ScriptName>[^("|')]+)

Is that the best way of doing it?
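On the extraction itself: the posted regex has a few escaping issues (.exe+ repeats the literal e, \-command+ repeats the d, and [^("|')] is a character class, not an alternation). A sketch of a tighter version (untested; assumes command lines of the form powershell.exe -command "& 'C:\scripts\foo.ps1' ..."):

```
EXTRACT-ScriptName = CommandLine=.*powershell\.exe\s+-command\s+"&\s+'(?<ScriptName>[^']+)'
```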
Hi Everyone, I need some help in restricting users from deleting other users' knowledge objects. Recently one of the users deleted alerts which belonged to another team. We need to restrict them from deleting other users' KOs; they should only be able to delete their own KOs and to share their KOs globally. All of this relates to the Search and Reporting app. Below is the existing config we are using currently. Kindly advise me on tweaking the settings to achieve the restrictions mentioned above.

[role_vpn]
accelerate_search = enabled
cumulativeRTSrchJobsQuota = 50
edit_search_schedule_window = enabled
export_results_is_visible = enabled
get_metadata = enabled
get_typeahead = enabled
pattern_detect = enabled
rest_properties_get = enabled
rtSrchJobsQuota = 5
rtsearch = enabled
schedule_search = enabled
search = enabled
srchDiskQuota = 200
srchIndexesAllowed = vpn
srchIndexesDefault = vpn
srchJobsQuota = 20
srchMaxTime = 0

And the permissions for Search and Reporting are as follows:

[]
access = read : [ * ], write : [ * ]
export = none

Thank you.
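For reference, it is the write : [ * ] in the app-level permissions above that lets any role edit or delete objects shared in the app, since object-level ACLs inherit from it unless set explicitly. A sketch (untested) of a more restrictive app-level stanza for the Search app's metadata/local.meta, which leaves users free to manage their own private objects:

```
[]
access = read : [ * ], write : [ admin ]
export = none
```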
We have our Splunk instance connected to an API user per the specs of the Splunk App for Salesforce. One of the things we are trying to get more insight into is the event log, so we can create alerts, dashboards, etc. in order to be proactive. From what I can tell, even though we set up the Splunk sfdc:logfile input to run on an interval of 300 seconds, I never get data in Splunk until after 24 hours. This means I can't search for anything that happened less than 24 hours ago. Looking at some Salesforce documentation, it appears that there are hourly log files as well as the 24-hour ones. Our Splunk admin sent me a screenshot showing that the interval was set on that source type, although any time I try to search that index using "today", I get no events. We want to configure some alerts based on events that are logged, but we are unable to do so with such old data. Any idea how we can increase the frequency of this log file input, beyond what appears to already be set at 300 seconds?
I have a situation where I want to send just the content of one local log file on one indexer ("test_indexer") to another indexer ("production_indexer"). Apart from that, the sending indexer in this scenario ("test_indexer") should continue to function as usual (indexing everything else locally). My plan was to just add an additional tcpout stanza in outputs.conf (in my case [tcpout:production_indexer] in /opt/splunk/etc/system/local/outputs.conf) and declare the _TCP_ROUTING parameter for the specific stanza in inputs.conf.

Problem: The sending indexer ("test_indexer") stops indexing any incoming and local data completely after I add the following configurations:

/opt/splunk/etc/system/local/inputs.conf

[monitor:///path/to/my/file.log]
index = my_index
sourcetype = my_sourcetype
_TCP_ROUTING = production_indexer

/opt/splunk/etc/system/local/outputs.conf

[tcpout:production_indexer]
clientCert = $SPLUNK_HOME/etc/auth/server.pem
server = xyz:9998
sslPassword = $abc==
sslVerifyServerCert = false
useSSL = true

To me, this behavior is wrong. I am just adding an additional, non-default tcpout stanza (on top of the default one defined in /opt/splunk/etc/system/default/outputs.conf) that is used only by one specific input stanza. According to my understanding, this change should neither impact any other inputs nor the default tcpout definition.
Debugging output before adding the above configuration:    $ splunk btool --debug outputs list /opt/splunk/etc/system/default/outputs.conf [syslog] /opt/splunk/etc/system/default/outputs.conf maxEventSize = 1024 /opt/splunk/etc/system/default/outputs.conf priority = <13> /opt/splunk/etc/system/default/outputs.conf type = udp /opt/splunk/etc/system/default/outputs.conf [tcpout] /opt/splunk/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30 /opt/splunk/etc/system/default/outputs.conf autoLBFrequency = 30 /opt/splunk/etc/system/default/outputs.conf autoLBVolume = 0 /opt/splunk/etc/system/default/outputs.conf blockOnCloning = true /opt/splunk/etc/system/default/outputs.conf blockWarnThreshold = 100 /opt/splunk/etc/system/default/outputs.conf cipherSuite = xyz /opt/splunk/etc/system/default/outputs.conf compressed = false /opt/splunk/etc/system/default/outputs.conf connectionTTL = 0 /opt/splunk/etc/system/default/outputs.conf connectionTimeout = 20 /opt/splunk/etc/system/default/outputs.conf disabled = false /opt/splunk/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5 /opt/splunk/etc/system/default/outputs.conf dropEventsOnQueueFull = -1 /opt/splunk/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1 /opt/splunk/etc/system/default/outputs.conf forceTimebasedAutoLB = false /opt/splunk/etc/system/default/outputs.conf forwardedindex.0.whitelist = .* /opt/splunk/etc/system/default/outputs.conf forwardedindex.1.blacklist = _.* /opt/splunk/etc/system/default/outputs.conf forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup) /opt/splunk/etc/system/default/outputs.conf forwardedindex.filter.disable = false /opt/splunk/etc/system/default/outputs.conf heartbeatFrequency = 30 /opt/splunk/etc/system/default/outputs.conf indexAndForward = false /opt/splunk/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2 /opt/splunk/etc/system/default/outputs.conf maxFailuresPerInterval = 2 
/opt/splunk/etc/system/default/outputs.conf maxQueueSize = auto /opt/splunk/etc/system/default/outputs.conf readTimeout = 300 /opt/splunk/etc/system/default/outputs.conf secsInFailureInterval = 1 /opt/splunk/etc/system/default/outputs.conf sendCookedData = true /opt/splunk/etc/system/default/outputs.conf sslQuietShutdown = false /opt/splunk/etc/system/default/outputs.conf sslVersions = tls1.2 /opt/splunk/etc/system/default/outputs.conf tcpSendBufSz = 0 /opt/splunk/etc/system/default/outputs.conf useACK = false /opt/splunk/etc/system/default/outputs.conf writeTimeout = 300   Debugging output after adding the above configuration:    $ splunk btool --debug outputs list /opt/splunk/etc/system/default/outputs.conf [syslog] /opt/splunk/etc/system/default/outputs.conf maxEventSize = 1024 /opt/splunk/etc/system/default/outputs.conf priority = <13> /opt/splunk/etc/system/default/outputs.conf type = udp /opt/splunk/etc/system/default/outputs.conf [tcpout] /opt/splunk/etc/system/default/outputs.conf ackTimeoutOnShutdown = 30 /opt/splunk/etc/system/default/outputs.conf autoLBFrequency = 30 /opt/splunk/etc/system/default/outputs.conf autoLBVolume = 0 /opt/splunk/etc/system/default/outputs.conf blockOnCloning = true /opt/splunk/etc/system/default/outputs.conf blockWarnThreshold = 100 /opt/splunk/etc/system/default/outputs.conf cipherSuite = xyz /opt/splunk/etc/system/default/outputs.conf compressed = false /opt/splunk/etc/system/default/outputs.conf connectionTTL = 0 /opt/splunk/etc/system/default/outputs.conf connectionTimeout = 20 /opt/splunk/etc/system/default/outputs.conf disabled = false /opt/splunk/etc/system/default/outputs.conf dropClonedEventsOnQueueFull = 5 /opt/splunk/etc/system/default/outputs.conf dropEventsOnQueueFull = -1 /opt/splunk/etc/system/default/outputs.conf ecdhCurves = prime256v1, secp384r1, secp521r1 /opt/splunk/etc/system/default/outputs.conf forceTimebasedAutoLB = false /opt/splunk/etc/system/default/outputs.conf forwardedindex.0.whitelist = .* 
/opt/splunk/etc/system/default/outputs.conf forwardedindex.1.blacklist = _.* /opt/splunk/etc/system/default/outputs.conf forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup) /opt/splunk/etc/system/default/outputs.conf forwardedindex.filter.disable = false /opt/splunk/etc/system/default/outputs.conf heartbeatFrequency = 30 /opt/splunk/etc/system/default/outputs.conf indexAndForward = false /opt/splunk/etc/system/default/outputs.conf maxConnectionsPerIndexer = 2 /opt/splunk/etc/system/default/outputs.conf maxFailuresPerInterval = 2 /opt/splunk/etc/system/default/outputs.conf maxQueueSize = auto /opt/splunk/etc/system/default/outputs.conf readTimeout = 300 /opt/splunk/etc/system/default/outputs.conf secsInFailureInterval = 1 /opt/splunk/etc/system/default/outputs.conf sendCookedData = true /opt/splunk/etc/system/default/outputs.conf sslQuietShutdown = false /opt/splunk/etc/system/default/outputs.conf sslVersions = tls1.2 /opt/splunk/etc/system/default/outputs.conf tcpSendBufSz = 0 /opt/splunk/etc/system/default/outputs.conf useACK = false /opt/splunk/etc/system/default/outputs.conf writeTimeout = 300 /opt/splunk/etc/system/local/outputs.conf [tcpout:production_indexer] /opt/splunk/etc/system/local/outputs.conf clientCert = $SPLUNK_HOME/etc/auth/server.pem /opt/splunk/etc/system/local/outputs.conf server = xyz:9998 /opt/splunk/etc/system/local/outputs.conf sslPassword = $abc== /opt/splunk/etc/system/local/outputs.conf sslVerifyServerCert = false /opt/splunk/etc/system/local/outputs.conf useSSL = true   Note: Setting  indexAndForward to true is not an option as I really only want to forward the contents of the one specific local log file to the other indexer. 
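A detail visible in the btool output: after the change there is a named tcpout group but no defaultGroup, and on a full Splunk Enterprise instance the mere presence of a tcpout group can switch the instance into forwarding mode for all data. A commonly suggested workaround (untested; check the outputs.conf spec for your version) is to keep local indexing on and point the default route at a nonexistent group, so that only inputs with an explicit _TCP_ROUTING are forwarded:

```
[tcpout]
indexAndForward = true
defaultGroup = nothing

[tcpout:production_indexer]
server = xyz:9998
useSSL = true
```

With defaultGroup pointing nowhere, indexAndForward = true should not cause everything to be forwarded; only the one routed monitor input should reach production_indexer.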
I have events like this, from which I have to extract the bold names, like: Burp-Collab, Qualys_scanner_RPA, SIE-PT-BAU-1, SIE-PT-BAU-2Kali. Can anyone help me with this?

<166>2020-09-11T12: [Originator@6870 sub=Vmsvc.vm:/vmfs/volumes/5b33d479-61618708-d3cd-d094665b5e96/Burp-Collab/Burp-Collab.vmx opID=1bcac8c3 user=root]
<13>2020-09-08T05: /vmfs/volumes/5b33d479-61618708-d3cd-d094665b5e96/Qualys_scanner_RPA/Qualys_scanner_RPA.vmx: Connected to mks-fd
<164>2020-09-11T13:[Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/5b33d479-61618708-d3cd-d094665b5e96/SIE-PT-BAU-1/SIE-PT-BAU-1.vmx] Failed to find activation record, event user unknown.
<166>2020-09-08T05:54:57.060Z siscesxi01.sisc-lab.com Hostd: info hostd[2099583] [Originator@6876 sub=Vmsvc.vm:/vmfs/volumes/5b33d479-61618708-d3cd-d094665b5e96/SIE-PT-BAU-2Kali/SIE-PT-BAU-2Kali.vmx opID=1bca6b2a user=root] Ticket issued for mks service to user: root
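In each event the wanted name appears as the directory under the datastore path (and as the .vmx basename), so a rex along these lines might work (untested):

```
... | rex "volumes/[^/]+/(?<vm_name>[^/]+)/"
```

This captures the path segment after the datastore ID, e.g. Burp-Collab or SIE-PT-BAU-2Kali.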
Has anyone used this add-on before? https://splunkbase.splunk.com/app/5222/#/details I'm trying to see if there are any logs that this pulls into Splunk which would be useful to a SOC team.
We're planning to purchase Splunk (v8.0.6) and Splunk ES (v6.2) shortly, and have a requirement to enable FIPS Mode in order to meet government regulations. We'll be following the directions from this Splunk doc here: https://docs.splunk.com/Documentation/Splunk/8.0.6/Security/SecuringSplunkEnterprisewithFIPs  Once we're running on FIPS 140-2, how do we determine which cipher is being used?
Is it OK if someone changes the root permissions of the local folder in etc/apps/search? I want to give all users permission to access the local folder. Will this impact any functionality of Splunk? Please guide me through this.
Let's say I have a dashboard and it runs a search and when that search is done it sets a token $risk$.     <panel> <single depends="$hide$"> <search> <progress> ... See more...
Let's say I have a dashboard and it runs a search and when that search is done it sets a token $risk$.     <panel> <single depends="$hide$"> <search> <progress> <unset token="risk"></unset> </progress> <done> <set token="risk">$result.risk$</set> </done> <query>| base search | table risk | head 1</query> <earliest>-15m@m</earliest> <latest>@m</latest> </search> </single> <html> <font size="4"> <p style="text-align:left;">$risk$</p> </font> </html> </panel>   Now risk is a multi-value field and the strings for each value are long:   Risk thing 1 0.06 > population (standard deviation 0.02 + average 0.04) Risk thing 2 0.74 > population (standard deviation 0.21 + average 0.30) Risk thing 3 0.90 > population (standard deviation 0.20 + average 0.65)    What happens when the token gets set and the html element renders risk is that the values turn into one long string:   Risk thing 1 0.06 > population (standard deviation 0.02 + average 0.04),Risk thing 2 0.74 > population (standard deviation 0.21 + average 0.30),Risk thing 3 0.90 > population (standard deviation 0.20 + average 0.65)   What I need is some way to format the token inside the html element.  I have tried to mvjoin the field using html tags as the delimiter but that does not work:   Risk thing 1 0.06 > population (standard deviation 0.02 + average 0.04)<br>Risk thing 2 0.74 > population (standard deviation 0.21 + average 0.30)<br>Risk thinkg 3 0.90 > population (standard deviation 0.20 + average 0.65)   I've tried different html tags around/inside the <p> and html elements and url encoded new line delimiters in the mv field and \n as a delimiter and who knows what else at this point.   I want to be able to render the mv field values with a line break between them.  The only thing that kinda worked at all was the <pre> html tag, but I didn't really like that too much either.  
Hi, I am trying to filter out the unique requests that do not have a particular event. For instance, each request can go through the following events: 1. receive 2. process 3. publish. We publish all events for every request to Splunk. I am trying to write a query to find all the unique requests (let's say represented by requestId) which do not have a "publish" event. How can I achieve it? I have tried using NOT, but that just ignores the event; in fact I want to evaluate it. Any suggestions?
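A sketch of one approach (untested; the index and field names are placeholders): collect each request's events with stats, then keep the requests whose list never contains publish:

```
index=my_index
| stats values(event) as events by requestId
| where isnull(mvfind(events, "publish"))
```

mvfind returns null when no value matches, so only requestIds lacking a publish event survive the where.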
It's possible to assign the result of a subsearch to a field with the eval command, as can be seen in the following snippet:

| makeresults | eval blahblah = [ | makeresults | eval search="\"blah\"" ]
It's possible to assign the result of a subsearch to a field with the eval command as can be seen in the following snippet:   | makeresults | eval blahblah = [ | makeresults | eval search="\"blah\"" ]   How can I accomplish this in an <eval> dashboard XML tag? I've tried the below in the dashboard XML source but the result is just '$blahblah$' instead of 'blah', as if the `blahblah` token is not defined:   <eval token="blahblah"> [ | makeresults | eval search="\"blah\"" ] </eval> <!-- OR --> <eval token="blahblah"> [ | makeresults | eval myOutput = "\"blah\"" | return $myOutput ] </eval> <!-- ... --> <panel> <title>DEBUG</title> <html> <pre> blahblah = '$blahblah$' </pre> </html> </panel>    In the documentation about <eval> and its limitations, subsearches are not listed in the list of limitations and unsupported functionality, so this should be possible.
I have an issue which seems simple, but after 2 days I'm still struggling. I am attempting to have one search return the number of logins for a large set of hosts for both Windows and Linux. I have successfully figured out each search that gives me the numbers I want; however, it only ever returns one stats row. I want to be able to show both numbers plus a total. Here is my search, altered slightly for security:

index=windows [ inputlookup hosts.csv | fields host ] EventCode=4627 | stats count as winlogins | appendcols [ search index=linux [ inputlookup hosts.csv | fields host ] type=login | stats count as linuxlogins ] | addtotals

What I get is each value and a total, but in only one statistics row, so I am unsure how to create a useful visualization (report) which will ultimately be placed on a dashboard. How can I get one search to return all three values as separate statistics rows so I can post a report on a dashboard?
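A sketch of an alternative shape (untested) that yields one row per OS plus a total row, using append instead of appendcols and addcoltotals for the summary row:

```
index=windows [ inputlookup hosts.csv | fields host ] EventCode=4627
| eval os="Windows"
| append
    [ search index=linux [ inputlookup hosts.csv | fields host ] type=login
    | eval os="Linux" ]
| stats count by os
| addcoltotals labelfield=os label=Total
```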
Hi Everyone, I have one dashboard which consists of several panels, like LOGIN and TIMEOUT. I want to display the trend indicator for the count values. Suppose I select the date range between 11th September and 13th September. The timeout count for 11th September is 3694, for 12th September is 1209, and for 13th September is 2755. I want to display the trend indicator which will show the percentage increase/decrease of the timeout count values. I have already used <option name="trendDisplayMode">percent</option>, but I'm not sure the percentage increase and decrease is coming out correct. Can someone guide me: do I need to add anything else to show a trend indicator for comparison? Below is my XML code: <panel> <single> <title>TIMEOUT</title> <search> <query>index="abc" sourcetype=xyz Timeout $Org$ | bin span=1d _time |stats count by _time</query> <earliest>$field1.earliest$</earliest> <latest>$field1.latest$</latest> </search> <option name="colorBy">value</option> <option name="drilldown">all</option> <option name="height">100</option> <option name="numberPrecision">0</option> <option name="rangeValues">[0,10,25,40]</option> <option name="trendDisplayMode">percent</option> <option name="unit"></option> <option name="rangeColors">["0xFF0000","0xFF0000","0xFF0000","0xFF0000","0xFF0000"]</option> <option name="useColors">1</option> <drilldown> <set token="show_panel">true</set> <set token="selected_value">$click.value$</set> </drilldown> </single> </panel>
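One thing that may matter here: the single-value trend compares only the last two points of the time series the search returns, so the percentage shown is day-over-day, not across the whole selected range. An equivalent search written with timechart (sketch, untested) makes the series explicit:

```
index="abc" sourcetype=xyz Timeout $Org$
| timechart span=1d count
```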