All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi Splunk Community,

I need some assistance with a Splunk alert. The search result provides exactly what I require, but the alert can be improved. The search query:

source="/var/log/wireless.log" AnyConnect OR NetworkDeviceName=fw* "NOTICE Passed-Authentication: Authentication succeeded" earliest=-30d@d latest=now
| iplocation Calling_Station_ID
| where NOT Country="South Africa"
| stats count by Country, User_Name
| eventstats sum(count) as Country_Count by Country
| eventstats sum(count) as Username_Count by User_Name
| where NOT (Username_Count >= 10 AND Country_Count >= 10)

The search returns users and country only if the username count is less than 10 and the country count is less than 10 in the past 30 days, which is exactly what I want. The problem comes in with the alert: if I schedule the alert (let's say every 10 minutes), the query runs and creates alerts for each returned value. I only want new events to be returned, not values which were alerted on 10 minutes ago. Is there any way one can achieve this? Thank you so much.
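One common pattern for suppressing values that were already alerted on is to keep a lookup of previously seen Country/User_Name pairs and filter against it on each run. A minimal sketch, assuming a hypothetical lookup file alerted_pairs.csv that is created once up front (e.g. by an initial run with only the outputlookup step):

```spl
source="/var/log/wireless.log" AnyConnect OR NetworkDeviceName=fw* "NOTICE Passed-Authentication: Authentication succeeded" earliest=-30d@d latest=now
| iplocation Calling_Station_ID
| where NOT Country="South Africa"
| stats count by Country, User_Name
| eventstats sum(count) as Country_Count by Country
| eventstats sum(count) as Username_Count by User_Name
| where NOT (Username_Count >= 10 AND Country_Count >= 10)
| lookup alerted_pairs.csv Country User_Name OUTPUTNEW User_Name AS already_seen
| where isnull(already_seen)
| fields Country User_Name
| outputlookup append=true alerted_pairs.csv
```

Only pairs not yet in the lookup survive the isnull() filter, and those survivors are appended to the lookup so the next scheduled run skips them.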
I have defined an event hub input, eventhub_splunk_dev01, on a HF, but no events are pulled. Please assist.

[azure_event_hub://eventhub_splunk_dev01]
connection_string = ********
consumer_group = $Default
event_hub_name = insights-activity-logs
event_hub_timeout = 5
index = test
interval = 60
max_batch_set_iterations = 100
max_batch_size = 100
number_of_threads = 4
source_type = azure:eventhub
disabled = 1

[splunk@ilissplfwd06 local]$
[splunk@ilissplfwd06 splunk]$ /bin/telnet 10.67.37.117 5671
Trying 10.67.37.117...
Connected to 10.67.37.117.
Escape character is '^]'.
^CConnection closed by foreign host.
[splunk@ilissplfwd06 splunk]$ /bin/telnet 10.67.37.117 5672
Trying 10.67.37.117...
Connected to 10.67.37.117.

No errors in the log, only the messages below:

2020-10-29 16:08:41,971 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=__init__.py:deinitialize:170 | Deinitializing platform.
2020-10-29 16:08:41,971 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Deallocating 'SymbolValue'
2020-10-29 16:08:41,971 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Destroying 'SymbolValue'
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Deallocating 'LongValue'
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Destroying 'LongValue'
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Deallocating cSource
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Destroying cSource
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Deallocating 'StringValue'
2020-10-29 16:08:41,972 DEBUG pid=23998 tid=ThreadPoolExecutor-0_0 file=thread.py:run:63 | Destroying 'StringValue'
2020-10-29 16:08:42,008 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating cSession
2020-10-29 16:08:42,008 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating CBSTokenAuth
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating Connection
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating SASLMechanism
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating cSession
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating CBSTokenAuth
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating Connection
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating SASLMechanism
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating cSession
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating CBSTokenAuth
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating Connection
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,009 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating SASLMechanism
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating cSession
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating Connection
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating CBSTokenAuth
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating XIO
2020-10-29 16:08:42,010 DEBUG pid=23998 tid=Dummy-1 file=(unknown file):(unknown function):0 | Deallocating SASLMechanism

Escape character is '^]'.
^C^X
Connection closed by foreign host.
[splunk@ilissplfwd06 splunk]$

The ports are opened to the Private Endpoint.
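One detail worth double-checking before anything else: the stanza posted above contains disabled = 1, which prevents the modular input from running at all. A minimal sketch of the same stanza with the input enabled (all other values kept as posted, connection string redacted):

```ini
[azure_event_hub://eventhub_splunk_dev01]
connection_string = <redacted>
consumer_group = $Default
event_hub_name = insights-activity-logs
event_hub_timeout = 5
index = test
interval = 60
source_type = azure:eventhub
disabled = 0
```

After editing, the heavy forwarder typically needs a restart (or a reload of the input) for the change to take effect.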
Hi,

I am cutting over non-clustered indexers (v7.3.3) to a new SmartStore (S2) indexer cluster (v8.0.6). Currently all new incoming data goes to the new S2 indexer cluster, and the old indexers are not taking on any new data. All coldToFrozen time settings on the old indexers are commented out/stopped; in other words, the warm data is not growing or rolling off to frozen.

Our challenge is getting the non-frozen data and the frozen data into the new S2 indexer cluster so we can decommission the legacy non-clustered indexers. Our plan is to start with the non-frozen data first, then thaw the frozen data and move that into the S2 indexer cluster. We have been reading the Splunk documentation, but we are still a little confused by the process. The Splunk reference we are looking at: https://docs.splunk.com/Documentation/Splunk/8.0.6/Indexer/MigratestandalonetoSmartStore

Is there any other documentation we should review, or will this process work for us? If anyone has experience with this type of data migration, any advice is much appreciated. We welcome any suggestions to tackle this migration. Thank you.
Hi, I'm trying to set up a query in DB Connect. I have configured the connection correctly. I'm running the following query:

SELECT CREATE_DATE, DELETE_DATE, SPLUNK_DEDUP_SQID, SPLUNK_IDX_ASC
FROM "USER"."TABLE"
WHERE CREATE_DATE BETWEEN TO_DATE('20161101', 'RRRRMMDD') AND TO_DATE('20180821', 'RRRRMMDD')
ORDER BY SPLUNK_IDX_ASC ASC

If I run this in batch mode, the query works. However, if I run it in "rising" mode, with SPLUNK_IDX_ASC set as the rising column, the query returns:

java.sql.SQLException: Invalid column index
No results found.
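For what it's worth, DB Connect's rising-column mode generally expects the query to contain a checkpoint placeholder (?) that the app binds to the last recorded rising value; an "Invalid column index" from the JDBC driver is consistent with a parameter being bound where no placeholder exists. A hedged sketch of the usual shape (table and column names taken from the query above; verify against the DB Connect version in use):

```sql
SELECT CREATE_DATE, DELETE_DATE, SPLUNK_DEDUP_SQID, SPLUNK_IDX_ASC
FROM "USER"."TABLE"
WHERE SPLUNK_IDX_ASC > ?
ORDER BY SPLUNK_IDX_ASC ASC
```

The static CREATE_DATE range can be kept by ANDing it with the checkpoint condition if it is still needed.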
Hi,

I am migrating from a single install to a cluster: 1 SH + 1 MD + 3 indexers. When we run a load test (5 heavy screens in parallel) we get the following errors. This was not the case in the single install, and we think perhaps we are missing a prop?

[subsearch]: Unknown error for indexer: hp925srv_INDEXER4. Search Results might be incomplete! If this occurs frequently, check on the peer.

Unable to distribute to peer named 10.25.57.21:8089 at uri=10.25.57.21:8089 using the uri-scheme=https because peer has status=Down. Verify uri-scheme, connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.

[subsearch]: Error connecting: Connect Timeout

Regards,
Robert
Hi All,

I am trying to index some log files that have been converted to tab-delimited text files. These are being picked up by a Universal Forwarder and forwarded to a one-box Splunk Enterprise server. Splunk is ingesting them OK, but it is indexing the UK dates (dd/mm/yyyy) in US format (mm/dd/yyyy) for all dates up to the 10th of each month. So, for October 8th (08/10/2020) for example, I have no events indexed: the events collected on 8th October have been indexed as August 10th.

So far I have changed props.conf on both the UF and Splunk Enterprise to look like this:

[SourceType]
NO_BINARY_CHECK = 1
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S

I have also set the sourcetype within Splunk to use:

Timezone = GMT
Timestamp format = %d/%m/%Y %H:%M:%S
Timestamp prefix = ^

Any ideas where I am going wrong?
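One point that often bites here: for non-structured sourcetypes, timestamp parsing happens on the first full Splunk instance in the path (here, the Splunk Enterprise server), so props.conf timestamp settings on the UF have no effect, and events that were already indexed keep their wrong timestamps. A sketch of the parsing-side stanza, assuming the sourcetype name is literally [SourceType]:

```ini
[SourceType]
NO_BINARY_CHECK = 1
TIME_PREFIX = ^
TIME_FORMAT = %d/%m/%Y %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
TZ = GMT
```

MAX_TIMESTAMP_LOOKAHEAD = 19 limits the match to the first 19 characters (the length of "dd/mm/yyyy HH:MM:SS"). A splunkd restart is needed, and only newly indexed events pick up the change.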
Good morning to you all and happy Thursday!

I have a set of data called server_os which contains CentOS 5, CentOS 6 and CentOS 7. As CentOS 5 is end of life, and CentOS 6 soon will be, I want to create a radio button for my analysts so that once they click on "EOL" (as shown here), the graphs, tables etc. below show data specific to those 2 (or more) OS versions. Windows was easy because "server_os"=win 2008.

What is the best way to get around this? I've tried:

index=u* server_os=*
| eval EOL=case(match(server_os,"(?i)CentOS 4/5 or later \(64-bit\)"),1 ,match(server_os,"(?i)CentOS 6 \(64-bit\)"),1)
| search EOL=1
| dedup host, server_os
| rename server_os AS EOL
| table EOL

Just getting stuck, so any ideas are welcome. Note: for Windows this worked, but the static options on the input do not seem to accept multiple values, nor does adding another EOL entry underneath. Note II: I adjusted the value to read nicer, as this was a test. Thanks!
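One way to sketch this, assuming the exact server_os strings contain "CentOS 5" and "CentOS 6" as described: match all EOL versions with a single regex character class instead of stacking case() branches, and keep host so the table remains useful.

```spl
index=u* server_os=*
| eval EOL=if(match(server_os, "(?i)CentOS [456]"), 1, 0)
| search EOL=1
| dedup host, server_os
| table host, server_os
```

The regex is an assumption about the actual field values; extend the character class as more versions reach end of life. In the dashboard, the radio button could then simply toggle a token between this EOL=1 filter and a plain server_os filter.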
Hi all,

I am trying to build a dashboard to show where clients geographically accessed the Splunk web UI from. Unfortunately, when I use the iplocation command it returns the fields Country and City, but without any values. Please help out. I checked my $SPLUNK_HOME/share directory, and the database is available there.
Hello! Please check whether the props.conf I wrote is appropriate.

1. Data

{"subscription_id": "ec7d6887-675d-46d6", "maximum": 109133.0, "namespace": "microsoft.dbformariadb/servers", "unit": "Bytes", "_time": "2020-10-29T06:36:00Z", "average": 109133.0, "host": "/subscriptions/ec7d6887-675d-46d6/resourceGroups/RG-T/providers/Microsoft.DBforMariaDB/servers/azure-mariadb", "metric_name": "serverlog_storage_usage", "minimum": 109133.0}

2. index="_internal" host="VM-KC" log_level!=INFO (*fail* OR *extract*)

WARN DateParserVerbose - Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD (128) characters of event. Defaulting to timestamp of previous event (Thu Oct 29 14:02:00 2020). Context: source=azure_metrics://MariaDB|host=VM-KC|azure:metrics|

3. Line Breaking Error

ERROR LineBreakingProcessor - Line breaking regex has no capturing groups: \}\} - data_source="/monitoring/scouter/server/ext_plugin_filelog/scouter-counter-javaee.json", data_host="VM-KC", data_sourcetype="scouter_json"

4. Timestamp Parsing Error

WARN DateParserVerbose - A possible timestamp match (Fri Sep 10 00:41:19 2010) is outside of the acceptable time window. If this timestamp is correct, consider adjusting MAX_DAYS_AGO and MAX_DAYS_HENCE. Context: source=azure_metrics://MariaDB|host=VM-KC|azure:metrics|

<props.conf sample>

index = "azure"
source = "azure_metrics://MariaDB"
sourcetype = "azure:metrics"

[source::azure_metrics://MariaDB]
DATETIME_CONFIG = CURRENT
BREAK_ONLY_BEFORE_DATE = true
NO_BINARY_CHECK = true
MAX_TIMESTAMP_LOOKAHEAD = 200
Hi, I have a search like this:

index="test" sourcetype="B"
| dedup Id
| eval horodate=strptime(substr(Horodate,1,10),"%Y-%m-%d")
| fieldformat horodate=strftime(horodate,"%Y-%m-%d")
| stats count(eval(Statut=="OK")) as OK count(eval(Statut=="KO")) as KO count(Statut) as TOTAL by horodate

This search works well with the time picker set to "Last 24 hours", but not with the time picker set to "Today": it returns "no results found", whereas I have 3 events. Can you help me please?
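A hedged way to narrow this down: bucket on _time directly instead of the string-derived horodate, which takes the strptime/fieldformat conversion out of the picture entirely.

```spl
index="test" sourcetype="B"
| dedup Id
| bin _time span=1d
| stats count(eval(Statut=="OK")) as OK count(eval(Statut=="KO")) as KO count(Statut) as TOTAL by _time
```

If this variant returns results with the "Today" picker while the original does not, the problem is in the Horodate-to-horodate conversion rather than in the events themselves.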
Hello,

Are there searches or any log files that will tell me what is being forwarded from my heavy forwarder? I have a multi-site, clustered Splunk environment that ingests all of its own logs first, but then sends all of the data to a third party via a heavy forwarder and a data diode. I do not have access to the data once it has been sent from the HF, so I cannot assess what has been sent. Are there any Splunk techniques, searches, or log files I can view on my heavy forwarder to determine what data has been sent to the data diode? I can provide config files if required.
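The forwarder's own metrics.log records per-sourcetype and per-index throughput for everything it processes, and it is indexed into _internal, so it can be searched without any access to the downstream side. A sketch (my_hf is a placeholder for the heavy forwarder's host name):

```spl
index=_internal source=*metrics.log* host=my_hf group=per_sourcetype_thruput
| timechart span=1h sum(kb) AS kb_forwarded by series
```

group=per_index_thruput gives the same picture broken down by index, and the tcpout-related metrics groups in metrics.log show activity toward the configured output destinations.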
Hi, I'm Alex from France, like almost everyone here, and I need a Splunk guru ^^

The fields computer and user are in index1; computer2 is in index2. I need a table with the computer and related user fields, but only for computers which are not in computer2. I can't get my table, please help me!

((index="index1") OR (index="index2"))
| streamstats count by computer, user, computer2
| stats values(computer) AS computer, values(computer2) AS computer2
| mvexpand computer
| where computer!=computer2
| table computer
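A common pattern for "in A but not in B" is a subsearch with NOT, rather than streamstats. A sketch, assuming index2 stays under the default subsearch result limits:

```spl
index="index1"
| stats values(user) AS user by computer
| search NOT [ search index="index2" | dedup computer2 | rename computer2 AS computer | fields computer ]
```

The subsearch returns the list of computer values found in index2, and the NOT drops those rows from the index1 table, leaving only computers absent from index2 together with their users.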
I need to show a value for every minute, but I only get values > 0. Search:

| tstats count WHERE index=XXXXX C_TXN_A IN (1,2) C_TXN_B IN (1) ((C_TXN_C IN (1,2,3,5) AND C_TXN_D IN (5,6)) OR (NOT C_TXN_C IN (4,6) AND C_TXN_D IN (7,8))) by _time span=1m
| sort _time

With that, I get:

2020-10-29 10:45:00     47
2020-10-29 10:40:00     12

But I want to get:

2020-10-29 10:45:00     47
2020-10-29 10:44:00     0
2020-10-29 10:43:00     0
2020-10-29 10:42:00     0
2020-10-29 10:41:00     0
2020-10-29 10:40:00     12

How do I do that?
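One way to get the empty minutes back is to let timechart pad the buckets across the whole time range, then replace the resulting nulls with 0. A sketch on top of the tstats search above:

```spl
| tstats count WHERE index=XXXXX C_TXN_A IN (1,2) C_TXN_B IN (1) ((C_TXN_C IN (1,2,3,5) AND C_TXN_D IN (5,6)) OR (NOT C_TXN_C IN (4,6) AND C_TXN_D IN (7,8))) by _time span=1m
| timechart span=1m sum(count) AS count
| fillnull value=0 count
| sort - _time
```

timechart emits a row for every 1-minute span in the selected time range, even where tstats returned nothing. makecontinuous _time span=1m followed by fillnull is an alternative when only the gaps between the first and last result need filling.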
When plotting a timechart on my SHCluster (just migrated to 8.0.6) it stopped rendering the data after 100 points. Easy to be seen with a sample request:     |gentimes start=10/02/2020 increment=1h |eval _time=starttime|timechart count span=1h |streamstats count     At first I thought it was a new limit included in 8.0.6, but my configuration seems fine :     ## Web.conf jschart parameter : jschart_results_limit = 10000 jschart_series_limit = 100 jschart_test_mode = False jschart_truncation_limit.chrome = 50000 jschart_truncation_limit.firefox = 50000 jschart_truncation_limit.ie11 = 50000 jschart_truncation_limit.safari = 50000           ## Vizualisations line parameter : [line] allow_user_selection = True core.charting_type = line core.height_attribute = display.visualizations.chartHeight core.icon = chart-line core.order = 1 core.preview_image = line.png core.recommend_for = timechart, predict core.type = visualizations core.viz_type = charting data_sources = primary,annotation data_sources.annotation.params.count = 1000 data_sources.annotation.params.output_mode = json_cols data_sources.primary.params.count = $display.visualizations.charting.data.count:JSCHART_RESULTS_LIMIT:10000$ data_sources.primary.params.offset = 0 data_sources.primary.params.output_mode = json_cols data_sources.primary.params.show_metadata = true default_height = 300 default_width = 300 description = Track values and trends over time. label = Line Chart max_height = 10000 min_height = 100 min_width = 100 search_fragment = | timechart count [by comparison_category] supports_drilldown = True supports_export = True supports_trellis = True trellis_default_height = 400       I understand the precedence order linked to data_sources.primary.params.count it's taking the charting.data.count value first (from the dashboards) , then the jschart_results_limit then fallback to 10000...   
But I'm only in the search app, not in a dashboard, so no charting.data.count value is set; it should therefore be using jschart_results_limit, which is 10000 based on my web.conf. Any ideas on what is happening? The VERY strange thing is that it's only happening on my SH cluster; my utility server is fine. I see the same behavior on all the charting.
I have a multi-line file (_json) from which I am trying to create individual events; the multi-line file contains an array of id, message and timestamp. Sample event data:

{ [-]
  logEvents: [ [-]
    { [-]
      id: 3576745055635743000077342515139507954347666517578940416
      message: START RequestId: 4e1251df-11d9-55d0-918a-09bb06b96122 Version: $LATEST
      timestamp: 1603867953198
    }
    { [+] }
    { [-]
      id: 35767450557316368740614159310005543840071546062336098306
      message: [2020-10-28T06:52:33.240Z][4e1251df-11d9-55d0-918c-09cc06b96122][INFO][wfm-test2-lmd-towSyncWorkOrderWOM][HeaderProcessor.py, 23][The filtered request headers are {"test-PartyID": "test"}]
      timestamp: 1603867953241
    }
    { [+] }
    { [-]
      id: 3576745057558067905821073966314329716666554135734059012
      message: [2020-10-28T06:52:34.59Z][4e1251df-11d9-55d0-918c-09cc06b96122][INFO][wfm-test2-lmd-towSyncWorkOrderWOM][lambda_function.py, 37][Response received from SNOW with status code :202 and response as {"result":{"message":"Message has been received!","value":"WOR200033942808"}}]
      timestamp: 1603867954060
    }
    { [+] }
    { [+] }
  ]
  logGroup: /aws/lambda/wfm-test2-lmd-towSyncWorkOrderWOM
  logStream: 2020/10/28/[$LATEST]0e5e38b8bf8e4247a5f063e5e1fdaf51
  messageType: DATA_MESSAGE
  owner: 126208963777
  subscriptionFilters: [ [+] ]
}

Can you please guide me on how to break this multi-line event using the line breaker?
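Assuming the raw data on disk is actual JSON (the sample above is Splunk's pretty-printed view of it), one common approach is to break on the element boundaries of the logEvents array in props.conf on the parsing tier. A hedged sketch; the sourcetype name is hypothetical and the regex must be verified against the real raw file:

```ini
[aws_cloudwatch_json]
SHOULD_LINEMERGE = false
LINE_BREAKER = \}(,)\s*\{\s*"id"
TIME_PREFIX = "timestamp"\s*:\s*
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 20
```

LINE_BREAKER requires at least one capturing group (here the comma between array elements), and %s%3N parses the epoch-milliseconds timestamps. The surrounding logGroup/logStream envelope will end up attached to the first and last events unless it is stripped with a transform.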
Hello everyone, I was wondering if this kind of search is possible. I want to replace text from my search, which looks like this:

eventtype=zyxel_user sourcetype="zyxel-fw" msg="Failed login attempt to Device from *"
| stats count by msg
| rex field=msg mode=sed "s/'Failed login attempt to Device from ssh (incorrect password or inexistent username)'/SSH/g"

Basically, instead of the long string (Failed login attempt to Device...) I want to get just "SSH", so I can create a pie chart with this information. Is that possible? Thank you very much for helping me!
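It should be possible; one wrinkle is that in sed mode the left-hand side is a regular expression, so the parentheses in "(incorrect password or inexistent username)" act as a capture group unless escaped, and the surrounding single quotes must literally appear in the msg value to match. A sketch that sidesteps sed entirely with case() and match() (the non-SSH branch is an assumption, adjust to the actual msg values):

```spl
eventtype=zyxel_user sourcetype="zyxel-fw" msg="Failed login attempt to Device from *"
| eval method=case(match(msg, "from ssh"), "SSH", match(msg, "from https?"), "Web", true(), msg)
| stats count by method
```

The stats count by method output feeds a pie chart directly.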
Hi Team,

I am repeatedly getting the event below:

Network Visibility Agent registered successfully.

Does this mean the network agent is being auto-restarted, and that is why it is registering multiple times with the AppDynamics Controller?

Thanks,
Pratik
Hello All,

I am trying to find categorical outliers for all the emails sent from our environment with respect to the count per day. My query is as follows:

sourcetype="source" earliest=-2d latest=now() SenderAddress="*@mydomain.com" RecipientAddress!="*@mydomain.com"
| timechart span=3d count by SenderAddress limit=0
| anomalydetection "example@mydomain.com" action=annotate
| eval isOutlier = if(probable_cause != "", "1", "0")
| table "example@mydomain.com", probable_cause, isOutlier
| sort 100000 probable_cause

Since I have an unlimited number of email addresses I have set limit=0 in timechart, but I am unable to detect outliers for all the email addresses unless I specify them, as in anomalydetection "example@mydomain.com" example2@mydomain.com. I have tried something like the below:

sourcetype="source" earliest=-2d latest=now() SenderAddress="*@mydomain.com" RecipientAddress!="*@mydomain.com"
| timechart span=3d count by SenderAddress limit=0
| anomalydetection "[search sourcetype="source" earliest=-30d latest=now() SenderAddress="*@mydomain.com" RecipientAddress!="*@mydomain.com" | rename SenderAddress as search | table search | format]" action=annotate
| eval isOutlier = if(probable_cause != "", "1", "0")
| table "example@mydomain.com", probable_cause, isOutlier
| sort 100000 probable_cause

But the above does not work. I have also tried with the stats command as below, but it detects an overall outlier for the last 3 days and does not compare per email address per day:

sourcetype="source" earliest=-2d latest=now() SenderAddress="*@mydomain.com" RecipientAddress!="*@mydomain.com"
| bin _time span=1d
| stats count by SenderAddress,_time
| anomalydetection "SenderAddress" "count" action=annotate
| eval isOutlier = if(probable_cause != "", "1", "0")
| table "SenderAddress" "count", probable_cause, isOutlier
| sort 100000 probable_cause

Please suggest a way to detect categorical outliers for emails sent per email address per day, comparing with previous days.
For example, in the data below, 1@mydomain.com should be detected as an outlier on 10th Sep, because compared with the previous day it sent many more emails:

Sender            Date          Emails sent
1@mydomain.com    9 Sep 20      34
1@mydomain.com    10 Sep 20     100
3@mydomain.com    9 Sep 20      45
3@mydomain.com    15 Sep 20     37
I have 2 different data sets:
1. host and a prevStatus field with value IDLE
2. server (same values as host) and a server_state field with active/standby values

I would like to use prevStatus events ONLY from the active server. My base search is something like:

index=indx (host=app1 OR host=app2) (prevStatus=IDLE OR (server_state=active OR server_state=standby))

How do I mark all the prevStatus events so that they carry the current server_state field, so that I can then just filter prevStatus=IDLE AND host=server AND server_state=active? I think I need to use streamstats, but I did not quite get there. Example data table below:

_time                    host  prevStatus  server  server_state
2020-10-07 11:13:29.283  app1  IDLE
2020-10-07 11:28:09.284  app1  IDLE
2020-10-07 11:51:17.138  app2  IDLE
2020-10-08 01:55:27.816  app1              app2    standby
2020-10-08 01:55:40.591  app2              app1    active
2020-10-08 13:37:01.284  app1  IDLE
2020-10-09 12:11:13.786  app2  IDLE
2020-10-12 09:01:49.119  app1              app2    active
2020-10-12 09:12:30.444  app2              app1    standby
2020-10-12 10:43:59.461  app2  IDLE
2020-10-12 10:57:41.298  app1  IDLE

I think I need something like this:

_time                    host  prevStatus  server  server_state
2020-10-07 11:13:29.283  app1  IDLE
2020-10-07 11:28:09.284  app1  IDLE
2020-10-07 11:51:17.138  app2  IDLE
2020-10-08 01:55:27.816  app1              app2    standby
2020-10-08 01:55:40.591  app2              app1    active
2020-10-08 13:37:01.284  app1  IDLE        app1    active
2020-10-09 12:11:13.786  app2  IDLE        app1    active
2020-10-12 09:01:49.119  app1              app2    active
2020-10-12 09:12:30.444  app2              app1    standby
2020-10-12 10:43:59.461  app2  IDLE        app2    active
2020-10-12 10:57:41.298  app1  IDLE        app2    active
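streamstats is one option, but filldown may be the simpler fit here: derive the active server whenever a server_state event arrives, then carry it forward onto the following IDLE events. A sketch (it assumes results are ordered oldest first, hence the sort 0 _time):

```spl
index=indx (host=app1 OR host=app2) (prevStatus=IDLE OR (server_state=active OR server_state=standby))
| sort 0 _time
| eval active_server=if(server_state=="active", server, null())
| filldown active_server
| where prevStatus=="IDLE" AND host==active_server
```

Each state-change event sets active_server from its server field, and filldown copies that value onto every subsequent event until the next change, so the final where keeps only IDLE events from whichever server was active at the time.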
I tried inputlookup=abc | search NOT "row value", but I am still getting the rows. I want to remove the entire two rows (first and second); please help.
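Two things worth checking. First, inputlookup takes the lookup name as an argument, not as key=value, so inputlookup=abc is not valid syntax. Second, filtering with search only changes what the search returns; to remove rows from the lookup file itself, the filtered result has to be written back with outputlookup. A sketch, where some_field and "row value" are placeholders for the actual column name and row content:

```spl
| inputlookup abc
| where some_field != "row value"
| outputlookup abc
```

This rewrites the lookup with the matching rows removed.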