Hello, I want to get a table with all fields populated with their last values within a time range. Each form has a field called Created, and also a field Product that can be filled in later or left empty. I want to determine which forms have a Product value and which are empty, to get a total percentage. For a time range where the Created date goes from Nov 1 to Nov 7, the Product field shows empty, but if I extend the time range forward it gets populated properly. How can I get this last value while only counting results whose Created date falls within the Nov 1-7 window (time range picker)? Here is what I have so far:

index=main
| eval _time=strptime(Created,"%Y-%m-%d %H:%M:%S")
| addinfo
| where ((_time >= info_min_time) AND (_time=="+Infinity" OR _time<=info_max_time))
| stats latest(Created) as Created latest(Product) as Product values(Delivered) as Delivered last(Updated) as Updated by Code

   Code    Created              Product  Delivered  Updated
1  A89580  2020-11-02 15:56:20                      2020-11-02 20:47:20
2  A23780  2020-11-03 21:18:37                      2020-11-04 19:08:12
3  A23826  2020-11-03 21:20:58                      2020-11-06 21:21:35
4  A23900  2020-11-03 21:25:05                      2020-11-06 21:25:19

If I extend the time range, it shows all values for Product and the Delivered date. I changed last(Updated) to values(Updated) so I can see the whole time range for each Code. Just to clarify: the Product and Delivered fields are independent of one another; after a form is Delivered, Product can be filled in or left empty.
Code: A89580 | Created: 2020-11-02 15:56:20 | Product: PPA89580 | Delivered: 2020-11-13 19:39:01
  Updated: 2020-11-02 15:56:24, 2020-11-02 19:21:34, 2020-11-02 20:47:20, 2020-11-10 13:13:06, 2020-11-13 19:39:01, 2020-11-14 20:01:49
Code: A23780 | Created: 2020-11-03 21:18:37 | Product: PPA23780 | Delivered: 2020-11-10 02:22:47
  Updated: 2020-11-03 21:18:51, 2020-11-04 19:08:12, 2020-11-07 19:08:18, 2020-11-10 02:19:48, 2020-11-10 02:22:47, 2020-11-11 03:00:36
Code: A23826 | Created: 2020-11-03 21:20:58 | Product: PPA23826 | Delivered: 2020-11-12 20:34:07
  Updated: 2020-11-03 21:20:58, 2020-11-03 21:21:28, 2020-11-06 21:21:35, 2020-11-09 21:21:37, 2020-11-12 17:56:48, 2020-11-12 17:58:36, 2020-11-12 20:34:07, 2020-11-13 21:01:04
Code: A23900 | Created: 2020-11-03 21:25:05 | Product: PPA23900 | Delivered: 2020-11-09 21:43:31
  Updated: 2020-11-03 21:25:15, 2020-11-06 21:25:19, 2020-11-09 13:07:25, 2020-11-09 13:09:33, 2020-11-09 21:43:31, 2020-11-10 22:03:09

Assistance with this will be greatly appreciated. Thank you.
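One approach, sketched below: search over a wider window so the later Product updates are visible, take the latest value per Code, and filter on the Created time afterwards. This is a sketch only; the hard-coded Nov 1-7 boundaries and the -30d search window are assumptions standing in for the picker values, since earliest/latest in the search string override the picker.

```spl
index=main earliest=-30d@d latest=now
| eval created_time=strptime(Created, "%Y-%m-%d %H:%M:%S")
| stats latest(Created) as Created latest(Product) as Product
        latest(Delivered) as Delivered latest(Updated) as Updated
        min(created_time) as created_time by Code
| where created_time >= strptime("2020-11-01", "%Y-%m-%d")
      AND created_time < strptime("2020-11-08", "%Y-%m-%d")
| fields Code Created Product Delivered Updated
```

The key idea is that the time filter moves after the stats, so events outside Nov 1-7 still contribute their Product values, while only Codes created inside the window survive.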
Hello! I'm trying to collect logs from Cisco ASA devices through a heavy forwarder: I'm sending all Cisco ASA logs to my HF instance and then forwarding them to the indexers. I want to parse these logs and send only VPN-event logs to the indexers. How can I filter them? Can I filter them using event types from the Cisco Add-on?
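Event types won't help here: they are evaluated at search time, while dropping data on a heavy forwarder happens at parse time. The usual pattern is a pair of transforms that first route everything to the null queue and then requeue matching events. A sketch follows; the sourcetype name and the VPN message-ID regex are assumptions you would adapt to your ASA data.

```ini
# props.conf on the heavy forwarder
[cisco:asa]
TRANSFORMS-vpnonly = asa_drop_all, asa_keep_vpn

# transforms.conf on the heavy forwarder
[asa_drop_all]
# drop every event by default
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[asa_keep_vpn]
# requeue events whose ASA message ID belongs to the VPN-related families
# (the ID list here is an assumption; check your ASA syslog message IDs)
REGEX = %ASA-\d-(713|716|722|737)\d{3}
DEST_KEY = queue
FORMAT = indexQueue
```

Transforms are applied in order, so the keep rule overrides the drop rule for matching events; everything else never reaches the indexers.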
Hello, I am working with historical log data from a train system and I have two types of log files: log1: each row is an event that was logged every time a train arrived at a station.  log2: each row is an event that was logged every time a train station sign displayed a message. The messages predicted how many minutes it will take for the next train to arrive. There are around 50 log2 events that correlate with each log1 event. I was able to group together all the log2 events with their corresponding log1 event into transactions. Here is the search I used to do this:    sourcetype="log1" OR sourcetype="log2" | transaction serial platform maxspan=30m   This returns transactions which contain around 50 log2 events and 1 log1 event. How do I create a calculated field for each log2 event that makes up this transaction? The eval expression for the calculated field includes data from the log1 event in the transaction. Here is how I tried to do this:   sourcetype="log1" OR sourcetype="log2" | transaction serial platform maxspan=30m | eval prediction_deviation = (arrival_date_time - (sign_date_time + next_min * 60))   "arrival_date_time" is a field from log1. "sign_date_time" and "next_min" are fields from log2. "prediction_deviation" is the calculated field which I am trying to add as a new column to all of the events from log2.  When I run this command, only five values for "prediction_deviation" are calculated. I found out that this field is only being calculated for the transactions which only have one log2 event. These situations are outliers and there is no field being calculated for the rest of the transactions.  The eval command is only working when there is only one value for "sign_date_time" and "next_min". However, in most of the transactions there are about 50 values for these fields (one value for each log2 event in the transaction).  How do I calculate the "prediction_deviation" for all of the log2 events in a transaction? 
The calculation of this field requires the "arrival_date_time" field for which there is only one value in each transaction. Thank you for your help.
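transaction collapses all ~50 log2 events into a single event with multivalue fields, which is why the eval only succeeds when sign_date_time and next_min happen to be single-valued. A sketch of an alternative, assuming serial and platform identify the group as in your transaction: skip transaction entirely and use eventstats to copy the single arrival_date_time from the log1 event onto every event in the group, then compute the deviation per log2 event.

```spl
sourcetype="log1" OR sourcetype="log2"
| eventstats latest(arrival_date_time) as arrival_date_time by serial platform
| where sourcetype="log2"
| eval prediction_deviation = arrival_date_time - (sign_date_time + next_min * 60)
```

One caveat: this does not replicate maxspan=30m, so if the same serial/platform pair repeats over time you would also need a time bucket (for example, bin _time span=30m before the eventstats) to keep the groups separate.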
source="main" service="sales" operation="inquiryV3" port="8443"

In these screenshots there's no change in the query at all. The query is very simple, something like:

field1="a" field2="b" field3="c"

All are fixed strings, no unusual syntax or variables at all; it is a completely basic field-based search. All searches start on Oct 29 12 AM and we're only interested in the Oct 29 12 AM bar:

the first search covers until EOD (23:59:59) Oct 29 and yields 703 events
the second covers until EOD Oct 30 and yields 752
the third, until EOD Oct 31, yields 580
and the last, EOD Nov 1, yields 642

How is this possible? What am I missing here? Thanks in advance
Hi, I am looking for recommendations on dealing with the following scenario: on one instance (one indexer), 300-400 GB of data per day in a single index. Is there a recommended configuration for such an index? So far I have come up with a few changes:
- increase maxTotalDataSizeMB beyond the 500 GB default to meet my retention requirement
- enable maxDataSize = auto_high_volume (hot buckets of 10 GB)
What I am considering is increasing the number of hot buckets, because with the default of 6 hot buckets that is only 60 GB of data, which is not even 24 hours' worth. Should I increase it? Or should I only increase the number of warm buckets? Or both? Are warm buckets also 10 GB each? If my disk capacity and performance allow it, can I keep only warm buckets for my max retention (30 days) and not use cold at all? Any advice or feedback on this type of scenario? Thanks /Fabien
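A sketch of the kind of indexes.conf stanza being discussed; the stanza name and sizes are placeholders, with 30-day retention assumed from the question:

```ini
[my_big_index]
homePath   = $SPLUNK_DB/my_big_index/db
coldPath   = $SPLUNK_DB/my_big_index/colddb
thawedPath = $SPLUNK_DB/my_big_index/thaweddb
# 10 GB buckets for high-volume indexes
maxDataSize = auto_high_volume
# allow more than the default 6 hot buckets so a full day fits in hot
maxHotBuckets = 10
# total index size, sized here as ~400 GB/day x 30 days (placeholder)
maxTotalDataSizeMB = 12000000
# 30-day retention
frozenTimePeriodInSecs = 2592000
```

On the hot/warm question: warm buckets are simply rolled hot buckets, so they are the same size; hot-bucket count mainly matters for how much data is being actively written, while warm-bucket count (maxWarmDBCount) controls when data rolls to cold.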
I'm sure this is a noob question, but here goes. I have used Splunk from the user perspective but am now dipping my toes in as an admin. I have Splunk 8.0.5: one cluster master, one search head, and two indexers hosting clustered indexes. I am logged into the UI of the search head and have the admin role, but I cannot do any of the following: view any of the clustered custom indexes, or view the licensing usage in the monitoring console. I'm presuming this is because I need to log on to the UI of the master node for this in a clustered configuration? But when I try to do that with my account (LDAP), it fails. Is authentication.conf per node in the cluster, or centralized?
I have set up the AMQP Messaging Modular Input to write to the amqp index, but it's not working. It's reading from Rabbit, and the messages are removed. In the search console I see the messages in the _internal index but not in the amqp index. When I read the actual messages, they show index=_internal. I am using a local install of Splunk. I also set up the REST API modular input and it works as expected. In the Splunk UI I see the index is set to amqp, and here is my config:

[amqp://RabbitMQ]
ack_messages = 1
activation_key = ****
exchange_name = event_bus
hec_batch_mode = 0
hec_endpoint = raw
hec_https = 0
hostname = localhost
index_message_envelope = 0
index_message_propertys = 0
log_level = TRACE
output_type = stdout
password = guest
port = 5672
queue_name = LogsAMQP
sourcetype = _json
use_ssl = 0
username = guest
index = amqp
routing_key_pattern =
virtual_host =
disabled = 1
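Two observations about the stanza as posted, offered only as things to check: disabled = 1 turns the input off entirely, and log_level = TRACE can cause the modular input to write the messages it processes into its own logs, which land in _internal and can look as if the payload was indexed there. A minimal corrected fragment might be:

```ini
[amqp://RabbitMQ]
# the input must be enabled for events to reach the amqp index
disabled = 0
index = amqp
# TRACE echoes processed messages into splunkd's own logs (_internal);
# lower it once ingestion is confirmed working
log_level = INFO
```

The remaining settings from the original stanza would stay as they are.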
Hi, I've been using Splunk for two months and I love it. But I need help. I have a lot of sensors, sampling once per minute. I have a lookup where I can fill in formulas in text format, like 'if(sensor1>200, 0, sensor1/10)'. Over the period concerned, I want to apply the formula for each sensor every minute. Here is an example (row* are the values):

id_sensor  row 1   row 2   row 3   row 4   formula
ID1        77.2    250     77.5    79.4    =if(value<200, value/10, false)
ID2        227.29  227.18  226.59  227.1   =value/10
ID3        34.1    34.8    35.9    36.1    =(value*9/5)+32

And the results needed:

id_sensor  row 1   row 2   row 3   row 4
ID1        7.72    0       7.75    7.94
ID2        22.729  22.718  22.659  22.71
ID3        93.38   94.64   96.62   96.98

I can have a lot of rows and columns... I saw quite a few "close" answers on this forum, but none that I managed to apply. In particular, I wish to avoid going through a subsearch, which would limit the number of exploitable results. Thank you very much in advance.
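SPL's eval cannot execute an expression stored as a string, so the usual workaround is to enumerate the known formulas in a case() expression keyed on the lookup value. A sketch follows; the index name, lookup name, and the value field are assumptions, and each formula string must be listed explicitly:

```spl
index=sensors
| lookup sensor_formulas id_sensor OUTPUT formula
| eval converted = case(
    formula=="=if(value<200,value/10,false)", if(value<200, value/10, 0),
    formula=="=value/10",                     value/10,
    formula=="=(value*9/5)+32",               (value*9/5)+32
  )
```

This avoids subsearches entirely; the trade-off is that adding a new formula to the lookup also means adding a branch to the case().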
I have an automatic lookup configured for a particular sourcetype. The events that have this sourcetype are stored in a single index. When I search for these events, the automatic lookup seems to work, in that it outputs the fields I would expect. However, when I search more broadly, the automatic lookup does not output the fields. For example, the search below:

index=index1 (other criteria) | table _time, output_field1, output_field_2, ... output_fieldN

produces the "output_field*" fields. However, if I run a search like the one below:

index IN (index1, index2, index3) (other criteria) | table _time, output_field1, output_field_2, ... output_fieldN

the "output_field*" fields are not always produced (in a very small number of instances, a single record will have the fields). I have absolutely no idea why this is the case. For reference, we're running Enterprise Security, and the automatic lookup I have configured for that sourcetype is a lookup against the asset_lookup_by_str KV store. If anyone knows where to look to help figure this out, let me know.
Hey guys, according to the Splunk documentation for compref_searchbar, the properties of the internal time range created by the search bar are configurable via timerange_* (see URLs below). However, when I try to set the dialogOptions property via timerange_dialogOptions (using the mypresetsettings example dictionary given in the documentation for compref_timerange), I get a JS exception within my SimpleXML dashboard:

common.js:1114 Uncaught TypeError: str.replace is not a function
    at Object.replaceTokens (common.js:1114)
    at Object.computeValue (common.js:1114)
    at child._pullPropertyValue (common.js:1114)
    at child._setBinding (common.js:1114)
    at common.js:1114
    at Function._.each._.forEach (common.js:1114)
    at child._updateBindingsForProperties (common.js:1114)
    at child.<anonymous> (common.js:1114)
    at triggerEvents (common.js:725)
    at child.trigger (common.js:725)
    at configure (dashboard.js:1178)
    at initialize (dashboard.js:1178)
    at Backbone.View (dashboard.js:669)
    at constructor (dashboard.js:1178)
    at child (dashboard.js:669)
    at _createTimeRange (dashboard.js:1178)

The basic example given in the WebFramework documentation works fine otherwise. But once I try to limit the time range picker, it fails. Can anybody tell me what (or whether) I'm doing something wrong? I've been trying Splunk 8.0.5 and 8.1.0 in the latest version of Google Chrome, with the same result. The script is 1:1 identical to the documentation, except for the timerange_* properties set. I also tried defining it via the options and settings before rendering the search bar, but with no visible effect; probably because properties like dialogOptions or presets are only evaluated during the initialization phase, making subsequent changes useless.

EDIT from 17-NOV-2020: After further investigation, I believe this is a bug, eventually happening in the method _updateBindingsForProperties(): for some reason, Splunk wants to replace tokens in the TimeRangeView properties when they are created by the SearchBarView.
But once the passed property is an object or array (e.g. try passing timerange_foo: {} or timerange_foo: []), str.replace() will fail. Unfortunately, I may not file a bug report myself; I would have to task a customer with this. But is any developer reading here who can confirm my observation?

Dashboard: demo.xml

<form script="demo.js">
  <label>demo</label>
  <row>
    <panel>
      <html>
        <div id="mysearchbarview"></div>
      </html>
      <table>
        <search base="example-search">
          <query>| search *</query>
        </search>
      </table>
    </panel>
  </row>
</form>

Script: demo.js

require([
    "splunkjs/mvc/searchmanager",
    "splunkjs/mvc/searchbarview",
    "splunkjs/mvc/simplexml/ready!"
], function(SearchManager, SearchBarView) {

    // Create the search manager
    var mysearch = new SearchManager({
        id: "example-search",
        status_buckets: 300,
        required_field_list: "*",
        preview: true,
        cache: true,
        autostart: false, // Prevent the search from running automatically
        search: "index=_internal | head 500"
    });

    // Create the searchbar
    var mysearchbar = new SearchBarView({
        id: "example-searchbar",
        managerid: "example-search",
        timerange_earliest_time: "-24h@h",
        timerange_latest_time: "now",
        timerange_dialogOptions: {
            showPresets: false,
            showCustomRealTime: false,
            showCustomAdvanced: false
        },
        el: $("#mysearchbarview")
    }).render();

    // Listen for changes to the search query portion of the searchbar
    mysearchbar.on("change", function() {
        // Reset the search query (allows the search to run again,
        // even when the query is unchanged)
        mysearch.settings.unset("search");
        // Update the search query
        mysearch.settings.set("search", mysearchbar.val());
        // Run the search (because autostart=false)
        mysearch.startSearch();
    });

    // Listen for changes to the built-in timerange portion of the searchbar
    mysearchbar.timerange.on("change", function() {
        // Update the time range of the search
        mysearch.settings.set(mysearchbar.timerange.val());
        // Run the search (because autostart=false)
        mysearch.startSearch();
    });
});
https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_searchbar.html https://docs.splunk.com/DocumentationStatic/WebFramework/1.5/compref_timerange.html Thanks
Hi Splunk experts, my events have a timeline that tells me how long certain operations took. What I'm trying to determine is how frequently "item_B" has a longer duration than "item_C". The array is not guaranteed to have the same order every time, so I need to access each object in the array by the "label" field. Any suggestions?

"timeline": [
  { "label": "item_A", "duration": 1 },
  { "label": "item_B", "duration": 955 },
  { "label": "item_C", "duration": 0 },
  { "label": "item_D", "duration": 55 }
]
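When Splunk extracts timeline{}.label and timeline{}.duration, they come out as parallel multivalue fields whose positions stay aligned. One approach, sketched below (it assumes automatic JSON extraction or an explicit spath has produced those fields, and that my_index is a placeholder): use mvfind to locate a label's position and mvindex to read the matching duration.

```spl
index=my_index
| eval b_dur = mvindex('timeline{}.duration', mvfind('timeline{}.label', "item_B"))
| eval c_dur = mvindex('timeline{}.duration', mvfind('timeline{}.label', "item_C"))
| eval b_slower = if(b_dur > c_dur, 1, 0)
| stats sum(b_slower) as b_slower_count count as total_events
```

The single quotes around the field names are required because of the {} and . characters; mvfind returns null if the label is absent, which if() then treats as false.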
Hi! I'm trying to continue to tune our Splunk search heads (we currently have 15!). I'm noticing a few odd behaviors and wanted to see what I could do.
1) Over time, the number of PendingDiscard messages keeps piling up.
2) Our search head CPU and memory usage is relatively low to medium.
3) Over time, our concurrency counts keep growing (I need to lower my limits, which may also help here).
4) On our search heads, executor_workers is set to the default of 10:

executor_workers = <positive integer>
* Only valid if 'mode=master' or 'mode=slave'.
* Number of threads that can be used by the clustering thread pool.
* A value of 0 defaults to 1.
* Default: 10

Which leads to my question. I've seen a couple of Answers posts about setting this on the search head, but the docs for 7.2.10 look to me like it's designed more for the indexers. Would adding more executor workers allow more searches to run? What I'm seeing is no more than 15-25 searches running on my 16-core (32 vCPU) box, even though my limits are much higher than that, and the box isn't even breathing hard. Can someone offer some advice here? Thanks! Stephen
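Note that executor_workers sizes the clustering thread pool, not search concurrency. Per-search-head search concurrency is governed by limits.conf settings such as the ones sketched below; the values shown are the shipped defaults, not recommendations:

```ini
[search]
# per-search-head concurrency ceiling is roughly
# base_max_searches + (max_searches_per_cpu * number_of_cpus)
base_max_searches = 6
max_searches_per_cpu = 1
# scheduled searches are capped at a percentage of that total
max_searches_perc = 50
```

With 32 vCPUs and the defaults, that formula lands near the 15-38 range, which would be consistent with the 15-25 concurrent searches being observed despite low CPU usage.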
Hello Splunkers, I have the Splunk App for Windows Infrastructure installed and have done the setup, but when I get to the "customize features" section it can't find the AD data it is looking for. My client universal forwarders are calling home and sending data. It seems as if my indexes are not parsing the data. Of the indexes msad, perfmon, windows, and wineventlog, only perfmon and wineventlog are showing in the Splunk App for Windows Infrastructure, and the data is only for the server where Splunk resides. Thanks in advance for any help.

My setup has the deployment server and the search head on the same Splunk instance.
Splunk version: 8.1.0
Splunk App for Windows Infrastructure v2.0.1
Splunk Supporting Add-on for Microsoft Windows v7.0
Splunk Supporting Add-on for Microsoft Windows Active Directory v3.0.1

Here is the output of the "detect features" button:

Detecting Event Monitoring ... Windows: Event Monitoring found.
Detecting Performance Monitoring ... Windows: Performance Monitoring found.
Detecting Applications and Updates ... Windows: Applications and Updates found.
Detecting Network Monitoring ... Windows: Network Monitoring not found.
Detecting Print Monitoring ... Windows: Print Monitoring not found.
Detecting Host Monitoring ... Windows: Host Monitoring not found.
Detecting Domains ... Active Directory: Domains not found.
Detecting Domain Controllers ... Active Directory: Domain Controllers not found.
Detecting DNS ... Active Directory: DNS not found.
Detecting Users ... Active Directory: Users not found.
Detecting Computers ... Active Directory: Computers not found.
Detecting Groups ... Active Directory: Groups not found.
Detecting Group Policy ... Active Directory: Group Policy found.
Detecting Organizational Units ... Active Directory: Organizational Units found.
Hi guys! I researched some things in the forum and found posts using an add-on for GuardDuty that has been discontinued. I installed the Splunk Add-on for AWS, but there is no input for this service in the options. Does anyone know the steps to configure this on the AWS and Splunk forwarder side? Thanks
We recently upgraded from Splunk Enterprise 6.1.4 to 8.0.5. We collect quite a few Windows performance counters. I see that time series (metrics) indexes are available as per this. Being a Splunk part-timer, I'd assume it's a no-brainer to start using these for performance counter collection? It seems like that's the entire reason they were brought into the product. Do they save on license count?

PS: Obviously I might have to modify a few dashboards and queries, but I'm OK with that.
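For reference on the query-migration side: metrics indexes are searched with mstats rather than regular event searches, so the dashboard changes would look roughly like the sketch below. The index and metric names here are hypothetical placeholders for whatever the Windows perfmon inputs produce in your environment.

```spl
| mstats avg(_value) WHERE index=win_metrics metric_name="Processor.%_Processor_Time" span=5m BY host
```

This replaces the timechart-over-events pattern typically used against event-based perfmon data.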
In the total_error count, I want to also count events whose logs contain strings like "exception", "failed", or "error" (case-insensitive if possible), in addition to the level=ERROR condition.

index=myIndex sourcetype=mySourceType
| timechart span=1h count as total_logs
    count(eval(level="INFO")) as total_info
    count(eval(level="WARN")) as total_warn
    count(eval(level="ERROR")) as total_error

I added the search criteria like this, but it did not work:

count(eval(level="ERROR" OR ("Failed" OR "Exception" OR "Fatal")))

The condition I want is: where level="ERROR" OR (log like '%failed%' or log like '%Exception%'), and case should not matter. I need your expert advice.
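Bare strings are not valid boolean terms inside eval, which is why the attempted condition fails; the string test needs an explicit function. A sketch using match() with an inline (?i) flag for case-insensitivity; it assumes the text to test is in _raw, so swap in your log field if it is extracted separately:

```spl
index=myIndex sourcetype=mySourceType
| timechart span=1h count as total_logs
    count(eval(level="INFO")) as total_info
    count(eval(level="WARN")) as total_warn
    count(eval(level="ERROR" OR match(_raw, "(?i)(failed|exception|fatal|error)"))) as total_error
```

like() would also work for single patterns, but match() handles the alternation and the case-insensitive flag in one expression.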
Hi everyone, I'm new to Splunk and trying to create a simple report, but I'm already having trouble. I would like to do a search on a DATA_ACA field that contains dates in this format: 2020-11-13 15:10:23. The search must return all events whose DATA_ACA field is in the previous month, i.e. all events matching 2020-10-*. I tried:

index=........
| eval month_aca = strptime(relative_time(now(), "-1mon@d"), "%m")
| eval year_aca = strptime(relative_time(now(), "-1mon@d"), "%Y")
| eval data_aca = year_aca . "-" . month_aca . "-*"
| search DATA_ACA = data_aca
| table DATA_ACA, month_aca, year_aca, data_aca

but it returns no events. Can you help me? Do you have any suggestions? Thanks, bye, Antonio
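Two issues stand out in attempts like the one above: strptime parses a string into epoch time, while strftime is the function that formats an epoch time with "%m" or "%Y"; and `search DATA_ACA = data_aca` compares the field against the literal string "data_aca" rather than the other field's value. A sketch of a working version (your_index is a placeholder, and DATA_ACA is assumed to always be in the %Y-%m-%d %H:%M:%S format shown):

```spl
index=your_index
| eval month_prev = strftime(relative_time(now(), "-1mon@mon"), "%Y-%m")
| where strftime(strptime(DATA_ACA, "%Y-%m-%d %H:%M:%S"), "%Y-%m") == month_prev
| table DATA_ACA
```

Comparing two fields requires where (or eval) rather than search, since search treats the right-hand side as a literal.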
I want to create an accelerated data model. For that, I have created a base search which uses a join command. However, I am getting the error below while accelerating the DM:

Acceleration Warning
You can only accelerate data models that include at least one event-based dataset or one search-based dataset that does not use reporting commands.

How can I use a join command in the base search?
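Accelerated data models require a base search without reporting commands, and join falls into that category, so the usual route is to remove the join rather than work around the warning. One common pattern is to replace the join with a lookup, since lookups are allowed in accelerated base searches; the second dataset can be exported to a lookup or KV store on a schedule. A sketch, with all index, lookup, and field names hypothetical:

```spl
index=orders sourcetype=order_events
| lookup shipment_info order_id OUTPUT carrier shipment_status
```

The scheduled export that maintains shipment_info might be a saved search ending in `| outputlookup shipment_info`.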
Hello everyone, I am planning to upgrade my all-in-one Splunk instance from version 7.2.4 to 8.1. According to the upgrade documentation, I am able to perform this upgrade directly. However, I have a small question. I am using a deployment server and collecting logs from universal forwarders only. According to the documentation, I do not have to stop my indexer during the upgrade, so I will not lose any logs while it runs. Following the documentation, the upgrade sounds very easy, but we never know what can happen. My all-in-one Splunk is installed on a virtual machine, and I will take a snapshot beforehand so I can roll back if any problem happens during the upgrade. During the upgrade my UFs will keep sending logs to my indexer, but if I roll back, every log that my UFs sent to the indexer in that window will be lost. What can I do to prevent this loss of logs? Thank you for your replies.
Hi, I have some syslog events, login failed and login success in particular. I can determine whether an event is a success or a failure by a field (field1) which contains something like "success" or "failure". In the event I also have a field mac_address (field2) which contains a MAC address. I need to count the number of MAC addresses that exist in failure events but never exist in success events. Can you help me? Thanks in advance
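A sketch of one way to do this (it assumes field1 takes exactly the values "success" and "failure", and that my_syslog is a placeholder for your index): count both outcomes per MAC address, keep the MACs with failures and no successes, then count the survivors.

```spl
index=my_syslog
| stats count(eval(field1="success")) as successes
        count(eval(field1="failure")) as failures by field2
| where failures > 0 AND successes == 0
| stats count as macs_failing_only
```

The first stats produces one row per MAC with both tallies, so the where clause can express "never succeeded" directly.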