All Topics

Hi Team, could you please shed some light here? We are receiving the error "Schema validation failed, unexpected property truncate" while editing the HEC token to change the existing sourcetype to a new one. Appreciate any help here. TIA
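In case it helps while the UI edit fails: on Splunk Enterprise, an HEC token's sourcetype can also be changed through the management REST API. A minimal sketch (the host, credentials, and token name are placeholders):

curl -k -u admin:changeme \
    https://localhost:8089/services/data/inputs/http/my_hec_token \
    -d sourcetype=my_new_sourcetype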
Hi, I am trying to record the earliest connection and the latest connection for IP addresses. However, when I use both earliest(_time) and latest(_time), my latest(_time) value seems to overwrite my earliest(_time) value. Any reason why this is happening, and a potential fix? My code is as follows:

| datamodel Network_Traffic All_Traffic search
| search All_Traffic.src_ip="172.18.*" OR All_Traffic.src_ip="172.19.*" OR All_Traffic.src_ip="172.20.*"
| dedup All_Traffic.src_ip
| eventstats earliest(_time) AS Earliest, latest(_time) AS Latest by All_Traffic.src_ip
| eval Earliest = strftime(Earliest, "%m/%d/%Y:%H:%M:%S")
| eval Latest = strftime(Latest, "%m/%d/%Y:%H:%M:%S")
| table All_Traffic.src_ip Earliest Latest
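Note that dedup before eventstats leaves only one event per src_ip, so earliest(_time) and latest(_time) are computed over that single event and come out identical. A minimal alternative sketch (assuming the Network_Traffic data model is accelerated; the subnets and field names are taken from the question):

| tstats earliest(_time) AS Earliest latest(_time) AS Latest from datamodel=Network_Traffic.All_Traffic where (All_Traffic.src_ip="172.18.*" OR All_Traffic.src_ip="172.19.*" OR All_Traffic.src_ip="172.20.*") by All_Traffic.src_ip
| eval Earliest = strftime(Earliest, "%m/%d/%Y:%H:%M:%S"), Latest = strftime(Latest, "%m/%d/%Y:%H:%M:%S")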
In a single value element, the text size always adjusted dynamically to the element's height. E.g., if you use the mouse to increase the single value element's height, the text becomes larger. However, once you add the attribute version="1.1" to the topmost dashboard or form element, the text size is fixed and no longer adjusts to changing heights.

Reproduction:
1. In Splunk's web UI, create a new "classic" dashboard.
2. Add a single value visualization with the following search: index=_internal | timechart count
3. Use the mouse to make the panel higher.
4. Notice how the single value element's font size adjusted to the higher panel (it increased dynamically).
5. Save the dashboard.
6. Edit the dashboard's source: change <dashboard> to <dashboard version="1.1">.
7. Save the dashboard.
8. Refresh the page (press F5). The single value element's font size is now smaller and it does not adjust to changed heights.
9. Revert the changes in the source, save the dashboard, and refresh the page. The font size adjusts correctly again.

A screen recording is available here: https://www.youtube.com/watch?v=M_btqoyDg2I

My test environment: Splunk Enterprise 8.2.1, Chrome 92.

Update 2021-09-23: Still unfixed in Splunk Enterprise 8.2.2.1.
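For reference, a minimal dashboard source that should reproduce the behavior (a sketch; the label text is arbitrary):

<dashboard version="1.1">
  <label>Single value font size test</label>
  <row>
    <panel>
      <single>
        <search>
          <query>index=_internal | timechart count</query>
        </search>
      </single>
    </panel>
  </row>
</dashboard>

Removing the version="1.1" attribute restores the dynamic font sizing.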
Hi, is it possible to index the event only once? Thank you in advance.
I am extracting a list of free-text strings from _raw and creating a new field. The list of terms comes from user input, on the search input of a dashboard. I can't seem to find how to place the token/variable in the regex... probably something easy I'm missing.

$token$="test|google|domain|badguy" (formatted this way so regex can see it as a pipe-separated OR list)

rex field=_raw " (?<extractedfieldname>$token$)" (does not work)

Is there a way to do this? If not a token option, can I:

eval tokenname=$token$
rex field=_raw " (?<extractedfieldname>'tokenname')" (does not work)

After the token/variable is placed correctly, this is the search format I'm looking for:

rex field=_raw " (?<extractedfieldname>test|google|domain|badguy)" (this does work)

Thanks for any help!
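For what it's worth, Simple XML substitutes tokens as literal text before the search runs, so once the closing quote is in place a form like this should work (a minimal sketch, assuming a dashboard text input sets $token$ to the pipe-separated list):

| rex field=_raw " (?<extractedfieldname>$token$)"
| table extractedfieldname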
We are testing routing logs from an e-mail security product we use to a SIEM environment. In this context, we carried out tests using free or community versions of different SIEM products. The logs transmitted to Splunk were sent encrypted with TLS, as they were when transmitted to the other products. However, the logs we see in Splunk cannot be decrypted and arrive as shown below.

Example output:
\x00 \x00 \xFC m\xDF qs\x81\xF2^8g&&\xB3B\xDF\xF9\xD5

I checked the config files in Splunk and it already supported TLS. How can I fix that issue?
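Raw bytes like this usually mean the sender is speaking TLS to a port on which Splunk expects plaintext, so the ciphertext itself gets indexed. A minimal sketch of a TLS-enabled receiving input (the port, sourcetype, and certificate path are assumptions):

# inputs.conf on the receiving Splunk instance
[tcp-ssl:6514]
sourcetype = mail_security

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
sslPassword = <certificate key password>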
I am getting multiples of the same errors, plus the same saved searches being skipped, so I cannot find exactly how many times an app may have been installed without using the "upgrade" option. Please advise. Thank you very much in advance.
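If it helps, skipped scheduled searches are logged by the scheduler, so a sketch like this against the internal logs can show how often each search is skipped, per app, and why:

index=_internal sourcetype=scheduler status=skipped
| stats count by app, savedsearch_name, reason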
I know saved searches depend on KV stores, and the KV stores are updated once a day. To sync the times the saved searches run with when the KV stores update, how do I find the timing of the KV store updates? Any SPL, or using the GUI?
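If the KV store collections are populated by scheduled searches ending in outputlookup, one sketch (assuming you have permission to run REST searches) is to list those searches and their schedules:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search qualifiedSearch="*outputlookup*"
| table title, eai:acl.app, cron_schedule, next_scheduled_time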
Hello! Sample data:

Vehicle  Hour of Day  count  delta(count)
car1     11           5      --
car1     12           0      -5
car1     13           3      3
car2     11           9      6
car2     12           5      -4
car3     11           5      0
car3     12           5      0
car3     13           0      -5
car3     14           2      2

Please notice how delta(count) is calculated even when going from a car1 row to a car2 row, or from car2 to car3. I want to alert when delta(count) is greater than 5, but only if the delta is calculated between two consecutive rows for the same car. That is, if the rows switch from car1 to car2 and the delta is greater than 5 (like the 6 in the table), I want to ignore that change, and only show rows with deltas greater than 5 that were calculated within the same car, not between different cars. Is there a way to do this? I tried using streamstats/eventstats with the last() function but I'm not sure I am using it correctly. For the end product, I need an alert that fires when a car has an increase in its count of more than 5. Thank you so much for any help!
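A minimal sketch with streamstats (assuming the events are sorted by Vehicle and Hour of Day, as in the sample): the by clause restarts the running window for each vehicle, so a delta is never computed across different cars.

| streamstats current=f window=1 last(count) AS prev_count by Vehicle
| eval delta = count - prev_count
| where delta > 5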
Hi, I want to monitor the subnet 172.30.0.0/24 through Splunk: which IP addresses are used and which are not. Whenever a new IP address comes live or is assigned to any host, a new alert should be raised. Thanks
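One common pattern (a sketch; the index name and the known_ips lookup are assumptions you would create and maintain) is to compare the addresses seen in the search window against a lookup of previously seen ones and alert on anything new:

index=network src_ip="172.30.0.*"
| stats latest(_time) AS last_seen by src_ip
| lookup known_ips src_ip OUTPUT src_ip AS already_known
| where isnull(already_known)
| fields - already_known
| outputlookup append=t known_ips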
I have a use case to send data from Splunk to SNOW (ServiceNow). I noticed there are a bunch of scripts available in the ServiceNow add-on. Has anyone tried this? Please let me know your thoughts.
Hopefully, someone could guide me through this process. Newbie here, so please bear with me on these lame questions. All I want to do is point my routers' logs to a syslog server. Can I implement this using Splunk without a good background in programming? I know IP routing but not much coding. I can configure all routers to point to the syslog server. What are the things I need to consider, from hardware to installation of Splunk? Which Splunk subscription do I need if I only want to implement a syslog server? Is there a step-by-step tutorial on how to implement this from a beginner's point of view? Thank you so much, and I appreciate anyone who could help me. God bless and keep safe, guys!
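No programming is required for the basic case; Splunk can listen for syslog directly. A minimal sketch (the sourcetype is an assumption; in production, a dedicated syslog receiver writing to files that a universal forwarder reads is often preferred):

# inputs.conf
[udp://514]
sourcetype = syslog
connection_host = ip

Note that binding to ports below 1024 requires Splunk to run with elevated privileges.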
Hello, I am new to Splunk ES. To configure ES Incident Review, I use the default setting for the time, which should match the event time format (screenshot: event time format setting). However, my Incident Review time shows a different format. Where should I change it?
Hi Experts, I'm stuck trying to show two queries on the same chart. The result sets should be pretty similar (so no issue with the axis), but it seems to show either the 1000 search or the 1001 search, never both together. I'm probably not adding them as independent queries correctly to show on the same graph! Both work on their own without any issues, but when combined it doesn't work.

Example:

index=someindex "1000" sourcetype="somesourcetype-logs" [search index=someindex "1001" sourcetype="somesourcetype"]
| `splitl`
| rex "1000=(?<NewRequest>.*?);"
| rex "1001(?<RejectedRequest>.*?);"
| rex field=source "somesource(?<instance>.*?)\b"
| dedup NewRequest,RejectedRequest
| where isnotnull(NewRequest)
| where isnotnull(RejectedRequest)
| timechart count(NewRequest), count(RejectedRequest) by instance

Any help would be great.
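A subsearch in that position filters the outer search rather than merging the two result sets, which would explain seeing only one series. One alternative sketch (assuming both markers live in the same index, and that the 1001 events also use an = after the marker; adjust the rex patterns to your data):

index=someindex sourcetype="somesourcetype*" ("1000" OR "1001")
| rex "1000=(?<NewRequest>.*?);"
| rex "1001=(?<RejectedRequest>.*?);"
| timechart count(NewRequest) AS NewRequests count(RejectedRequest) AS RejectedRequests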
We have several remote and traveling systems from which we need to forward logs to our on-prem Splunk environment. Splunk Cloud is not an option. Are there any best practices for system config or architecture? Is it possible to use a reverse proxy for inbound connections to the deployment server? Should the reverse proxy have a Splunk UF, or should an intermediate HF be used to forward to the indexer tier?
In the new search window (image attached) there are two columns, "Time" and "Event". How can I automatically (without writing it in the query each time) change the Time column to display another field from the event, or add one more column?
Hi, I recently started working at a new firm to monitor and manage Splunk for them. The issue I'm encountering is that I want to have a thorough understanding of their deployment, so I'm trying to see where some of their DBX inputs are being used. To avoid confusion as to what I'm trying to do, let me give an example. Let's say I have an input in DB Connect (we'll call it Input_A); the data ingested via Input_A is used by an unknown number of alerts, an unknown number of dashboards, and an unknown number of reports. Is there some way that I can find out how many alerts/dashboards/reports etc. use the data originating from Input_A, as well as the names of those alerts/dashboards/reports? I'm still relatively inexperienced, so perhaps my question has a simple solution that I'm just not seeing (I'm hoping the solution is more efficient than looking at the hundreds of alerts/dashboards/reports we have one by one).

Thank you!
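One sketch that may help, assuming the data from Input_A lands in a dedicated index or sourcetype (idx_input_a below is a placeholder): search the saved-search definitions for references to it via REST.

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*idx_input_a*"
| table title, eai:acl.app, eai:acl.owner

The same idea works for dashboards via | rest /servicesNS/-/-/data/ui/views, filtering the eai:data field (the dashboard XML) for the same string.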
Hi, below is the JS I have for table row expansion, which works fine for my requirement. I want the search given in the JS to be passed in from my Splunk dashboard instead of being hardcoded, because I don't want to update the JS whenever I add or delete anything in my search.

This is the search in the JS that I want passed through from Splunk dynamically instead of hardcoded:

this._searchManager.set({ search: 'index=idx_omn source=vs.csv | eval key=process+" "+oprocess_start_date | search key="' + processCell.value + '" | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list | table process,oprocess_start_date,oprocess_end_date,file_number_list,otrans,otpm,oelapsed_mins,total_count,local_currency,local_swift,local_amt,eur_local' });

XML:

<dashboard script="custom_table_row_expansion1.js">
  <label>Visa row expansion</label>
  <row>
    <panel>
      <table id="expand_with_events">
        <search>
          <query>index="idx_omn" source="vs.csv" | eval key=process+" "+oprocess_start_date | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins sum(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift sum(local_amt) as local_amt sum(eur_local) as eur_local BY key | table process oprocess_start_date oprocess_end_date otrans otpm oelapsed_mins total_count local_currency local_swift local_amt eur_local key</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

JS:

require([
    'splunkjs/mvc/tableview',
    'splunkjs/mvc/chartview',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc',
    'underscore',
    'splunkjs/mvc/simplexml/ready!'
], function(TableView, ChartView, SearchManager, mvc, _) {
    var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({
        initialize: function(args) {
            // initialize will run once, so we set up a search and a table to be reused.
            this._searchManager = new SearchManager({
                id: 'details-search-manager',
                preview: false
            });
            //this._chartView = new ChartView({
            //    managerid: 'details-search-manager',
            //    'charting.legend.placement': 'none'
            //});
            this._TableView = new TableView({
                id: 'TestTable',
                managerid: 'details-search-manager',
                drilldown: 'cell'
            });
        },
        canRender: function(rowData) {
            // Since more than one row expansion renderer can be registered, we let each
            // decide if it can handle that data. Here we always handle it.
            return true;
        },
        render: function($container, rowData) {
            // rowData contains information about the expanded row; we can see the cells, fields, and values.
            // Find the key (process and start date) cell to use its value.
            var processCell = _(rowData.cells).find(function(cell) {
                return cell.field === 'key';
            });
            // Update the search with the key that we are interested in.
            this._searchManager.set({ search: 'index=idx_omn source=vs.csv | eval key=process+" "+oprocess_start_date | search key="' + processCell.value + '" | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list | table process,oprocess_start_date,oprocess_end_date,file_number_list,otrans,otpm,oelapsed_mins,total_count,local_currency,local_swift,local_amt,eur_local' });
            // $container is the jQuery object where we can put our content.
            // In this case we render our table and add it to the $container.
            //$container.append(this._chartView.render().el);
            $container.append(this._TableView.render().el);
        }
    });
    var tableElement = mvc.Components.getInstance("expand_with_events");
    tableElement.getVisualization(function(tableView) {
        // Add the custom row expansion renderer; the table will re-render automatically.
        tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer());
    });
});

I did try creating a field called "query" which holds the query for the row expansion and passing it to the JS below, but it didn't return any result. Please ignore this approach if there is a better solution available.

XML for passing the entire query in a column called query:

<dashboard script="custom_table_row_expansion2.js">
  <label>Visa row expansion Clone</label>
  <row>
    <panel>
      <table id="expand_with_events">
        <search>
          <query>index="idx_omn" source="vs.csv" | eval key=process+" "+oprocess_start_date | eval query1 = "idx_omn source=\"visabase.csv\" | search key =\"" | eval query2 = "\"| stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list" | eval query = query1+key+query2 | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins sum(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift sum(local_amt) as local_amt sum(eur_local) as eur_local values(query) as query BY key | table process oprocess_start_date oprocess_end_date otrans otpm oelapsed_mins total_count local_currency local_swift local_amt eur_local key query</query>
          <earliest>0</earliest>
          <latest></latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">20</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>

JS for the new try, which didn't return any result (identical to the JS above except for the render function):

render: function($container, rowData) {
    // rowData contains information about the expanded row; we can see the cells, fields, and values.
    // Find the query cell to use its value.
    var processCell = _(rowData.cells).find(function(cell) {
        return cell.field === 'query';
    });
    // Update the search with the query that we are interested in.
    this._searchManager.set({ search: 'index= "' + processCell.value + '" ' });
    $container.append(this._TableView.render().el);
},

Thanks,
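One way to avoid hardcoding (a sketch, untested against this dashboard; row_search is a hypothetical token you would define in the XML, with KEY_PLACEHOLDER standing in for the clicked key) is to keep the search template in the dashboard and read it from the JS:

// In the dashboard XML, define the template once:
// <init>
//   <set token="row_search">index=idx_omn source=vs.csv | eval key=process+" "+oprocess_start_date | search key="KEY_PLACEHOLDER" | stats ...</set>
// </init>

// In the JS render function, read the token model and substitute the clicked key:
var defaultTokenModel = mvc.Components.getInstance('default');
var template = defaultTokenModel.get('row_search');
this._searchManager.set({
    search: template.replace('KEY_PLACEHOLDER', processCell.value)
});

This keeps the SPL in the XML, so adding or removing fields no longer requires touching the JS.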
I have set up some numerical outlier detections in the MLTK on our ES search head. They are set up as alerts on our Enterprise Security search head. The numerical outlier alert looks back over a 2-hour time range, which creates 24 data points (120 mins / 5 min increments = 24 event data points).

The problem is that in Enterprise Security (since this is a notable event), we receive 24 separate notable events! We correctly receive only 1 email each time the alert fires, but 24 notable events in ES. I have the alert set to trigger "Once" rather than "Per Result".

Is there anything else I can do to make sure ES doesn't get slammed with 24 notables every time this alert fires?
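As far as I can tell, the notable adaptive response action creates one notable per result row, and the "Once" trigger setting only controls how the other actions fire. One option (a sketch; assumes the MLTK search flags outliers with an isOutlier field) is to collapse the outlier rows into a single summary row at the end of the alert search:

... | where isOutlier=1
| stats count AS outlier_count, min(_time) AS first_outlier, max(_time) AS last_outlier
| where outlier_count > 0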
The following error is captured in puppetserver.log (no error in splunkd.log):     [puppetserver] Puppet Could not send report to Splunk: execution expired ["org/jruby/ext/openssl/SSLSocket.java:215:in `connect'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:800:in `connect'", "org/jruby/ext/timeout/Timeout.java:115:in `timeout'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:800:in `connect'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:756:in `do_start'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:745:in `start'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1293:in `request'", "/etc/puppetlabs/code/environments/production/modules/splunk_hec/lib/puppet/util/splunk_hec.rb:57:in `submit_request'", "/etc/puppetlabs/code/environments/production/modules/splunk_hec/lib/puppet/reports/splunk_hec.rb:112:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:37:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:53:in `processors'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in `processors'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:30:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:14:in `save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/indirection.rb:285:in `save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:176:in `do_save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:48:in `call'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:65:in `override'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:306:in `override'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:47:in `call'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:82:in `process'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:81:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:60:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in `profile'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in `profile'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:58:in `process'", "file:/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/puppetserver-lib/puppet/server/master.rb:42:in `handleRequest'", "Puppet$$Server$$Master_576124986.gen:13:in `handleRequest'", "request_handler_core.clj:273:in `invoke'", "jruby_request.clj:46:in `invoke'", "jruby_request.clj:31:in `invoke'", "request_handler_service.clj:34:in `handle_request'", "request_handler.clj:3:in `invoke'", "request_handler.clj:3:in `invoke'", "core.clj:2515:in `invoke'", 
"core.clj:211:in `invoke'", "core.clj:45:in `invoke'", "core.clj:343:in `invoke'", "core.clj:51:in `invoke'", "ringutils.clj:83:in `invoke'", "master_core.clj:430:in `invoke'", "ring.clj:21:in `invoke'", "ring.clj:12:in `invoke'", "comidi.clj:249:in `invoke'", "jetty9_core.clj:424:in `invoke'", "normalized_uri_helpers.clj:80:in `invoke'"]     From the puppet server's shell, puppet apply --report=splunk_hec is able to send report with no error. (Puppet Inc's splunk_hec reporter is used by Puppet Report Viewer (Splunk base app 4413 ).  My environment is puppetserver 2.7.0; Splunk is 8.2.0.)