All Topics

I have a use case to send data from Splunk to ServiceNow (SNOW). I noticed there are a bunch of scripts available in the ServiceNow add-on. Has anyone tried this? Please let me know your thoughts.
Hopefully someone can guide me through this process. Newbie here, so please bear with me and these basic questions. All I want to do is point my routers' logs at a syslog server. Can I implement this using Splunk without a strong programming background? I know IP routing but not much about coding. I can configure all the routers to point at the syslog server. What do I need to consider, from hardware to installing Splunk? Which Splunk license do I need if I only want to implement a syslog server? Is there a step-by-step tutorial on implementing this from a beginner's point of view? Thank you so much, I appreciate any help. God bless and keep safe, guys!
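No programming is needed for the basic setup. A minimal sketch of the usual pattern, assuming a Linux host runs rsyslog (or syslog-ng) listening on UDP/TCP 514 and writing the router logs to files, with a Splunk universal forwarder monitoring those files (the paths and index name below are illustrative, not required values):

# inputs.conf on the syslog host's universal forwarder (sketch)
[monitor:///var/log/network/*.log]
index = network
sourcetype = syslog
disabled = false

The forwarder then sends the data on to a Splunk Enterprise indexer/search head; sizing mostly comes down to daily log volume (which drives the license size) and enough disk for your retention period.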
Hello, I am new to Splunk ES. When configuring ES Incident Review, I use the default Time setting, which should match the event time format (screenshot of the event time format attached). However, my Incident Review time shows a different format. Where should I change it?
Hi Experts, I'm stuck trying to show two queries on the same chart. The result sets should be pretty similar (so no issue with the axis), but it shows either the 1000 search or the 1001 search only, never both together. I'm probably not adding them as independent queries correctly to show on the same graph. Both work on their own without any issues, but when combined it doesn't work. Example:     index=someindex "1000" sourcetype="somesourcetype-logs" [search index=someindex "1001" sourcetype="somesourcetype"] | `splitl` | rex "1000=(?<NewRequest>.*?);" | rex "1001(?<RejectedRequest>.*?);" | rex field=source "somesource(?<instance>.*?)\b" | dedup NewRequest,RejectedRequest | where isnotnull(NewRequest) | where isnotnull(RejectedRequest) | timechart count(NewRequest), count(RejectedRequest) by instance   Any help would be great.
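For reference, the bracketed subsearch in that query becomes a filter on the outer search rather than a second series, which is likely why only one of the two shows up (and the second rex appears to be missing the = after 1001). A hedged sketch of one way to get both counts on a single chart, assuming both event types live in the same index/sourcetype (adjust the sourcetype filter if they differ) and that `splitl` is your existing macro:

index=someindex sourcetype="somesourcetype-logs" ("1000" OR "1001")
| `splitl`
| rex "1000=(?<NewRequest>.*?);"
| rex "1001=(?<RejectedRequest>.*?);"
| eval type=case(isnotnull(NewRequest), "NewRequest", isnotnull(RejectedRequest), "RejectedRequest")
| timechart count by type

If you also need the split by instance, one option is a combined series field, e.g. | eval series=instance.":".type | timechart count by series, since timechart only allows a single aggregation when a BY clause is used.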
We have several remote and traveling systems that we need to forward logs from to our on-prem Splunk environment. Splunk Cloud is not an option. Are there any best practices for system config or architecture? Is it possible to use a reverse proxy for inbound connections to the deployment server? Should the reverse proxy have a Splunk UF, or should an intermediate HF be used to forward to the indexer tier?
In the new search window (image attached) there are two columns, "Time" and "Event". How can I automatically (without writing it in the query each time) change the Time column to display another field from the event, or add one more column?
Hi, I recently started working at a new firm to monitor and manage Splunk for them. The issue I'm encountering is that I want to have a thorough understanding of their deployment, so I'm trying to see where some of their DBX inputs are being used. To avoid confusion about what I'm trying to do, let me give an example. Let's say I have an input in DB Connect (we'll call it Input_A); the data ingested via Input_A is used by an unknown number of alerts, an unknown number of dashboards and an unknown number of reports. Is there some way I can find out how many alerts/dashboards/reports etc. use the data originating from Input_A, as well as the names of those alerts/dashboards/reports? I'm still relatively inexperienced, so perhaps my question has a simple solution that I'm just not seeing (I'm hoping the solution is more efficient than looking at the hundreds of alerts/dashboards/reports we have one by one). Thank you!
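One hedged starting point is to search the knowledge objects themselves over REST for the index or sourcetype that Input_A writes to ("your_index_or_sourcetype" below is a placeholder for that value). Alerts and reports are both saved searches, so the first search covers both; the second looks inside dashboard XML:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search search="*your_index_or_sourcetype*"
| table title eai:acl.app eai:acl.owner search

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*your_index_or_sourcetype*"
| table label eai:acl.app eai:acl.owner

This is only a sketch: it will miss searches that reach the data indirectly (macros, lookups, data models), so those may need a separate pass.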
Hi, Below is the JS which I have for table row expansion which works fine for my requirement. I want the search given in the JS should be passed from my Splunk dashboard instead of hardcoding it here. This is because I don't want to update the JS whenever I want to add or delete anything in my search.  Below search in JS which I want to be passed through Splunk dynamically instead of hardcoding in JS: this._searchManager.set({ search: 'index=idx_omn source=vs.csv | eval key=process+" "+oprocess_start_date | search key="' + processCell.value + '" | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list| table process,oprocess_start_date,oprocess_end_date,file_number_list,otrans,otpm,oelapsed_mins,total_count,local_currency,local_swift,local_amt,eur_local '}); XML: <dashboard script="custom_table_row_expansion1.js"> <label>Visa row expansion</label> <row> <panel> <table id="expand_with_events"> <search> <query>index="idx_omn" source="vs.csv" | eval key=process+" "+oprocess_start_date | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins sum(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift sum(local_amt) as local_amt sum(eur_local) as eur_local BY key | table process oprocess_start_date oprocess_end_date otrans otpm oelapsed_mins total_count local_currency local_swift local_amt eur_local key</query> <earliest>0</earliest> <latest></latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> </dashboard> JS: require([ 'splunkjs/mvc/tableview', 'splunkjs/mvc/chartview', 'splunkjs/mvc/searchmanager', 'splunkjs/mvc', 'underscore', 'splunkjs/mvc/simplexml/ready!'],function( TableView, ChartView, SearchManager, mvc, _ ){ var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({ initialize: function(args) { // initialize will run once, so we will set up a search and a chart to be reused. this._searchManager = new SearchManager({ id: 'details-search-manager', preview: false }); //this._chartView = new ChartView({ // managerid: 'details-search-manager', // 'charting.legend.placement': 'none' //}); this._TableView = new TableView({ id: 'TestTable', managerid: 'details-search-manager', drilldown: 'cell' }); }, canRender: function(rowData) { // Since more than one row expansion renderer can be registered we let each decide if they can handle that // data // Here we will always handle it. return true; }, render: function($container, rowData) { // rowData contains information about the row that is expanded. 
We can see the cells, fields, and values // We will find the key(process and start date) cell to use its value var processCell = _(rowData.cells).find(function (cell) { return cell.field === 'key'; }); //update the search with the key that we are interested in this._searchManager.set({ search: 'index=idx_omn source=vs.csv | eval key=process+" "+oprocess_start_date | search key="' + processCell.value + '" | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list| table process,oprocess_start_date,oprocess_end_date,file_number_list,otrans,otpm,oelapsed_mins,total_count,local_currency,local_swift,local_amt,eur_local '}); // $container is the jquery object where we can put out content. // In this case we will render our chart and add it to the $container //$container.append(this._chartView.render().el); $container.append(this._TableView.render().el); } }); var tableElement = mvc.Components.getInstance("expand_with_events"); tableElement.getVisualization(function(tableView) { // Add custom cell renderer, the table will re-render automatically. tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer()); }); });   I did try by creating a field called "query" which has the query for the row expansion and passed it to the below JS but it didn't return any result. Please ignore this approach if there is any better solution available. XML for passing the entire query to column called query: <dashboard script="custom_table_row_expansion2.js"> <label>Visa row expansion Clone</label> <row> <panel> <table id="expand_with_events"> <search> <query>index="idx_omn" source="vs.csv" | eval key=process+" "+oprocess_start_date | eval query1 = "idx_omn source=\"visabase.csv\" | search key =\"" | eval query2 = "\"| stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins values(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift values(local_amt) as local_amt values(eur_local) as eur_local BY file_number_list" | eval query = query1+key+query2 | stats values(process) as process values(oprocess_start_date) as oprocess_start_date values(oprocess_end_date) as oprocess_end_date values(otrans) as otrans values(otpm) as otpm values(oelapsed_mins) as oelapsed_mins sum(total_count) as total_count values(local_currency) as local_currency values(local_swift) as local_swift sum(local_amt) as local_amt sum(eur_local) as eur_local values(query) as query BY key | table process oprocess_start_date oprocess_end_date otrans otpm oelapsed_mins total_count local_currency local_swift local_amt eur_local key query</query> <earliest>0</earliest> <latest></latest> <sampleRatio>1</sampleRatio> </search> <option name="count">20</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">false</option> <option name="totalsRow">false</option> <option name="wrap">true</option> </table> </panel> </row> </dashboard>   
JS for the new try which didn't redturn any result: require([ 'splunkjs/mvc/tableview', 'splunkjs/mvc/chartview', 'splunkjs/mvc/searchmanager', 'splunkjs/mvc', 'underscore', 'splunkjs/mvc/simplexml/ready!'],function( TableView, ChartView, SearchManager, mvc, _ ){ var EventSearchBasedRowExpansionRenderer = TableView.BaseRowExpansionRenderer.extend({ initialize: function(args) { // initialize will run once, so we will set up a search and a chart to be reused. this._searchManager = new SearchManager({ id: 'details-search-manager', preview: false }); //this._chartView = new ChartView({ // managerid: 'details-search-manager', // 'charting.legend.placement': 'none' //}); this._TableView = new TableView({ id: 'TestTable', managerid: 'details-search-manager', drilldown: 'cell' }); }, canRender: function(rowData) { // Since more than one row expansion renderer can be registered we let each decide if they can handle that // data // Here we will always handle it. return true; }, render: function($container, rowData) { // rowData contains information about the row that is expanded. We can see the cells, fields, and values // We will find the sourcetype cell to use its value var processCell = _(rowData.cells).find(function (cell) { return cell.field === 'query'; }); //update the search with the sourcetype that we are interested in this._searchManager.set({ search: 'index= "' + processCell.value + '" '}); // $container is the jquery object where we can put out content. // In this case we will render our chart and add it to the $container //$container.append(this._chartView.render().el); $container.append(this._TableView.render().el); } }); var tableElement = mvc.Components.getInstance("expand_with_events"); tableElement.getVisualization(function(tableView) { // Add custom cell renderer, the table will re-render automatically. tableView.addRowExpansionRenderer(new EventSearchBasedRowExpansionRenderer()); }); });   Thanks,
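A hedged observation on the second attempt: the JS sets the search to 'index= "' + processCell.value + '" ', which prefixes index= and wraps the entire stored query in double quotes, so what gets dispatched is a quoted literal rather than the SPL held in the query column. There also seems to be a mismatch between the base search (source="vs.csv") and the query1 eval (source=\"visabase.csv\"). A minimal sketch of the line that was probably intended, assuming the query column already holds a complete search string built in the dashboard XML:

// sketch: dispatch the stored query unchanged (assumes the "query" column holds full SPL)
this._searchManager.set({ search: processCell.value });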
I have set up some Numerical Outlier detections in the MLTK on our ES Search Head. They are set up as alerts in Splunk on our Enterprise Security Search Head. The Numerical Outlier alert looks back over a 2-hour time range, which creates 24 data points (120 mins / 5-min increments = 24 event data points). The problem is that in Enterprise Security (since this is a notable event), we receive 24 separate notable events! We correctly receive only 1 email each time the alert fires, but 24 notable events in ES. I have the alert set to trigger "Once" rather than "Per Result". Is there anything else I can do to make sure ES doesn't get slammed with 24 notables every time this alert fires?
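One hedged option (a sketch; whether it fits depends on how the MLTK search is structured): keep the 2-hour window for the outlier model, but trim the final result set down to the newest bucket so only one row reaches the notable framework, since each result row becomes its own notable. For example, appending something like:

| where _time >= relative_time(now(), "-5m@m")

or simply | sort - _time | head 1 to the correlation search would leave one row per run while the model still trains on the full two hours.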
The following error is captured in puppetserver.log (no error in splunkd.log):     [puppetserver] Puppet Could not send report to Splunk: execution expired ["org/jruby/ext/openssl/SSLSocket.java:215:in `connect'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:800:in `connect'", "org/jruby/ext/timeout/Timeout.java:115:in `timeout'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:800:in `connect'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:756:in `do_start'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:745:in `start'", "/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/META-INF/jruby.home/lib/ruby/1.9/net/http.rb:1293:in `request'", "/etc/puppetlabs/code/environments/production/modules/splunk_hec/lib/puppet/util/splunk_hec.rb:57:in `submit_request'", "/etc/puppetlabs/code/environments/production/modules/splunk_hec/lib/puppet/reports/splunk_hec.rb:112:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:37:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:53:in `processors'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:51:in `processors'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:30:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/report/processor.rb:14:in `save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/indirector/indirection.rb:285:in `save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:176:in `do_save'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:48:in `call'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/context.rb:65:in `override'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:306:in `override'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/api/indirected_routes.rb:47:in `call'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:82:in `process'", "org/jruby/RubyArray.java:1613:in `each'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:81:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/route.rb:87:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:60:in `process'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler/around_profiler.rb:58:in `profile'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/util/profiler.rb:51:in `profile'", "/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet/network/http/handler.rb:58:in `process'", "file:/opt/puppetlabs/server/apps/puppetserver/puppet-server-release.jar!/puppetserver-lib/puppet/server/master.rb:42:in `handleRequest'", "Puppet$$Server$$Master_576124986.gen:13:in `handleRequest'", "request_handler_core.clj:273:in `invoke'", "jruby_request.clj:46:in `invoke'", "jruby_request.clj:31:in `invoke'", "request_handler_service.clj:34:in `handle_request'", "request_handler.clj:3:in `invoke'", "request_handler.clj:3:in `invoke'", "core.clj:2515:in `invoke'", 
"core.clj:211:in `invoke'", "core.clj:45:in `invoke'", "core.clj:343:in `invoke'", "core.clj:51:in `invoke'", "ringutils.clj:83:in `invoke'", "master_core.clj:430:in `invoke'", "ring.clj:21:in `invoke'", "ring.clj:12:in `invoke'", "comidi.clj:249:in `invoke'", "jetty9_core.clj:424:in `invoke'", "normalized_uri_helpers.clj:80:in `invoke'"]     From the puppet server's shell, puppet apply --report=splunk_hec is able to send report with no error. (Puppet Inc's splunk_hec reporter is used by Puppet Report Viewer (Splunk base app 4413 ).  My environment is puppetserver 2.7.0; Splunk is 8.2.0.)
Invalid key in stanza [SSLConfiguration] in /opt/splunk/etc/apps/ssl_checker/default/ssl.conf, line 3: certPaths (value: c:\path\to\cert1, /path/to/cert2).   A README file already exists in the app, so I cannot create a README folder to house an ssl.conf.spec file.
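For what it's worth, a sketch of the spec file the warning is asking for, using the stanza and key names from the message (custom .conf keys need a matching .conf.spec for Splunk's conf validation to accept them):

# etc/apps/ssl_checker/README/ssl.conf.spec (sketch)
[SSLConfiguration]
certPaths = <string>
* Comma-separated list of certificate paths to check.

If the existing README is a plain file at the app root, one workaround (assuming nothing depends on that exact filename) is to rename it to README.txt so the README/ directory can be created to hold the spec.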
I have a few endpoints with forwarders that need to be disconnected from the network for periods of time (up to a month in some instances). Since we forward Windows Event Log data (for security audits) to our indexer on the network, I do not want to lose any data and would like the forwarders to send all of the missing data to the indexer once they rejoin the network. I have been reading about acknowledgement and persistent queues, but it seems that the forwarder still keeps some data in memory. I would like to eliminate or at least severely minimize the amount of audit data in memory that will be lost. Can I combine the acknowledgement and persistent queue settings to achieve this? Can I set useACK=true and set maxQueueSize to something super small like maxQueueSize=1kb, then set the persistentQueueSize to an appropriate amount to cover the amount of time the forwarder will be disconnected? Is there a minimum limit to maxQueueSize?
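For reference, a sketch of the outputs.conf side of that combination (server name and values are illustrative). As I understand it, persistentQueueSize is a per-input setting in inputs.conf and mainly applies to network/scripted inputs, while Windows Event Log inputs keep a checkpoint and re-read from the local event log on reconnect, so retention of the local event log matters more than the queue; treat that as an assumption to verify:

# outputs.conf on the forwarder (sketch)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer.example.com:9997
useACK = true
maxQueueSize = 7MB

Setting maxQueueSize extremely small (e.g. 1KB) would throttle throughput, because with useACK the forwarder holds blocks until the indexer acknowledges them.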
Hi, we are looking to join two different sourcetypes, as given below.
1 - First sourcetype, abc (this sourcetype contains the full server list):
sourcetype=abc AlertName IN ("Health Service Heartbeat Failure", "Unexpected shutdown Event ID XXXX") | sort _time | table ServerName, AlertName, AlertTriggered | dedup ServerName, AlertName, AlertTriggered
2 - Second sourcetype, xyz (this sourcetype contains only selected servers, i.e. Support):
sourcetype=xyz StatusValue IN(blue) Company IN("Support") | sort _time desc | dedup ManagementGroup, ServerName, _time | table ManagementGroup, ServerName, StatusValue, _time
We are looking for a combined search that shows data like (ServerName (Support), Event ID including Heartbeat Failure, start time of event, end time of event). Looking forward to your response. Thanks in advance.
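A hedged sketch of one way to combine them without join, assuming ServerName is the common key and the field names are consistent across both sourcetypes:

(sourcetype=abc AlertName IN ("Health Service Heartbeat Failure", "Unexpected shutdown Event ID XXXX")) OR (sourcetype=xyz StatusValue IN ("blue") Company IN ("Support"))
| stats values(AlertName) AS AlertName values(AlertTriggered) AS AlertTriggered values(StatusValue) AS StatusValue min(_time) AS start_time max(_time) AS end_time BY ServerName
| where isnotnull(AlertName) AND isnotnull(StatusValue)
| convert ctime(start_time) ctime(end_time)

The where clause keeps only servers that appear in both sourcetypes, which approximates an inner join on ServerName.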
I have configured this on my heavy forwarder, but every day I have to disable and then re-enable the inputs for call record_002 and user_report_001 to see data. When I checked the error logs, this is what I found:  ERROR pid=20093 tid=MainThread file=base_modinput.py:log_error:309 | Error getting callRecord data: 404 Client Error: Not Found for url: https://graph.microsoft.com/v1.0/communications/callRecords/0061b243-52a9-49a2-b2c1-39642b7aa549?$expand=sessions($expand=segments)   Can someone please suggest a solution?
Hi all, I have multiple JSON files. The format is as below. { "ID": "123", "TIME": "Jul 11, 2021, 08:55:54 AM", "STATUS": "FAIL", "DURATION": "4 hours, 32 minutes", } From these JSON files I want to take the DURATION field and convert the value into hours. After that I want to use these values from all the JSON files to plot a graph. I have used regex to extract the value, but it's not working. Below is the query that I have used. | rex field=DURATION "(?<duration_hour>\d*)hours, ?(?<duration_minute>\d*)minutes" | eval DURATION=duration_hour+(duration_minute)/60 Can anyone please tell me what the mistake is here?
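The sample value is "4 hours, 32 minutes", but the rex pattern has no space between the digits and the words "hours"/"minutes", so it never matches. A sketch of a corrected version, assuming every DURATION value always contains both an hours and a minutes part:

| rex field=DURATION "(?<duration_hour>\d+) hours?, (?<duration_minute>\d+) minutes?"
| eval duration_in_hours = tonumber(duration_hour) + tonumber(duration_minute)/60

Writing the result to a new field such as duration_in_hours avoids overwriting the original DURATION string, and tonumber() makes sure the extracted values are treated as numbers before the arithmetic.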
I know the email field is not editable, but shouldn't there be an option to update the email address? And if it is locked for some reason, at least give users a chance to update/correct the email before validation. If there was a typo while registering, what is the solution? The account cannot be verified because the user will never receive the mail. Has anyone faced this and found a solution?
My data source can't seem to negotiate TLS v1.2.  So, I am trying to "downgrade" HEC.  But no matter how I change inputs.conf, only TLS 1.2 is supported on port 8080. In fact, default sslVersions for splunk_httpinput app is already *:       $ cat etc/apps/splunk_httpinput/default/inputs.conf [http] disabled=1 port=8088 enableSSL=1 dedicatedIoThreads=2 maxThreads = 0 maxSockets = 0 useDeploymentServer=0 # ssl settings are similar to mgmt server sslVersions=*,-ssl2 allowSslCompression=true allowSslRenegotiation=true ackIdleCleanup=true       openssl s_client can only negotiate within TLS 1.2, nothing lower.  If I use, say -tls1_1, splunkd.log shows "error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher", the same error my data source triggers.  Is there some way to "downgrade"? The data source in question is Puppet Inc's splunk_hec module used by Puppet Report Viewer (Splunk base app 4413 ).  I am testing it with Puppet Server 2.7.0. (Splunk is 8.2.0.) My colleague suspects that the Jruby version (ruby 1.9) may be too old to support TLS 1.2. (I can invoke splunk_hec report in native Ruby 2.0 successfully.) Update: JRuby version is probably the problem, although it does support TLS 1.2; the problem is (still) in cipher suites mismatch.  I used tcpdump and wireshark to analyze TLS exchange.  Puppet server sends the following:     Transport Layer Security TLSv1.2 Record Layer: Handshake Protocol: Client Hello Content Type: Handshake (22) Version: TLS 1.2 (0x0303) Length: 223 Handshake Protocol: Client Hello Handshake Type: Client Hello (1) Length: 219 Version: TLS 1.2 (0x0303) Random: c1221d62f8911dc203ac02cf12c7cf7a71093cd5141a0f56e7bad2429d4e1095 Session ID Length: 0 Cipher Suites Length: 12 Cipher Suites (6 suites) Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035) Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039) Cipher Suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA (0x0038) Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f) Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033) Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)     Even adding extended ciphers illustrated in default inputs.conf, i.e.,   cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA   they still cannot match. I am unfamiliar with how Splunk represents these suites.  Is there a supported cipher suite that can match one of those sent by splunk_hec?
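In OpenSSL naming, the six suites in that Client Hello are AES256-SHA, DHE-RSA-AES256-SHA, DHE-DSS-AES256-SHA, AES128-SHA, DHE-RSA-AES128-SHA and DHE-DSS-AES128-SHA. A sketch of what I would try in a local override of the [http] stanza, appending the RSA CBC variants to suites like those already listed (whether splunkd's bundled OpenSSL still enables these legacy suites is an assumption, and the DHE-DSS ones would additionally require a DSA server certificate):

# etc/apps/splunk_httpinput/local/inputs.conf (sketch)
[http]
sslVersions = *,-ssl2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:AES256-SHA:DHE-RSA-AES256-SHA:AES128-SHA:DHE-RSA-AES128-SHA

If splunkd accepts one of the plain RSA CBC suites (AES256-SHA or AES128-SHA), the JRuby client should be able to complete the handshake on TLS 1.2 without any protocol downgrade.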
Hi, I am getting the error "Waiting for requisite number of peers to join the cluster. - https://127.0.0.1:8089. site=site2 has only 0 peers (waiting for 2 peers to join the site)." My CM is working fine, but my standby master (configured like the CM, kept for disaster recovery) shows this error. Is this normal behavior, given that no peers are attached to it?
I designed an alert action that triggers a script (Windows batch) on a universal forwarder. The universal forwarder server is Windows Server 2012. The script has already been transferred to that UF server, and the cron schedule is planned to trigger the script every day at 3:00 am. The script has a date command to get the system date, like below:
===============
echo %date%
===============
When the script is triggered by the Splunk alert action, it returns 07/29/2021 (MM/DD/YYYY), which is not the date format I want. But when I run the script on the UF manually, I get the right result: 2021/07/29 (YYYY/MM/DD). I also checked the Windows regional settings (see the attached screenshot). I don't know what the difference is between Splunk triggering the script and running it manually. I know that if the UF server were Linux or Unix, this could be a user-context problem (e.g. root vs. the splunk user). It would be a big help if someone could solve this problem. Sorry for the long post. Thank you.
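The likely difference is indeed the user context: when splunkd launches the alert action, the script runs under the account the forwarder service runs as (often Local System), and %date% follows that account's regional settings rather than the ones you see when logged in interactively. A locale-independent sketch using wmic (variable names are just illustrative):

:: wmic prints LocalDateTime as YYYYMMDDhhmmss... regardless of regional settings
for /f "skip=1 tokens=1" %%x in ('wmic os get LocalDateTime') do if not defined ldt set ldt=%%x
set today=%ldt:~0,4%/%ldt:~4,2%/%ldt:~6,2%
echo %today%

This should print 2021/07/29 no matter which account runs the script; alternatively, adjusting the short-date format for the service account itself would also change what %date% returns.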
Hi, I would like to highlight an anomaly with Enterprise 8.2.1 (and maybe lower versions?). Splunk Enterprise 8.2.1 ships pre-loaded with 'splunk_essentials_8_2' version 0.3.0, and the Apps manager suggests an update to version 1.0.0. When the update is applied and Splunk is restarted, the following warning appears: File '/opt/splunk/etc/apps/splunk_essentials_8_2/default/app.conf' changed. In other words, it doesn't like version 1.0.0 against the shipped app manifest, and the warning becomes tiresome. It may well have been reported already, but here is another voice reporting it. I would have raised this as a support case, but my lowly login does not allow me to raise one.