All Topics

Invalid key in stanza [SSLConfiguration] in /opt/splunk/etc/apps/ssl_checker/default/ssl.conf, line 3: certPaths (value: c:\path\to\cert1, /path/to/cert2).

A README file already exists, so I cannot create a README folder to house an ssl.conf.spec file.
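One way to resolve this class of warning (a sketch, not verified against this particular app): btool only accepts custom keys that are declared in a .conf.spec file under the app's README directory, so the existing README file would first need to be renamed (e.g. to README.txt) so the folder can be created. A minimal hypothetical spec for the stanza above:

```
# etc/apps/ssl_checker/README/ssl.conf.spec (hypothetical)
[SSLConfiguration]
certPaths = <string>
# Comma-separated list of certificate paths (hypothetical description)
```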
I have a few endpoints with forwarders that need to be disconnected from the network for periods of time (up to a month in some instances). Since we forward Windows Event Log data (for security audits) to our indexer on the network, I do not want to lose any data, and I would like the forwarders to send all of the missing data to the indexer once they rejoin the network.

I have been reading about acknowledgement and persistent queues, but it seems that the forwarder still keeps some data in memory. I would like to eliminate, or at least severely minimize, the amount of audit data in memory that will be lost. Can I combine the acknowledgement and persistent queue settings to achieve this? Can I set useACK=true and set maxQueueSize to something super small like maxQueueSize=1kb, then set persistentQueueSize to an amount that covers the time the forwarder will be disconnected? Is there a minimum limit to maxQueueSize?
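For reference, a hedged sketch of where those settings live (the group name, sizes, and port below are made-up values): useACK and maxQueueSize go in outputs.conf on the forwarder, while persistent queues are configured per input stanza in inputs.conf, and only for input types that support them (network, scripted, and similar inputs). Note also that the Windows Event Log input keeps its own checkpoint, so on reconnect it can resume reading any events still present in the Event Log itself.

```
# outputs.conf on the forwarder (hypothetical group name and sizes)
[tcpout:primary_indexers]
server = indexer.example.com:9997
useACK = true
maxQueueSize = 10MB

# inputs.conf -- persistent queues are per-input, and only for
# supported input types (e.g. a TCP input; the port is hypothetical)
[tcp://9999]
queueSize = 1MB
persistentQueueSize = 50GB
```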
Hi,

We are looking to join two different sourcetypes, given below.

1. First sourcetype, abc (this sourcetype contains the full server list):

sourcetype=abc AlertName IN ("Health Service Heartbeat Failure", "Unexpected shutdown Event ID XXXX")
| sort _time
| table ServerName, AlertName, AlertTriggered
| dedup ServerName, AlertName, AlertTriggered

2. Second sourcetype, xyz (this sourcetype contains only selected servers, i.e. Support):

sourcetype=xyz StatusValue IN (blue) Company IN ("Support")
| sort _time desc
| dedup ManagementGroup, ServerName, _time
| table ManagementGroup, ServerName, StatusValue, _time

We are looking for a combined search so we can view data like (ServerName (Support), Event ID including Heartbeat Failure, start time of event, end time of event).

I am looking forward to your response. Thanks in advance.
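One common pattern for this kind of requirement (a sketch only; it assumes both sourcetypes share a ServerName field, which may not match your data) is to avoid join and instead combine the two searches with OR, then group with stats:

```
(sourcetype=abc AlertName IN ("Health Service Heartbeat Failure", "Unexpected shutdown Event ID XXXX"))
OR (sourcetype=xyz StatusValue IN (blue) Company IN ("Support"))
| stats min(_time) as start_time, max(_time) as end_time,
        values(AlertName) as AlertName, values(Company) as Company
        by ServerName
| where isnotnull(AlertName) AND Company="Support"
| convert ctime(start_time) ctime(end_time)
```

The final where clause keeps only servers that appear in both sourcetypes, which is the usual substitute for an inner join.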
I have configured this on my heavy forwarder, but every day I have to disable and then re-enable the inputs for call_record_002 and user_report_001 to see data. When I checked the error logs, this is what I found:

ERROR pid=20093 tid=MainThread file=base_modinput.py:log_error:309 | Error getting callRecord data: 404 Client Error: Not Found for url: https://graph.microsoft.com/v1.0/communications/callRecords/0061b243-52a9-49a2-b2c1-39642b7aa549?$expand=sessions($expand=segments)

Can someone please suggest a solution to this?
Hi all, I have multiple JSON files. The format is as below:

{
  "ID": "123",
  "TIME": "Jul 11, 2021, 08:55:54 AM",
  "STATUS": "FAIL",
  "DURATION": "4 hours, 32 minutes"
}

From these JSON files I want to take the DURATION field and convert the value into hours. After that I want to use these values from all the JSON files to plot a graph. I have used rex to extract the values, but it is not working. Below is the query that I used:

| rex field=DURATION "(?<duration_hour>\d*)hours, ?(?<duration_minute>\d*)minutes"
| eval DURATION=duration_hour+(duration_minute)/60

Can anyone please tell me what the mistake is here?
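The likely mistake is in the rex pattern rather than the eval: the DURATION string has spaces between the numbers and the words ("4 hours, 32 minutes"), so "\d*)hours" never matches and both capture groups come back empty. A hedged fix, keeping the field names from the query above (the optional "s" also tolerates singular "hour"/"minute"):

```
| rex field=DURATION "(?<duration_hour>\d+)\s*hours?,\s*(?<duration_minute>\d+)\s*minutes?"
| eval DURATION_hours = duration_hour + duration_minute/60
```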
I know the email field is not editable, but shouldn't there be an option to update the email address? And if the restriction is there for some reason, at least give users a chance to update/correct the email before validation. If there was a typo while registering, what is the solution? The account cannot be verified, as the user will never receive the mail. Has anyone faced this and found a solution?
My data source can't seem to negotiate TLS 1.2, so I am trying to "downgrade" HEC. But no matter how I change inputs.conf, only TLS 1.2 is supported on port 8088. In fact, the default sslVersions for the splunk_httpinput app is already *:

$ cat etc/apps/splunk_httpinput/default/inputs.conf
[http]
disabled=1
port=8088
enableSSL=1
dedicatedIoThreads=2
maxThreads = 0
maxSockets = 0
useDeploymentServer=0
# ssl settings are similar to mgmt server
sslVersions=*,-ssl2
allowSslCompression=true
allowSslRenegotiation=true
ackIdleCleanup=true

openssl s_client can only negotiate TLS 1.2, nothing lower. If I use, say, -tls1_1, splunkd.log shows "error:1408A0C1:SSL routines:ssl3_get_client_hello:no shared cipher", the same error my data source triggers. Is there some way to "downgrade"?

The data source in question is Puppet Inc's splunk_hec module used by Puppet Report Viewer (Splunkbase app 4413). I am testing it with Puppet Server 2.7.0. (Splunk is 8.2.0.) My colleague suspects that the JRuby version (Ruby 1.9) may be too old to support TLS 1.2. (I can invoke a splunk_hec report in native Ruby 2.0 successfully.)

Update: the JRuby version is probably the problem, although it does support TLS 1.2; the issue is (still) a cipher suite mismatch. I used tcpdump and Wireshark to analyze the TLS exchange.
Puppet server sends the following:

Transport Layer Security
  TLSv1.2 Record Layer: Handshake Protocol: Client Hello
    Content Type: Handshake (22)
    Version: TLS 1.2 (0x0303)
    Length: 223
    Handshake Protocol: Client Hello
      Handshake Type: Client Hello (1)
      Length: 219
      Version: TLS 1.2 (0x0303)
      Random: c1221d62f8911dc203ac02cf12c7cf7a71093cd5141a0f56e7bad2429d4e1095
      Session ID Length: 0
      Cipher Suites Length: 12
      Cipher Suites (6 suites)
        Cipher Suite: TLS_RSA_WITH_AES_256_CBC_SHA (0x0035)
        Cipher Suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA (0x0039)
        Cipher Suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA (0x0038)
        Cipher Suite: TLS_RSA_WITH_AES_128_CBC_SHA (0x002f)
        Cipher Suite: TLS_DHE_RSA_WITH_AES_128_CBC_SHA (0x0033)
        Cipher Suite: TLS_DHE_DSS_WITH_AES_128_CBC_SHA (0x0032)

Even adding the extended ciphers illustrated in the default inputs.conf, i.e.,

cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA

they still cannot match. I am unfamiliar with how Splunk represents these suites. Is there a supported cipher suite that can match one of those sent by splunk_hec?
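For what it's worth, Splunk's cipherSuite setting takes OpenSSL cipher names, and the six IANA suite names the Puppet server offers map to the OpenSSL names AES256-SHA, DHE-RSA-AES256-SHA, DHE-DSS-AES256-SHA, AES128-SHA, DHE-RSA-AES128-SHA, and DHE-DSS-AES128-SHA. A hedged sketch (whether splunkd's bundled OpenSSL still enables these legacy CBC suites is a separate question that would need testing):

```
# local/inputs.conf for splunk_httpinput -- appends legacy CBC suites
# to one modern suite; names follow OpenSSL conventions
[http]
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:AES256-SHA:AES128-SHA:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA
```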
Hi, I am getting the error "Waiting for requisite number of peers to join the cluster. - https://127.0.0.1:8089. site=site2 has only 0 peers (waiting for 2 peers to join the site)." My CM is working fine, but my standby master (configured like the CM, kept for disaster recovery) shows this error. Is this normal behavior, given that no peers are attached to it?
I designed a function that triggers a script (a Windows batch file) from a universal forwarder. The universal forwarder server is Windows Server 2012. The script has already been transferred to that UF server, and the cron schedule is planned to trigger the script every day at 3:00 am. The script has a date command to get the system date, like below:

echo %date%

When the script is triggered by the Splunk alert action, it gets the result 07/29/2021 (MM/DD/YYYY), which is not the date format I expect. But when I run the script on the UF manually, I get the right result: 2021/07/29 (YYYY/MM/DD). I also checked the Windows regional settings. I don't know the difference between Splunk triggering the script and manually running the script. I know that if the UF server were Linux or Unix, it would be a problem of users (e.g. root vs. the splunk user). It would be a lot of help if someone could solve this problem. Sorry for writing such long sentences. Thank you.
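The %date% format comes from the per-user regional settings, and an alert action runs under the account of the splunkd service (often SYSTEM), not the interactive user, so the two runs see different locales. A locale-independent alternative (a common batch idiom, sketched here; wmic is assumed to be available on Server 2012):

```
@echo off
rem Query WMI for the date; the output format is fixed
rem (yyyyMMddHHmmss...) regardless of which account's
rem regional settings are in effect.
for /f %%x in ('wmic os get localdatetime ^| findstr ^[0-9]') do set dt=%%x
echo %dt:~0,4%/%dt:~4,2%/%dt:~6,2%
```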
Hi, I would like to highlight an anomaly with Enterprise 8.2.1 (and maybe lower versions?). Splunk Enterprise 8.2.1 comes pre-loaded with 'splunk_essentials_8_2' version 0.3.0, and the Apps manager suggests an update to version 1.0.0. When this update is applied and Splunk is restarted, the following warning appears:

File '/opt/splunk/etc/apps/splunk_essentials_8_2/default/app.conf' changed.

In other words, it doesn't like version 1.0.0 in the app manifest, and the warning becomes tiresome. It may well have been reported already, but here is another voice reporting it. I would have raised this as a support case, but my lowly login does not allow me to raise a case.
Hi, I have a clustered Splunk environment. How can I find all the reports and alerts that have email addresses configured? I need to correct the email domains, and I haven't found any good way to check all reports for email addresses. Is there a search query or some specific method to find them?
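One way to inventory this from the search head, assuming your role is allowed to run the rest command, is to query the saved/searches endpoint and filter on the email action (a sketch; the field names follow savedsearches.conf):

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search action.email.to=*
| table title, eai:acl.app, eai:acl.owner, action.email.to
```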
Hi, I have configured an alert to send an email when my query returns more than 0 search results. I can see the alert, but if there are 3 search results I get 3 different emails, as shown in the screenshot below.

In the screenshot we can see that all 3 emails are triggered at the same time. I want all of these results in one email alert. Can someone please help me with how to get all the search results of a single alert into one email?

Also, my alert trigger conditions include a 24-hour setting. Does that mean my Splunk alert expires after 24 hours? If so, how can I change the settings so the alert keeps working indefinitely? If I need to stop the alert, I will disable it.

Thanks in advance, Swetha. G
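Three separate emails usually means the alert is set to trigger "For each result". Switching the trigger to "Once" (digest mode) sends a single email containing all results; in savedsearches.conf terms that is roughly (a sketch):

```
# savedsearches.conf -- digest mode: one email with all results
alert.digest_mode = 1
```

As for the 24-hour setting: the Expires field controls how long the record of a triggered alert (and its artifacts) is kept, not how long the alert itself runs; a scheduled alert keeps running until you disable it.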
Can someone help me check whether there is any way to stop the scheduled excess bucket removal, or anything that could speed the task up? I am trying to perform a data rebalance on the cluster, and it seems to be blocked by an ongoing excess bucket removal.
I have 2 search queries to get the Windows shutdown list from a lookup file, but when I run them I get different host lists for the same time period. Can you please suggest the best query to get the shutdown hosts from the lookup file?

1.
index=* host=* sourcetype=XmlWinEventLog* (EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008)
| join type=inner host [ | inputlookup Windows.csv ]
| stats count by host
| dedup host

2.
index=* sourcetype=XmlWinEventLog* EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008 [ | inputlookup Windows.csv | return 1000 host ]
| stats count by host
| where count >1
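A hedged middle ground between the two (it assumes Windows.csv has a host column whose values match the host field in the events): use the lookup as a subsearch filter, then count by host. This avoids the join (which silently truncates large result sets) and the count>1 condition (which drops hosts with a single shutdown event):

```
index=* sourcetype=XmlWinEventLog* (EventCode=41 OR EventCode=1074 OR EventCode=6006 OR EventCode=6008)
    [ | inputlookup Windows.csv | fields host ]
| stats count by host
```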
Hi, as mentioned in the subject, I want to perform a simple subtraction on the individual values/elements of a multivalue field. For example, I have an MV field:

23
79
45
23
38

I will iterate over the items and perform the subtraction to get the following:

Iteration 1: diff = 0 (since it's the first element)
Iteration 2: diff = abs(79 - 23)
Iteration 3: diff = abs(45 - 79)
Iteration 4: diff = abs(23 - 45)
Iteration 5: diff = abs(38 - 23)

So far, here's what I did (dummy query):

| makeresults
| eval a1="23,79,45,23,29"
| makemv delim="," a1
| mvexpand a1
| eval start=0
| eval i=0
| foreach a1 [ eval tmp='<<FIELD>>' | eval diff=if(start==0, 0, "subtract_element[i+1]_with_subtract_element[i]") | eval i=i+1 | eval start=1 ]

I haven't implemented the subtraction logic yet since, obviously, I am having a challenging time doing it. I've spent almost 3 hours experimenting, but no luck. Another weird thing (for me), which I can't explain: the variable "i" is not incrementing even though I'm updating it within the foreach block. Hope my question makes sense, and as usual, thank you very much in advance. Any ideas are greatly appreciated.
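foreach iterates over field names within a single event, not over rows, and each row's evals run independently, which is why i never climbs past 1. After mvexpand, each value is its own event, so streamstats with a sliding window of 2 can compute the absolute difference directly: range gives max minus min over the window, and the first event's window holds only one value, yielding 0 (a sketch using the sample values from the question):

```
| makeresults
| eval a1="23,79,45,23,38"
| makemv delim="," a1
| mvexpand a1
| streamstats current=t window=2 range(a1) as diff
```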
We have a number of saved searches (configured as alerts) that make use of custom search commands we wrote. It can happen that those custom commands fail to execute. This causes the saved search to fail with an error message, which can be seen if you run the search in the Splunk UI. We would like alerts to be triggered when this happens to a saved search, but in the alert configuration this type of trigger is not an option. Is there any way to get an alert triggered when one of our saved searches fails with an error?

PS: we are running our app on a multi-tenant platform, so we do not have access to the internal logs and thus cannot run a search like:

index=_internal sourcetype=scheduler status!=success
| table _time search_type status user app savedsearch_name
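One possibility worth testing (a sketch; it assumes your role can run the rest command and that the search jobs endpoint is visible on your platform): recent search jobs, including failed ones, are exposed over REST, so a scheduled alert could watch for failures without _internal access:

```
| rest /services/search/jobs splunk_server=local
| search isFailed=1
| table label, isFailed, updated
```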
Hi all, I am trying to highlight a particular column if the values in that column are not all the same. For instance, say I have a table such as the following:

User   Place   Version
Mark   London  1
Sally  US      2
Will   Africa  2

I would specifically like to highlight the Version column red, because the values in that column are not all the same (they are not all 2).

I understand the first step is to turn the Version column into a multivalue field and add "RED" to each cell in that column, but I'm unsure how to do that. I'd imagine the logic is something like: if the count of unique Version values > 1, then add RED. But again, I'm not sure how to implement something like that. Any help would be hugely appreciated!
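The multivalue step you describe can be done with eventstats: count the distinct Version values across the result set, and append the marker only when there is more than one (a sketch; the "RED" token still needs the usual table-format JS/CSS on the dashboard side to become an actual cell color):

```
| eventstats dc(Version) as version_count
| eval Version=if(version_count > 1, mvappend(Version, "RED"), Version)
| fields - version_count
```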
Hi Splunkers. I'm looking for a way to delete a correlation search that was created with the wrong name (ES doesn't let you rename them). The CS is currently disabled, but I don't see a way to actually delete it. I see some previous answers here, but they involve direct access to the .conf file, which isn't an option in this particular environment, so I'm looking for a way to do this from the SH. Thanks.
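If the management port is reachable from your workstation, saved searches can be deleted over the REST API without touching .conf files (a sketch; the app context for ES correlation searches varies, so the app name below is a guess you would need to confirm, and the search name must be URL-encoded):

```
curl -k -u <admin_user> -X DELETE \
  "https://<search_head>:8089/servicesNS/nobody/SplunkEnterpriseSecuritySuite/saved/searches/My%20Correlation%20Search"
```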
Splunking legends... how good is Splunk: we are on Splunk Cloud and I want to create a dashboard that looks at the status of an input and reports on any that are disabled. It shouldn't be hard, but after hours of research... meh. Help me Splunky-won-knobi, you're my only hope.
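A starting point, assuming your Splunk Cloud role is allowed to run the rest command (it is restricted on some stacks) and that the inputs you care about are visible to the search head:

```
| rest /services/data/inputs/all splunk_server=local
| search disabled=1
| table splunk_server, title, disabled
```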
Hello all, Nessus keeps throwing the error that "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json" exposes critical information to unauthenticated scans, but the test is flawed: it actually runs an authenticated scan, so it fails, since the data is presented when authenticated. We need a clean Nessus scan result, and I managed to make the following changes to restmap.conf:

[admin:server-info]
requireAuthentication = true
acceptFrom = "127.0.0.1"

[admin:server-info-alias]
requireAuthentication = true
acceptFrom = "127.0.0.1"

This basically makes it so that even if you are authenticated, you get Forbidden when you visit "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json".

This works great, but a side effect is that I can no longer view some UI pages, for example the Users page. I have to remove the 127.0.0.1 line to view those UI elements. Does anyone know how I can specifically block "/en-US/splunkd/__raw/services/server/info/server-info?output_mode=json" without blocking other pages like Users? This is just to get the Nessus scan to pass.