Activity Feed
- Karma Re: After upgrading to Splunk 6.5.0, what is the best way to alert when forwarders don't check in for more than 2-3 minutes? for gjanders. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk Dashboard Causing Browsers to crash?. 06-05-2020 12:48 AM
- Got Karma for Re: Splunk DB Connect: Inputs get disabled after server restarts. Is there a way to automatically re-enable the inputs or another solution?. 06-05-2020 12:48 AM
- Got Karma for Why am I experiencing indexer congestion after 6.5.0 upgrade?. 06-05-2020 12:48 AM
- Got Karma for Why am I experiencing indexer congestion after 6.5.0 upgrade?. 06-05-2020 12:48 AM
- Got Karma for Re: Deployment Server flooded with SSL handshake errors from forwarders. How to configure forwarders to use TLS 1.2. 06-05-2020 12:48 AM
- Karma Re: How do I create the same HTTP event collector token for multiple indexers? for gblock_splunk. 06-05-2020 12:47 AM
- Got Karma for Where to put my DMC?. 06-05-2020 12:47 AM
- Got Karma for How do I create the same HTTP event collector token for multiple indexers?. 06-05-2020 12:47 AM
- Karma Re: Search for event X and Y, but only Y during business hours? for sideview. 06-05-2020 12:46 AM
- Karma Re: Search for event X and Y, but only Y during business hours? for sideview. 06-05-2020 12:46 AM
- Karma Re: splnkd can't find libjemalloc.so.1 for Ayn. 06-05-2020 12:46 AM
- Karma Re: splnkd can't find libjemalloc.so.1 for mikemaki. 06-05-2020 12:46 AM
- Got Karma for Search for event X and Y, but only Y during business hours?. 06-05-2020 12:46 AM
- Posted Re: Splunk DB Connect: Inputs get disabled after server restarts. Is there a way to automatically re-enable the inputs or another solution? on All Apps and Add-ons. 03-10-2017 10:30 AM
- Posted Re: Why are indexing queues maxing out after upgrading to Splunk 6.5.0? on Getting Data In. 02-16-2017 10:56 PM
- Posted Re: Splunk Dashboard Causing Browsers to crash? on Security. 02-02-2017 04:04 PM
- Posted Re: Splunk DB Connect: Inputs get disabled after server restarts. Is there a way to automatically re-enable the inputs or another solution? on All Apps and Add-ons. 11-16-2016 10:55 AM
- Posted Re: After upgrading to 6.5.0, why are my real time dashboards showing a "chrome has run out of memory" browser message? on Dashboards & Visualizations. 11-10-2016 12:27 PM
- Posted After upgrading to 6.5.0, why are my real time dashboards showing a "chrome has run out of memory" browser message? on Dashboards & Visualizations. 11-07-2016 01:49 PM
Topics I've Started
03-10-2017
10:30 AM
I'm pretty sure DBX 2.0 is not supported in an SHC environment. I had all sorts of problems with it and eventually moved it to a standalone search head outside of my SHC.
02-16-2017
10:56 PM
I'm on 6.5.2 now and it's fixed, but at this point I'm not sure whether the cause was hardware or software.
My cold storage was degraded, but everything gets ingested into my warm storage.
02-02-2017
04:04 PM
1 Karma
The memory leak is still there. I just upgraded my entire environment (10 nodes) to 6.5.2 mainly because I wanted this fixed, and they are still running out of memory.
11-16-2016
10:55 AM
I actually noticed that too just yesterday and replaced everything 😛
I bet it'll be fine now, thanks for the follow-up.
11-10-2016
12:27 PM
Apparently there is a memory leak with real-time searches in Firefox/Chrome after 6.2. I "fixed" this by changing my graphs to non-real-time searches and making my panels refresh every 10s. Not ideal for my environment, but I have no alternative.
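For anyone hitting the same thing, the change I made was roughly this in Simple XML (the panel and query below are illustrative placeholders, not my actual dashboard):

```
<panel>
  <chart>
    <search>
      <!-- non-real-time window instead of an rt search -->
      <query>index=ops_metrics | timechart count</query>
      <earliest>-15m</earliest>
      <latest>now</latest>
      <!-- re-run the search every 10 seconds -->
      <refresh>10s</refresh>
      <refreshType>delay</refreshType>
    </search>
  </chart>
</panel>
```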
11-07-2016
01:49 PM
I have 5-6 dashboards running on decently spec'd mini-PCs. They each have 4-5 real-time searches running on them, which is a requirement because Operations uses them to spot issues, so they have to be live.
These all worked great until the 6.5 upgrade. Now, every time I arrive at work, the dashboards show a "Chrome has run out of memory" browser message.
Has anyone else had this issue?
11-02-2016
07:36 PM
So I came in this morning after a major maintenance window, and every single input was disabled.
Unfortunately this isn't working 😕 Does it matter where you put the auto_disable = False?
10-30-2016
11:35 PM
I've tried a lot of searches, including ones with | metadata, and they all had weirdness. This one actually looks really accurate/promising; I think I can make it work.
Thanks, guys!
10-30-2016
09:48 PM
1 Karma
I added these lines to server.conf on my forwarders and that fixed the communication. I think pushing them out as an app would do the same job at scale. Works!
cipherSuite = TLSv1.2:!eNULL:!aNULL
sslVersions = tls1.2, -ssl2, -ssl3
sslVersionsForClient = tls1.2, -ssl2, -ssl3
allowSslCompression = false
10-30-2016
11:44 AM
I don't believe I've set sslVersionsForClient anywhere on the forwarders; I have barely touched them in years, but I've made many upgrades/changes on my servers (in this case it's 100% set on my DS).
It does seem like I need to set sslVersionsForClient on the forwarders, but where? server.conf? That's the hard part, as there are so many conf files. Also, my error logs are clean on the client side, so it's difficult narrowing it down.
10-29-2016
02:28 PM
We've recently locked down everything to use TLS 1.2, and I think I've fixed just about everything; however, my deployment server is full of SSL3 handshake errors from the forwarders.
How do I set up the forwarders to use TLS 1.2 with my deployment server? I'm confused about which file to modify: server.conf? web.conf? Everything looks fine server side; it's just the forwarders I need to update.
Here is my server.conf file on my deployment server:
sslKeysfile = key.pem
sslKeysfilePassword = xxxxx
sslPassword = xxxxxx
cipherSuite = TLSv1.2:!eNULL:!aNULL
sslVersions = tls1.2, -ssl2, -ssl3
sslVersionsForClient = tls1.2, -ssl2, -ssl3
allowSslCompression = false
10-29-2016
02:13 PM
Thanks for the lengthy reply. That all makes sense from a monitoring perspective, and I've done a solid amount of research on that side of it.
Specifically, though, I was looking for a best-practice way of being alerted when forwarders are down/missing. I've tried grabbing the search that the DMC uses, but I've had no luck.
I spent a lot of time googling before posting this, but every search I've tried based on my findings has not worked as intended.
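For anyone searching later, the kind of sketch I was after looks something like this; the 180-second threshold is just an assumption to adapt, and it relies on forwarders reporting into _internal:

```
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) AS lastSeen BY hostname
| eval secondsSinceCheckin = now() - lastSeen
| where secondsSinceCheckin > 180
```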
10-28-2016
11:53 PM
I'm currently using a very old Deployment Monitor search to determine when forwarders are down, and it doesn't seem to be working very well in 6.5 (false positives plus missed alerts). I know the Monitoring Console has some additional functionality.
Does anyone have a specific search for this? I'm hoping to alert if forwarders don't check in for 2-3 minutes.
10-27-2016
01:04 PM
So far I haven't seen any congestion since I made the change. I'm about to put full load on it now; we'll see what happens.
10-27-2016
09:37 AM
1 Karma
For future reference, I used this handy line to add auto_disable = False right after all occurrences of disabled = 0:
sed -i $'s/disabled = 0/disabled = 0\\\nauto_disable = False/g' inputs.conf
10-26-2016
05:49 PM
I'll probably restart it and see what happens.
10-26-2016
04:58 PM
Nice! That's perfect. Can I update inputs.conf live, and will the changes be reflected immediately?
10-26-2016
04:43 PM
Errors are gone, thanks!
10-26-2016
02:00 PM
2 Karma
I have four independent indexers in a round robin; two are fairly old, one is a year old, and my newest is maybe 3-4 months old.
Hot/warm is on an SSD mirror; cold is on spinning disk but currently barely used at all (thresholds not met yet).
Right after upgrading to 6.5.0, my newest indexer started filling up ALL of its indexing queues. I've taken it in/out of rotation a bunch of times, and although not as often, it's still randomly filling up all queues and then stopping indexing of data.
All disks are healthy, with no I/O waits or anything. I've watched the disks while this was happening and there are no issues with them.
Could these errors have something to do with it?
10-26-2016 13:00:08.697 -0700 WARN LineBreakingProcessor - Truncating line because limit of 10000 bytes has been exceeded with a line length >= 13431 - data_source="/opt/splunk/var/log/splunk/remote_searches.log", data_host="splunk08", data_sourcetype="splunkd_remote_searches"
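For reference, my understanding is that line truncation is controlled per sourcetype in props.conf on the parsing tier; a sketch of raising the limit would look like the below (the value is just an example, not a recommendation):

```
[splunkd_remote_searches]
# raise the per-line byte limit above the observed ~13 KB lines
TRUNCATE = 20000
```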
10-26-2016
01:49 PM
I work in a dynamic environment, and servers often get restarted for various reasons, updates for example.
Often I'll miss these events because I'm not on call or nobody tells me. What happens is Splunk DB Connect disables all the inputs for that server and never tries to re-enable them.
So that data stops flowing into Splunk until someone manually re-enables the inputs for the servers in question.
My question is: is there any way to have auto-retry logic? Or is there something I can use to monitor for inputs that get disabled? Right now I simply can't trust the data going into it because of this.
10-05-2016
10:22 AM
Yeah, that was my determination as well. I opened one up, thanks.
10-04-2016
05:19 PM
Falls over as in it completely fills up every single queue (parsing/agg/typing/indexing), then stops indexing data completely, and the data becomes unsearchable.
I'm running four standalone indexers which are being sent data via round-robin DNS.
10-04-2016
02:35 PM
I have four indexers in a round robin, and all were working great. After upgrading my entire environment to 6.5.0, all my nodes work just fine aside from my newest indexer, which has its queues filling up even at a 5 KB/s indexing rate.
I've tried everything, including upping the parallel ingestion pipelines to 2/4/8, but it can't take any data without falling over immediately, and it just stops indexing, which causes all sorts of problems.
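For context, the pipeline change I was adjusting is this server.conf setting on the indexer (2 shown as an example value):

```
[general]
# number of parallel ingestion pipeline sets on this indexer
parallelIngestionPipelines = 2
```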
I see no errors, the disks are healthy, CPU/memory usage is very low, and the Splunk health checks don't show anything concerning.
This happened immediately after the upgrade; what could cause this?
Thanks.
10-01-2015
01:36 PM
Hah, no worries, I appreciate the reply! I look forward to seeing the docs; if you remember, please post them into this thread.
Thanks!
10-01-2015
12:08 PM
1 Karma
I have three standalone indexers in a round robin and want them to accept HTTP events via the HTTP Event Collector. How do I generate a token with the same value on all three?
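For context, my understanding is that HEC tokens are just inputs.conf stanzas, so I'm guessing that deploying the same stanza to all three indexers might work; the stanza name and GUID below are placeholders:

```
# enable the HTTP Event Collector globally
[http]
disabled = 0

# same token value deployed to every indexer (placeholder GUID)
[http://my_hec_input]
token = 11111111-2222-3333-4444-555555555555
```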