All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Well, this says that Splunk should normally behave properly with HTTP/1.1: https://docs.splunk.com/Documentation/Splunk/latest/Data/TroubleshootHTTPEventCollector#Detect_scaling_problems

Another thing to consider:

forceHttp10 = [auto|never|always]
* Whether or not the REST HTTP server forces clients that connect to it to use the HTTP 1.0 specification for web communications.
* When set to "always", the REST HTTP server does not use some HTTP 1.1 features such as persistent connections or chunked transfer encoding.
* When set to "auto", it does this only if the client did not send a User-Agent header, or if the user agent is known to have bugs in its support of HTTP/1.1.
* When set to "never", it always allows HTTP 1.1, even to clients it suspects might be buggy.
* Default: auto
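If persistent connections are the goal, one thing to try (an assumption on my part - check the server.conf spec for your version before relying on it) is pinning the setting to "never" on the instance that hosts the HEC/REST endpoint:

```
# server.conf (assumed stanza; restart splunkd after the change)
[httpServer]
forceHttp10 = never
```

With "auto" (the default), a missing or "suspect" User-Agent header is enough to make the server fall back to HTTP 1.0 behavior, which would explain a "Connection: Close" response even when the client speaks HTTP/1.1.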
10MB is really a very low limit. If you have buckets smaller than that, Splunk will complain about small buckets. But it should work like this:

- Splunk creates a number of hot buckets for an index.
- If a hot bucket grows too big or sits idle too long, it gets rolled to warm.
- If there are too many warm buckets, or homePath.maxDataSizeMB is exceeded, the oldest bucket (the one whose earliest event is oldest) is rolled to cold.
- When the latest event in a cold bucket becomes older than the retention period, that bucket gets rolled to frozen. Likewise, when the coldPath.maxDataSizeMB or maxTotalDataSizeMB size limit is reached, the oldest bucket is rolled to frozen.
- At any time, if a volume size limit is exceeded, Splunk rolls the oldest bucket from the whole volume (not including hot buckets, as far as I remember) to the next state.

So your settings look pretty sound. It's just that 10MB is way too low.
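As a toy illustration of the rolling order described above (purely illustrative Python - real splunkd logic considers many more settings and bucket states), the "oldest warm bucket rolls to cold until homePath.maxDataSizeMB is satisfied" rule can be modeled like this:

```python
from collections import deque

def roll_buckets(bucket_sizes_mb, home_max_mb):
    """Toy model: roll the oldest warm buckets to cold until the
    hot/warm total fits under homePath.maxDataSizeMB."""
    warm = deque(bucket_sizes_mb)   # oldest bucket first
    cold = []
    while warm and sum(warm) > home_max_mb:
        cold.append(warm.popleft()) # oldest warm bucket rolls to cold
    return list(warm), cold

# Four 4MB buckets against a 10MB homePath limit:
warm, cold = roll_buckets([4, 4, 4, 4], 10)
print(warm, cold)  # [4, 4] [4, 4]
```

Note that with a 10MB limit and typical bucket sizes, a single bucket can exceed the whole limit, which is why almost everything ends up rolling to cold in the original question.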
Thank you for replying. Yes, the client is using HTTP/1.1 when sending the HTTP POSTs. This was verified in the packet capture.
Are your clients sending proper HTTP/1.1? Splunk should support keep-alive out of the box.
Our apps send data to the Splunk HEC via HTTP POSTs. The apps are configured to use a connection pool, but after they send data, the Splunk server responds with a status 200 and a "Connection: Close" header. This instructs our apps to close their connection instead of reusing it. How can I stop this behavior? Right now the pool is re-creating connections thousands of times instead of just reusing the same connection.
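To make the symptom concrete, here is a minimal, self-contained Python sketch - this is NOT Splunk, just a local stand-in server that reproduces the "Connection: close" behavior, showing why the client-side pooled connection cannot be reused afterwards:

```python
import http.client
import http.server
import threading

class CloseHandler(http.server.BaseHTTPRequestHandler):
    """Answers every POST with 200 plus "Connection: close"."""
    protocol_version = "HTTP/1.1"

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)                   # drain the request body
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Connection", "close")   # forces clients to reconnect
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                 # keep the demo quiet
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), CloseHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("POST", "/services/collector/event", body=b'{"event":"x"}')
resp = conn.getresponse()
resp.read()

# Because the server answered "Connection: close", http.client tears the
# socket down - the connection object cannot be reused without reconnecting.
print(resp.getheader("Connection"), conn.sock)  # close None
server.shutdown()
```

If the server omitted the "Connection: close" header, `conn.sock` would stay open and the next `conn.request(...)` would reuse the same TCP connection, which is what a pooled HEC client wants.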
Thank you for your reply. Maybe I am not understanding. I arbitrarily used "10 MB" as the limit so that I could quickly test this concept without repeatedly indexing large amounts of logs. I'm not an expert on how all of this works, but from your response I get the impression that "10 MB" was probably too small a setting to experiment with. Just to clarify: per index, I want X GB stored on SSD. Once that X GB is reached, older data should begin rolling over to HDD (cold) storage. That is what I'm trying to accomplish. This way, only 'younger' data will be stored on the expensive SSD disks. Thank you!
And I will give you another approach.

| makeresults format=csv data="field1,field2,field3,field4,field5,field6,field7,field8,field9,field10,field11
sys1,2,a,v,4,65,2,dd,2,f,44
sys2,2,b,v,4,55,2,dd,2,f,44"
| appendpipe
    [ | stats dc(*) as *
      | eval field1="count"]
| transpose 0 header_field=field1

Now you can decide whether to include only those rows where you have one or two different values. You can play with this to account for more rows and such. And I'm assuming your sys1,sys2 names can be dynamic. Otherwise your solution would be as simple as | where sys1=sys2 (or NOT)
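The same idea - count distinct values per field and keep only the fields where the two systems disagree - can be sketched outside SPL as well. This is a hypothetical Python stand-in mirroring the sys1/sys2 rows above, just to show the logic behind dc(*):

```python
# Two "events" as dicts, mirroring a subset of the sys1/sys2 rows above.
rows = [
    {"field1": "sys1", "field2": "2", "field3": "a", "field6": "65"},
    {"field1": "sys2", "field2": "2", "field3": "b", "field6": "55"},
]

# dc(*) as *: distinct count of values per field across all rows.
distinct_counts = {key: len({row[key] for row in rows}) for key in rows[0]}

# Keep only the fields where the rows differ (dc > 1), minus the name column.
differing = [k for k, c in distinct_counts.items() if c > 1 and k != "field1"]
print(differing)  # ['field3', 'field6']
```

A distinct count of 1 means every system agrees on that field; anything greater flags a difference, which is exactly what the transpose in the SPL version lets you eyeball.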
Hi @markkasaboski, Thanks, it's working for me. In my case, I edited a small piece of code in the Add-On and re-packaged it.
What do you mean "all buckets are rolled to cold"? When you have a 10MB limit for hot/warm storage how many buckets do you expect?
Hi @Denise.Perrotta, Thank you for sharing the answer. 
Hello. I'm setting up a new Splunk Enterprise environment - just a single indexer with forwarders. There are two volumes on the server: one is on SSD for hot/warm buckets, and the other is HDD for cold buckets. I'm trying to configure Splunk such that an index ("test-index") will only consume, say, 10 MB of the SSD volume. After it hits that threshold, the oldest hot/warm bucket should roll over to the slower HDD volume. I've done various tests, but when the index's 10 MB SSD threshold is reached, all of the buckets are rolled over to cold storage, leaving the SSD empty. Here is how indexes.conf is set now:

[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10

When the 10 MB threshold is reached, why is everything in hot/warm rolling over to cold storage? I had expected 10 MB of data to remain in hot/warm, with only the older buckets rolling over to cold. I've poked around and found other articles related to maxDataSizeMB, but those questions do not align with what I'm experiencing. Any guidance is appreciated. Thank you!
This was the fix we were looking for. I ended up using Group Policy Preferences to add NT SERVICE\SplunkForwarder to the Event Log Readers group instead of using Restricted Groups (defining members in Restricted Groups will remove existing group members that are not listed, so be cautious).
I have only demonstrated changing a simple string, but you can replace it with a complex string for your base search.

<form version="1.1" theme="light">
  <search id="base_search">
    <query>| makeresults | eval testfield="$tok_searchfieldvalue$" | fields *</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <label>Answers - Classic - Token to Select Base Search</label>
  <fieldset submitButton="true" autoRun="false">
    <input type="radio" token="tok_searchfieldvalue" searchWhenChanged="false">
      <label>Base Query</label>
      <choice value="field_value_1">Field Value 1</choice>
      <choice value="field_value_2">Field Value 2</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>search_field_value_change</html>
    </panel>
    <panel>
      <html>$tok_searchfieldvalue$</html>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <search base="base_search"></search>
      </table>
    </panel>
  </row>
</form>
The transpose command has an int argument which defaults to 5 - this is why I have used zero (0) in my suggested solution. https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Transpose  
Just use key::value as your search term, like

index=something somekey::somevalue

You can also check whether fields are indexed with (an example looking for Protocol):

| walklex index=your_index type=all
| search term=" Protocol::*"
| table term

(You need to give it quite a big time range.)
There is no obvious answer. It might indeed require calling support.
When you change the "Wrap results" option, it switches between this CSS

.results-table .wrapped-results td,
.results-table .wrapped-results th {
    white-space: pre-wrap;
}

and the "nowrap" value. Instead of "nowrap" it should probably use "pre", so that spaces are preserved consistently (or a collapsing value in both cases). The mystery of collapsing spaces in the source view which I showed is still present, however.
Yep, it makes sense now. It worked, thanks!
It shouldn't matter. Here is a run-anywhere example with multiple events containing different numbers of URLs:

| makeresults
| eval _raw="here are some url and http://firsturl.com and some text again url http://www.secondurl.com and again some text URL http://www.third.com/"
| append
    [| makeresults
     | eval _raw="here are some url and http://fourth.com and some text again url"]
| append
    [| makeresults
     | eval _raw="here are some url and http://fifth.com and some text again url and some text again url http://www.sixth.com and again some text http://www.seventh.com and some http://www.moreandmore.com"]
| rex max_match=0 "\b(?<domain2>(?:https?://|www\.)(?:[0-9a-z-]+\.)+[a-z]{2,63})/?"
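Since rex with max_match=0 is essentially a find-all, the same pattern can be tried outside Splunk to see what it captures. A small Python sketch (same regex idea; the named group is dropped because findall returns whole matches here):

```python
import re

# Same pattern shape as the rex above; max_match=0 corresponds to findall.
pattern = re.compile(r"\b(?:https?://|www\.)(?:[0-9a-z-]+\.)+[a-z]{2,63}/?")

raw = ("here are some url and http://firsturl.com and some text again url "
       "http://www.secondurl.com and again some text URL http://www.third.com/")

domains = pattern.findall(raw)
print(domains)
# ['http://firsturl.com', 'http://www.secondurl.com', 'http://www.third.com/']
```

Each event yields as many matches as it contains URLs, so a variable number of domains per event is not a problem - in SPL, max_match=0 puts them all into a multivalue field.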
It's a good idea, but in my case I don't know what domains will be in the _raw, so I can't predict the list. Some events have one domain and it is captured, but the next event has 5, the next will have 12, and each event has different domains in the raw.