All Posts

Dear Sir/Madam, We have installed the on-premises version of AppDynamics with various agents in an operational environment. We decided to update the controller (not the agents). During the controller update we encountered a problem and had to reinstall the controller, so the controller access key changed. Coordinating and updating the agents in the operational environment takes a long time, so we have not changed them. Following the instructions in the link 'Change Account Access Key', we changed the new account access key (for Customer1 in single-tenant mode) back to the old account access key, without changing any configuration on the agent side, including the access keys. Now every agent is OK (e.g., app agents, DB agents, etc.), but the database collectors do not work. Although the database agent is registered, we cannot add any database collector. I checked the controller log and found the following exception: "dbmon config ... doesn't exist". It seems the instructions in the link above are not sufficient for the database agent and collector, and some extra steps are needed. Thanks for your attention. Best regards.
Hello everyone. I ran into a problem with Splunk UBA that I need help with; thank you for guiding me. I have more than one domain in Splunk UBA, and it mistakenly recognizes some users as the same user due to name similarity. These users are not the same person; they only have similar values in the login ID field. How can I solve this so that users with the same login ID in different domains are kept separate and do not generate false-positive anomalies? Thank you for your guidance.
Hi, I am new to Splunk administration. We have a syslog server in our environment that collects logs from our network devices. Our clients asked us to install an LTM (Local Traffic Manager) load balancer in front of the syslog server. I have no idea what a load balancer does, how to install it, or whether it is a component of Splunk (full package or lightweight package). Please suggest how to set up this environment. Also, what is recommended for network logs: UDP or TCP? I want to learn end to end about the syslog server and its configuration with Splunk. Please provide the latest documentation link (I am not asking about an add-on).
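For context, one common pattern (a sketch only, not the only way to do this): have rsyslog or syslog-ng receive the network traffic and write it to per-host files, then let a Splunk universal forwarder on the syslog server monitor those files. A minimal inputs.conf sketch, assuming the daemon writes under /var/log/remote/<host>/ and that the index and sourcetype names below match your environment:

# inputs.conf on the universal forwarder installed on the syslog server.
# Path, index, and sourcetype are assumptions -- adjust to where your
# syslog daemon actually writes its files.
[monitor:///var/log/remote/*/*.log]
sourcetype = syslog
index = network
# Take the host name from the 4th path segment: /var/log/remote/<host>/...
host_segment = 4
disabled = 0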
@yuanliu You have not used either of the two attributes below. Can I also skip them? Not using them will not have any impact on the consistency of data parsing, right?

TIME_FORMAT
MAX_TIMESTAMP_LOOKAHEAD

Thanks in advance, and thank you for your valuable time.
Using your illustrated event as input, this is my test output. The displayed event time is 11/7/24 6:29:43.175 PM, which matches the EVENTTS value of 2024-11-07 18:29:43.175 and differs from the log's timestamp of 2024-11-07 18:45:00.035. This is my sourcetype entry:

[test-eventts]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = EVENTTS=\"
category = Custom
description = https://community.splunk.com/t5/Splunk-Search/Timestamp-fixing/m-p/703912#M238560
pulldown_type = 1

The sourcetype is created from defaults except for TIME_PREFIX. (Pro tip: Splunk's default timestamp detection is very versatile and often not worth overriding.)
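If you did want to pin the timestamp parsing explicitly rather than rely on auto-detection, a sketch of what that stanza might look like (the TIME_FORMAT string is my assumption, derived from the EVENTTS value shown above; 23 is the length of "2024-11-07 18:29:43.175"):

[test-eventts]
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = EVENTTS=\"
# %3N = milliseconds; lookahead limits how far past TIME_PREFIX Splunk scans
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
MAX_TIMESTAMP_LOOKAHEAD = 23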
If this is not available to us as "sc_users", can a Splunk engineer create a visualization in the CMC console? Or is there any alternative way of grabbing the health of your application on the search head? Please advise. Thank you.
To fix this issue, we had our client insert the "Connection: Keep-Alive" header into the HTTP POST requests. This instructed the Splunk server to keep the connection alive.
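For reference, a sketch of what the request looks like with the header in place (host, port, and token are placeholders; /services/collector/event is the standard HEC endpoint):

POST /services/collector/event HTTP/1.1
Host: splunk.example.com:8088
Authorization: Splunk <hec-token>
Connection: Keep-Alive
Content-Type: application/json
Content-Length: 25

{"event": "hello, world"}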
I have a dashboard in Splunk Cloud which uses a dropdown input to determine the index for all of the searches on the page, with a value like "A-suffix", "B-suffix", etc. However, now I want to add another search which uses a different index but has `WHERE "column"="A"`, with A being the same value selected in the dropdown, but without the suffix. I tried using eval to replace the suffix with an empty string, and I tried changing the dropdown to remove the suffix and do `index=$token$."-suffix"` in the other queries, but I can't get anything to work. It seems like I might be able to use `<eval token="token">` if I could edit the XML, but I can only find the JSON source in the web editor and don't know how to edit the XML with Dashboard Studio.
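One approach that may work without editing any XML, sketched with placeholder names (your token, index, and field names will differ): strip the suffix inside the search itself. Token substitution happens before the SPL runs, so replace() sees the literal selected value:

index=your_other_index
| eval selected = replace("$token$", "-suffix$", "")
| where column = selected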
Well, this says that Splunk should normally behave properly with HTTP/1.1: https://docs.splunk.com/Documentation/Splunk/latest/Data/TroubleshootHTTPEventCollector#Detect_scaling_problems

Another thing to consider:

forceHttp10 = [auto|never|always]
* Whether or not the REST HTTP server forces clients that connect to it to use the HTTP 1.0 specification for web communications.
* When set to "always", the REST HTTP server does not use some HTTP 1.1 features such as persistent connections or chunked transfer encoding.
* When set to "auto", it does this only if the client did not send a User-Agent header, or if the user agent is known to have bugs in its support of HTTP/1.1.
* When set to "never" it always allows HTTP 1.1, even to clients it suspects might be buggy.
* Default: auto
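If you wanted to rule this setting out explicitly, a sketch of the relevant server.conf stanza (assuming the setting lives under [httpServer], as in the spec file; verify against your version's server.conf.spec before applying):

[httpServer]
# Never downgrade clients to HTTP 1.0; allow persistent connections
forceHttp10 = never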
10MB is really a very low limit. If you have buckets smaller than that, Splunk will complain about small buckets. But it should work like this: Splunk creates a number of hot buckets for an index. If a hot bucket grows too big or sits idle too long, it gets rolled to warm. If there are too many warm buckets or homePath.maxDataSizeMB is exceeded, the oldest bucket (the one whose earliest event is oldest) is rolled to cold. When the latest event in a cold bucket gets older than the retention period, that bucket is rolled to frozen. Likewise, when the coldPath.maxDataSizeMB or maxTotalDataSizeMB limit is reached, the oldest bucket is rolled to frozen. At any time, if a volume size limit is exceeded, Splunk rolls the oldest bucket in the whole volume (not including hot buckets, as far as I remember) to the next state. So your settings look pretty sound. It's just that 10MB is way too low.
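To make the thresholds concrete, a sketch of an indexes.conf stanza with the lifecycle settings mentioned above (all values are illustrative, not recommendations):

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
# Keep roughly 100 GB of hot/warm data on SSD before rolling to cold
homePath.maxDataSizeMB = 100000
# Cap cold storage on HDD
coldPath.maxDataSizeMB = 1000000
# Freeze buckets whose newest event is older than 90 days (90 * 86400 s)
frozenTimePeriodInSecs = 7776000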
Thank you for replying. Yes, the client is using HTTP 1.1 when sending the HTTP POSTs; this was verified in the packet capture.
Are your clients sending proper HTTP/1.1? Splunk should support keep-alive out of the box.
Our apps send data to the Splunk HEC via HTTP POSTs. The apps are configured to use a connection pool, but after each POST the Splunk server responds with a Status 200 and a "Connection: Close" header. This instructs our apps to close their connection instead of reusing it. How can I stop this behavior? Right now each app is re-creating a connection thousands of times instead of just reusing the same connection.
Thank you for your reply. Maybe I am not understanding. I arbitrarily used "10 MB" as the limit so that I could quickly test this concept without repeatedly indexing large amounts of logs. I'm not an expert on how all of this works, but from your response I get the impression that "10 MB" was probably too small a setting to experiment with. Just to clarify: per index, I want X GB stored on SSD. Once that X GB is reached, older data should begin rolling over to HDD (cold) storage. That is what I'm trying to accomplish; this way, only 'younger' data will be stored on the expensive SSD disks. Thank you!
And I will give you another approach.

| makeresults format=csv data="field1,field2,field3,field4,field5,field6,field7,field8,field9,field10,field11
sys1,2,a,v,4,65,2,dd,2,f,44
sys2,2,b,v,4,55,2,dd,2,f,44"
| appendpipe
    [ | stats dc(*) as *
      | eval field1="count" ]
| transpose 0 header_field=field1

Now you can decide whether to include only those rows where you have one or two different values. You can play with this to account for more rows and such. And I'm assuming your sys1,sys2 names can be dynamic. Otherwise your solution would be as simple as | where sys1=sys2 (or NOT).
Hi @markkasaboski, Thanks, it's working for me. In my case, I had to edit a small piece of code in the add-on and re-package it.
What do you mean by "all buckets are rolled to cold"? When you have a 10MB limit for hot/warm storage, how many buckets do you expect?
Hi @Denise.Perrotta, Thank you for sharing the answer. 
Hello. I'm setting up a new Splunk Enterprise environment - just a single indexer with forwarders. There are two volumes on the server: one is on SSD for hot/warm buckets, and the other is HDD for cold buckets. I'm trying to configure Splunk such that an index ("test-index") will only consume, say, 10 MB of the SSD volume. After it hits that threshold, the oldest hot/warm bucket should roll over to the slower HDD volume. I've done various tests, but when the index's 10 MB SSD threshold is reached, all of the buckets are rolled over to cold storage, leaving the SSD empty. Here is how indexes.conf is set now:

[volume:hot_buckets]
path = /srv/ssd
maxVolumeDataSizeMB = 430000

[volume:cold_buckets]
path = /srv/hdd
maxVolumeDataSizeMB = 11000000

[test-index]
homePath = volume:hot_buckets/test-index/db
coldPath = volume:cold_buckets/test-index/colddb
thawedPath = /srv/hdd/test-index/thaweddb
homePath.maxDataSizeMB = 10

When the 10 MB threshold is reached, why is everything in hot/warm rolling over to cold storage? I had expected 10 MB of data to remain in hot/warm, with only the older buckets rolling over to cold. I've poked around and found other articles related to maxDataSizeMB, but those questions do not align with what I'm experiencing. Any guidance is appreciated. Thank you!
This was the fix we were looking for. I ended up using Group Policy Preferences to add NT SERVICE\SplunkForwarder to the Event Log Readers group instead of using Restricted Groups (defining members in Restricted Groups will remove any existing group members not listed, so be cautious).
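For a one-off on a single host (outside of Group Policy), a PowerShell sketch that should do the same thing - run elevated, and note that the virtual service account name assumes a default universal forwarder install:

# Add the forwarder's virtual service account to Event Log Readers
Add-LocalGroupMember -Group "Event Log Readers" -Member "NT SERVICE\SplunkForwarder"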