All Topics

Hello, we have a question about a specific data-forwarding use case, and we would like to know whether there is any risk with this situation.

Say we have two Splunk platforms: one has a set of indexers (indexer1) that stores data for a specific use case, and the other platform also has its own indexers (indexer2) for another use case. At some point, these two platforms will have to collect data from the same machines (not necessarily the same data, but it could be). So there is a forwarder on a remote machine, and suppose we have to collect the same file and forward it to both platforms. We are using the data cloning technique, for example in inputs.conf:

[monitor://file_path]
index = …
sourcetype = ...
_TCP_ROUTING = indexer1,indexer2

Or in outputs.conf:

[tcpout]
defaultGroup = indexer1,indexer2

Or by using props/transforms to alter the routing of events on an intermediate heavy forwarder (please do tell if any of these methods behaves significantly differently in this situation).

Now the question: in general, is there a risk that one of the two platforms will stop receiving events if the other is down, depending on the configuration of the indexers/forwarders? We have heard of the concept of indexer acknowledgment, and we are not sure whether it has any impact here. For example, if the group indexer1 is configured with acknowledgment enabled, is there any risk that the group indexer2 won't receive data while indexer1 is not acknowledging receipt?

This topic is a little confusing for us. We have heard claims that data forwarding can be blocked if another platform needs to receive the data and one of the indexers is down, but that doesn't seem right. We just want to clarify whether, with the setup described above, there would be any issue.
Thank you very much for your help.

Update: We learned about the outputs.conf parameters to consider when configuring queue behavior:

- dropEventsOnQueueFull
- dropClonedEventsOnQueueFull
- blockOnCloning

It is indeed possible to block data collection on a Splunk instance (in a data cloning configuration) when not paying attention to these parameters. In the small test we ran, the default value of dropClonedEventsOnQueueFull kept data collection from blocking. However, we also have to watch out for dropEventsOnQueueFull, which at its default value can cause data-forwarding issues when a Splunk instance is unavailable. Whether that matters also depends on whether your deployment can accept data loss. Very useful parameters to know about.
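For reference, here is a minimal outputs.conf sketch of the cloning setup with those parameters (group names, host names, and values are illustrative assumptions, not recommendations):

```
[tcpout]
defaultGroup = indexer1,indexer2

[tcpout:indexer1]
server = idx1.example.com:9997
useACK = true
# drop this group's cloned copy if its queue stays full for 300 seconds,
# instead of stalling the whole forwarding pipeline
dropClonedEventsOnQueueFull = 300

[tcpout:indexer2]
server = idx2.example.com:9997
dropClonedEventsOnQueueFull = 300
```

With dropClonedEventsOnQueueFull set like this, an outage of one group should not block the other, at the cost of losing the dropped copies.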
I wonder whether Splunk's internal logs contain any information such as the expiry of the services in a Splunk add-on. Thanks.
Is there a way to monitor AS/400 using an AppDynamics agent? I need to monitor AS/400, but I haven't found a way to do it with AppDynamics. How can I monitor it?
Does anyone have any suggestions for getting support to show some sense of urgency? We're a new customer (4 months), and I've learned to send my diag, the conf file, etc. up front. Before, I would get a response (1 day after submission): need diag. OK, I send the diag, another day passes by, then they ask for something else. There is also no way to provide feedback on how the ticket was handled. Even after providing diags and conf files, it is still a multi-day process of asking basic questions. I'm used to support from companies like CommVault: 4 hours, someone is on the phone, done!

I get that they want people to use PS (a support engineer actually said he was forwarding to PS!), but the specific issue I am asking about is not a walkthrough, just a question as to why a specific stanza for printmon (job) is not working. Yes, I guess I could post here, but for the amount of cash we paid, I would think that support would be... adequate.

The product is AMAZING... having so much fun. The docs can be a bit obtuse, sure. But overall, this product is like nothing I've ever seen, just amazing! I wish their support services matched their product offerings.
Hi everyone, does anyone know how the drill-downs in Monitoring Console health check items are supposed to work? I am creating some custom health checks and saw that there is a "drill-down" field there. I put a custom search in that field, expecting that whenever I click on the health check result I would be presented with a new screen showing the drill-down. Any ideas? The documentation was very poor, or at least confusing.
I am looking for the steps to execute a Perl script in Splunk. Please provide detailed steps if possible.
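In case it helps: the usual mechanism is a scripted input, where Splunk runs the script on a schedule and indexes whatever it prints to stdout. A hedged inputs.conf sketch (the app name, path, interval, and sourcetype below are hypothetical; the script must be executable and carry a valid shebang so the OS can find Perl):

```
[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect.pl]
interval = 300
sourcetype = perl:script
index = main
disabled = 0
```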
To make things easier, I'd like to include REST JSON from an external tool of ours in one of our Splunk dashboards. I looked a bit at the REST API Modular Input add-on, but the data I want to present might change over time. Any other interesting ways to solve something like that?
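One common alternative is a small scripted input that polls the endpoint yourself and prints one JSON event per record; Splunk indexes stdout, and the dashboard then searches the index instead of calling the tool live. A minimal Python sketch (the URL is hypothetical and error handling is omitted):

```python
import json
import urllib.request

def to_events(payload):
    """Normalize a REST response into one JSON string per event;
    accepts either a top-level list or a single object."""
    records = payload if isinstance(payload, list) else [payload]
    return [json.dumps(rec, sort_keys=True) for rec in records]

def main(url):
    # Splunk indexes whatever a scripted input writes to stdout
    with urllib.request.urlopen(url, timeout=30) as resp:
        payload = json.load(resp)
    for line in to_events(payload):
        print(line)

# usage (hypothetical endpoint): main("https://tools.example.com/api/status")
```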
Hi folks, we plan to upgrade our Splunk 7.2.4 to 8.0.3. 7.2.4 was installed using the rpm package, and we are thinking of upgrading using the tarball. Since the earlier version was installed via rpm, the OS still lists the old package for the command "rpm -qa | grep splunk". Does this really matter, and can I just remove that package with rpm -e after the upgrade? Or should I go with the rpm upgrade only? Could you please explain the difference between rpm and tar? Any help is highly appreciated. Thanks, Pramodh
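For what it's worth, the tarball-over-rpm approach is commonly described roughly like this (treat it as a sketch, not a procedure: back up $SPLUNK_HOME first, and the tarball filename below is hypothetical):

```
# stop Splunk before touching the installation
/opt/splunk/bin/splunk stop

# extract the new version over the existing /opt/splunk install
tar -xzf splunk-8.0.3-xxxxx-Linux-x86_64.tgz -C /opt

/opt/splunk/bin/splunk start --accept-license

# optionally remove the stale rpm database entry without deleting any files
rpm -e --justdb --nodeps splunk
```

The difference between the two is mostly the package manager's bookkeeping: rpm tracks the files it installed, while a tarball extraction is invisible to rpm, which is why the old package still shows up in "rpm -qa".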
As the Alert Manager Add-on was updated only a few days ago and is compatible with Splunk Cloud 8.0.x, I was wondering whether this app is known to be updated for compatibility before it becomes obsolete on August 1st.
When I search, or after running a saved search, error messages are sometimes displayed even though the activity log shows the searches completed: "Dispatch Command: Unknown error for indexer: xxxx. Search Results might be incomplete! If this occurs frequently, please check on the peer." I want to find out how frequently these messages are generated. Please let me know how to search for them. I can't find them with searches like the ones below:

index=_internal "dispatch"
index=_internal "incomplete"
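A search along these lines may help count the occurrences; searching for a distinctive phrase from the message is usually more reliable than a generic keyword (the log_level filter is an assumption about how this message is logged in your version):

```
index=_internal sourcetype=splunkd log_level=WARN OR log_level=ERROR "Unknown error for indexer"
| timechart span=1h count
```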
Date="8 May 2020" Link="X" Status="UP"
Date="9 May 2020" Link="Y" Status="DOWN"
Date="10 May 2020" Link="X" Status="UP"
Date="11 May 2020" Link="X" Status="DOWN"
Date="12 May 2020" Link="Y" Status="UP"

I am getting logs daily in the above format. I am looking to find each Link whose Status went DOWN but never came back up, and the date on which it went DOWN. Can someone please help with this? Thanks.
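A sketch of one way to do this in SPL, assuming the events carry those fields (the index name is a placeholder):

```
index=your_index
| eval _time=strptime(Date, "%d %b %Y")
| stats latest(Status) as last_status, latest(Date) as last_down_date by Link
| where last_status="DOWN"
```

This keeps only the Links whose most recent status is DOWN, i.e. those that went down and never came back up, together with the date of that last event.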
Hi all, I'm here to ask for some information about a setting I found on an existing Splunk index. In particular, this is the indexes.conf stanza for index A:

[A]
homePath = volume:primary/A/db
coldPath = volume:secondary/A/colddb
thawedPath = $SPLUNK_DB/A/thaweddb
homePath.maxDataSizeMB = 15360
coldPath.maxDataSizeMB = 30720
maxWarmDBCount = 4294967295
frozenTimePeriodInSecs = 7776000
maxDataSize = auto
coldToFrozenDir = /splunk/A/frozendb
archiver.enableDataArchive = 0
bucketRebuildMemoryHint = 0
compressRawdata = 1
enableDataIntegrityControl = 0
enableOnlineBucketRepair = 1
enableTsidxReduction = 0
maxTotalDataSizeMB = 102400
minHotIdleSecsBeforeForceRoll = 0
rtRouterQueueSize =
rtRouterThreads =
selfStorageThreads =
suspendHotRollByDeleteQuery = 0
syncMeta = 1
tsidxWritingLevel =
enableDataIntegrityControl = true

After checking bucket information via the monitoring console, I have the following questions:

1) Why is there a hot bucket for index A with startEpoch 16 December and endEpoch 31 December, with 375 MB on disk? Is it because it has hit neither the size nor the time limit (default maxHotSpanSecs = 90 days) required to roll to warm?

2) If my requirement is 6 months of retention for this index, how can I be sure the frozenTimePeriodInSecs parameter acts as expected?

3) I was thinking of setting maxHotSpanSecs to 1 day for hot-to-warm rolling, but what about rolling from warm to cold in a way that does not create any kind of problem when modifying the configuration over existing data?

Thanks in advance, everyone.
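On question 2, a sketch of what 6 months of retention would look like (values are illustrative; note that a bucket only freezes once its newest event is older than frozenTimePeriodInSecs, and the size cap can freeze buckets earlier if it is hit first):

```
[A]
# 180 days * 86400 seconds = 15552000
frozenTimePeriodInSecs = 15552000
# the size cap still applies and wins if reached first
maxTotalDataSizeMB = 102400
```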
I have a date like "May 10 2020 11:20 PM" in a CSV file. In props.conf I defined TIME_FORMAT = %b %d %Y %I:%M %p, but I am getting "Failed to parse timestamp". What am I missing?
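For what it's worth, the format string itself does match that timestamp. A quick sketch checking the same pattern with Python's strptime (these particular directives behave the same way there):

```python
from datetime import datetime

# the same strptime-style pattern used in the TIME_FORMAT setting
ts = datetime.strptime("May 10 2020 11:20 PM", "%b %d %Y %I:%M %p")
print(ts)  # 2020-05-10 23:20:00
```

Since the format is right, common causes to check (guesses, not a diagnosis) are a TIME_PREFIX that doesn't match the timestamp's position in the line, a MAX_TIMESTAMP_LOOKAHEAD that is too short, or, for CSV files using INDEXED_EXTRACTIONS = csv, a missing TIMESTAMP_FIELDS setting.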
I am using Splunk Cloud, and I want to use curl to send data via HEC without an insecure connection (i.e., without curl's -k flag). What is the process for that, and what do I need to do?
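For reference, the usual shape of a non-insecure HEC call is to send the token in the Authorization header over HTTPS and let curl validate the server certificate against a CA bundle that covers it (the host, token, and certificate path below are placeholders):

```
curl "https://http-inputs-<stack>.splunkcloud.com:443/services/collector/event" \
  --cacert /path/to/ca_bundle.pem \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello world", "sourcetype": "manual"}'
```

If curl only works with -k, the server certificate is not trusted by the bundle in use, so the fix is supplying the right CA chain rather than disabling verification.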
Hi Splunkers! I'm trying to frame a query that fetches the list of servers that connect to my deployment server but do not send any external or internal logs to it. My query for the host's last-accessed time using metadata works fine, but the criterion above is not working as expected: it fetches all the servers connecting to my deployment server. Thanks in advance!
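One pattern that may get you there is to take the client list from the deployment server's REST endpoint and subtract the hosts that have sent internal logs (this assumes it runs on or against the deployment server, and field names can differ by version):

```
| rest /services/deployment/server/clients splunk_server=local
| fields hostname
| rename hostname as host
| search NOT [| metadata type=hosts index=_internal | fields host]
```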
Hi, I want to configure Splunk HEC on a dedicated Splunk server. Please let me know the server hardware and software requirements for this.
We have a series of logs from different devices (firewall, WAF, antivirus, ...) that come from a syslog server to Splunk with the same host name. I want to separate the logs based on sourcetype. All logs have the same host name and source; is it possible to assign different sourcetypes?
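A sketch of the usual props/transforms approach: key on the incoming sourcetype, then rewrite the sourcetype per device using a regex that matches each device's log format (the stanza names and regexes below are hypothetical, and this must run where the data is parsed, i.e. on the indexer or a heavy forwarder):

```
# props.conf
[sourcetype_of_incoming_syslog]
TRANSFORMS-route_st = set_st_waf, set_st_firewall

# transforms.conf
[set_st_waf]
REGEX = pattern-unique-to-waf-events
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::waf

[set_st_firewall]
REGEX = pattern-unique-to-firewall-events
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::firewall
```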
We are using the HTTP Event Collector to receive data. When there are unwanted curly braces in a value, the event parses incorrectly. How can I extract the data when there are {} characters in the data?
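A common cause is hand-concatenated JSON on the sending side; if the payload is built with a JSON library instead, braces inside values are escaped correctly and parse cleanly. A small Python sketch of the idea:

```python
import json

# a value containing literal curly braces, including something that looks like JSON
message = 'value with {curly} braces and even {"nested": "json"}'

# let the JSON library build the HEC payload: it escapes the quotes,
# so braces inside the value cannot be mistaken for JSON structure
payload = json.dumps({"event": {"message": message}})

# the receiving side parses it back unambiguously
restored = json.loads(payload)["event"]["message"]
```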
index=juniper host="XXXXXXX" | stats count by user | stats count as Users

This query outputs the total number of users connected over the whole time period. I need help pulling a report for the last three months that shows week-wise user data.
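A sketch that may do it, using timechart with a one-week span and a distinct count so each user is counted once per week:

```
index=juniper host="XXXXXXX" earliest=-3mon@mon
| timechart span=1w dc(user) as Users
```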
I am trying to connect to a few MSSQL DB instances where Force Encryption is enabled (set to true). When I try to create a DB connection through Splunk DB Connect Version 20.02.3.0 Build 1, I get the error "I/O Error: DB server closed connection."

While researching this issue on the Splunk Answers portal, I went through a similar question titled "Splunk DB Connect 2: Why am I unable to connect to MSSQL with encryption?". But even though I set the connection parameters as mentioned in that link, it doesn't work. After following it, I get the error "This driver is not configured for integrated authentication. ClientConnectionId:bc701e78-0062-4bb6-af77-7b6d4238a340".

Here is the DB connection input I am using:

[DB_Connection]
connection_type = generic_mssql_with_windows_auth
jdbcURLFormat = jdbc:sqlserver://:/;useCursors=true;domain=;integratedSecurity=true;encrypt=true;trustServerCertificate=true
disabled = 0
identity =
localTimezoneConversionEnabled = false
port = 1600
readonly = true
database = master
host = abc.com

Can someone please assist me with this, as I am not able to find any other option to establish a connection to the encrypted DB instances?
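For what it's worth, the "not configured for integrated authentication" message comes from Microsoft's JDBC driver, which by default expects its native authentication DLL on Windows; on other platforms, NTLM can sometimes be requested explicitly in the URL instead. This is an assumption about the driver version DB Connect is loading, and the placeholders below are hypothetical:

```
jdbcUrlFormat = jdbc:sqlserver://<host>:<port>;databaseName=<database>;encrypt=true;trustServerCertificate=true;integratedSecurity=true;authenticationScheme=NTLM;domain=<domain>
```

authenticationScheme=NTLM only exists in newer versions of the Microsoft driver, so check which JDBC driver DB Connect is actually using before relying on it.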