All Posts


trim did not make any difference.
Hello,

I am receiving these errors and my HF is not working properly. I think it is something related to the SSL interception and the intermediate and root CA, but I have not been able to pin it down.

Root Cause(s): More than 70% of forwarding destinations have failed. Ensure your hosts and ports in outputs.conf are correct. Also ensure that the indexers are all running, and that any SSL certificates being used for forwarding are correct.

Last 50 related messages:

03-15-2024 08:14:15.748 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=34.216.133.150 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.530 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:15.296 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=44.231.134.204 port=9997 connid=0 _numberOfFailures=2
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=44.231.134.204:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=35.162.96.25:9997 connid=0
03-15-2024 08:14:14.425 -0400 INFO AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Removing quarantine from idx=34.216.133.150:9997 connid=0
03-15-2024 08:12:56.049 -0400 WARN AutoLoadBalancedConnectionStrategy [61817 TcpOutEloop] - Applying quarantine to ip=35.162.96.25 port=9997 connid=0 _numberOfFailures=2

This is my outputs.conf:

[tcpout]
defaultGroup = indexers

[tcpout:indexers]
server = inputs1.tenant.splunkcloud.com:9997, inputs2.tenant.splunkcloud.com:9997, inputs3.tenant.splunkcloud.com:9997, inputs4.tenant.splunkcloud.com:9997, inputs5.tenant.splunkcloud.com:9997, inputs6.tenant.splunkcloud.com:9997, inputs7.tenant.splunkcloud.com:9997, inputs8.tenant.splunkcloud.com:9997, inputs9.tenant.splunkcloud.com:9997, inputs10.tenant.splunkcloud.com:9997, inputs11.tenant.splunkcloud.com:9997, inputs12.tenant.splunkcloud.com:9997, inputs13.tenant.splunkcloud.com:9997, inputs14.tenant.splunkcloud.com:9997, inputs15.tenant.splunkcloud.com:9997
forceTimebasedAutoLB = true
autoLBFrequency = 40
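In case it is useful, this is a minimal sketch of what I understand the SSL side of outputs.conf would need to look like if the forwarder has to trust the intercepting proxy's CA chain (the file path is hypothetical, and I have not confirmed these are the right settings for Splunk Cloud):

    [tcpout:indexers]
    useSSL = true
    # hypothetical path: PEM file containing the proxy's intermediate and root CA
    sslRootCAPath = /opt/splunk/etc/auth/mycerts/proxy_ca_chain.pem
    sslVerifyServerCert = true

I am also checking which certificate chain the HF actually sees with openssl s_client -connect inputs1.tenant.splunkcloud.com:9997 -showcerts.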
While the commonId fields look like they might match, they evidently don't. This could be due to "invisible" whitespace characters. Try trimming the commonId field before the stats command.
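For example, something like this, as a sketch (I am assuming your search ends in a stats by commonId and that Status is one of the fields you aggregate):

    ... your base search ...
    | eval commonId=trim(commonId)
    | stats values(Status) as Status count by commonId

trim() removes leading and trailing spaces; if the "invisible" characters are something else (e.g. non-breaking spaces), you may need replace() with a regex instead.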
The short answer is probably no. However, it may depend on your data, your applications doing the logging, your infrastructure, your networking, etc. None of this information is available to me. If there are delays built into any of these, there may be ways to work around them.
The raw data that I have provided is what the two log events look like. But when I run your search, I do not get all the data. This is what the result looks like:
Hi @gcusello  Running it not in real time works fine. I'm starting to think the real-time search isn't the best solution. If I set the search time to "All time" and use | head 60 to get the latest 60 samples, it does what I'm after.
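For reference, this is roughly what I am running now (index and sourcetype are placeholders for my real ones):

    index=my_index sourcetype=my_sourcetype
    | head 60

As far as I understand, | head 60 only picks the latest 60 events because event searches return results in reverse time order by default.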
Yes, the events have arrived, but if I check the graph for the last 15 minutes, a few events are missing from the last 5 minutes. Is there any solution for this?
Hi @dataisbeautiful, what happens when you run the search not in real time, with the same time window? Do you get events?

In general I don't like real-time searches, because every Splunk search takes a CPU core and releases it when finished, but a real-time search never finishes; so, if many users run one or more real-time searches, you could kill your system.

Maybe you could use a scheduled report (running e.g. every 5 minutes) and access it in a dashboard (using loadjob), solving in this way also your issue.

Ciao. Giuseppe
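For example (the report name is only an example; use the owner, app and name of your scheduled report):

    | loadjob savedsearch="nobody:search:my_report_every_5_min"

The dashboard panel then reads the cached results of the last scheduled run instead of starting a new search.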
I have shown you how to do this, with a runanywhere example included. If this isn't working for you, you need to provide some example events (in raw source format) where it is not working, because what you have provided so far has been shown to work.
The problem is that I need to count the sourcetype1 events and get the status, and combine this with the Username from sourcetype2. Either I get the correct count and Status but no username, or I get the username but the wrong count and status.
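For context, this is the kind of search I have been sketching (commonId is the field I assume is shared by the two sourcetypes; I am not sure this approach is correct):

    (sourcetype=sourcetype1) OR (sourcetype=sourcetype2)
    | stats count(eval(sourcetype=="sourcetype1")) as count
        values(eval(if(sourcetype=="sourcetype1", Status, null()))) as Status
        values(eval(if(sourcetype=="sourcetype2", Username, null()))) as Username
        by commonId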
In my Splunk instance, logs are sent to the central instance via a universal forwarder, and the deployment server has been enabled to distribute the different configurations to the various clients. For parsing Windows logs, the Windows add-on is used, which also provides a specific sourcetype. The problem is that for Windows clients we are unable to filter authentication events by:
- Status (success/logoff/logon failure), with EventCode: 4624 = logon success, 4625 = logon failure, 4634 = logoff
- Account name: we want to filter the logs that contain a certain substring in the account name with a regex (always defining it within the whitelist that contains the event filter for the various EventCodes indicated above).
At present, events reach the master instance filtered only by EventCode rather than by EventCode and the substring contained in the account name field. Could you help me?
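For reference, this is the kind of inputs.conf filter I have been attempting on the deployment server (a sketch only: "admin" is an example substring, and I am not certain that matching the account name through the Message key is the right approach):

    [WinEventLog://Security]
    disabled = 0
    # multiple key=regex pairs on one whitelist line are ANDed together
    whitelist1 = EventCode=%^(4624|4625|4634)$% Message=%Account Name:\s+\w*admin\w*%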
I do not know how to write a search to get the output that I stated. That is what I'm looking for: a way to present the information that way.
It might depend on the number of events, and it is often an estimate, not a precise value. See: Aggregate functions - Splunk Documentation
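For example, you can compare the exact and the approximate distinct count on your own data (clientip is just an example field):

    ... | stats dc(clientip) as exact_count estdc(clientip) as estimated_count

estdc trades a small error margin for much lower memory use on high-cardinality fields, which is why the two results can differ.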
The values you are seeing are all under a single correlationId, so I want to display it like this (multivalue cells per correlationID):

correlationID   BatchId   RequestID       Status
125dfe5         1 2 3     117 112 1156    Success Success Success
32435sf53       1 2       324 536 643     Success Success
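Something like this is what I have been trying to get that layout (a sketch; BatchId, RequestID and Status are the extracted field names I am assuming):

    ... | stats list(BatchId) as BatchId list(RequestID) as RequestID list(Status) as Status by correlationID

list() keeps the values in event order, giving one row per correlationID with multivalue cells.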
Search for the events after they have arrived in Splunk.
Your question is still like "How do I build a car?". With this kind of information, no one outside of your organisation who knows the installations and how they are deployed can answer you correctly. I propose that if you cannot make progress with the Splunk documentation, you should find a local Splunk partner or use Splunk Professional Services to go through this case with you. You could start with this: https://lantern.splunk.com/Splunk_Platform/Getting_Started
@architkhanna  If it is possible, provide inputs.conf and outputs.conf from the source side (UF). Maybe your log files are rotating and Splunk is detecting the copy as a new log file to index. Please check:
- whether you are using the crcSalt option: if you are using "crcSalt=<SOURCE>" with rotated logs, this can cause duplicates, because the rotated file may stay in the same directory under a different name;
- the rotation of your files: make sure the first lines are not modified during the process;
- symlinks: verify that multiple symlinks are not pointing to the same file/folder.
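For comparison, a plain monitor stanza for rotated logs would look something like this (path and names are examples only):

    [monitor:///var/log/myapp/app.log*]
    index = main
    sourcetype = myapp
    # no crcSalt = <SOURCE> here: rotation renames the file, and with
    # crcSalt = <SOURCE> the changed source path would make Splunk treat
    # the same content as a brand new file and index it again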
@kiran_panchavat  These are present at the server level on the Indexers.

inputs.conf:

[default]
host = 10.100.5.5

[splunktcp://9997]
disabled = 0

outputs.conf:

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)
forwardedindex.filter.disable = false
indexAndForward = false
blockOnCloning = true
compressed = false
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
heartbeatFrequency = 30
maxFailuresPerInterval = 2
secsInFailureInterval = 1
maxConnectionsPerIndexer = 2
forceTimebasedAutoLB = false
sendCookedData = true
connectionTimeout = 20
readTimeout = 300
writeTimeout = 300
tcpSendBufSz = 0
ackTimeoutOnShutdown = 30
useACK = false
blockWarnThreshold = 100
sslQuietShutdown = false
useClientSSLCompression = true
autoLBVolume = 0
maxQueueSize = auto
connectionTTL = 0
autoLBFrequency = 30
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1

[syslog]
type = udp
priority = <13>
maxEventSize = 1024

[rfs]
batchTimeout = 30
batchSizeThresholdKB = 2048
dropEventsOnUploadError = false
compression = zstd
compressionLevel = 3
Hi @gcusello  Thanks for the reply. The delay is outside Splunk; it's not something we can solve, unfortunately. I've tried adding earliest=rt-70s latest=rt-10s but that returned no results, so I broadened the window to earliest=rt-300s latest=rt, but this also returned no results. Inspecting the job, the search ran but found no events.
@architkhanna Can you confirm how your inputs.conf and outputs.conf are configured?