All Posts


Hi @gcusello  Running not in realtime it works fine. I'm starting to think the realtime search isn't the best solution. If I set the search time to "all time" and use | head 60 to get the latest 60 samples it does what I'm after
Yes, events have arrived, but if I check the graph for the last 15 minutes, a few events are missing in the last 5 minutes. Is there any solution for this?
Hi @dataisbeautiful, what happens when you run the search not in real time, with the same time window? Do you get events? In general I don't like real-time searches, because every Splunk search takes a CPU and releases it when it finishes, but a real-time search never finishes; so, if many users run one or more real-time searches, you could kill your system. Maybe you could use a scheduled report (running e.g. every 5 minutes) and access it in a dashboard (using loadjob), which would also solve your issue. Ciao. Giuseppe
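A minimal sketch of the scheduled-report approach described above, assuming a saved report scheduled every 5 minutes; the report name my_scheduled_report, the app name search, and the owner admin are placeholders:

```
| loadjob savedsearch="admin:search:my_scheduled_report"
```

A dashboard panel built on this base search reads the cached results of the last scheduled run instead of launching a new search, so it costs no extra CPU per viewer.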
I have shown you how to do this, with a run-anywhere example included. If this isn't working for you, you need to provide some example events (in raw source format) where it is not working, because what you have provided so far has been shown to work.
The problem is that I need to count the sourcetype1 events and get the status, and combine this with the Username from sourcetype2. Either I get the correct count and Status but no Username, or I get the Username but the wrong count and Status.
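One common pattern for this kind of problem is to search both sourcetypes at once and aggregate over a shared field. A sketch only, assuming the two sourcetypes share a correlating field (session_id here is a placeholder, as are the index and field names):

```
index=main sourcetype=sourcetype1 OR sourcetype=sourcetype2
| stats count(eval(sourcetype=="sourcetype1")) AS event_count
        values(Status) AS Status
        values(Username) AS Username
        by session_id
```

The eval inside count() restricts the count to sourcetype1 events, while values() picks up Status and Username from whichever sourcetype carries them.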
In my Splunk instance, logs are sent to the central instance via a universal forwarder, and the deployment server has been enabled to distribute the different configurations to the various clients. For parsing Windows logs, the Windows add-on is used, which also provides a specific sourcetype. The problem is that for Windows clients we are unable to filter authentication events by: - Status (success/logoff/logon failed), with EventCode 4624 = logon success, 4625 = failure, 4634 = logoff - Account Name: we want to keep only the logs whose Account Name contains a certain substring, matched with a regex (defined within the same whitelist that contains the event filter for the EventCodes above). At present, events reach the master instance filtered only by EventCode, rather than by EventCode and the substring contained in the Account Name field. Could you help me?
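For WinEventLog inputs, the advanced whitelist format in inputs.conf allows several key=regex pairs on one line, which are ANDed together. A sketch only, assuming the Security channel and an example substring svc_ (the stanza, the substring, and the exact Message pattern are placeholders to be verified against your raw events; the account name usually has to be matched through the Message key, since the supported filter keys are a fixed set):

```
[WinEventLog://Security]
# Keep only logon success/failure/logoff events whose message
# contains an account name with the substring "svc_" (example value)
whitelist1 = EventCode=%^(4624|4625|4634)$% Message=%Account Name:\s+\S*svc_\S*%
```

This filters at the forwarder, so the events that do not match never leave the client.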
I do not know how to write a search to get the output that I stated. That is what I'm looking for: a way to present the information that way.
It might depend on the number of events, and it is often an estimate, not a precise value. See: Aggregate functions - Splunk Documentation
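If the approximation matters, Splunk also offers exact percentile functions that can be compared against the approximate ones. A sketch only (the field name duration is a placeholder):

```
... | stats perc95(duration) AS p95_approx exactperc95(duration) AS p95_exact
```

exactperc is slower and more memory-hungry on high-cardinality fields, but it returns the true percentile rather than an estimate.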
The values we are seeing are all under a single correlationId, so I want to display them like this:

correlationID   BatchId   RequestID       Status
125dfe5         1 2 3     117 112 1156    Success Success Success
32435sf53       1 2       324 536 643     Success Success
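A common way to get one row per correlationID with the related values stacked as multivalue fields is stats with list(). A sketch only; the index and sourcetype are placeholders, and the field names are taken from the desired output above:

```
index=main sourcetype=your_sourcetype
| stats list(BatchId) AS BatchId
        list(RequestID) AS RequestID
        list(Status) AS Status
        by correlationID
```

list() keeps the values in event order (and keeps duplicates), which is usually what you want when BatchId, RequestID, and Status must stay aligned row by row.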
Search for the events after they have arrived in Splunk
Your question is still like asking "How can I build a car?". With this kind of information, no one outside of your organisation, who doesn't know the installations and how they are deployed, can answer you correctly. I propose that if you cannot move forward with the Splunk documentation, you should find a local Splunk partner or use Splunk Professional Services to go through this case with you. You could start with this: https://lantern.splunk.com/Splunk_Platform/Getting_Started
@architkhanna  If possible, provide inputs.conf and outputs.conf from the source side (UF). Maybe your log files are rotating and Splunk is detecting the copy as a new log file to index. Please check: whether you are using the crcSalt option. If you are using "crcSalt = <SOURCE>" with rotated logs, this could also cause duplicates, because the rotated file may stay in the same directory under a different name. Check the rotation of your files: make sure the first lines are not modified during the process. Symlinks: verify that multiple symlinks are not pointing to the same file/folder.
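A sketch of the kind of monitor stanza being described, to illustrate why crcSalt interacts badly with rotation (the path, index, and sourcetype are placeholders):

```
[monitor:///var/log/app/app.log*]
index = main
sourcetype = app_logs
# crcSalt = <SOURCE> mixes the file path into the CRC Splunk uses to
# recognise files it has already seen, so a rotated copy
# (app.log -> app.log.1) looks like a brand-new file and is re-indexed.
# With rotated logs, leave crcSalt unset unless you really need it.
```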
@kiran_panchavat  These are present at server level on the Indexers.

inputs.conf:

[default]
host = 10.100.5.5

[splunktcp://9997]
disabled = 0

outputs.conf:

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker)
forwardedindex.filter.disable = false
indexAndForward = false
blockOnCloning = true
compressed = false
disabled = false
dropClonedEventsOnQueueFull = 5
dropEventsOnQueueFull = -1
heartbeatFrequency = 30
maxFailuresPerInterval = 2
secsInFailureInterval = 1
maxConnectionsPerIndexer = 2
forceTimebasedAutoLB = false
sendCookedData = true
connectionTimeout = 20
readTimeout = 300
writeTimeout = 300
tcpSendBufSz = 0
ackTimeoutOnShutdown = 30
useACK = false
blockWarnThreshold = 100
sslQuietShutdown = false
useClientSSLCompression = true
autoLBVolume = 0
maxQueueSize = auto
connectionTTL = 0
autoLBFrequency = 30
sslVersions = tls1.2
cipherSuite = ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:AES256-GCM-SHA384:AES128-GCM-SHA256:AES128-SHA256:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-ECDSA-AES256-SHA384:ECDH-ECDSA-AES128-SHA256
ecdhCurves = prime256v1, secp384r1, secp521r1

[syslog]
type = udp
priority = <13>
maxEventSize = 1024

[rfs]
batchTimeout = 30
batchSizeThresholdKB = 2048
dropEventsOnUploadError = false
compression = zstd
compressionLevel = 3
Hi @gcusello  Thanks for the reply. The delay is outside Splunk; it's not something we can solve, unfortunately. I've tried adding earliest=rt-70s latest=rt-10s, but that returned no results, so I broadened the window to earliest=rt-300s latest=rt, but this also returned no results. Inspecting the job, the search ran but found no events.
@architkhanna Can you confirm how your inputs.conf and outputs.conf are configured?
@kiran_panchavat  This explains and confirms the issue that we do have multiple events in the index, but it does not explain the steps to fix this. Let me know if I'm missing something.
Thank you for your response. Please may I know what the solution would be?
Please share the search which is giving you these results
OK, now I understand what you mean. You could try creating a dashboard and scheduling that as a PDF delivery. IIRC this has to be Classic, not Studio.
Hi, I'm using the functions perc95 (p95) and perc99 (p99) to retrieve request duration/response time for requests from a server farm (frontend servers). As far as I have understood, these functions should give you the max value of a subset of values: in a hypothetical scenario where you have 100 requests during 1 second, p95 should take the 95 requests with the lowest response times and, out of those 95 requests, pick the highest response time as the p95 value. If the response times of those 95 requests were in the range 50 ms to 300 ms, the p95 value would then be 300 ms.

I've used searches with p95 and p99 and thought this was correct, but looking at the events, the output of both p95 and p99 does not make sense: the "300 ms"-style value cannot be found among the events, and very often I cannot find any value close to it at all. Could anyone enlighten me about the output I'm getting?

Example of search:

index=test host=server sourcetype=app_httpd_access AND "example"
| bin _time span=1s
| stats p99(A_1) as RT_p99_ms p95(A_1) as RT_p95_ms count by _time
| eval RT_p95_ms=round(RT_p95_ms/1000,2)
| eval RT_p99_ms=round(RT_p99_ms/1000,2)

p95 value output: 341.87 ms
Total number of values returned during 1 second for p95: 15
Response time output in ms (I was expecting the value 341.87 at the top here, but it's not present):
343.69, 330.675, 329.291, 301.369, 279.018, 246.719, 106.387, 103.216, 100.232, 44.794, 44.496, 42.491, 38.974, 38.336, 34.201
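One likely explanation: Splunk's perc functions compute an approximate percentile and may interpolate between observed values, so the result need not equal any single event's value (e.g. a figure between the two largest samples in a small set). To rule the approximation out, the approximate and exact variants can be compared side by side. A sketch based on the search above:

```
index=test host=server sourcetype=app_httpd_access "example"
| bin _time span=1s
| stats perc95(A_1) AS p95_approx exactperc95(A_1) AS p95_exact count BY _time
```

If p95_exact matches an observed value while p95_approx does not, the difference is the estimation/interpolation, not a bug in the data.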