All Posts


>Can you elaborate what "7: SSL certificate requests" means?

Does it mean that if the receiver rotates to a new certificate, the clients will also pick up the new certificate? That would save manually restarting thousands of forwarders just to reload a certificate.
Thank you very much Bishida! We tried the AWS integration from Splunk's Data Management/Add Integration/AWS, which connected successfully. However, we noted that the ECS widget shown in your image only appears when there is an EC2-backed ECS cluster, which is not our use case: ours is a serverless (Fargate) cluster. Do you know if there is any way to poll this kind of cluster information? Thanks!
SSL cert files are reloaded, since the files themselves are essentially not part of outputs.conf. However, if you changed the cert path in outputs.conf, that change was not honored.
oneTimeClient=0 (regular connection to the destination)
_events.size()=20 (outstanding events/chunks still to be sent)
_refCount=2 (a value of 2 means useAck is enabled)
_waitingAckQ.size()=4 (outstanding events/chunks not yet acknowledged by the target)
Warningcount=20 (how many times this log has been logged for this connection)
Thanks @splunkmarroko. I tried that, however going about it that way returns the initial events with an "Active" status and does not take into account that the status has since changed from "Active" to "Resolved".
We have a 3-site multisite cluster architecture containing 6 indexers (2 per site), 3 search heads (1 per site), 1 deployment server, 2 cluster managers (active and standby), and a deployer. These instances reside in AWS, and we are on Splunk Enterprise 9.1.4.

Our requirement is that AWS logs will be pushed to our Splunk environment. The AWS team is pushing CloudWatch logs via Kinesis Data Streams with a Lambda configured (we have no visibility into that side). They asked our team to create an HEC token and give them the token and endpoint details so they can push logs to Splunk. They don't want the Amazon Kinesis Firehose add-on here; they prefer HEC only.

My doubt is: where do I configure this HEC token? I want the data received from AWS to be load balanced across all 6 indexers (considering it will be a huge volume). I have gone through the HEC docs but it is unclear. Can someone help me with a step-by-step procedure for how I can achieve this, e.g. where to create the HEC token and so on? Thank you.
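For what it's worth, once HEC is enabled (typically with the same token on every indexer, fronted by an AWS load balancer that the sender targets), pushing an event boils down to an HTTPS POST to the collector endpoint. A minimal sketch in Python; the host and token here are hypothetical placeholders, replace them with your own:

```python
import json
import urllib.request

# Hypothetical values -- substitute your own load-balancer hostname and HEC token.
HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"

def build_hec_event(event, index=None, sourcetype=None):
    """Build the JSON body for a single HEC event."""
    body = {"event": event}
    if index:
        body["index"] = index
    if sourcetype:
        body["sourcetype"] = sourcetype
    return body

def send_hec_event(event, index=None, sourcetype=None):
    """POST one event to the HEC endpoint and return the parsed response."""
    data = json.dumps(build_hec_event(event, index, sourcetype)).encode("utf-8")
    req = urllib.request.Request(
        HEC_URL,
        data=data,
        headers={"Authorization": f"Splunk {HEC_TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Note that spreading the load across the 6 indexers is then handled by the load balancer in front of them, not by HEC itself.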
try this: base search ```e.g. index=xyz sourcetype=abc``` | where status!="Resolved" ```if you already have the resolved/status field; if not, consider extracting that field```
Unfortunately neither of these were the cause. In the end I believe we're going to set up an intermediate VM with a UF to catch the logs from the Firepowers on udp514. Clunky but it appears to be the only option. I appreciate the help.
Thanks @livehybrid. This works, but it returns inaccurate results when the search is run with a real-time time range. This is an example of what I have:

| base_search here
| stats latest(status) as latest_status by incidentId
| where latest_status!="Resolved"
| stats count as total

The output is a count of active incidents to be displayed on a dashboard. Any pointers or tips on how to better achieve this would be appreciated. Cheers.
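To illustrate why a real-time window can skew this, here is a small Python simulation (not SPL) of the `latest(status) by incidentId` logic; the event data is made up:

```python
def count_active_incidents(events):
    """Mirror `stats latest(status) as latest_status by incidentId
    | where latest_status!="Resolved" | stats count as total`."""
    latest = {}  # incidentId -> (timestamp, status)
    for ts, incident_id, status in events:
        if incident_id not in latest or ts >= latest[incident_id][0]:
            latest[incident_id] = (ts, status)
    return sum(1 for _, status in latest.values() if status != "Resolved")

events = [
    (1, "INC1", "Active"),
    (2, "INC2", "Active"),
    (3, "INC1", "Resolved"),  # INC1 is resolved later
]

print(count_active_incidents(events))  # 1 over the full time range
# A narrow window that misses the later "Resolved" event overcounts:
print(count_active_incidents([e for e in events if e[0] <= 2]))  # 2
```

If the real-time window only covers part of an incident's history, `latest(status)` is computed over that partial view, so the count drifts. A scheduled search over a fixed range covering the full incident lifetime (or a lookup that tracks current state) tends to be more reliable for a dashboard counter.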
Hello, I have the same issue. How did you solve this problem? Regards, agonmu
Hi, I have a question on Netskope onboarding to Splunk.

I installed the TA-NetSkopeAppForSplunk (4.1.0) on Splunk Cloud and configured the API tokens provided by Netskope, and logs are flowing.

However, with the same add-on and tokens configured on Splunk Enterprise (an intermediate heavy forwarder), logs are not arriving. I tried multiple local Splunk Enterprise instances for testing, and still no logs. Any recommendations on what the issue could be with the Enterprise instances while it is working fine on Cloud?
Hi, I can confirm that the certificate is valid. We have used the same certificate on Splunk 9.3 and we don't get the TCPOutAutoLB-0 error. It only happens on 9.4.x.
There is a separate thread on Data Manager, cool. I was able to ingest CloudWatch logs for ECS and Lambda with Data Manager. Now I need to add tags like env=, service=, custom= to enrich the logs; the same was done for metrics with OTel collector flags and the UF.

For logs ingested with Data Manager, can I add an AWS resource tag to the CloudWatch log group I'm ingesting and expect this tag (key-value pair) to be added to the logs?

Another possible solution could be to use the Splunk log driver directly from ECS instead of CloudWatch. Then, according to the documentation, with the env flag of the Splunk log driver I should be able to add some container environment variables to the log message. The same question applies to the Lambdas, but there only the AWS resource tags on the CloudWatch log group could potentially be attached to the ingested messages. Any suggestions?
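On the Splunk log driver route: the driver's `env` option takes a comma-separated list of container environment variable names, and their values are attached as attributes on each log message. A sketch of the relevant container-definition fragment, with made-up URL, token, and variable names (in ECS the token is better passed via `secretOptions` than inline):

```json
{
  "logConfiguration": {
    "logDriver": "splunk",
    "options": {
      "splunk-url": "https://splunk.example.com:8088",
      "splunk-token": "00000000-0000-0000-0000-000000000000",
      "splunk-sourcetype": "ecs:container",
      "env": "DEPLOY_ENV,SERVICE_NAME"
    }
  },
  "environment": [
    { "name": "DEPLOY_ENV", "value": "prod" },
    { "name": "SERVICE_NAME", "value": "my-service" }
  ]
}
```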
Hi @dmitrynt

If you're concerned about hitting subsearch limits, then run your index= search first and append the tstats.

Note: the default limits for append are 10,000 *results* and a max 60-second execution time, but I would hope that your tstats runs faster than this and returns fewer than 10k results! This limit is based on *returned results*, not the number of events scanned, so applying stats (for example) inside an append can also help with these limits.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @Karthikeya, are you sure that you want to apply this extraction at index time? This means more work for the indexers, and whether that is acceptable usually depends on the volume of indexed logs: how many logs must you index daily, and in the peak period? Here you can find a comparison between the two modes and a description: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Indextimeversussearchtime  Ciao. Giuseppe
The error is gone, but I am still unable to download or upgrade the app. I am getting this error: Unexpected error downloading update: Connect Timeout
@hrawat wrote: As mentioned, before 9.2 outputs.conf was never reloadable (a no-op for _reload), thus no crashes/complications

We've used /services/data/outputs/tcp/server/_reload to successfully reload updated `clientCert` certificates and `server` hosts for years when using Splunk Enterprise 8.x and 9.0.x instances.
@hrawat wrote:

Protocol levels.
0: Maximum network traffic over S2S connection.
1: Network traffic optimization over S2S connection.
2: Additional network traffic optimization over S2S connection.
3: Metric support.
4: Ack support for rawless metric events.
5: Flag potential dup events.
6: Flag for cloned metric events so that cloned events are exempted from license usage.
7: SSL certificate requests

This is the first time I recall seeing any documentation on the protocol levels. Can you elaborate what "7: SSL certificate requests" means?
The AutoLoadBalancedConnectionStrategy message contains several fields:

oneTimeClient=0
_events.size()=20
_refCount=2
_waitingAckQ.size()=4
Warningcount=20

What do these fields mean, and at what values do we need to be concerned?
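Until someone posts authoritative definitions, the fields can at least be pulled out programmatically for monitoring. A small Python sketch that parses the name=value pairs from such a line; the sample line is copied from the message above, and the parsing is my own, not a Splunk API:

```python
import re

# Sample text shaped like the AutoLoadBalancedConnectionStrategy warning.
line = ("oneTimeClient=0 _events.size()=20 _refCount=2 "
        "_waitingAckQ.size()=4 Warningcount=20")

def parse_s2s_warning(text):
    """Extract name=value pairs such as _waitingAckQ.size()=4 into a dict."""
    return {m.group(1): int(m.group(2))
            for m in re.finditer(r"([\w.]+(?:\(\))?)=(\d+)", text)}

fields = parse_s2s_warning(line)
# A growing _waitingAckQ.size() across repeated warnings would suggest the
# receiver is slow to acknowledge (useAck is on when _refCount=2).
print(fields["_waitingAckQ.size()"])  # 4
```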
It appears you are hitting a known issue in some recent versions:

9.4.1
9.4.0
9.3.3
9.2.5
9.1.8

You may want to check this article: https://github.com/splunk/docker-splunk/issues/698