All Posts

I wonder if I need to install the app on each of the distinct components in order to view the btool results across the implementation. I assume I have to install it on each component, and I just want to verify.
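(For reference, the underlying btool CLI ships with Splunk Enterprise and only inspects the configuration files present on the instance where it runs. A minimal sketch of checking merged settings locally, assuming a standard $SPLUNK_HOME:)

# Show the merged, effective inputs.conf settings on this instance,
# including which file each setting comes from:
$SPLUNK_HOME/bin/splunk btool inputs list --debug

# The same idea works for any .conf file, e.g. outputs or props:
$SPLUNK_HOME/bin/splunk btool outputs list --debug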
Hi @splunklearner

The docs state: "As a general rule, Data Manager is the recommended method of data ingestion for Splunk Cloud customers for supported data sources where available." Are you using Splunk Cloud?

It's also worth checking the following Lantern doc as an alternative: https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data - this uses the Splunk Add-on for Microsoft Cloud Services, which you've already referenced. Either of these options would be a good contender.

Alternatively, there is a third option, which is to use HEC and Azure Functions to push the data. Check out https://github.com/splunk/azure-functions-splunk/blob/master/event-hubs-hec/README.md for more information on this.

Ultimately the best option for you depends on a number of factors - such as Cloud vs. Enterprise, but also whether you have the engineering support for things like Azure Functions.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
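(To illustrate the HEC-based option above, here is a minimal sketch of pushing a single event to a HEC endpoint - the hostname, token, index, and sourcetype are placeholder values, not anything from this thread:)

# Hypothetical HEC endpoint and token - replace with your own values.
curl https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk 12345678-1234-1234-1234-123456789012" \
  -d '{"event": "hello from azure", "sourcetype": "azure:eventhub", "index": "azure"}'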
How can we pull Azure Event Hub logs into Splunk? I checked, and we cannot use a HEC configuration for pulling the data. When I was looking for apps, there are 3-4 apps for this, but I found that most of them are no longer supported or are older versions. I found this app - https://splunkbase.splunk.com/app/3110 - but I am not sure how to configure it. Is there any other add-on or approach we can follow to pull Azure Event Hub logs into Splunk? Any leads would be appreciated.
Hi @agonmu, please open a new case, even if it is on the same topic, otherwise it's difficult to answer you. Ciao. Giuseppe
>Can you elaborate what "7: SSL certificate requests" means?

It means that if the receiver rotates to a new certificate, the clients will also rotate to the new certificate. This helps avoid manually restarting thousands of forwarders just to reload a certificate.
Thank you very much Bishida! We tried the AWS integration from Splunk's Data Management / Add Integration / AWS, which connected successfully. However, we noted that the ECS widget shown in your image only appears when there is an EC2 ECS cluster, which is not our use case: ours is a serverless (Fargate) cluster. Do you know if there is any way to poll this kind of cluster information? Thanks!
SSL cert files are reloaded, since these are essentially not part of outputs.conf. However, if you changed the cert path in outputs.conf itself, that change was not honored.
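(For context, a minimal sketch of where the client certificate path lives in outputs.conf - the stanza name, servers, and path are illustrative. Reloading picks up changes to the certificate file itself, but per the above, not a change to this path:)

# outputs.conf (illustrative)
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
clientCert = $SPLUNK_HOME/etc/auth/mycerts/client.pem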
oneTimeClient=0 (regular connection to destination)
_events.size()=20 (outstanding events/chunks to be sent)
_refCount=2 (2 means useAck is enabled)
_waitingAckQ.size()=4 (outstanding events/chunks still not acknowledged by target)
Warningcount=20 (how many times this log has been logged for this connection)
Thanks @splunkmarroko. I tried that; however, going about it that way returns the initial events with an "Active" status and does not take into consideration that the status has changed from "Active" to "Resolved".
We have a 3-site multisite cluster architecture containing 6 indexers (2 in each site), 3 search heads (one in each site), 1 deployment server, 2 cluster managers (active and standby), and a deployer. These instances reside in AWS, and we are on Splunk Enterprise 9.1.4.

Our requirement is that AWS logs will be pushed to our Splunk environment. The AWS team is pushing CloudWatch logs from Kinesis Data Streams with a Lambda configured (I have no idea about any of this). They asked our team to create a HEC token and give them the token and endpoint details so they can push logs to Splunk. They don't want the Amazon Firehose add-on here; they prefer HEC only.

Now my doubt is: where do I configure this HEC token? I want the data received from AWS to be load balanced across all 6 indexers (considering it will be a huge volume of data). I have gone through the HEC docs but it is unclear. Can someone help me with a step-by-step procedure for how I can achieve this - like where to create the HEC token and so on? Thank you.
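(For what it's worth, a minimal sketch of a HEC token stanza as it might be deployed to indexers - the token name, value, index, and sourcetype here are hypothetical. On a clustered deployment this would typically live in an app pushed from the cluster manager, with an external load balancer or round-robin DNS in front of port 8088 to spread traffic across the indexers:)

# inputs.conf (illustrative)
[http]
disabled = 0
port = 8088

# Hypothetical token name and value - generate your own.
[http://aws_cloudwatch]
token = 12345678-1234-1234-1234-123456789012
index = aws
sourcetype = aws:cloudwatch
disabled = 0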
try this:

base search ```index=xyz sourcetype=abc```
| where status!="Resolved" ```if you already have that field; if not, consider extracting it```
Unfortunately, neither of these was the cause. In the end, I believe we're going to set up an intermediate VM with a UF to catch the logs from the Firepowers on UDP 514. Clunky, but it appears to be the only option. I appreciate the help.
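(For anyone following along, a minimal sketch of what the UDP input on such an intermediate UF might look like - the index and sourcetype are illustrative, and note that binding to a port below 1024 typically requires elevated privileges:)

# inputs.conf on the intermediate UF (illustrative)
[udp://514]
sourcetype = cisco:ftd
index = network
connection_host = ip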
Thanks @livehybrid. This works, but it returns inaccurate results when the search is run using a real-time search time filter. This is an example of what I have:

| base_search here
| stats latest(status) as latest_status by incidentId
| where latest_status!="Resolved"
| stats count as total

The output is a count of the number of active incidents, to be displayed on a dashboard. Any pointers or tips on how to better achieve this would be appreciated. Cheers.
Hello, I have the same issue. How did you solve this problem? Regards, agonmu
Hi, I have a question on Netskope onboarding to Splunk.

I installed the TA-NetSkopeAppForSplunk (4.1.0) on Splunk Cloud and configured the API tokens provided by Netskope, and logs are flowing.

However, with the same add-on and tokens configured on Splunk Enterprise (an intermediate heavy forwarder), no logs are arriving. I tried using multiple local Splunk Enterprise instances for testing, and still no logs. Any recommendations on what could be the issue with the Enterprise version while it is working fine on Cloud?
Hi, I can confirm that the certificate is valid. We have used the same certificate on Splunk version 9.3 and we don't get the TCPOutAutoLB-0 error. It only happens on version 9.4.x.
There is a separate thread for DM - cool. I was able to ingest CloudWatch logs for ECS and Lambda with Data Manager. Now I need to add tags like env=, service=, custom= to enrich the logs. The same was done for metrics with OTel Collector flags and the UF.

For logs ingested with DM, can I add an AWS resource tag to the CloudWatch log group I'm ingesting and expect this tag (key-value pair) to be added to the logs?

Another possible solution could be to use the Splunk log driver directly from ECS instead of CloudWatch. Then, according to the documentation, with the env flag of the Splunk log driver I should be able to add some container environment variables to the log message.

The same question applies to the Lambdas - but there, only the AWS resource tags from the CloudWatch log group could be attached to the ingested messages, if at all. Any suggestions?
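(To illustrate the Splunk log driver approach mentioned above, a minimal docker run sketch using the driver's env option - the URL, token, image, and variable names are placeholders; in ECS the same options would go under the task definition's logConfiguration:)

# Hypothetical values throughout - the env log-opt tells the Splunk
# log driver to attach the named container env vars to each log message.
docker run --log-driver=splunk \
  --log-opt splunk-url=https://splunk.example.com:8088 \
  --log-opt splunk-token=12345678-1234-1234-1234-123456789012 \
  --log-opt env=SERVICE_NAME,DEPLOY_ENV \
  -e SERVICE_NAME=orders -e DEPLOY_ENV=prod \
  my-image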
Hi @dmitrynt

If you're concerned about hitting subsearch limits, then run your index= search first, then append the tstats.

Note: the default limits for append are 10,000 *results* and a max of 60 seconds of execution time, but I would hope that your tstats runs faster than that and returns fewer than 10k results! The limit is based on *returned results*, not the number of events scanned, so applying stats (for example) inside an append can also help with these limits.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
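(As a minimal sketch of that shape - the index, sourcetype, and field names are made up. The tstats in the append generates its own aggregated results, which keeps the returned row count well under the limit:)

index=web sourcetype=access_combined status=500
| stats count AS error_count BY host
| append
    [| tstats count AS total_events WHERE index=web BY host]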
Hi @Karthikeya,

are you sure that you want to apply this extraction at index time? This means more work for the indexers, and the choice usually depends on the volume of indexed logs: how many logs must you index daily, and in the peak period?

Here you can find a comparison between the two modes and a description: https://docs.splunk.com/Documentation/Splunk/9.4.1/Indexer/Indextimeversussearchtime

Ciao. Giuseppe
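(For illustration, a minimal sketch of what an index-time extraction typically involves - the sourcetype, stanza names, regex, and field name are hypothetical. The regex runs on every event at ingest, which is why it costs the indexers more than a search-time extraction:)

# props.conf (illustrative)
[my_sourcetype]
TRANSFORMS-extract_user = extract_user_indexed

# transforms.conf (illustrative)
[extract_user_indexed]
REGEX = user=(\w+)
FORMAT = user_id::$1
WRITE_META = true

# fields.conf (illustrative) - marks the field as indexed for search
[user_id]
INDEXED = true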
The error is gone, but I am still unable to download or upgrade the app. I am getting this error:

Unexpected error downloading update: Connect Timeout