All Posts

It appears that you then have to edit the data input (after completing the app's setup page) to set the index and sourcetype. The polling interval (default of 60 seconds) is also found there. Along with this, I changed the dashboard portlet searches to include the index. Hope this helps someone else. I've yet to get data in to confirm, but I will report back if I do.
Hi, I think it's doable. Splunk counts only data indexed on the indexers, not data passing through a HF. I assume you are running DBX on a separate HF, which sends only to Cribl, and Cribl then sends to the indexers? If that assumption is valid, then you pay only for the amount of data the indexers actually index. r. Ismo
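For completeness, the HF in that layout just points its output at Cribl instead of at the indexers. A minimal outputs.conf sketch (the host name and port are placeholders, not from this thread):

    # outputs.conf on the heavy forwarder running DBX
    [tcpout]
    defaultGroup = cribl

    [tcpout:cribl]
    # Cribl's S2S listener; replace with your own host/port
    server = cribl.example.com:9997

With that in place, nothing is indexed on the HF itself, so only what Cribl ultimately delivers to the indexers counts against the license.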
For anyone else running into this, below is what I've found so far about what the app does. Logs are sent with the following defaults:
index=main host=https://app.terraform.io source=terraform_cloud sourcetype=terraform_cloud
Two dashboards are added in Splunk. You can use these to determine where the logs are set to go, which is no explicit index by default (so they land in main).
Dashboards:
[ HCP Terraform Analysis ] - Dark Theme
[ HCP Terraform Analysis ] - Light Theme
NEXT QUESTION: How do I switch the index so the logs are stored securely and the format is properly recognized?
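To sanity-check what is actually arriving under those defaults, a quick search along these lines should work (adjust the time range as needed):

    index=main sourcetype=terraform_cloud host="https://app.terraform.io"
    | head 10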
@Karthikeya I didn't get your question at all.
Installed the app yesterday on our cloud instance (Victoria), and I can't figure out which index it sends data to or where that is configured. The setup UI never asks for an index. Also, I can't find any internal logs for the app to understand what may be going on. It feels like this was built as an app when it perhaps should have been an add-on built with the Add-on Builder. Any help would be greatly appreciated. Josh
In addition to what @gcusello wrote, the application teams should specify the correct index names in their inputs.conf files rather than you changing the index during ingest (which will slow ingestion). That said, consider using INGEST_EVAL with a lookup table.
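For the lookup route, on Splunk 8.1+ ingest-time eval can call a lookup. A rough sketch, assuming a CSV named host_to_index.csv with host and index columns (the file name, field names, and the props stanza are all illustrative, and the CSV must be deployed to the indexing tier):

    # transforms.conf
    [set_index_from_lookup]
    # look up the destination index by host; := forces the overwrite
    INGEST_EVAL = index:=json_extract(lookup("host_to_index.csv", json_object("host", host), json_array("index")), "index")

    # props.conf
    [source::udp:514]
    TRANSFORMS-setindex = set_index_from_lookup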
Hi @Karthikeya, let me understand: why do you want to create a new index for each application or for each team? Usually indexes are defined based on retention and access rules; in other words, in one index you should usually store logs (even different ones) with the same retention and the same access rules. Could you describe your requirements in more detail? Ciao. Giuseppe
Okay, and you've set the following parameters for your input in DB Connect, right?
Rising Column ---> event_time
Checkpoint Value ---> any valid date
Timestamp - Choose Column ---> event_time
Could you share a screenshot of these configuration details? Try to set a Checkpoint Value quite close to the current date so that you only collect a few events.
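For reference, a rising-column input normally executes a query of this shape, with DB Connect substituting the stored checkpoint for the ? placeholder (table name assumed for illustration):

    SELECT * FROM audit_events
    WHERE event_time > ?
    ORDER BY event_time ASC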
Hi all, let me explain my infrastructure here. We have six dedicated syslog servers that forward data from network devices to a Splunk indexer cluster (6 indexers), plus a cluster manager and 3 search heads. It's a multisite cluster (2 indexers, 1 search head, and 2 syslog servers receiving network data in each site), with 1 deployment server and 1 deployer overall. An application team will provide an FQDN, and we need to map it to a new index by creating that index and assigning it to the application team. Can you please let me know how to proceed with this data ingestion?
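If the goal is index-time routing by host, the classic pattern is a props/transforms pair deployed to the first full Splunk instance the data reaches (here, the indexers). A sketch with a placeholder FQDN and index name:

    # props.conf
    [host::app1.example.com]
    TRANSFORMS-route_app1 = route_app1_index

    # transforms.conf
    [route_app1_index]
    # match every event from this host and rewrite its destination index
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = app1_index

The index itself (app1_index here) still has to be created on the cluster manager and pushed to the peers.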
Thanks @MuS!
Dear Splunkers, I am running into an issue where the SplunkBar is empty in some views. As long as I am navigating in my app ([splunk-address]/app/myapp), everything is normal: the Splunk Bar appears on top of my view, and disappears when I use hideSplunkBar=true. My problem is that when I click on any element of the settings page in the Settings > Knowledge category (red square on the picture), the bar is totally empty and I get the following error in the console: Uncaught TypeError: Splunk.Module is undefined. <anonymous> [splunk-address]/en-US/manager/system/advancedsearch. The problem does not appear in the other categories of Settings (green square on the picture). I tried adding hideChrome=false and hideSplunkBar=false at the end of the URL, but it didn't do anything. I also tried searching for the advancedsearch folder but didn't manage to find it. Has anyone already encountered this problem or knows how to solve it?

[Update]: After more investigation, I found that the problem also occurs on Splunk version 9.1.0.1, and it affects the views that use the template [splunk_home]/.../templates/layout/base.html.

Thank you in advance,
I also tried with the latest version, 3.16.3, and it is still the same issue.
Hello, I am facing a strange issue with the Splunk Forwarder: on some servers of the same role, CPU usage is 0-3%, while on others it is around 15%. It doesn't sound bad at first glance, but it did cause us issues with deployment, and such behavior is dangerous for live services if it grows. It started around 3 weeks ago with 9.3.0 installed on Windows Server 2019 VMs with 8 CPU cores and 24 GB RAM. I updated the forwarder to 9.3.1 and the behavior is the same. For example, we have 8 servers with the same setup and apps running; traffic to them is load balanced and very similar, and the number and size of log files is also very similar. 5 servers are affected, 3 are not. All of them have 10 inputs configured, of which 4 are perfmon monitors (CPU, RAM, disk space, and Web Services) and 6 are inputs monitoring around 40 log files. Any suggestion what to check to understand what is happening?
We did eventually resolve this; however, it took multiple steps. Indeed, we were using an old version of Helm. Updating to 3.16 did allow us to make further progress; however, that moved the issue on to a compatibility/dependency problem with Prometheus (specifically prometheus-operator). Switching to the latest otel chart (--version 0.112.0) was one step closer; however, there was a breaking change in the values.yaml between 0.110 and 0.112, which meant we needed to rewrite our local values file. Long story short: Helm 3.16 + collector chart & values 0.112.0 worked for us.
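For anyone following along, the fix boiled down to commands of this shape (the release, namespace, and repo/chart names are placeholders; match them to your own setup):

    # confirm Helm is 3.16+
    helm version

    # re-install the collector chart at the pinned version with the rewritten values
    helm upgrade --install otel-collector open-telemetry/opentelemetry-collector \
      --version 0.112.0 -f values.yaml -n monitoring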
Helm version.BuildInfo{Version:"v3.14.2", GitCommit:"", GitTreeState:"clean", GoVersion:"go1.22.7"} I used helm from Azure Cloud Shell and also tried GCP Cloud Shell; both had a similar issue. Do I need to try installing kubectl and helm locally and try again?
Hi @splunklearner, you have to create a Splunk role for each AD group. Then, in each role, you set the index it is allowed to search and/or the additional filtering options. Ciao. Giuseppe
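Concretely, the index restriction lives in authorize.conf, and the AD/LDAP group is tied to the role in authentication.conf; all names below are examples:

    # authorize.conf -- one role per team, limited to its own index
    [role_app_team_a]
    srchIndexesAllowed = app_team_a
    srchIndexesDefault = app_team_a

    # authentication.conf -- map the AD/LDAP group to that role
    [roleMap_MyLDAPStrategy]
    role_app_team_a = CN=App-Team-A,OU=Groups,DC=example,DC=com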
Thanks a lot for your help!
Hi, I'm new to the Splunk DB Connect app. We have Splunk on-prem and are trying to pull data from Snowflake audit logs and push it to Cribl.io (for log optimization and reducing log size). As Cribl.io doesn't have a connector for Snowflake (and one is not on the near-term roadmap), I'm wondering if I can use Splunk DB Connect to read data from Snowflake and send it to Cribl.io, which then sends it on to the destination, i.e. Splunk (for log monitoring and alerting). Question: would this be a "double hop" to Splunk, and if yes, do any Splunk license charges apply while Splunk DB Connect reads from Snowflake and sends to Cribl.io? Thank you! Avi
Hi @gcusello, how do I assign a specific index to a specific AD group, and how do I map a specific FQDN to that particular index, so that each AD group sees only its own logs?