All Posts



Hi @pramod,

I've worked with the Proofpoint ET Splunk TA in Splunk Cloud, and there's a specific way to handle authentication for this app. To configure Proofpoint ET Intelligence in Splunk Cloud, you need to understand the difference between the "authorization code" and the "Oink code":

1. The API key is what you get from your ET Intelligence subscription.
2. The "authorization code" field in the TA configuration actually requires your ET Intelligence subscription key (sometimes also called the "download key"), NOT the Oink code. This is a common point of confusion.
3. If you're seeing "None" for the authorization code, it's likely because that field hasn't been properly populated in your account settings on the Proofpoint ET Intelligence portal.

Here's how to configure it properly:

1. Log in to your ET Intelligence account at https://threatintel.proofpoint.com/
2. Navigate to "Account Settings" (usually in the top-right profile menu)
3. Make sure both your API key and subscription key (download key) are available - if your subscription key shows "None", contact Proofpoint support to have it properly provisioned
4. In Splunk Cloud:
   a. Install the Proofpoint ET Splunk TA through the Splunk Cloud self-service app installation
   b. Open the app configuration
   c. Enter your API key in the "API Key" field
   d. Enter your subscription key (download key) in the "Authorization Code" field (NOT the Oink code)
   e. Save the configuration
5. If you're still getting errors, check the following (see the search sketch below):
   a. Look at the _internal index for any API connection errors
   b. Verify your Splunk Cloud instance has proper outbound connectivity to the Proofpoint ET Intelligence API endpoints
   c. Confirm with Proofpoint that your subscription is active and properly configured

If you're still having issues after trying these steps, you may need to:

1. Submit a support ticket with Proofpoint to verify your account credentials
2. Work with Splunk Cloud support to ensure the app is properly installed and has the right permissions

Please give karma if this helps. Happy Splunking!
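A minimal sketch of the _internal check mentioned in step 5a. It assumes the TA's modular-input logging ends up in _internal with "proofpoint" somewhere in the source path - that wildcard is a guess, so adjust it to whatever log file the add-on actually writes in your environment:

```
index=_internal source=*proofpoint* (ERROR OR WARN)
| sort - _time
| table _time host source _raw
```

If this returns authentication or HTTP errors around the time the input runs, that usually points at the key/code mix-up described above rather than a connectivity problem.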
Hi @clumicao,

Currently, Mission Control does not have the same capability as Enterprise Security to create and share saved filters across users. In Mission Control, filters are saved locally to your browser and user profile, making them user-specific rather than shareable across a team.

There are a few workarounds you can use:

1. Document your most useful filters in a shared team document so others can manually recreate them
2. Use Mission Control's Export/Import feature to share filter configurations:
   a. After creating a filter, click the "Export" option in the filter panel
   b. This downloads a JSON configuration file
   c. Share this file with team members
   d. Other users can import this file using the "Import" option in their filter panel

Note that even with the import method, each user needs to import the filter individually, and any updates to the filter require re-sharing and re-importing.

This has been a requested feature, and I recommend submitting it through the Splunk Ideas portal if it would be valuable for your team. The Splunk Mission Control team regularly enhances the product based on user feedback.

Please give karma if this helps. Happy Splunking!
For optimizing the AppDynamics Events Service Cluster startup process with self-healing capabilities, I recommend implementing systemd service units with proper configuration. This approach handles all your scenarios: graceful shutdown, unplanned downtime, accidental process termination, and OOM situations. Here's a comprehensive solution:

1. Create systemd service units for both the Events Service and Elasticsearch.

For the Events Service (events-service.service):

```
[Unit]
Description=AppDynamics Events Service
After=network.target elasticsearch.service
Requires=elasticsearch.service

[Service]
Type=forking
User=appdynamics
Group=appdynamics
WorkingDirectory=/opt/appdynamics/events-service
ExecStart=/opt/appdynamics/events-service/bin/events-service.sh start
ExecStop=/opt/appdynamics/events-service/bin/events-service.sh stop
PIDFile=/opt/appdynamics/events-service/pid.txt
TimeoutStartSec=300
TimeoutStopSec=120

# Restart settings for self-healing
Restart=always
RestartSec=60

# OOM handling
OOMScoreAdjust=-900

# Health check script
ExecStartPost=/opt/appdynamics/scripts/events-service-health-check.sh

[Install]
WantedBy=multi-user.target
```

For Elasticsearch (elasticsearch.service):

```
[Unit]
Description=Elasticsearch for AppDynamics
After=network.target

[Service]
Type=forking
User=appdynamics
Group=appdynamics
WorkingDirectory=/opt/appdynamics/events-service/elasticsearch
ExecStart=/opt/appdynamics/events-service/elasticsearch/bin/elasticsearch -d -p pid
ExecStop=/bin/kill -SIGTERM $MAINPID
PIDFile=/opt/appdynamics/events-service/elasticsearch/pid

# Restart settings for self-healing
Restart=always
RestartSec=60

# Resource limits for OOM prevention
LimitNOFILE=65536
LimitNPROC=4096
LimitMEMLOCK=infinity
LimitAS=infinity

# OOM handling
OOMScoreAdjust=-800

[Install]
WantedBy=multi-user.target
```

2. Create a health check script (events-service-health-check.sh):

```bash
#!/bin/bash
# Health check for the Events Service and Elasticsearch

EVENT_SERVICE_PORT=9080
MAX_RETRIES=3
RETRY_INTERVAL=20

check_events_service() {
    for i in $(seq 1 $MAX_RETRIES); do
        if curl -s http://localhost:$EVENT_SERVICE_PORT/healthcheck > /dev/null; then
            echo "Events Service is running properly."
            return 0
        else
            echo "Attempt $i: Events Service health check failed. Waiting $RETRY_INTERVAL seconds..."
            sleep $RETRY_INTERVAL
        fi
    done
    echo "Events Service failed health check after $MAX_RETRIES attempts."
    return 1
}

check_elasticsearch() {
    for i in $(seq 1 $MAX_RETRIES); do
        if curl -s http://localhost:9200/_cluster/health | grep -q '"status":"green"\|"status":"yellow"'; then
            echo "Elasticsearch is running properly."
            return 0
        else
            echo "Attempt $i: Elasticsearch health check failed. Waiting $RETRY_INTERVAL seconds..."
            sleep $RETRY_INTERVAL
        fi
    done
    echo "Elasticsearch failed health check after $MAX_RETRIES attempts."
    return 1
}

main() {
    # Wait for initial startup
    sleep 30

    if ! check_elasticsearch; then
        echo "Restarting Elasticsearch due to failed health check..."
        systemctl restart elasticsearch.service
    fi

    if ! check_events_service; then
        echo "Restarting Events Service due to failed health check..."
        systemctl restart events-service.service
    fi
}

main
```

3. Set up a watchdog script for OOM monitoring (oom-watchdog.sh, run as a cron job every 5 minutes):

```bash
#!/bin/bash
# Watchdog: restarts the services if they are down or if JVM heap usage exceeds a threshold

LOG_FILE="/var/log/appdynamics/oom-watchdog.log"
ES_HEAP_THRESHOLD=90
ES_SERVICE="elasticsearch.service"
EVENTS_HEAP_THRESHOLD=90
EVENTS_SERVICE="events-service.service"

log_message() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >> $LOG_FILE
}

check_elasticsearch_memory() {
    # Read the configured -Xmx value from the Java command line
    ES_MAX_HEAP_OPT=$(ps -C java -o cmd= | grep elasticsearch | grep -o "Xmx[0-9]*[mMgG]" | head -1)
    # Approximate current heap usage from jstat (MB, rounded to an integer for shell arithmetic)
    ES_CURRENT_HEAP=$(jstat -gc $(pgrep -f elasticsearch | head -1) 2>&1 | tail -n 1 | awk '{printf "%d", ($3+$4+$5+$6+$7+$8)/1024}')

    if [[ $ES_MAX_HEAP_OPT == *"g"* || $ES_MAX_HEAP_OPT == *"G"* ]]; then
        ES_MAX_HEAP=${ES_MAX_HEAP_OPT//[^0-9]/}
        ES_MAX_HEAP=$((ES_MAX_HEAP * 1024))
    else
        ES_MAX_HEAP=${ES_MAX_HEAP_OPT//[^0-9]/}
    fi

    ES_HEAP_PCT=$((ES_CURRENT_HEAP * 100 / ES_MAX_HEAP))
    if [ $ES_HEAP_PCT -gt $ES_HEAP_THRESHOLD ]; then
        log_message "Elasticsearch heap usage at ${ES_HEAP_PCT}% - exceeds threshold of ${ES_HEAP_THRESHOLD}%. Restarting service."
        systemctl restart $ES_SERVICE
        return 0
    fi
    return 1
}

check_events_service_memory() {
    EVENTS_MAX_HEAP_OPT=$(ps -C java -o cmd= | grep events-service | grep -o "Xmx[0-9]*[mMgG]" | head -1)
    EVENTS_CURRENT_HEAP=$(jstat -gc $(pgrep -f "events-service" | head -1) 2>&1 | tail -n 1 | awk '{printf "%d", ($3+$4+$5+$6+$7+$8)/1024}')

    if [[ $EVENTS_MAX_HEAP_OPT == *"g"* || $EVENTS_MAX_HEAP_OPT == *"G"* ]]; then
        EVENTS_MAX_HEAP=${EVENTS_MAX_HEAP_OPT//[^0-9]/}
        EVENTS_MAX_HEAP=$((EVENTS_MAX_HEAP * 1024))
    else
        EVENTS_MAX_HEAP=${EVENTS_MAX_HEAP_OPT//[^0-9]/}
    fi

    EVENTS_HEAP_PCT=$((EVENTS_CURRENT_HEAP * 100 / EVENTS_MAX_HEAP))
    if [ $EVENTS_HEAP_PCT -gt $EVENTS_HEAP_THRESHOLD ]; then
        log_message "Events Service heap usage at ${EVENTS_HEAP_PCT}% - exceeds threshold of ${EVENTS_HEAP_THRESHOLD}%. Restarting service."
        systemctl restart $EVENTS_SERVICE
        return 0
    fi
    return 1
}

# Check if services are running
if ! systemctl is-active --quiet $ES_SERVICE; then
    log_message "Elasticsearch service is not running. Attempting to start."
    systemctl start $ES_SERVICE
fi

if ! systemctl is-active --quiet $EVENTS_SERVICE; then
    log_message "Events Service is not running. Attempting to start."
    systemctl start $EVENTS_SERVICE
fi

# Check memory usage
check_elasticsearch_memory
check_events_service_memory
```

4. Enable and start the services:

```bash
# Make scripts executable
chmod +x /opt/appdynamics/scripts/events-service-health-check.sh
chmod +x /opt/appdynamics/scripts/oom-watchdog.sh

# Place service files
cp elasticsearch.service /etc/systemd/system/
cp events-service.service /etc/systemd/system/

# Reload systemd, enable and start services
systemctl daemon-reload
systemctl enable elasticsearch.service
systemctl enable events-service.service
systemctl start elasticsearch.service
systemctl start events-service.service

# Add the OOM watchdog to cron
(crontab -l 2>/dev/null; echo "*/5 * * * * /opt/appdynamics/scripts/oom-watchdog.sh") | crontab -
```

This setup addresses all your scenarios:

1. Graceful shutdown: the systemd ExecStop commands ensure proper shutdown sequences
2. Unplanned downtime: Restart=always, together with the enabled units, ensures the services come back after VM reboots
3. Accidental process kill: again, Restart=always handles this automatically
4. OOM situations: a combination of OOMScoreAdjust, resource limits, and the custom watchdog script

You may need to adjust paths and user/group settings to match your environment. This implementation provides comprehensive self-healing for the AppDynamics Events Service and Elasticsearch.
While rebuilding the forwarder database might sometimes help if it has become corrupted or contains too many orphaned entries, the question worth looking into is how your UFs are deployed and configured. Are you sure they aren't sharing a GUID and hostname?
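If you want to check for cloned forwarders from the indexer side, a sketch along these lines can help. It assumes the tcpin_connections metrics events in _internal carry guid and hostname fields for each connecting forwarder (they do on recent versions, but verify the field names in your environment):

```
index=_internal sourcetype=splunkd source=*metrics.log* group=tcpin_connections
| stats dc(hostname) AS hostname_count values(hostname) AS hostnames BY guid
| where hostname_count > 1
```

Any GUID that shows up with more than one hostname (or many connections sharing one GUID) is a strong sign the UFs were cloned from the same image without clearing the instance GUID.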
Hi guys, I got the same issue. The proposed solution to clear the KV store helped me once, and I gave karma for it, but it was not an acceptable solution for me. I wrote a little patch, based on version 2.0.0 of the app, to solve the issue. I just posted it here. I hope this can help you too. B.
Wow @ephemeric thanks for the help!! -.-
One additional comment. As mentioned in a previous post, loginType=splunk works, and there is always at least the admin account, which cannot be anything other than local. BUT if there is a WAF in front of your Splunk login page, there could be WAF rules that deny adding this additional loginType parameter to the URL (typically appended to the login URL, e.g. .../account/login?loginType=splunk). If this happens, you need to discuss it with the security staff so that they allow the additional parameter, e.g. on some specific addresses.
Hi, I got the same issue. I wrote a small patch for the teams_subscription.py script to solve it. It is based on release 2.0.0. The patch is attached as TA_MS_Teams-bruno.patch.txt.

To use it, just save the file as TA_MS_Teams-bruno.patch in the $SPLUNK_HOME/etc/apps directory and apply it with the following command in the TA_MS_Teams directory:

pelai@xps MINGW64 /d/src/TA_MS_Teams
$ patch -p1 < ../TA_MS_Teams-bruno.patch.txt
patching file bin/teams_subscription.py

pelai@xps MINGW64 /d/src/TA_MS_Teams
$

It is possible to revert the patch at any time by running patch with the -R parameter. I hope this can help. B.
Hi @kiran_panchavat

$10/GB? Where are you seeing that? That would be nice. (Edit - I see you've now removed this and replaced it with other content.)

Google suggests "Splunk typically costs between $1,800-$2,500 per GB/day for an annual license", but this is probably based on public pricing resources without any partner/volume discounts etc.

For what it's worth, I agree with @isoutamo that Splunk Cloud would be a good option here. I thought the smallest tier was 50GB, but if it's only 5GB then the annual cost is probably less than the cost of someone building, running and maintaining a cluster on-prem!

For what it's worth, I did a .conf talk in 2020 about moving to Cloud and the "cost" ("The effort, loss, or sacrifice necessary to achieve or obtain something") - TL;DR: Cloud was cheaper. https://conf.splunk.com/files/2020/slides/PLA1180C.pdf#page=37

The Free version won't suffice for the PoC because it doesn't have the required features such as clustering, authentication, etc. Get in touch with Sales and they can help arrange a PoC license if they think you'll be looking to purchase a full license: https://www.splunk.com/en_us/talk-to-sales/pricing.html?expertCode=sales

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing.
I haven't tested it, but I believe what the rest of you have found. However, as Splunk currently has some GUI tools like IA (ingest actions) etc., there is a small possibility that this is the planned way for it to work? As @livehybrid said, there are best practices for how those "The 8" settings should be set based on those other values. You should probably create a support case for this, and then we will get an official answer on whether this is a bug or a planned feature.
After 9.2, the reason for this is that the DS needs local indexes to get information about its deployment clients (DCs). Best practice says that you should forward all logs to your indexers instead of keeping them on a local node like the DS. You can fix this by also storing those indexes locally. https://docs.splunk.com/Documentation/Splunk/9.2.0/Updating/Upgradepre-9.2deploymentservers - here are the instructions on how to do it.
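Once the change is in place, a quick sanity check on the deployment server itself can confirm the DC data is landing locally. This assumes the standard 9.2+ deployment server indexes (_dsphonehome, _dsclient, _dsappevent) named in the linked docs:

```
| tstats count WHERE (index=_dsphonehome OR index=_dsclient OR index=_dsappevent) BY index
```

If one or more of these return zero events after clients have phoned home, the local indexing part of the fix is not yet working.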
The rebuild of the forwarder assets should happen automatically at the interval defined in CMC -> Forwarders -> Forwarder Monitoring Setup: Data Collection Interval. If that interval has passed since you added this UF, you can see its logs in the _internal index, and the problem continues, I propose that you create a support ticket so Splunk can figure out why this forwarder asset hasn't updated as expected. Of course, if less time than that interval has elapsed, you could update the assets manually or decrease the interval and rebuild again. The preferred interval depends on how quickly you need new UFs to appear in this list and how many UFs you have.
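A quick way to check both halves of that: first, that the UF is actually sending to _internal; second, whether it has made it into the forwarder asset table yet. The second search assumes the CMC still populates the dmc_forwarder_assets lookup (the lookup name may differ by version, and the hostname placeholder is yours to fill in):

```
index=_internal host=<your_uf_hostname> earliest=-24h
| stats count BY sourcetype

| inputlookup dmc_forwarder_assets
| search hostname=<your_uf_hostname>
```

If the first search returns data but the second is empty, the asset rebuild simply hasn't caught up yet (or the interval is too long).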
You could also use tstats with the prestats=t parameter, like

| tstats prestats=t count where index=my_index by _time span=1h
| timechart span=1h count
| timewrap 1w

With prestats=t, tstats emits the partial results that timechart expects, so keep the timechart span the same as the tstats span.

https://docs.splunk.com/Documentation/Splunk/9.4.2/SearchReference/Tstats
It's just like @PickleRick said: you should never use Splunk as the termination point for syslog! You should set up a real syslog server to avoid this kind of issue. As said, we don't have enough information yet to make any real guesses about what caused this. But as there is only one server with the issue, I would start looking at it first. Also check whether it is in the same network segment as the others. Has it been up and running the whole time? Any FW changes on it? Could it have a time issue, or anything else related to the clock being changed and fixed later?
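One way to look for the time-related symptoms mentioned above is to compare event time with index time for the problem host. A rough sketch - the index, host, and time range are placeholders to replace with your own:

```
index=your_syslog_index host=the_problem_host earliest=-7d
| eval lag_seconds = _indextime - _time
| timechart span=1h avg(lag_seconds) AS avg_lag count
```

Sudden jumps in avg_lag (or negative values) usually point at a clock problem on the sender, while gaps in count point at connectivity or the sender being down.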
It's a bit unclear what you need. 1. Usually, if you want to check the product for yourself and just see what it can do, you can use the trial license. 2. If you want a PoC, you'd rather involve your local Splunk Partner, because a PoC usually involves demonstrating that some real-life scenarios can be realized with the product or that some actual problems can be solved. So it's best to simply contact one of your local friendly Splunk Partners and talk with them about a PoC or about getting a demo license for a short period of time. Anyway, 2GB/day doesn't usually warrant a "cluster".
Hi Splunk Gurus, We're currently testing Splunk OpenTelemetry (OTel) Collectors on our Kubernetes clusters to collect logs and forward them to Splunk Cloud via HEC. We're not using Splunk Observability at this time. Is it possible to manage or configure these OTel Collectors through the traditional Splunk Deployment Server? If so, could you please share any relevant documentation or guidance? I came across documentation related to the Splunk Add-on for the OpenTelemetry Collector, but it appears to be focused on Splunk Observability. Any clarification or direction would be greatly appreciated. Thanks in advance for your support!
Splunk Free doesn't contain the features needed for a PoC! The minimum is the trial license, but it has ingestion size limits and it also doesn't contain e.g. the remote LM feature. One option is to request a developer license (it can take some time to get), or contact your local Splunk partner and ask them for a trial license for an on-prem PoC.
Have you looked at these guides? https://machinelearningmastery.com/practical-guide-choosing-right-algorithm-your-problem/ https://www.geeksforgeeks.org/choosing-a-suitable-machine-learning-algorithm/ https://labelyourdata.com/articles/how-to-choose-a-machine-learning-algorithm https://scikit-learn.org/stable/machine_learning_map.html I suppose those will help you select suitable algorithms for your test data.
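If you want to try candidate algorithms directly in Splunk, the Machine Learning Toolkit's fit command makes quick comparisons easy. A minimal sketch, assuming MLTK is installed and using its bundled iris.csv sample lookup - swap in your own test data, target field, and algorithm:

```
| inputlookup iris.csv
| fit DecisionTreeClassifier species from petal_length petal_width sepal_length sepal_width into test_dt_model
| eval correct=if(species=='predicted(species)', 1, 0)
| stats avg(correct) AS training_accuracy
```

Repeating this with a different algorithm name (e.g. LogisticRegression or RandomForestClassifier) and comparing the accuracy gives a rough first impression; for a real comparison you would of course hold out separate test data rather than scoring on the training set.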
Hello all, I am reviewing the Splunk Add-on for vCenter Logs and the Splunk Add-on for VMware ESXi Logs guides and have the following question: Are both required in an environment where vCenter is present, or is the Splunk Add-on for VMware ESXi Logs not necessary in that case? In other words, is the add-on for VMware ESXi just for simple bare-metal installs that do not use vCenter? Do ESXi builds with vCenter send all of the ESXi logs up to vCenter anyhow, so one needs only the add-on for vCenter? Second question: am I reading correctly that the add-on for vCenter requires BOTH a syslog output and a vCenter user account for API access?
@danielbb

Since your ingestion is minimal (2 GB/day), and assuming you already have on-prem infrastructure:

- Stick with on-prem if you want full control and already have the hardware.
- Choose ingest-based term licensing for predictable costs and flexibility.
- Consider Splunk Free (500 MB/day) for testing or very small-scale use.
- If you're open to cloud and want to reduce operational overhead, Splunk Cloud with pay-as-you-go could be a cost-effective and low-maintenance alternative.

For such a small ingestion volume, Splunk Cloud might be worth considering if:

- You want to avoid infrastructure management.
- You prefer pay-as-you-go pricing.
- You value scalability and ease of updates.

Pricing | Splunk

If you need further assistance or a detailed quote, contact Splunk sales or a partner like SP6, and consider a free trial to validate your setup.

https://www.splunk.com/en_us/products/pricing/platform-pricing.html
https://sp6.io/blog/choosing-the-right-splunk-license/
https://www.reddit.com/r/Splunk/comments/1cbf9gu/what_the_deal_with_splunk_cloud_vs_on_prem/