Addon Builder 4.5.0. This app adds a data input automatically, which is a good thing; I then go to Add New to complete the configuration. Everything was running well. A few days later, I thought of a better name for the input. I cloned the original, gave it a different name, and kept all the same config. I disabled the original. I noticed that I can still run the script and see the API output, but when I searched for the output, I did not find it. I started to see 401 errors instead. I went back to the data inputs, disabled the clone, and re-enabled the original, and all is back to normal. Is there a rule about cloning data inputs in the Add-on Builder that says not to clone?
Addon Builder 4.5.0, modular input using my Python code. In this example the collection interval is set to 30 seconds. I added a log to verify it is running here: log_file = "/opt/splunk/etc/apps/TA-api1/logs/vosfin_cli.log". The main page (Configure Data Collection) shows all the input names that I built, but looking at the event count, I see 0. When I go into the log, it shows the script running and returning data OK. Why doesn't the event count go up every time the script runs? Is there additional configuration in inputs.conf, props.conf, or web.conf that I need to add or edit to make it count up?
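For context, Splunk's event count only reflects events the modular input script actually emits back to Splunk on stdout, not lines written to a private log file. A minimal sketch of the distinction, with illustrative function names (emit_event is not a Splunk API, just a stand-in for whatever writes to the modular input stream):

```python
import io
import xml.sax.saxutils as su

def emit_event(data, stream):
    # In XML streaming mode a modular input prints one <stream> wrapper for
    # the session; each event then goes out as <event><data>...</data></event>.
    # Only events written to stdout like this are indexed (and counted);
    # lines written to a private log file never reach Splunk's event count.
    stream.write("<event><data>%s</data></event>\n" % su.escape(data))
    stream.flush()
```

If the script only writes to vosfin_cli.log and never emits events this way (or via the Add-on Builder's helper writer), the log will show activity while the event count stays at 0.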
The fact is, the same email settings were tested in UAT, and in UAT all the email alerts rightly came to the Inbox. Only from the enterprise environment are they landing in Junk.
Hi @livehybrid, thank you for your response. We are actually testing the Universal Forwarder only. Also, just to clarify, the fresh installation is working fine on the Windows 10 VM. The issue occurs only during the upgrade process.
Hi @debdutsaini, replace stats with table in the last line of your query, like below:

index=*
| eval device = coalesce(dvc, device_name)
| eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false")
| where is_valid_str="false"
| table _time index device _raw
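The match() condition in the query above can be sanity-checked outside Splunk; here is a quick Python equivalent of the same regex (the sample device names are made up for illustration):

```python
import re

# Same pattern as the SPL match(): the whole device name must consist only
# of letters, digits, underscore, hyphen, dot, comma, or dollar sign.
VALID_DEVICE = re.compile(r"^[a-zA-Z0-9_\-.,$]*$")

def is_valid_device(name: str) -> bool:
    return bool(VALID_DEVICE.match(name))
```

Anything containing a space, `#`, or other character outside that class will get is_valid_str="false" and show up in the where clause.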
Hi @Namo, when Splunk alert emails land in your junk/spam folder, it's usually an issue not with Splunk itself but with how the email is being handled by your mail server, client, or spam filters. If you control your mail client or domain filters:

Add the From address to your safe-sender list.
Whitelist the Splunk server IP or domain in your Exchange / Outlook / Gmail policies.
Hi @Namo, this is typically an email server/client configuration issue rather than a Splunk problem. The emails are being flagged as spam by your email provider's filters. Are you able to add the Splunk server to the safe-senders list? The other things to check are the reputation of the SMTP server configured in Splunk, as a bad email-server reputation can also cause your receiving server to flag messages as spam, and the sending SMTP service should have proper SPF/DKIM/DMARC records to reduce the chance of being detected as spam.

Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
@Namo If Splunk is sending emails from a domain that lacks proper authentication (SPF, DKIM, DMARC), email providers may flag it as spam. Check this internally with your IT team. Ensure the sending domain is properly configured:

SPF: add Splunk's sending IP to your domain's SPF record.
DKIM: sign outgoing emails with DKIM if possible.
DMARC: set up a DMARC policy to monitor and enforce authentication.
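For illustration only (example.com, the selector, and the IP are placeholders, not values from this thread), the three DNS TXT records typically look something like:

```text
; SPF: authorize the mail server Splunk relays through
example.com.                IN TXT "v=spf1 ip4:203.0.113.10 -all"

; DKIM: public key published under a selector chosen by the mail server
s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<public-key>"

; DMARC: policy telling receivers what to do with failing mail
_dmarc.example.com.         IN TXT "v=DMARC1; p=quarantine; rua=mailto:dmarc@example.com"
```

Your IT team can verify what is actually published for the sending domain before changing anything.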
Junk folders are normally controlled by the email server (not Splunk). If the email server recognises a message as (potential) junk, it will move it to the junk folder. This is often based on whether the sender has a history of sending other "junk" email, whether the sender's address matches the reply-to address, whether the email contains links to "unrecognised" sites, etc. There are many possible options. If you want this to be fixed, you should contact your email provider and ask them why the messages end up in junk and what can be done about recognising them as legitimate messages. If you share the email server with others, there may not be anything that the email provider is willing to do, as it might impact other users.
Hi team, I am on Splunk 9.4 and have configured DB Connect. The SQL query searches for any failures in the table and passes the result to a Splunk search. I configured a real-time alert to send the log details to my email ID. However, the emails are landing in the junk folder, and I am not able to figure out why. Any help is appreciated.
Hi @Saran, just to confirm: are you behind a proxy or firewall that could be intercepting traffic? Splunk Cloud Trial instances are slightly different in configuration from production instances and have various restrictions. Please could you try:

https://<stack>.splunkcloud.com:8088/services/collector/health

If you are still getting the error with the above endpoint, I think you will need to raise a support ticket via https://www.splunk.com/support - if you do not have any support entitlement, with it being a trial, then you might be able to reach out via sales and ask that they help you look into this (as potentially impacting a sale and a successful PoC). Fingers crossed!
@azer271 "Bucket is already registered with the peer" means during bucket replication, that indexer peer attempted to replicate a bucket to another peer, but the target peer already has that bu...
See more...
@azer271 "Bucket is already registered with the peer" means during bucket replication, that indexer peer attempted to replicate a bucket to another peer, but the target peer already has that bucket registered possibly as a primary or searchable copy. Therefore, it refuses to overwrite or duplicate it. run the below rest command and check the health of the cluster | rest /services/cluster/master/buckets | table title, bucket_flags, replication_count, search_count, status and check for any standalone bucket issue, that also may be the reason
Hi @livehybrid, this is still not working:

curl -k "https://http-inputs-<instance>.splunkcloud.com/services/collector/health"
curl: (56) CONNECT tunnel failed, response 503
Hi @Praz_123, this might highlight issues with getting data in to Splunk Cloud, but not necessarily issues outside the cloud environment itself. What I meant by this is that you could have issues elsewhere that would not be captured here; for these you might want to create searches which check that you are receiving the expected volume of events in a period of time, per index. This means that if something slows down, or there are bottlenecks elsewhere, it can be detected. I personally use the TrackMe app for this, as it monitors all my sources and detects a number of issues - however, you can do this yourself with some simple searches.
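One simple sketch of such a search (the one-hour window and the threshold of 100 events are placeholders to adjust per index):

```text
| tstats count where index=* earliest=-1h by index
| eval status=if(count < 100, "LOW", "OK")
| where status="LOW"
```

Scheduling something like this as an alert gives you a basic per-index volume check without any additional apps.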
Hi @PrewinThomas, thanks for the reply. I tried using the ph-auth-token, but it's not working. It works for APIs like /rest/container/ and /rest/artifact/, but not for the /webhook endpoint. Ref: https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/manage-your-splunk-soar-cloud-apps-and-assets/add-and-configure-apps-and-assets-to-provide-actions-in-splunk-soar-cloud#ariaid-title7
What I am following is as follows:

1. Log into the Monitoring Console: log in to the Splunk Cloud UI and search for Cloud Monitoring Console under Apps.
2. Check indexing health: go to Indexing -> Indexing Performance, review ingestion rate trends, and identify queue buildup (parsing, indexing, or pipeline queues).
3. Monitor data inputs: go to Forwarders -> Forwarders: Deployment, check forwarder connectivity and status, and confirm data forwarding from Universal Forwarders or Heavy Forwarders.

What other steps can be included in this?
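One step worth adding to a checklist like the above is watching queue fill percentages directly from the internal metrics; a search along these lines (field names are the standard metrics.log queue fields) shows which pipeline queues are filling up:

```text
index=_internal source=*metrics.log group=queue
| eval fill_pct=round(current_size_kb/max_size_kb*100, 2)
| timechart avg(fill_pct) by name
```

Sustained fill percentages near 100 for a queue (parsingQueue, indexQueue, etc.) point at the stage where ingestion is bottlenecked.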
@soar_developer when you enable authentication, it typically expects a ph-auth-token header, e.g.:
POST /rest/handler/<your_app>_<your_app_id>/... HTTP/1.1
Host: <your_soar_instance>
Content-Type: application/json
ph-auth-token: <your_generated_token>

Refer: https://help.splunk.com/en/splunk-soar/soar-cloud/rest-api-reference/using-the-splunk-soar-rest-api/using-the-rest-api-reference-for-splunk-soar-cloud

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
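A minimal Python sketch of building such a request (the instance name, handler path, and token are placeholders; urllib is used here, but any HTTP client works the same way):

```python
import urllib.request

def build_soar_request(instance: str, handler_path: str, token: str, body: bytes):
    """Build (but do not send) an authenticated SOAR REST request.
    The ph-auth-token header carries the automation user's token."""
    url = "https://%s/rest/handler/%s" % (instance, handler_path)
    return urllib.request.Request(
        url,
        data=body,
        headers={
            "Content-Type": "application/json",
            "ph-auth-token": token,
        },
        method="POST",
    )

# Placeholder values for illustration; sending would be urllib.request.urlopen(req)
req = build_soar_request("soar.example.com", "my_app_1234/run", "abc123", b"{}")
```

Sending it is then a single urlopen call, with the token validated server-side against the automation user.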
@debdutsaini If it's in Dashboard Studio, you need to enable internal fields to show them in the dashboard: Edit -> Data Display -> select "Internal fields".