All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi team,   I am on Splunk 9.4 and have configured DB Connect. The SQL query searches the table for any failures and passes the result to a Splunk search. I configured a real-time alert to send the log details to my email address; however, the emails are landing in the junk folder and I am not able to figure out why. Any help is appreciated.
Hi @Saran  Just to confirm: are you behind a proxy or firewall that could be intercepting traffic? Splunk Cloud trial instances are configured slightly differently from production instances and have various restrictions. Please could you try https://<stack>.splunkcloud.com:8088/services/collector/health If you are still getting the error with the above endpoint, I think you will need to raise a support ticket via https://www.splunk.com/support. If you do not have any support entitlement because it is a trial, you might be able to reach out via sales and ask them to help you look into this (as it potentially impacts a sale and a successful PoC). Fingers crossed!
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
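For reference, a quick way to test that endpoint from the command line (a minimal sketch; the stack name is a placeholder and the exact response text may vary by version):

  curl -k "https://<stack>.splunkcloud.com:8088/services/collector/health"
  # A healthy HEC typically responds with something like: {"text":"HEC is healthy","code":17}

If this also returns a 503, the proxy check and support route above are the next steps.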
@azer271  "Bucket is already registered with the peer" means that during bucket replication the indexer peer attempted to replicate a bucket to another peer, but the target peer already has that bucket registered, possibly as a primary or searchable copy, and therefore refuses to overwrite or duplicate it. Run the REST search below and check the health of the cluster:
| rest /services/cluster/master/buckets | table title, bucket_flags, replication_count, search_count, status
Also check for any standalone bucket issues; that may be the reason as well.
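If you want to narrow that down to standalone or under-replicated buckets, a sketch building on the same endpoint (the standalone field name and the threshold are assumptions; check which fields your version actually returns):

  | rest /services/cluster/master/buckets
  | table title, bucket_flags, replication_count, search_count, standalone, status
  | where standalone=1 OR replication_count < 2

Replace 2 with the replication factor configured on your cluster.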
Hi @livehybrid , this is still not working: curl -k "https://http-inputs-<instance>.splunkcloud.com/services/collector/health" returns curl: (56) CONNECT tunnel failed, response 503
Hi @Praz_123  This might highlight issues with getting data into Splunk Cloud, but not necessarily issues outside the cloud environment itself. What I meant by this is that you could have issues elsewhere that would not be captured here; for these you might want to create searches which check that you are receiving the expected volume of events per index in a given period of time. This means that if something slows down, or there are bottlenecks elsewhere, it can be detected. I personally use the TrackMe app for this as it monitors all my sources and detects a number of issues; however, you can do this yourself with some simple searches (see the sketch below).
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
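As a starting point for those "simple searches", a hedged sketch (run it over, say, the last hour; the threshold is a placeholder to adjust for your environment):

  | tstats count where index=* by index
  | where count < <expected_minimum_per_hour>

Schedule it and alert when any index drops below the volume you normally expect for that window.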
Hi @PrewinThomas , thanks for the reply. I tried using the ph-auth-token, but it's not working. It works for APIs like /rest/container/ and /rest/artifact/, but not for the /webhook endpoint. Ref: https://help.splunk.com/en/splunk-soar/soar-cloud/administer-soar-cloud/manage-your-splunk-soar-cloud-apps-and-assets/add-and-configure-apps-and-assets-to-provide-actions-in-splunk-soar-cloud#ariaid-title7
The steps I am following are:
1. Log into the Monitoring Console: log in to the Splunk Cloud UI and search for Cloud Monitoring Console under Apps.
2. Check indexing health: go to Indexing -> Indexing Performance, review ingestion rate trends, and identify queue buildup (parsing, indexing, or pipeline queues).
3. Monitor data inputs: go to Forwarders > Forwarders deployment, check forwarder connectivity and status, and confirm data forwarding from Universal Forwarders or Heavy Forwarders.
What other steps can be included in this?
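One additional check worth adding to that list is queue saturation on the indexing tier. A hedged sketch using metrics.log from the _internal index (field names as commonly seen in metrics.log; adjust if your access to _internal is restricted in Splunk Cloud):

  index=_internal source=*metrics.log* group=queue
  | eval fill_pct=round(current_size_kb/max_size_kb*100,2)
  | timechart span=5m avg(fill_pct) by name

Sustained high fill percentages on the parsing, typing, or indexing queues usually point to a bottleneck worth investigating further.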
@soar_developer  When you enable authentication, it typically expects a ph-auth-token header. E.g.:
POST /rest/handler/<your_app>_<your_app_id>/... HTTP/1.1
Host: <your_soar_instance>
Content-Type: application/json
ph-auth-token: <your_generated_token>
Refer to https://help.splunk.com/en/splunk-soar/soar-cloud/rest-api-reference/using-the-splunk-soar-rest-api/using-the-rest-api-reference-for-splunk-soar-cloud
Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
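The same request as a curl sketch (the trailing <rest_of_route> is a placeholder for whatever path your webhook handler exposes, and the body is just an example):

  curl -k "https://<your_soar_instance>/rest/handler/<your_app>_<your_app_id>/<rest_of_route>" \
    -H "ph-auth-token: <your_generated_token>" \
    -H "Content-Type: application/json" \
    -d '{"example": "payload"}'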
@debdutsaini  If it's in Dashboard Studio, you need to enable internal fields (such as _raw) to show them in the dashboard: Edit -> Data Display -> Select Internal fields. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hi all, I'm working on a Splunk SOAR connector where we plan to add support for webhooks (introduced in SOAR v6.4.1), allowing the connector to receive data from external sources. I see there's an option to enable authentication for the webhook, but after enabling it, I'm unsure what type of information needs to be included in the request. I've tried using basic authentication and an auth token, but neither worked. Could someone please guide me on what information should be included in the request once authentication is enabled?
Hi PrewinThomas thank you very much. Works fantastic.  
I am trying to display raw logs in a dashboard, but the dashboard is removing them. Is there a way to display them? In a standard search the raw logs are shown, but not in the dashboard. Sample query:
index=* | eval device = coalesce( dvc, device_name) | eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false") | where is_valid_str="false" | stats count by device, index, _raw
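If enabling internal fields in the dashboard is not an option, a common workaround (a hedged sketch) is to copy _raw into a regular field before the stats, since dashboard tables typically hide fields that start with an underscore:

  index=*
  | eval device = coalesce(dvc, device_name)
  | eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false")
  | where is_valid_str="false"
  | eval raw=_raw
  | stats count by device, index, raw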
Hi @Meta  According to the system requirements docs, Windows 10 does not support a full Splunk Enterprise deployment; it only supports the Universal Forwarder. It is likely that your Windows 10 instance is missing key components/files required by Splunk that are only present in the Windows Server variants. Check out https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/10.0/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises#:~:text=for%20this%20platform.-,Windows%20operating%20systems,-The%20table%20lists for more info on what is supported.
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @Na_Kang_Lim  You will also need to update defaultGroup=my_indexer_cluster to defaultGroup=my_indexer_cluster,elk_server so that it sends to both. The reason that you are getting the metrics is that some inputs.conf stanzas, such as the splunk.version monitor stanza, have "_TCP_ROUTING = *", which sends to all output groups. You will need to either make the change in the app where the defaultGroup is already defined, or push it out through another app which has a higher order of precedence. It might be easiest to change this in the existing app if possible (see the sketch below).
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
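For reference, the effective merged outputs.conf on the UF would need to end up looking something like this (a sketch assembled from the stanzas you posted; keep it in whichever app wins precedence):

  [tcpout]
  defaultGroup = my_indexer_cluster,elk_server
  autoLBFrequency = 300

  [tcpout:my_indexer_cluster]
  server = <indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997

  [tcpout:elk_server]
  server = <elk_server_ip>:3514
  sendCookedData = false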
Hi @Saran  You need to prefix the stack name with "http-inputs-" to send to HEC. You should be able to see a health check by visiting: https://http-inputs-<stackName>.splunkcloud.com/services/collector/health?token=<yourToken> or remove the ?token=<yourToken> part to get a generic health check. If this works, then HEC should be active and accessible. It looks like the main issue here is the missing http-inputs- prefix.
Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
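Once the health check responds, a test event can be sent like this (a sketch; the sourcetype is just an example):

  curl -k "https://http-inputs-<stackName>.splunkcloud.com/services/collector/event" \
    -H "Authorization: Splunk <yourToken>" \
    -H "Content-Type: application/json" \
    -d '{"event": "hello from curl", "sourcetype": "manual"}'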
I am trying to push data to a Splunk Cloud trial instance, but it returns "HTTP/1.1 503 Service Unavailable". Am I missing something, or is my cloud trial instance down? The host URL I am using is "https://<my-instance>.splunkcloud.com:<port>/services/collector". The request format is given below:
curl -k https://<my-instance>.splunkcloud.com:<port>/services/collector -H "Authorization: Splunk <cloud-instance-token>"  -H "Content-Type: application/json" -d '{ "event": {payload} }'
Hi, as the question suggests, I am trying to send two streams of logs. From the document Forward data to third-party systems - Splunk Documentation I know there are two limitations:
- I can only send raw data
- I cannot filter only the data I want
So sending all data is OK for me. Currently, my UF has an app called INDEXER_OUTPUT, whose default/outputs.conf has these configs:
[tcpout]
defaultGroup=my_indexer_cluster
autoLBFrequency=300
[tcpout:my_indexer_cluster]
server=<indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997
[tcpout-server://<indexer_01_ip>:9997]
[tcpout-server://<indexer_02_ip>:9997]
[tcpout-server://<indexer_03_ip>:9997]
[tcpout-server://<indexer_04_ip>:9997]
So what I did was create another server class, with a single app within called ELK_OUTPUT. It also has a single default/outputs.conf file with this config:
[tcpout]
[tcpout:elk_server]
server=<elk_server_ip>:3514
sendCookedData=false
Upon adding the client to the server class, I noticed a weird behavior: I only get metrics.log sent to the ELK server. What I am suspecting is that because my [WinEventLog://Security] input stanza contains "renderXML = true" and "evt_resolve_ad_obj = 1", it is perhaps no longer considered "raw data"?
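If changing the defaultGroup (as suggested in the reply above) is not an option, another approach is to set _TCP_ROUTING on the specific inputs you want duplicated. A hedged sketch, reusing the settings from the post:

  [WinEventLog://Security]
  renderXml = true
  evt_resolve_ad_obj = 1
  _TCP_ROUTING = my_indexer_cluster,elk_server

renderXml and evt_resolve_ad_obj affect how the event is formatted, not whether it is routed; with sendCookedData=false on the elk_server group the data should still be sent, just as a raw stream.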
@spisiakmi  You want two different searches for your conditions, like below?
Condition 1: Select rows from tmp1_2.csv where (WorkplaceId, ContractId) is NOT present in tmp1_1.csv as (WorkplaceId, Contract):
| inputlookup tmp1_2.csv
| eval key=WorkplaceId."-".ContractId
| lookup tmp1_1.csv WorkplaceId as WorkplaceId, Contract as ContractId OUTPUT PK1
| where isnull(PK1)
| table WorkplaceId, State, Timestamp, ContractId
Condition 2: Select rows from tmp1_2.csv where (WorkplaceId, ContractId) IS present in tmp1_1.csv as (WorkplaceId, Contract):
| inputlookup tmp1_2.csv
| eval key=WorkplaceId."-".ContractId
| lookup tmp1_1.csv WorkplaceId as WorkplaceId, Contract as ContractId OUTPUT PK1
| where isnotnull(PK1)
| table WorkplaceId, State, Timestamp, ContractId
Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
In the inputs.conf file we've already enabled the Security log (and others). While other Security Event IDs, like those in the 472x range, are successfully searchable in Splunk, Event IDs 1104 and 1105 are conspicuously absent from search results.
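One thing worth checking first (a hedged sketch, not your actual config): whether a whitelist or blacklist in the Security stanza happens to exclude 1104/1105. For example:

  [WinEventLog://Security]
  disabled = 0
  # A whitelist restricted to the audit ranges would silently drop 1104/1105:
  # whitelist = 4608-4799
  # Likewise a blacklist regex could match them:
  # blacklist1 = EventCode="110[45]"

Also note that 1104 (the security log is full) and 1105 (event log automatic backup) are written by the Eventlog provider rather than Microsoft-Windows-Security-Auditing, so filters keyed on the auditing provider would miss them.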
@phamanh1652  What does your inputs.conf look like? Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!