All Posts

Hi all, I'm working on a Splunk SOAR connector where we plan to add support for webhooks (introduced in SOAR v6.4.1), allowing the connector to receive data from external sources. I see there's an option to enable authentication for the webhook, but after enabling it, I'm unsure what type of information needs to be included in the request. I've tried using basic authentication and an auth token, but neither worked. Could someone please guide me on what information should be included in the request once authentication is enabled?
Hi PrewinThomas, thank you very much. It works fantastically.
I am trying to display raw logs in a dashboard, but it is removing the raw logs. Is there a way to display them? In a standard search the raw logs are shown, but not in the dashboard. Sample query:

index=*
| eval device = coalesce(dvc, device_name)
| eval is_valid_str=if(match(device, "^[a-zA-Z0-9_\-.,$]*$"), "true", "false")
| where is_valid_str="false"
| stats count by device, index, _raw
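The match() filter in the query above can be sanity-checked outside Splunk; here is a minimal Python sketch of the same whitelist regex (the pattern is copied from the query, the device names are made up):

```python
import re

# Same pattern as the SPL match(): only letters, digits, _ - . , $ allowed
VALID = re.compile(r"^[a-zA-Z0-9_\-.,$]*$")

def is_valid_device(name):
    """Return True when the device name matches the whitelist pattern."""
    return bool(VALID.match(name))

print(is_valid_device("fw-01.example_net"))       # True
print(is_valid_device("host name with spaces"))   # False (space not in class)
```

Names failing this check are the ones the `where is_valid_str="false"` clause keeps.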
Hi @Meta  According to the system requirements docs, Windows 10 does not support a full Splunk Enterprise deployment; it only supports the Universal Forwarder. It is likely that your Windows 10 instance is missing key components/files required by Splunk that are only present in the Windows Server variants. Check out https://help.splunk.com/en/splunk-enterprise/get-started/install-and-upgrade/10.0/plan-your-splunk-enterprise-installation/system-requirements-for-use-of-splunk-enterprise-on-premises#:~:text=for%20this%20platform.-,Windows%20operating%20systems,-The%20table%20lists for more info on what is supported.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @Na_Kang_Lim  You will also need to update defaultGroup=my_indexer_cluster to defaultGroup=my_indexer_cluster,elk_server so that it sends to both. The reason you are still getting the metrics is that some inputs.conf stanzas, such as the splunk.version monitor stanza, set "_TCP_ROUTING = *", which sends to all output groups. You will need to either make the change in the app where defaultGroup is already defined, or push it out through another app that has a higher order of precedence. It is probably easiest to change this in the existing app if possible.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
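Put together, a minimal sketch of the merged UF outputs.conf might look like this (group names are taken from the thread; the IPs are placeholders):

```
[tcpout]
defaultGroup = my_indexer_cluster,elk_server

[tcpout:my_indexer_cluster]
server = <indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997

[tcpout:elk_server]
server = <elk_server_ip>:3514
sendCookedData = false
```

With both groups in defaultGroup, all inputs (not just those with _TCP_ROUTING = *) are cloned to both destinations.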
Hi @Saran  You need to prefix the stack name with "http-inputs-" to send to HEC. You should be able to see a health check by visiting: https://http-inputs-<stackName>.splunkcloud.com/services/collector/health?token=<yourToken> Or remove the ?token=<yourToken> part to get a generic health check. If this works, then HEC is active and accessible. It looks like the main issue here is the missing http-inputs- prefix.  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
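To illustrate, a health-check request with the prefix in place might look like this (the stack name and token are placeholders you substitute for your own):

```
curl "https://http-inputs-<stackName>.splunkcloud.com/services/collector/health?token=<yourToken>"
```

A healthy endpoint typically responds with {"text":"HEC is healthy","code":17}; a 503 or connection failure suggests the URL or port is wrong.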
I am trying to push data to a Splunk Cloud trial instance, but it returns "HTTP/1.1 503 Service Unavailable". Am I missing something, or is my cloud trial instance down? The host URL I am using is "https://<my-instance>.splunkcloud.com:<port>/services/collector". The request format is given below:

curl -k https://<my-instance>.splunkcloud.com:<port>/services/collector -H "Authorization: Splunk <cloud-instance-token>" -H "Content-Type: application/json" -d '{ "event": {payload} }'
Hi, as the question suggests, I am trying to send 2 streams of logs. From the document Forward data to third-party systems - Splunk Documentation I know there are 2 limitations: I can only send raw data, and I cannot filter only the data I want. So sending all data is OK for me. Currently, my UF has an app called INDEXER_OUTPUT, whose default/outputs.conf has these configs:

[tcpout]
defaultGroup=my_indexer_cluster
autoLBFrequency=300

[tcpout:my_indexer_cluster]
server=<indexer_01_ip>:9997,<indexer_02_ip>:9997,<indexer_03_ip>:9997,<indexer_04_ip>:9997

[tcpout-server://<indexer_01_ip>:9997]
[tcpout-server://<indexer_02_ip>:9997]
[tcpout-server://<indexer_03_ip>:9997]
[tcpout-server://<indexer_04_ip>:9997]

So what I did was create another server class, with a single app in it called ELK_OUTPUT. It also has a single default/outputs.conf file with this config:

[tcpout]
[tcpout:elk_server]
server=<elk_server_ip>:3514
sendCookedData=false

Upon adding the client to the server class, I noticed a weird behavior: only metrics.log is sent to the ELK server. What I suspect is that because my [WinEventLog://Security] input stanza contains "renderXML = true" and "evt_resolve_ad_obj = 1", the data is no longer considered "raw"?
@spisiakmi  You want 2 different searches for your conditions, like below?

Condition 1: select rows from tmp1_2.csv where (WorkplaceId, ContractId) is NOT present in tmp1_1.csv as (WorkplaceId, Contract):

| inputlookup tmp1_2.csv
| lookup tmp1_1.csv WorkplaceId as WorkplaceId, Contract as ContractId OUTPUT PK1
| where isnull(PK1)
| table WorkplaceId, State, Timestamp, ContractId

Condition 2: select rows from tmp1_2.csv where (WorkplaceId, ContractId) IS present in tmp1_1.csv as (WorkplaceId, Contract):

| inputlookup tmp1_2.csv
| lookup tmp1_1.csv WorkplaceId as WorkplaceId, Contract as ContractId OUTPUT PK1
| where isnotnull(PK1)
| table WorkplaceId, State, Timestamp, ContractId

Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
In the inputs.conf file, we've already enabled the Security log (and others). While other Security Event IDs, like those in the 472x range, are successfully searchable in Splunk, Event IDs 1104 and 1105 are conspicuously absent from the search results.
@phamanh1652  What does your inputs.conf look like? Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@meetmshah  I haven't tested this personally, but theoretically it is feasible by creating two separate unidirectional configurations: Deployment A acts as a Federated Search Head with Deployment B as its Federated Provider, and Deployment B also acts as a Federated Search Head with Deployment A as its Federated Provider. Per the documentation, real-time searches are not supported in Federated Search mode: https://docs.splunk.com/Documentation/ITSI/4.20.1/EA/FedSearch Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Hello team, We are currently testing the upgrade of Splunk Universal Forwarder (x86) version 10.0.0.0 on a Windows 10 32-bit virtual machine. However, the upgrade consistently fails with error code 1603, and the installer actions are rolled back.  https://download.splunk.com/products/universalforwarder/releases/10.0.0/windows/splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi Please note the following observations: A fresh installation of version 10.0.0.0 completes successfully. An upgrade from version 9.4.2.0 to 9.4.3.0 works without any issues. The upgrade was attempted both via the UI and using silent switches, with the same result. Unfortunately, we were unable to attach the log file for reference. Could you please assist us in identifying and resolving the root cause of this issue?
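Since the rollback removes most evidence, capturing a verbose Windows Installer log is usually the fastest way to narrow down a 1603. A sketch using standard msiexec switches (the log path is arbitrary):

```
msiexec /i splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi /l*v C:\temp\uf_upgrade.log /quiet
```

Searching the resulting log for "return value 3" typically pinpoints the failing custom action just before the rollback begins.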
Thanks for the answer @livehybrid. With respect to "Yes two different deployments can be federated search clients for each other" - have you seen an environment with this setup? Because I couldn't find any Splunk doc where it's mentioned that the environments can be interconnected.
Hi, any support please. I have 2 lookups.

tmp1_1.csv
WorkplaceId,PK1,Description,Contract
1234567890,7535712,Contract1,19
1123456789,7535712,Contract2,18
1234567890,7456072,Contract3,14
1234567890,7456072,Contract4,15
1234567891,7456072,Contract5,16

tmp1_2.csv
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

The primary key between these tables is (WorkplaceId, Contract) = (WorkplaceId, ContractId). The task is always to select the content from tmp1_2.csv based on conditions:

cond1: select everything from tmp1_2.csv where (WorkplaceId, Contract) != (WorkplaceId, ContractId). In this case the result should be:
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13

cond2: select everything from tmp1_2.csv where (WorkplaceId, Contract) = (WorkplaceId, ContractId). In this case the result should be:
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

Any support, please?
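The two conditions amount to a composite-key anti-join and semi-join, which can be checked standalone before writing the SPL. A Python sketch using the sample data (note: with the stated composite key, the (1234567891, 18) row has no partner in tmp1_1, since Contract 18 belongs to WorkplaceId 1123456789 there, so it lands in the unmatched set - slightly different from the expected output above):

```python
import csv, io

tmp1_1 = """WorkplaceId,PK1,Description,Contract
1234567890,7535712,Contract1,19
1123456789,7535712,Contract2,18
1234567890,7456072,Contract3,14
1234567890,7456072,Contract4,15
1234567891,7456072,Contract5,16
"""

tmp1_2 = """WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19
"""

# Composite keys present in tmp1_1
keys = {(r["WorkplaceId"], r["Contract"]) for r in csv.DictReader(io.StringIO(tmp1_1))}
rows = list(csv.DictReader(io.StringIO(tmp1_2)))

# cond1: rows whose (WorkplaceId, ContractId) has NO match in tmp1_1
no_match = [r for r in rows if (r["WorkplaceId"], r["ContractId"]) not in keys]
# cond2: rows whose (WorkplaceId, ContractId) HAS a match in tmp1_1
matched = [r for r in rows if (r["WorkplaceId"], r["ContractId"]) in keys]

print(sorted({r["ContractId"] for r in matched}))   # ['14', '15', '16', '19']
print(sorted({r["ContractId"] for r in no_match}))  # ['12', '13', '18']
```

In SPL terms, cond1 corresponds to `lookup ... | where isnull(PK1)` and cond2 to `where isnotnull(PK1)`.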
Hi @phamanh1652, I suppose that you're using Splunk_TA_Windows. Did you check whether, in inputs.conf, there's a filter on the WinEventLog:Security logs? Sometimes not all the EventCodes are indexed. Ciao. Giuseppe
Environment:
Product: Splunk Enterprise (Indexer)
Deployment: On-premises
Current Version: 9.3.2
Target Version: 9.4.x (tested 9.4.0, 9.4.2)
Current KV Store Version: MongoDB 4.17
Expected KV Store Version: MongoDB 7.x (per documentation)

Issue Summary: We are experiencing KV Store upgrade failures when upgrading a Splunk Enterprise indexer from 9.3.2 to any 9.4.x version. According to Splunk documentation, the upgrade from 9.3.x to 9.4.x should be seamless, with an automatic KV Store upgrade from MongoDB 4.x to 7.x. Both automatic and manual KV Store upgrade approaches have failed.

Sample errors:
- called Result::unwrap() on an Err value: UpgradeError { details: "Error updating status to 'INITIAL_UPGRADE_SEQUENCE' on 127.0.0.1:8191 document: Error { kind: Write(WriteError(WriteError { code: 11000, code_name: None, message: "E11000 duplicate key error collection: migration_metadata.migration_metadata index: id dup key: { _id: \"127.0.0.1:8191\" }", details: None })), labels: {}, wire_version: None, source: None }", kind: LocalError }
- Failed to upgrade KV Store to the latest version. KV Store is running an old version, 4.2. Resolve upgrade errors and try to upgrade KV Store to the latest version again.

There are other errors (wiredTiger, etc.) that may also be relevant. We tried the steps manually and also via Ansible automation (same steps in both cases), with the same result.

Questions: Why does KV Store stay on 4.2 instead of upgrading directly to 7.x as documented? How can we get past this, given that we have a large infrastructure to upgrade?
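As a first diagnostic step, it may help to confirm what the KV Store service itself reports before and after each upgrade attempt; one way, using the standard Splunk CLI on the indexer, is:

```
$SPLUNK_HOME/bin/splunk show kvstore-status
```

This prints the KV Store version and current status, which you can compare against the migration_metadata errors above to see whether the migration ever started.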
Hello All, We send logs from Windows to Splunk via Universal Forwarder. We want to create alerts for Event ID 1104 - The security log is full and 1105 - Log automatic backup. However, when searchin... See more...
Hello All, We send logs from Windows to Splunk via Universal Forwarder. We want to create alerts for Event ID 1104 ("The security log is full") and Event ID 1105 ("Event log automatic backup"). However, when searching, we cannot find either of these events. When reviewing the EVTX log files, Event ID 1104 appears as the final entry in the archived log, while Event ID 1105 is the initial entry in the newly created EVTX file. Here is the configuration for log archiving:
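For reference, once the events are being collected, a search along these lines should find them (the index here is a wildcard because the actual index name depends on your environment):

```
index=* source="WinEventLog:Security" (EventCode=1104 OR EventCode=1105)
```

If this returns nothing over a wide time range, the events are most likely being filtered out or never collected, rather than merely hard to find.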
@richgalloway This is one of the reasons I am afraid of creating dedicated summary indexes again.
What exactly is not working? Please share the search where you are using the token.