All Posts

@meetmshah  I haven't tested this personally, but theoretically it's feasible by creating two separate unidirectional configurations: Deployment A acts as a Federated Search Head with Deployment B as its Federated Provider, and Deployment B also acts as a Federated Search Head with Deployment A as its Federated Provider. Per the documentation, real-time searches are not supported in Federated Search mode: https://docs.splunk.com/Documentation/ITSI/4.20.1/EA/FedSearch Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving karma. Thanks!
Hello team, We are currently testing the upgrade of Splunk Universal Forwarder (x86) version 10.0.0.0 on a Windows 10 32-bit virtual machine. However, the upgrade consistently fails with error code 1603 and the installer actions are rolled back. https://download.splunk.com/products/universalforwarder/releases/10.0.0/windows/splunkforwarder-10.0.0-ea5bfadeac3a-windows-x86.msi Please note the following observations:
- A fresh installation of version 10.0.0.0 completes successfully.
- An upgrade from version 9.4.2.0 to 9.4.3.0 works without any issues.
- The upgrade was attempted both via the UI and using silent switches, with the same result.
Unfortunately, we were unable to attach the log file for reference. Could you please assist us in identifying and resolving the root cause of this issue?
Thanks for the answer @livehybrid. With respect to "Yes two different deployments can be fed. search clients for eachother" - have you seen an environment set up this way? I couldn't find any Splunk doc that mentions the environments can be interconnected.
Hi, any support please. I have 2 lookups.

tmp1_1.csv
WorkplaceId,PK1,Description,Contract
1234567890,7535712,Contract1,19
1123456789,7535712,Contract2,18
1234567890,7456072,Contract3,14
1234567890,7456072,Contract4,15
1234567891,7456072,Contract5,16

tmp1_2.csv
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

The join key between these tables is (WorkplaceId, Contract) = (WorkplaceId, ContractId). The task is always to select rows from tmp1_2.csv based on two conditions:

cond1: select everything from tmp1_2.csv where (WorkplaceId, ContractId) has no match in tmp1_1.csv. In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752838050,12
1234567890,End,1752838633,12
1123456789,Start,1752838853,13
1123456789,Break,1752839380,13
1123456789,End,1752839691,13

cond2: select everything from tmp1_2.csv where (WorkplaceId, ContractId) matches (WorkplaceId, Contract) in tmp1_1.csv. In this case the result should be
WorkplaceId,State,Timestamp,ContractId
1234567890,Start,1752839720,14
1234567890,Start,1752839745,15
1234567891,Start,1752839777,16
1234567891,Start,1752839790,18
1234567890,Start,1752839892,19

Any support, please?
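A sketch of what might work here, assuming both files are uploaded as lookup table files and matching strictly on the composite key as stated (untested):

| inputlookup tmp1_2.csv
| lookup tmp1_1.csv WorkplaceId, Contract AS ContractId OUTPUT Description AS matched
``` cond1: keep rows with no match on the composite key ```
| where isnull(matched)
| table WorkplaceId State Timestamp ContractId

For cond2, swap isnull(matched) for isnotnull(matched). One caveat: under a strict composite-key match, the row 1234567891,Start,1752839790,18 would land in cond1, since tmp1_1.csv pairs Contract 18 with WorkplaceId 1123456789, not 1234567891.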
Hi @phamanh1652 , I suppose that you're using the Splunk_TA_Windows. Did you check whether, in inputs.conf, there's a filter on WinEventLog:Security logs? Sometimes not all the EventCodes are indexed. Ciao. Giuseppe
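For reference, the kind of stanza to look for would be something like this (the blacklist values here are purely illustrative, not the TA's actual defaults):

[WinEventLog://Security]
disabled = 0
# a blacklist line like this drops the listed EventCodes before they are indexed
blacklist = 4662,5157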
Environment:
- Product: Splunk Enterprise (Indexer)
- Deployment: On-premises
- Current Version: 9.3.2
- Target Version: 9.4.x (tested 9.4.0, 9.4.2)
- Current KV Store Version: MongoDB 4.17
- Expected KV Store Version: MongoDB 7.x (per documentation)

Issue Summary: We are experiencing KV Store upgrade failures when upgrading a Splunk Enterprise indexer from 9.3.2 to any 9.4.x version. According to Splunk documentation, the upgrade from 9.3.x to 9.4.x should be seamless, with an automatic KV Store upgrade from MongoDB 4.x to 7.x. Both automatic and manual KV Store upgrade approaches have failed.

Sample errors:
- called Result::unwrap() on an Err value: UpgradeError { details: "Error updating status to 'INITIAL_UPGRADE_SEQUENCE' on 127.0.0.1:8191 document: Error { kind: Write(WriteError(WriteError { code: 11000, code_name: None, message: "E11000 duplicate key error collection: migration_metadata.migration_metadata index: id dup key: { _id: \"127.0.0.1:8191\" }", details: None })), labels: {}, wire_version: None, source: None }", kind: LocalError }
- Failed to upgrade KV Store to the latest version. KV Store is running an old version, 4.2. Resolve upgrade errors and try to upgrade KV Store to the latest version again.

Other errors (wiredTiger, etc.) also appear and may be relevant. We tried this both manually and via Ansible automation (same steps).

Questions: Why is KV Store upgrading to 4.25 instead of directly to 7.x as documented? How can we get past this? We have a large infrastructure to upgrade.
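For anyone reproducing this, the KV Store state before and after each upgrade attempt can be checked with the standard CLI status command (stock CLI, nothing environment-specific):

$SPLUNK_HOME/bin/splunk show kvstore-status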
Hello All, We send logs from Windows to Splunk via Universal Forwarder. We want to create alerts for Event ID 1104 ("The security log is full") and Event ID 1105 ("Log automatic backup"). However, when searching, we cannot find either of these events. When reviewing the log files (EVTX), Event ID 1104 appears as the final entry in the archived log, while Event ID 1105 is the initial entry in the newly created EVTX file. Here is the configuration for log archiving:
@richgalloway This is one of the reasons I am afraid of creating dedicated summary indexes again.
What exactly is not working? Please share the search where you are using the token.
Hi @pdgill314  You could start with the `notable` search and then do a lookup on Rule_ID/event_id, however there might be an easier way. I believe the KV Store incident_review_lookup has an urgency field... Try:

| inputlookup incident_review_workflow_audit
| lookup incident_review_lookup rule_id OUTPUT urgency
| where urgency="critical"
``` then the rest as before ```
| where notable_time > relative_time(now(), "-1mon@mon") AND notable_time < relative_time(now(), "@mon")
| eval EventOpenedEpoch = notable_time, TriageStartedEpoch = triage_time, ResolutionEpoch = notable_time + new_to_resolution_duration, DaysInNewStatus = round(new_duration/86400,2), DaysToResolution = round(new_to_resolution_duration/86400,2)
| where new_to_resolution_duration>0
| eval "Event Opened" = strftime(EventOpenedEpoch, "%Y-%m-%d %H:%M:%S"), "Triage process started" = strftime(TriageStartedEpoch, "%Y-%m-%d %H:%M:%S"), "Event Resolved" = strftime(ResolutionEpoch, "%Y-%m-%d %H:%M:%S")
| rename rule_id AS "Event ID"
| table "Event ID", "Event Opened", "Triage process started", "Event Resolved", DaysInNewStatus, DaysToResolution, urgency
| sort - DaysToResolution

I'm not in front of an ES deployment at the minute so sorry I can't test completely!

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Apologies if I wasn't previously clear - the purpose of the check in the _internal index is to confirm that your forwarder is successfully sending its own internal logs to your indexer(s) - this lets us establish whether the cause is a forwarding issue from the forwarder, or a problem reading in the data. Do you see your forwarder host sending *any* logs (not specific to Trellix) in the _internal index?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
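For example, something along these lines should show whether the forwarder's internal logs are arriving (replace the host placeholder with your forwarder's hostname):

index=_internal host=<your_forwarder_host> earliest=-60m
| stats count by sourcetype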
Ah my apologies, I misunderstood previously. As @PickleRick said, Splunk will only become aware of any changes/additions/removals of groups when a user logs in, so this is something worth considering, although it isn't the issue here. Can I clarify - the 100s of roles that you're referring to here, do these *all* exist in Splunk, with each AD role mapped to a specific (unique) Splunk role? The REST endpoints will only return the Splunk roles for a user, not all their AD roles - I just want to make sure we're on the same page before I dig deeper! Thanks

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @daniela1  You can gain access to documents and certifications relating to ISO27001 via the Customer Trust portal at https://customertrust.splunk.com/ - note that you may need to request access, which can be done from the portal itself. Generally these certificates are not publicly available, however as @richgalloway mentioned you can see on that URL some of the products which have ISO27001 certifications.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @meetmshah  Yes, two different deployments can be federated search clients for each other - however the connections will not really know of each other. I don't know too much about the best practices here, however *Federated Search for Splunk supports Splunk IT Service Intelligence version 4.16.0 and higher, for transparent mode federated search only* based on the docs. Note - the federated search docs suggest engaging with your account team and/or support when working with premium apps such as ITSI with federated search.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thanks for your reply. I have checked the case and the special characters; I even changed the token name from search_criteria to searchcriteria.
Thanks for the info, yes I have gone through ESCU and the Detections on the Splunk page. Looks like I'm going to have to create my own detections; if I have something solid I will contribute for sure. Thanks again for replying.
I am sorry for the typing mistake. The token name is searchcriteria, and there is no difference in case. I know a case difference would be an issue, but I have made sure there is none, and I am still having the same issue.
Hi @hl  Have you already explored https://research.splunk.com/detections/ and/or Enterprise Security Content Update (ESCU)? I haven't worked with PA events for a while now; when I did I had to create my own custom detections, however I think more and more are being added to ESCU. If you do end up with additional detections for Palo Alto then it's worth contributing these to https://github.com/splunk/security_content so that others can benefit from them; this also demonstrates that the wider community needs PA detections and thus helps grow the requirement for them to be built by Splunk teams.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
You say you found no warm buckets, but what about cold buckets?  Did you find any of those? If you're not running out of disk space then maxVolumeDataSizeMB is not too high. The current settings have buckets spanning 90 days.  Therefore, you should have 5 "generations" of buckets - 0-90 days (hot), 91-180 days, 181-270 days, 271-360 days, and 361-450 days.  That last one is because a bucket won't be frozen until *all* events in it exceed the retention time. Set maxHotSpanSecs to 86400 so each bucket only contains a single day of data and retention should improve.
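To illustrate, the relevant settings would sit in indexes.conf along these lines (the index name is a placeholder and this is a sketch, not tested against your environment):

[your_index]
# cap each hot bucket at one day of data so events age out on schedule
maxHotSpanSecs = 86400
# 90-day retention, matching the settings discussed in this thread
frozenTimePeriodInSecs = 7776000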
Hi @jkamdar  Due to having things defined in different places here, it might be best to run btool to see the exact effective configuration (including default values). Please could you run:

$SPLUNK_HOME/bin/splunk cmd btool indexes list --debug <yourIndexName>

When you talk about buckets "not rolling" - do you mean from Hot->Warm, or Cold->Frozen?

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing