All Posts

Hi Splunk community, I have a question on log cloning/redirection.

Purpose: extract logs containing "network-guest" and do not redirect those logs to a distant HF, only to the local indexers.

LOGS ENTRY CONFIG

In an app Splunk_TA_FIREWALL, inputs.conf:

[tcp://22000]
sourcetype = fw:firewall
index = fw_index
_TCP_ROUTING = local_indexers

These logs are working perfectly and are stored on my local indexers. Now these logs must be cloned and redirected to a distant HF, except for the logs containing "network-guest". This is my props and transforms config:

props.conf

[fw:firewall]
TRANSFORMS-clone = fwfirewall-route-network-guest-, fwfirewall-clone

transforms.conf

[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _SYSLOG_ROUTING
FORMAT = local_indexers

[fwfirewalll-clone]
DEST_KEY = _SYSLOG_ROUTING
FORMAT = distant_HF
REGEX = .

When I check the logs on the distant Splunk, I no longer see the NETWORK-GUEST logs, and I can still see those logs on the local Splunk. My question is: I'm not sure I'm doing this the right way, and not sure it works 100%. Does someone have good knowledge of this kind of configuration? Thanks a lot for the help. Nico
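For comparison, here is a minimal sketch of the filter-then-clone pattern as it is commonly written. It is deliberately generic: the output group names are the ones from the post, the transform order assumes that when several transforms set the same routing key the last matching one wins, and whether _TCP_ROUTING or _SYSLOG_ROUTING is the right key depends on whether the distant HF is fed over splunktcp (outputs.conf [tcpout:...]) or over syslog ([syslog:...]). Treat it as a starting point to compare against, not a definitive answer.

props.conf
[fw:firewall]
# catch-all first, specific override last
TRANSFORMS-routing = clone_everything, keep_guest_local

transforms.conf
[clone_everything]
# by default every event is cloned to both output groups
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers, distant_HF

[keep_guest_local]
# NETWORK-GUEST events are re-routed to the local group only
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers

outputs.conf
[tcpout:local_indexers]
server = <local indexer host:port list>

[tcpout:distant_HF]
server = <distant HF host:port>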
@Haleb Start by reviewing $SPLUNK_HOME/var/log/splunk/splunkd.log for specific error messages about the restart. Run netstat -tuln | grep 8089 or ss -tuln | grep 8089 to confirm whether another process is using the port.
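A compact version of those checks, assuming a default $SPLUNK_HOME and a Linux host:

# recent restart-related errors from splunkd
tail -n 200 $SPLUNK_HOME/var/log/splunk/splunkd.log | grep -iE "error|fatal"

# is anything already bound to the management port (8089)?
ss -tuln | grep 8089        # or: netstat -tuln | grep 8089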
Hi there, after investigating my Search Head instance I found this in my task bar. Can somebody say whether this is expected behaviour?
Thanks for the info! I just started an account and wanted to come back to this thread to give karma. Really helpful yet simple fix.
I think the easiest way is to use default certificates for KV store.
We have 40 DC servers sending logs to on-prem indexers, but on the Deployment Server I can see only one app, which has outputs.conf.
@livehybrid just to add on, after getting the data in: if the JSON file in the monitored folder has been updated, for example by appending records, do you happen to know why it indexes the whole JSON file again rather than just the newly appended records, given that the name of the JSON file remains the same?
@kiran_panchavat, after several attempts in my situation, I tried using the following settings for JSON. While it was able to read the data, each record/value ended up having duplicated values. I tried setting the relevant KV options, but it still didn't resolve the issue. For now, I've decided to proceed without using INDEXED_EXTRACTIONS. It still works, but it treats the [ as a single entry. I'm still unsure how to fully resolve this. *Just a heads up: I'm also using transforms.conf, though I'm not entirely sure whether that's what's causing the duplicate values.*

INDEXED_EXTRACTIONS = JSON

either with or without the following:

KV_MODE = none
AUTO_KV_JSON = false

@livehybrid, great! What you mentioned was part of the reason why two entries kept getting indexed together. After updating the configuration and removing the other stanza, I was able to index the JSON array as multiple events. I also noticed that it might have been due to my use of transforms.conf to assign the sourcetype.
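For reference, the combination usually suggested when INDEXED_EXTRACTIONS = JSON produces doubled field values is roughly the following; the sourcetype name is only a placeholder. The idea is that the JSON fields are extracted once at index time, so search-time JSON extraction is switched off to stop them being extracted a second time.

props.conf where the file is first consumed (UF or HF):
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json

props.conf on the search head(s):
[my_json_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false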
Thanks for the hint. Now I know where to search. Yes, the indexes were deployed, but the wrong way (the indexes were created on the search head and not in the cluster...). In our environment the app deployments are not done by me, and I then have to figure out what the issues are...
Hi, did you create the new indexes required by ES 8.0? E.g. mc_investigations, mc_artifacts, mc_aux_incidents, mc_events, mc_incidents_backup, cms_main...? That could be your issue.
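If they are missing, they would need to be created on the indexer tier (pushed from the cluster manager in an indexer cluster) rather than on the search head. A minimal sketch, using two of the names above as examples; the paths are the usual defaults and any retention settings are left out:

indexes.conf
[mc_investigations]
homePath   = $SPLUNK_DB/mc_investigations/db
coldPath   = $SPLUNK_DB/mc_investigations/colddb
thawedPath = $SPLUNK_DB/mc_investigations/thaweddb

[mc_artifacts]
homePath   = $SPLUNK_DB/mc_artifacts/db
coldPath   = $SPLUNK_DB/mc_artifacts/colddb
thawedPath = $SPLUNK_DB/mc_artifacts/thaweddb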
Hi, we upgraded our on-prem ES 7 to ES 8 and are testing it. We currently have the issue that the created investigations are not shown in Mission Control. If we open a finding that is assigned to an investigation, we can open the investigation from there. If I read the documentation correctly, the investigations should appear alongside the findings inside Mission Control. Did anyone have the same issue and find a solution for it? Thanks for your help/hints.
Hi @tangtangtang12, which add-on are you using to extract this information from your Windows servers? If you use the Splunk_TA_windows ( https://splunkbase.splunk.com/app/742 ) there are two inputs (disabled by default, to be enabled) that you can use:

[WinHostMon://Computer]

[perfmon://Memory]
Page Faults/sec; Available Bytes; Committed Bytes; Commit Limit; Write Copies/sec; Transition Faults/sec; Cache Faults/sec; Demand Zero Faults/sec; Pages/sec; Pages Input/sec; Page Reads/sec; Pages Output/sec; Pool Paged Bytes; Pool Nonpaged Bytes; Page Writes/sec; Pool Paged Allocs; Pool Nonpaged Allocs; Free System Page Table Entries; Cache Bytes; Cache Bytes Peak; Pool Paged Resident Bytes; System Code Total Bytes; System Code Resident Bytes; System Driver Total Bytes; System Driver Resident Bytes; System Cache Resident Bytes; % Committed Bytes In Use; Available KBytes; Available MBytes; Transition Pages RePurposed/sec; Free & Zero Page List Bytes; Modified Page List Bytes; Standby Cache Reserve Bytes; Standby Cache Normal Priority Bytes; Standby Cache Core Bytes; Long-Term Average Standby Cache Lifetime (s)

Ciao.
Giuseppe
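A minimal sketch of how those two stanzas typically look once enabled in the add-on's local/inputs.conf; the interval and index values are just examples, and the counters line would carry the full list above (only a few counters are shown here):

[WinHostMon://Computer]
type = Computer
interval = 600
disabled = 0

[perfmon://Memory]
object = Memory
counters = Page Faults/sec; Available Bytes; Committed Bytes; % Committed Bytes In Use
interval = 60
disabled = 0
# index is a placeholder; point it at whatever index you use for Windows metrics
index = windows_perfmon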
Hi @EFonua, it seems that something must have changed in either the field extractions for your users or the source data. Have you updated any apps recently or made any field extraction changes? Without the actual search you are running it is hard for us to determine the issue here, but I would start by running the search manually to see what user values you get, then work back from there to determine why the correct value isn't appearing. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @ws, for all-in-one you are good, and then yes, once ready to deploy you would put this on your HF. One thing I've just noticed, which I missed before, is that you are changing the sourcetypes. The second set of props probably isn't applying to the new sourcetype name (you can't have two bites of the same cherry...), so try applying the event-breaking props to the original sourcetype in the [preprocess_case] stanza. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
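As a rough illustration of that suggestion: assuming [preprocess_case] is the sourcetype the file first comes in as, and the goal is to split a JSON array into one event per object, the parsing-tier props might look something like this sketch (the regexes and class names are illustrative, not a tested configuration):

props.conf
[preprocess_case]
SHOULD_LINEMERGE = false
# break between adjacent objects in the array; the captured comma is discarded
LINE_BREAKER = \}(\s*,\s*)\{
# strip the enclosing [ and ] from the first and last events
SEDCMD-strip_open  = s/^\s*\[//
SEDCMD-strip_close = s/\]\s*$//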
Hi @sabollam, I think you first need to address the issue of the multiple JSON events displaying in a single event, as per your screenshot. I suspect that the reason you are getting the "none" value is that the json_extract to get the timestamp value is failing because the JSON is not valid / there are multiple events. If you are able to get the event breaking working properly, then I think the INGEST_EVAL should work. As others have said, it's worth making sure you are consciously doing this based on a valid decision; there may be other ways to achieve this. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
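For illustration, an INGEST_EVAL of the kind being described might look roughly like this once the events break into individual JSON objects; the field names (date, time) come from the sample JSON elsewhere in the thread, while the stanza and class names are placeholders:

props.conf
[preprocess_case]
TRANSFORMS-set_time = set_time_from_json

transforms.conf
[set_time_from_json]
# parse "17/4/2025" + "16:15" from the raw JSON into _time
INGEST_EVAL = _time=strptime(json_extract(_raw, "date")." ".json_extract(_raw, "time"), "%d/%m/%Y %H:%M")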
Hi @livehybrid, for testing purposes my architecture is an all-in-one setup. For my actual deployment, to my understanding the props.conf and transforms.conf will be on my HF, right? Since the pulling of the JSON file lands on my HF local server.
Hi @ws, can you confirm where you applied those props/transforms and what your architecture looks like? They need applying to either the HF or the indexers, depending on where the data lands. Did this answer help you? If so, please consider adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Hi @kiran_panchavat, I noticed that the sample provided aligns with what I'm trying to achieve. However, after applying the same settings for testing, I'm still not getting the same results as you. I've attached a screenshot for your reference; please help point out any mistakes or adjustments that may be needed. I don't believe the issue lies with the transforms.conf configuration.

JSON file:

[
  {
    "attribute": { "type": "case" },
    "Id": "I0000005",
    "name": "ws",
    "email": "ws@gmail.com",
    "case type__c": "Service Case",
    "date": "17/4/2025",
    "time": "16:15",
    "account": {
      "attribute": { "type": "account" },
      "Id": "I0000005"
    }
  },
  {
    "attribute": { "type": "case" },
    "Id": "I0000006",
    "name": "thomas",
    "email": "thomas@gmail.com",
    "case type__c": "Transaction Case",
    "date": "17/4/2025",
    "time": "16:15",
    "account": {
      "attribute": { "type": "account" },
      "Id": "I0000006"
    }
  }
]

Search Head: (screenshot attached)
Props.conf: (screenshot attached)
Transforms.conf: (screenshot attached)
Thanks @livehybrid, preliminary testing shows this seems to be working - great! Will of course mark as solution after some further testing. To answer your initial question... it was allowing all 4662 events.
1. Ahem, your "dev team" cannot handle an epoch timestamp? That is... surprising, to say the least. 2. Who produces those logs? Another app written by another "dev team"?