This splunk_server_group is, for example, your additionally defined group in the Monitoring Console setup, such as az_hec_test.
It's normal to see "splunkd -p 8089 restart" in the process list because that is the command that launched Splunk.  I'm not sure, however, about why it appears so many times.  AIUI, there should be a single "splunkd -p 8089 restart" process and additional "splunkd" processes for running searches.  I could be mistaken, however.
Hi @Haleb

*Yes* - this is to be expected; it depends on how the Splunk instance was started. Essentially, if Splunk was started with "$SPLUNK_HOME/bin/splunk start", the process name will be appended with "start"; if it was started with "$SPLUNK_HOME/bin/splunk restart", it will be appended with "restart". If you use systemd, it could look like "splunkd --under-systemd --systemd-delegate=yes -p 8089 _internal_launch_under_systemd".

Check out the following Community post for a little more info if interested: https://community.splunk.com/t5/Monitoring-Splunk/Difference-between-splunkd-p-8089-restart-or-splunkd-p-8089/m-p/255553

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @Nicolas2203

You are referencing _SYSLOG_ROUTING, which is for syslog routing, whereas your input is using _TCP_ROUTING. Did you mean to use _TCP_ROUTING in your transforms?

Another thing is that this will not clone your data; it will only *change* the routing. When you specify multiple items in a TRANSFORMS setting they are processed in the order listed, so in your scenario the second transform applies to ALL events (because of the "." in the REGEX) and overwrites whatever the first one set, meaning it determines the routing for everything. I think what you are looking for is the following (note the order: the catch-all clone first, then the NETWORK-GUEST override):

== props.conf ==
[fw:firewall]
TRANSFORMS-clone = fwfirewall-clone, fwfirewall-route-network-guest

== transforms.conf ==
[fwfirewall-clone]
DEST_KEY = _TCP_ROUTING
FORMAT = distant_HF,local_indexers
REGEX = .

[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers

How this works is by specifying both outputs in _TCP_ROUTING when the REGEX matches "." (i.e. always), and then changing the routing to local_indexers only IF the event contains NETWORK-GUEST.

This could actually be simplified by setting the duplicate output in the input, then just overriding it to local_indexers if the event contains NETWORK-GUEST:

== inputs.conf ==
[tcp://22000]
sourcetype = fw:firewall
index = fw_index
_TCP_ROUTING = distant_HF,local_indexers

== props.conf ==
[fw:firewall]
TRANSFORMS-redirectLocal = fwfirewall-route-network-guest

== transforms.conf ==
[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _TCP_ROUTING
FORMAT = local_indexers

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hello kiran, yes, this is a typo.
@Nicolas2203  Could you please check this? In your props.conf, you've referenced "fwfirewall-route-network-guest-", but in transforms.conf, the stanza is named "fwfirewall-route-network-guest". Is this a typo?
Hi @Hemant_h,

Let me understand your use case: you have around 40 DCs with the Universal Forwarder. On these UFs I suppose there are some add-ons; one of them contains the outputs.conf file, and its role is to tell the system where to forward the data from the UFs. It seems that these data from the UFs are forwarded to one (or more) Heavy Forwarders, which forward the data to the Indexers. Then you have a Deployment Server that deploys the add-on containing outputs.conf.

Usually the Splunk_TA_Windows add-on is also installed on the UFs (deployed by the DS) to ingest logs:
- Is this add-on installed on the UFs?
- Is this add-on deployed by the DS?
- Are you receiving logs in Splunk?
- Which logs are you receiving: internal, Windows, or both?

If you only have the outputs.conf add-on, you should receive only _* logs (internal Splunk logs). Anyway, the flow is usually from the UFs to the HF, or directly to the IDXs.

Ciao.
Giuseppe
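For reference, a minimal sketch of the kind of outputs.conf such an add-on typically contains. The group name and hostnames below are made-up placeholders, not taken from this thread; substitute your own HF addresses and receiving port:

```ini
# outputs.conf deployed to the UFs (hypothetical group/host names)
[tcpout]
defaultGroup = primary_hf

# all UF data is sent to this group of Heavy Forwarders
[tcpout:primary_hf]
server = hf1.example.local:9997, hf2.example.local:9997
```

The HFs (or indexers) must have a matching splunktcp receiving input on the same port for data to arrive.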
Hi Splunk community,

I have a question on log cloning/redirection.

Purpose: extract logs containing "network-guest" and don't redirect these logs to a distant HF, but only to local indexers.

LOGS ENTRY CONFIG

In an app Splunk_TA_FIREWALL, inputs.conf:

[tcp://22000]
sourcetype = fw:firewall
index = fw_index
_TCP_ROUTING = local_indexers

These logs are working perfectly and are stored on my local indexers. Now these logs must be cloned and redirected to a distant HF, but not the logs containing "network-guest". That's my props and transforms config:

props.conf

[fw:firewall]
TRANSFORMS-clone = fwfirewall-route-network-guest-, fwfirewall-clone

transforms.conf

[fwfirewall-route-network-guest]
REGEX = \bNETWORK-GUEST\b
DEST_KEY = _SYSLOG_ROUTING
FORMAT = local_indexers

[fwfirewalll-clone]
DEST_KEY = _SYSLOG_ROUTING
FORMAT = distant_HF
REGEX = .

When I check the logs on the distant Splunk, I don't see NETWORK-GUEST logs anymore, and I can see those logs on the local Splunk. Question is, I'm not sure I'm doing this the right way, and I'm not sure it works 100%. Does someone have good knowledge of this kind of configuration?

Thanks a lot for the help
Nico
@Haleb

Start by reviewing $SPLUNK_HOME/var/log/splunk/splunkd.log for specific error messages about the restart. Then run netstat -tuln | grep 8089 or ss -tuln | grep 8089 to confirm whether another process is using the port.
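The two checks above can be combined into a small script. This is only a sketch: the SPLUNK_HOME default and the port number are assumptions, so adjust them to your installation:

```shell
# SPLUNK_HOME default is an assumption; override via the environment if different
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
LOG="$SPLUNK_HOME/var/log/splunk/splunkd.log"

# 1) Look for recent errors/warnings around the restart
if [ -f "$LOG" ]; then
  grep -iE "error|warn" "$LOG" | tail -n 20
else
  echo "splunkd.log not found at $LOG"
fi

# 2) Check whether anything is already listening on the management port
PORT=8089
{ ss -tln 2>/dev/null || netstat -tln 2>/dev/null; } | grep ":$PORT" \
  || echo "nothing listening on port $PORT"
```

If the grep in step 2 returns a line for a process other than splunkd, a port conflict is the likely cause of the restart trouble.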
Hi there, after investigating my Search Head instance I found this in my task bar. Can somebody say whether it is expected behaviour?
Thanks for the info!  I just started an account and wanted to come back to this thread to give karma, really helpful yet simple fix. 
I think the easiest way is to use default certificates for KV store.
We have 40 DC servers sending logs to on-prem indexers, but on the Deployment Server I can see only one app, which has outputs.conf.
@livehybrid just to add on, after getting the data in: if the JSON file in the monitored folder has been updated, e.g. by appending records, do you happen to know why it indexes the whole JSON file again rather than just the newly appended records, given that the name of the JSON file remains the same?
@kiran_panchavat, after several attempts in my situation, I tried using the following setting for JSON. While it was able to read the data, each record/value ended up having duplicated values. I tried setting the relevant KV options, but it still didn't resolve the issue. For now, I've decided to proceed without using INDEXED_EXTRACTIONS. It still works, but it treats the [ as a single entry, and I'm still unsure how to fully resolve this. *Just a heads up: I'm also using transforms.conf, though I'm not entirely sure if that's what's causing the duplicate values.*

INDEXED_EXTRACTIONS = JSON

either with or without the following:

KV_MODE = none
AUTO_KV_JSON = false

@livehybrid, great! What you mentioned was part of the reason why two entries kept getting indexed together. After updating the configuration and removing the other stanza, I was able to index the JSON array as multiple events. I also noticed that it might have been due to my use of transforms.conf to assign the sourcetype.
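For anyone hitting the same "[ as a single entry" problem: a commonly used props.conf sketch for breaking a file containing one JSON array ([{...},{...}]) into one event per object looks like the following. The sourcetype name is a placeholder, and the exact regexes depend on how your file is formatted, so treat this as a starting point rather than a drop-in fix:

```ini
# hypothetical sourcetype name; replace with your own
[my:json:array]
SHOULD_LINEMERGE = false
# break between objects; the captured comma is discarded,
# leaving "}" at the end of one event and "{" starting the next
LINE_BREAKER = \}(\s*,\s*)\{
# strip the array brackets from the first and last events
SEDCMD-strip_open = s/^\[//
SEDCMD-strip_close = s/\]$//
# parse the JSON at search time instead of using INDEXED_EXTRACTIONS
KV_MODE = json
```

Using search-time KV_MODE = json here (rather than INDEXED_EXTRACTIONS = JSON) also avoids the duplicated-field symptom that occurs when both index-time and search-time JSON extraction run on the same events.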
Thanks for the hint. Now I know where to search. Yes, the indexes were deployed, but the wrong way (they were created on the search head and not in the cluster...). In our environment the app deployments are not done by me, and I then have to figure out what the issues are...
Hi, did you create new indexes, required by ES 8.0? Eg. mc_investigations,  mc_artifacts, mc_aux_incidents, mc_events, mc_incidents_backup, cms_main...? That could be your issue.
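If they are indeed missing on the indexer cluster, the stanzas to push from the cluster manager would look roughly like this. Paths follow the usual $SPLUNK_DB convention and repFactor = auto is the normal setting for clustered indexes; verify the full index list against the ES 8.0 upgrade documentation:

```ini
# indexes.conf - repeat one stanza per required ES 8.0 index
# (mc_artifacts, mc_aux_incidents, mc_events, mc_incidents_backup, cms_main, ...)
[mc_investigations]
homePath = $SPLUNK_DB/mc_investigations/db
coldPath = $SPLUNK_DB/mc_investigations/colddb
thawedPath = $SPLUNK_DB/mc_investigations/thaweddb
repFactor = auto
```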
Hi

We upgraded our ES7 to ES8 on-prem and are testing it. We currently have the issue that the created investigations are not shown in Mission Control. If we open a finding that is assigned to an investigation, we can open the investigation from there. If I read the documentation correctly, the investigations should appear alongside the findings inside Mission Control. Did anyone have the same issue and find a solution for it?

Thanks for your help/hints.
Hi @tangtangtang12,

which Add-On are you using to extract this information from your Windows servers? If you use the Splunk_TA_Windows ( https://splunkbase.splunk.com/app/742 ) there are two inputs (disabled by default, to be enabled) that you can use:

[WinHostMon://Computer]

[perfmon://Memory]
with counters such as: Page Faults/sec; Available Bytes; Committed Bytes; Commit Limit; Write Copies/sec; Transition Faults/sec; Cache Faults/sec; Demand Zero Faults/sec; Pages/sec; Pages Input/sec; Page Reads/sec; Pages Output/sec; Pool Paged Bytes; Pool Nonpaged Bytes; Page Writes/sec; Pool Paged Allocs; Pool Nonpaged Allocs; Free System Page Table Entries; Cache Bytes; Cache Bytes Peak; Pool Paged Resident Bytes; System Code Total Bytes; System Code Resident Bytes; System Driver Total Bytes; System Driver Resident Bytes; System Cache Resident Bytes; % Committed Bytes In Use; Available KBytes; Available MBytes; Transition Pages RePurposed/sec; Free & Zero Page List Bytes; Modified Page List Bytes; Standby Cache Reserve Bytes; Standby Cache Normal Priority Bytes; Standby Cache Core Bytes; Long-Term Average Standby Cache Lifetime (s)

Ciao.
Giuseppe
Hi @EFonua

It seems that something must have changed in either the field extractions for your users or the source data. Have you updated any apps recently or made any field extraction changes?

Without the actual search you are running it is hard for us to determine the issue here, but I would start by running the search manually to see what user values you get, then work back from there to determine why the correct value isn't appearing.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing