All Posts


Hello, it did work when the data arrived from a UF, which, as far as I can tell, is how the data is coming in for this related ticket. What I was trying to achieve is changing a custom sourcetype to a standard one for a specific log type. Since we do not have full access to specific Splunk environments, changing it at the indexer level is likely the only way to do it. If there is a HF involved, then my team should have access to it and we can make the appropriate changes on that endpoint.
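For illustration, a minimal sketch of such a sourcetype override on the HF or indexer is shown below. The sourcetype names (custom:applog, syslog) are hypothetical placeholders, and the settings have to live on the first heavy/full instance that parses the data:

props.conf
[custom:applog]
TRANSFORMS-force_sourcetype = set_standard_sourcetype

transforms.conf
[set_standard_sourcetype]
REGEX = .
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::syslog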
Thank you, that worked when the data arrives from a UF. I had started with a HF, on which I could not change the sourcetype, but that didn't surprise me either. We do not have access to the entire Splunk environment in every case, and we are still waiting on a ticket to see if a custom sourcetype needs to be changed to a standard one. In most cases where there is a heavy forwarder, it is likely we will have access to it and can take care of the sourcetype more directly on that endpoint.
Yes, I'm expecting the whitelist to be applied to both machineTypesFilter AND packageTypesFilter. Therefore, I'm expecting to have only clients with name=srv* AND OS=windows-x64 AND package=universal_forwarder. This is not working with whitelist, machineTypesFilter and packageTypesFilter in the same stanza. The documentation doesn't say whether both TypesFilter parameters can be used in the same stanza.
@livehybrid and @isoutamo Thanks for both of your answers! I more or less know what I need to put in those files, so I have that part figured out already. Yes, as far as my understanding goes, the app is supposed to go on a heavy forwarder node. We have no plans in place for using a deployment server. For the initial POC phase, I believe that adding the app as a simple zip file would suffice. As for outputs.conf, is it possible to somehow dynamically generate its content? I mean, to ask the user for a hostname or IP address, and then use that value for the server setting. I will try adding my configs to the app, and will report back in a few days. I am now looking at the page https://dev.splunk.com/enterprise/docs/developapps/extensionpoints It does mention props.conf and transforms.conf, but there seems to be no mention of outputs.conf. Is it possible to have an outputs.conf in the app and force Splunk to use it even though it is not present in the list above?
Hello @NoSpaces, did you report this to support? It looks like a 'splunk init shcluster-config' bug.
Hi @thanh_on
Please can you confirm that you are running Splunk on a VM/physical server, not inside Docker? Also, please could you run the following searches and let us know what is returned?

index=_introspection host=YourHostname component=HostWide earliest=-60m | dedup data.instance_guid | table data.mem*

and

| rest /services/server/info splunk_server=local | table guid host physicalMemoryMB

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
Hello, thanks for your help. I had a similar issue due to 'splunk init shcluster-config': the secret key apparently wasn't correctly encrypted in server.conf, as seen with the help of 'splunk show-decrypted'.
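For reference, a sketch of how that check might look; the encrypted value is a placeholder you would copy from the pass4SymmKey line in server.conf:

$SPLUNK_HOME/bin/splunk show-decrypted --value '$7$encryptedValueFromServerConf'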
Hi @lar06
The packageTypesFilter key uses the whitelist/blacklist logic as well as machineTypesFilter, therefore the whitelist you have specified is applied to both your machineTypesFilter AND your packageTypesFilter.

packageTypesFilter = <comma-separated list>
* Optional.
* Boolean OR logic is employed: a match against any element in the list constitutes a match.
* This filter is used in boolean AND logic with 'whitelist'/'blacklist' filters. Only clients which match the 'whitelist'/'blacklist' AND which match this packageTypesFilter are included.
* In other words, the match is an intersection of the matches for the 'whitelist'/'blacklist' and the matches for 'packageTypesFilter'.

For more information have a look at https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Serverclassconf
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
I checked on the HF under var/lib/splunk and the created index is showing (but the hot and cold paths are incorrect), because volumes are configured on the indexers. Hence the index configurations are correct on the indexers, and that is what should be applied in the end...
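For context, a sketch of the kind of indexes.conf that only resolves correctly on the indexers, since the volume stanzas exist only there; all paths and names below are hypothetical:

[volume:hot]
path = /splunk/hot

[volume:cold]
path = /splunk/cold

[my_index]
homePath = volume:hot/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb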
Hi @wowbaggerHU
How you create an app would ultimately depend on what your architecture looks like. As well as props/transforms, you mentioned that you are looking to make changes to outputs.conf; is this app going on a heavy forwarder? If so, are you deploying via a deployment server? Ultimately, an app in Splunk can be as simple as a folder structure with specific files; your simple app might look like this:

$SPLUNK_HOME/
  etc/
    apps/
      yourAppName/
        local/
          props.conf
          transforms.conf
          outputs.conf

If you need to deploy this app via a deployment server, then you would put the app in $SPLUNK_HOME/etc/deployment-apps and then configure the deployment server to deploy it to your instance(s). Check out the following useful pages on options for the relevant conf files you are looking to create:
Props - https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Propsconf
Transforms - https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Transformsconf
Outputs - https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Outputsconf
Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards, Will
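For illustration, a minimal outputs.conf for such an app might look like the sketch below; the group name and indexer addresses are placeholders to replace with your own:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997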
Hello,
Using Splunk 9.3.2, I want to deploy an app to all Windows UFs only. This config doesn't work:

[serverClass:scallufwin:app:btoolufwin]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:scallufwin]
machineTypesFilter = windows-x64
packageTypesFilter = universal_forwarder
whitelist.0 = srv*

Moving the packageTypesFilter parameter to the serverClass:app stanza fixed the issue:

[serverClass:scallufwin:app:btoolufwin]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled
packageTypesFilter = universal_forwarder

[serverClass:scallufwin]
machineTypesFilter = windows-x64
whitelist.0 = srv*

Does that mean packageTypesFilter and machineTypesFilter can't be used in the same stanza? Thanks
When you have set up outputs.conf as described earlier in this thread, all events go only to the indexers. Of course, you can log in to the HF and look whether there is anything under /opt/splunk/var/lib/splunk, where all the indexes are. Just check whether your own indexes have events in them or not.
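For example, a quick check on the HF could look like this; the index name is a placeholder, and an empty or missing db directory means nothing was indexed locally:

ls -l /opt/splunk/var/lib/splunk/your_index/db
du -sh /opt/splunk/var/lib/splunk/your_index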
Hi, you should start with dev.splunk.com, where it is described how to create an app. That documentation is the basis for understanding app development for Splunk. Then there is https://docs.splunk.com/Documentation/Splunk/9.4.1/AdvancedDev/Whatsinthismanual where you can find more information. Also, conf.splunk.com contains a lot of good presentations. Personally I haven't used YouTube for Splunk, but there should be something about this there as well. For details about props.conf and transforms.conf, see https://docs.splunk.com/Documentation/Splunk/9.4.1/Admin/Propsconf With Google you will find a lot of other sources too. r. Ismo
Dear Members, I have a use case where I would need to update or insert configuration in transforms.conf, props.conf and outputs.conf. I was told that it is possible to do this by creating an app. That would make it easier for users to make the necessary changes, instead of doing it via the error-prone manual procedure. Nevertheless, I haven't come across any documentation that would illustrate and explain how to do it. Does someone have any experience with that? Or perhaps can someone point me to the relevant documentation? Thanks in advance!
@isoutamo We are receiving data from a data input. In the data input I am going to select the identical index created on the HF (which also exists on the indexers) and save it. I am not sure whether my event data carries this index when it is forwarded to the indexers?
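For context, selecting an index in a data input simply writes it into that input's stanza; a sketch with placeholder names:

[monitor:///var/log/myapp/app.log]
index = my_index
sourcetype = my_sourcetype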
If the index name in the event's metadata is exactly the same as one in the indexes.conf file on the indexer where the events arrive, then yes. Basically you can change that name in props.conf + transforms.conf on the first full Splunk instance (in your case the HF which collects that data). If the indexers do not have a correctly named index in their index list, then the lastChanceIndex setting defines where such events go, see below:

lastChanceIndex = <index name>
* An index that receives events that are otherwise not associated with a valid index.
* If you do not specify a valid index with this setting, such events are dropped entirely.
* Routes the following kinds of events to the specified index:
  * events with a non-existent index specified at an input layer, like an invalid "index" setting in inputs.conf
  * events with a non-existent index computed at index-time, like an invalid _MetaData:Index value set from a "FORMAT" setting in transforms.conf
* You must set 'lastChanceIndex' to an existing, enabled index. Splunk software cannot start otherwise.
* If set to "default", then the default index specified by the 'defaultDatabase' setting is used as a last chance index.
* Default: empty string

This is the index where all events whose index name does not exist in the indexes.conf files are sent. As you can see, there is no default for it, which means the indexer drops those events. For that reason you should always have lastChanceIndex defined and access to it restricted. You should also monitor it so you can react when someone tries to send events to a wrong index.
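For illustration, a sketch of rewriting the index on the HF, assuming a hypothetical sourcetype my:sourcetype and a target index my_standard_index that exists on the indexers:

props.conf
[my:sourcetype]
TRANSFORMS-route_index = route_to_standard_index

transforms.conf
[route_to_standard_index]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = my_standard_index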
@PickleRick @isoutamo So, finally, the index which is present on the indexers (in indexes.conf) will be used, right? Please confirm this.
Here is more about it: the IndexAndForward processor and the indexAndForward attribute.

#----TCP Output Global Configuration -----
# You can overwrite the global configurations specified here in the
# [tcpout] stanza in stanzas for specific target groups, as described later.
# You can only set the 'defaultGroup' and 'indexAndForward' settings
# here, at the global level.
#
# Starting with version 4.2, the [tcpout] stanza is no longer required.

[tcpout]

indexAndForward = <boolean>
* Set to "true" to index all data locally, in addition to forwarding it.
* This is known as an "index-and-forward" configuration.
* This setting is only available for heavy forwarders.
* This setting is only available at the top level [tcpout] stanza. It cannot be overridden in a target group.
* Default: false
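Putting that attribute in context, a heavy forwarder that both forwards and keeps a local copy might use an outputs.conf like the sketch below; the group name and indexer addresses are placeholders:

[tcpout]
defaultGroup = primary_indexers
indexAndForward = true

[tcpout:primary_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997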
Except a newer DS, which must have it like this:

[indexAndForward]
index = true
selectiveIndexing = true

On the HF side it actually depends on the TA whether those indexes are needed on that server or not. At least some, e.g. DBX, have this field as a text box where you can just write the index names instead of selecting them from a popup list.
1) Real example from today where a trade was created and sent to market (7h46) and another action happened on the trade (7h57):

INFO | xxxxx | 2025/03/07 07:57:49 |226123425|Trade sent to market|
INFO | xxxxx | 2025/03/07 07:57:48 |226123425|Trade received
INFO | xxxxx | 2025/03/07 07:46:43 |226123425|Trade sent to market|
INFO | xxxxx | 2025/03/07 07:46:42 |226123425|Trade received

2) Yes, I guarantee they always arrive in the same order and do not overlap.
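Assuming the goal is to pair each "Trade received" with the matching "Trade sent to market" for the same trade ID and measure the gap (an assumption, since the original question is not quoted here), one possible SPL sketch is shown below; the trade_id and action extractions are hypothetical:

... | rex "\|(?<trade_id>\d+)\|(?<action>[^|]+)"
| transaction trade_id startswith="Trade received" endswith="Trade sent to market" maxevents=2
| table _time trade_id duration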