All Posts

Verify the index name specified in inputs.conf on the UF exists on the indexers. Please share the query you're using to find the data.
How do I create that, @gcusello?
Is there some configuration to expand the number of characters shown in the explanation of another query, so that the complete query can be seen?
Hello, is there any way to know which applications were accessed by the user, rather than just the logon/logoff activities, from the Windows event logs? Can someone help me with the search? Thanks
You're definitely on the right track with your method. You wouldn't actually need to add a custom code block for this; you can just add it in the commented section of your action block. Depending on your use cases, and whether it's something you expect to need often, it might warrant a custom function. An alternative, if you only ever expect a single value returned in that list, could be to use the format blocks, but that's definitely not as clean a solution. If you haven't read through it yet, this was a super helpful link in the documentation to jumpstart my overall SOAR knowledge: https://docs.splunk.com/Documentation/SOAR/current/PlaybookAPI/Datapaths
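As a hedged illustration in plain Python (not the SOAR API itself; the variable names and sample value below are hypothetical), the custom-code section usually only needs to collapse the list-shaped datapath result down to its single value:

```python
# Hypothetical shape of a collected datapath result: a list of rows,
# one inner list per matching artifact or action result.
collected = [["203.0.113.7"]]

# Flatten the rows, then take the first item since we only expect one value.
values = [row[0] for row in collected if row]
single_value = values[0] if values else None

print(single_value)
```

If more than one value can ever come back, it is safer to loop over `values` than to assume the first entry is the right one.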
Thanks for your response @richgalloway. I have made the changes suggested in the link you provided and have restarted the Splunk UF too. I am still facing the same issue, and there are no errors in splunkd.log.
When you say "to match those expected by the DM ("high", "medium", "low", etc. in severity, for example)", where can I see this information in the DM?
OK, thanks richgalloway and isoutamo for the help. So if I understand correctly, if no outputs.conf is defined in the app's conf, then Splunk will use the first outputs.conf file in the btool list, unless a default group is set?
Hi @keorus, I suppose that your events are truncated at the raw level as well. In this case, you have to intervene in the input phase by adding an option to your props.conf:

[your_sourcetype]
TRUNCATE = 1000000

Use a reasonable value for the max length of your events. You have to put this props.conf on the Indexers or (if present) on the first Heavy Forwarder that your data passes through. Ciao. Giuseppe
Hi, don't use NAS/NFS (Not For Splunk) for storing active Splunk buckets! Use only local disks, not any network storage like NAS or a remote computer. The only exception could be storage for frozen buckets, and even then only when you are moving them from cold to frozen. r. Ismo
Apps don't use configs.  Splunk uses configs specified by apps.  If an app doesn't provide a config file then another file will be used by order of precedence.  This is what btool shows us. IOW, what you see in the btool output is what Splunk will use.
Don't create your own DM.  That defeats the purpose of CIM. Your app should produce fields listed in the CIM manual for the Intrusion Detection model.  It doesn't have to produce all of the fields, but as many as apply to the data.  It also may have to adjust field values to match those expected by the DM ("high", "medium", "low", etc. in severity, for example).  Tag the data as expected by the DM.
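As a hedged sketch of what that mapping can look like in an app's configuration (the sourcetype, eventtype, and source field names below are hypothetical; check the CIM manual for the authoritative field list for Intrusion Detection):

```
# props.conf
[my:ids:sourcetype]
FIELDALIAS-cim = src_address AS src dest_address AS dest
EVAL-severity = case(alert_level<=3, "low", alert_level<=6, "medium", true(), "high")

# eventtypes.conf
[my_ids_events]
search = sourcetype="my:ids:sourcetype"

# tags.conf
[eventtype=my_ids_events]
ids = enabled
attack = enabled
```

The tags are what pull the events into the data model's constraint searches, so verify the required tags in the data model's definition before settling on them.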
Good morning, I come to you because after looking for an answer to my problem, my last resort is to seek help on the Splunk forum.

Here is the context: I have hundreds of messages with identical node parameters; only the parameter values change. Example:

"jobs": dev
"position": 3
"city": NY
"name": Leo
.......
"jobs": HR
"position": 4
"city": CA
"name": Mike
........

The problem is that these hundreds of messages are sometimes truncated because their responses are too large, and I would like to find a solution to display them in full. I had thought about increasing the capacity in Splunk, but this is not possible for my project, and the truncated logs are less than 1% of the total, so a big change for so few logs is not really a good move.

My second idea: I thought of writing a regex that finds the truncated message split into several pieces. Is this possible?

I also tried a search to find my message, like this, but it is not working:

index="" | eval key="<value i want>" | table _raw

If not, maybe you have another idea? Thank you for your help and time. Have a good evening.
Hi

#----TCP Output Global Configuration -----
# You can overwrite the global configurations specified here in the
# [tcpout] stanza in stanzas for specific target groups, as described later.
# You can only set the 'defaultGroup' and 'indexAndForward' settings
# here, at the global level.
#
# Starting with version 4.2, the [tcpout] stanza is no longer required.

[tcpout]

defaultGroup = <comma-separated list>
* A comma-separated list of one or more target group names, specified later in [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you don't want to forward data automatically, don't configure this setting.
* Can be overridden by the '_TCP_ROUTING' setting in the inputs.conf file, which in turn can be overridden by a props.conf or transforms.conf modifier.
* Starting with version 4.2, this setting is no longer required.

If you haven't set defaultGroup to a target group that you have defined separately, the node doesn't send events anywhere. Of course, you could also define something special in inputs.conf.

r. Ismo
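For example, a minimal outputs.conf that sets a default group might look like this (the group name and server addresses are hypothetical):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```

With this in place, the forwarder sends all events to the primary_indexers group unless _TCP_ROUTING or a props/transforms rule overrides it.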
Hi! this worked. Thank you!
Does the answer to this similar question help? Solved: Changing max length of field - Splunk Community
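If the linked answer applies to your case, the knob it points at is typically in limits.conf (a hedged sketch; verify the stanza and default against the limits.conf spec for your Splunk version before changing it):

```
# limits.conf
[kv]
# Point at which automatic key-value extraction stops reading an event;
# long field values can appear truncated beyond this.
maxchars = 20480
```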
Hi richgalloway, thanks for your time. OK, I see. And in the case of an app which has no outputs.conf in it, how can I know which output will be used by this app?
Imagine I have an app with Intrusion Detection data. If I want to make my app CIM compliant, do I need to add aliases, tags, and calculated fields as in the Intrusion Detection data model? For example, if I have a field called "Alert level", do I need to create an alias in my app in order to rename it to "severity_id"? Or is it better to create my own data model from my app and query from that data model with tstats?

| tstats count from datamodel=TEST
My Splunk query results are getting truncated when creating a table. Is there any workaround to avoid this?

index=gbi_* (AppName=*) | table SQL
The btool command shows *all* of the settings that will be applied the next time Splunk restarts.  It takes file precedence into account when generating the output.  The first column produced by the --debug option is the name of the file from which the setting was read.
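To see this in practice for forwarder outputs, you can run btool with --debug (a CLI sketch; the path assumes a standard $SPLUNK_HOME installation):

```
# Each effective setting is prefixed with the file it was read from
$SPLUNK_HOME/bin/splunk btool outputs list --debug
```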