All Posts


Hello, I have a working dashboard where various fields can be defined (field1 and field2 in the example), and some events have a field that is an extracted JSON object. I have successfully accessed various elements within the JSON object, but what I am trying to do is create ONE column called "Additional Details" where only certain elements, IF THEY EXIST, will populate. The search below technically works, but as you can probably see, it will just add a NULL value if the specified element from field3 does not exist. Is there a way to check for other values in the JSON object and populate them in that single column only if they exist? i.e. if field3 has "Attribute Name", "Resource Name", and "ID", but many events have only one of these fields, is it possible to have the value from the field, only if it exists, populate the "Additional Details" column?

index=test field1=* field2=*
| spath input=field3 #(which is a json_object)#
| fillnull value=NULL
| eval type=if(isnotnull(element_from_field3), ElementName, NULL)
| stats count values(type) as "Additional Details" by Other
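One way to approach this (a sketch, assuming the extracted field names really are "Attribute Name", "Resource Name", and "ID" as in the question) is coalesce(), which returns the first non-null argument, so the column only gets a value when at least one of those elements exists:

```
index=test field1=* field2=*
| spath input=field3
| eval "Additional Details"=coalesce('Attribute Name', 'Resource Name', 'ID')
| stats count values("Additional Details") as "Additional Details" by Other
```

If a single event can contain more than one of these elements and you want all of them rather than the first, mvappend() may be an alternative, since null arguments are left out of the multivalue result.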
Splunk can only tell you what it is told by Windows.  Are you running sysmon on the Windows devices?  If so, then you can get detailed user activity; otherwise, you're limited to what's in the event logs (that have been indexed).
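With the Sysmon add-on in place, process-creation activity (Sysmon event ID 1) could be searched along these lines (the index and source names here are assumptions; adjust to your environment):

```
index=wineventlog source="XmlWinEventLog:Microsoft-Windows-Sysmon/Operational" EventCode=1
| stats count by host, User, Image
```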
Only the fields provided by the metadata command can be displayed unless you add other commands that search indexes for names ending with "srx".  If the desired field *is* returned by metadata, then include it in the fields command and use where or search to filter the desired values.

| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| fields + host lastSeen
| search host="*srx"
Sorry, I did not completely explain, - I would like it to return the full name ending in srx.
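Since metadata already returns the full host name, a wildcard search filter will keep only names ending in "srx" and display them in full (a sketch reusing the 10800-second threshold from the earlier query):

```
| metadata type=hosts
| where recentTime < now() - 10800
| eval lastSeen = strftime(recentTime, "%F %T")
| search host="*srx"
| table host lastSeen
```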
You have to edit the DM to see the values, but it's much easier to read it from the manual.
As I said earlier, the output from btool is *everything* Splunk will use.  Btool has already selected the appropriate files based on what is available in apps and defaults after applying file precedence rules. IOW, Splunk will use *all* of the outputs.conf files listed by btool, not just the first.
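To see not only the merged settings but also which file each one came from, btool's --debug flag can be used (the path assumes a default $SPLUNK_HOME):

```
$SPLUNK_HOME/bin/splunk btool outputs list --debug
```

Each line of the output is prefixed with the path of the .conf file that supplied that setting, which makes the precedence result easy to verify.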
Verify the index name specified in inputs.conf on the UF exists on the indexers. Please share the query you're using to find the data.
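For reference, a hypothetical monitor stanza on the UF (paths and names are made up for illustration); the index named here must already exist on the indexers, or the events will be dropped (or routed to the lastChanceIndex, if one is configured):

```
# inputs.conf on the UF (example path, index, and sourcetype are hypothetical)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
```

On the search head, `| eventcount summarize=false index=* | dedup index | fields index` is one quick way to list the indexes that actually exist.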
How do I create that, @gcusello?
Is there some configuration setting to expand the number of characters displayed, so that the complete query from the other answer can be seen?
Hello, is there any way to find out which applications are accessed by a user, rather than just logon/logoff activity, from the Windows event logs? Can someone help me with the search? Thanks
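Without Sysmon, process-creation auditing (Security event 4688) is the closest thing to "applications accessed" in the standard Windows event logs; it has to be enabled via audit policy first. A sketch, assuming the field names extracted by the Splunk Add-on for Windows and an assumed index name:

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode=4688
| stats count by user, New_Process_Name
```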
You're definitely on the right track with your method - you wouldn't actually need to add a custom code block for this; you can just add it into the commented section of your action block. Depending on your use cases, and if it's something you expect to need often, it might warrant a custom function.  An alternative, if you're only ever expecting a single value returned in that list, could be to use format blocks, but it's definitely not as clean a solution. If you haven't read through it yet, this was a super helpful link in the documentation to jumpstart my overall SOAR knowledge: https://docs.splunk.com/Documentation/SOAR/current/PlaybookAPI/Datapaths
Thanks for your response @richgalloway  I have performed the changes as suggested in the link you provided, and have restarted the Splunk UF too.  Still facing the same issue, and there are no errors in splunkd.log.
When you say "to match those expected by the DM ("high", "medium", "low", etc. in severity, for example)", where can I see this information in the DM?
OK, thanks richgalloway and isoutamo for the help. So if I understand correctly, if no outputs.conf is defined in the app conf, then Splunk will use the first outputs file in the btool list, unless a default group is set?
Hi @keorus, I suppose that your events are truncated at the raw level as well. In this case, you have to intervene in the input phase by adding an option to your props.conf:

[your_sourcetype]
TRUNCATE = 1000000

using a reasonable value for the maximum length of your events. You have to put this props.conf on the Indexers or (if present) on the first Heavy Forwarder that your data passes through. Ciao. Giuseppe
Hi, don't use NAS/NFS (Not For Splunk) for storing active Splunk buckets! Use only local disks, not network storage like NAS or a remote computer. The only exception could be storage for frozen buckets, and even then only when you are moving them from cold to frozen.  r. Ismo
Apps don't use configs.  Splunk uses configs specified by apps.  If an app doesn't provide a config file then another file will be used by order of precedence.  This is what btool shows us. IOW, what you see in the btool output is what Splunk will use.
Don't create your own DM.  That defeats the purpose of CIM. Your app should produce the fields listed in the CIM manual for the Intrusion Detection data model.  It doesn't have to produce all of the fields, but as many as apply to the data.  It may also have to adjust field values to match those expected by the DM ("high", "medium", "low", etc. in severity, for example).  Tag the data as expected by the DM.
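As an illustration of the value-adjustment point, a hypothetical eval that maps a vendor's numeric severity field (here called vendor_severity, an assumed name; the thresholds are made up too) onto values the DM expects:

```
| eval severity=case(vendor_severity<=3, "low", vendor_severity<=6, "medium", vendor_severity<=8, "high", true(), "critical")
```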
Good morning, I come to you because, after looking for an answer to my problem, my last resort is to seek help on the Splunk forum.

Here is the context: I have hundreds of messages with identical node parameters; only the parameter values change. Example:

"jobs": dev
"position": 3
"city": NY
"name": Leo
.......
"jobs": HR
"position": 4
"city": CA
"name": Mike
........

The problem is that these hundreds of messages are sometimes truncated because their responses are too large. I would like to find a solution to display them in full. I had thought about increasing the capacity in Splunk, but this is not possible for my project, and the truncated logs are less than 1% of the total, so a big change for so few logs is not really a good move.

My second solution: I thought of making a regex which finds the truncated message grouped into several pieces; is this possible? I also tried some regex to find my message, like this, but it is not working:

index="" | eval key="<value i want>" | table _raw

If not, maybe you have another idea? Thank you for your help and time. Have a good evening.
Hi,

#----TCP Output Global Configuration -----
# You can overwrite the global configurations specified here in the
# [tcpout] stanza in stanzas for specific target groups, as described later.
# You can only set the 'defaultGroup' and 'indexAndForward' settings
# here, at the global level.
#
# Starting with version 4.2, the [tcpout] stanza is no longer required.
[tcpout]
defaultGroup = <comma-separated list>
* A comma-separated list of one or more target group names, specified later in [tcpout:<target_group>] stanzas.
* The forwarder sends all data to the specified groups.
* If you don't want to forward data automatically, don't configure this setting.
* Can be overridden by the '_TCP_ROUTING' setting in the inputs.conf file, which in turn can be overridden by a props.conf or transforms.conf modifier.
* Starting with version 4.2, this setting is no longer required.

Unless you have set defaultGroup to a group you have defined separately, the node doesn't send events anywhere. Of course, you could define something special in inputs.conf too. r. Ismo
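Putting that spec excerpt together, a minimal outputs.conf with a default group (the group name and server addresses are placeholders):

```
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
```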