All Posts


The _configtracker index is currently an excellent place to find those changes. It logs them even when they are made while Splunk is down; those changes are simply picked up when Splunk starts. Unfortunately there could be some differences on the SCP side? At least earlier it didn't log all SCP platform changes, or at least even sc_admin couldn't see them, but I suppose that Splunk's own SREs can see those too.
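For example, a simple search like this shows recent changes (I'm writing the sourcetype and field names from memory, so verify them against your own events):

index=_configtracker sourcetype=splunk_configuration_change
| table _time data.action data.path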
It will also not work if your inheritance is nested. Unfortunately, there is no good way of tracking all inheritances for a role except for listing effective capabilities for a given user.
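For example, something along these lines lists what a given user effectively has (a sketch; "some_user" is a placeholder and the capabilities field may differ on your version):

| rest /services/authentication/users splunk_server=local
| search title="some_user"
| table title roles capabilities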
When you replace :: in _meta fields, the receiving Splunk instance no longer recognizes them as _meta data. And if those mandatory meta fields are missing, Splunk cannot guess them and do what is needed for those events. Then, depending on the receiver-side configuration, the data either goes to the default index or gets dropped.
I'm assuming you're receiving this on SC4S. Since you've changed the format of the sent data, the receiving end probably doesn't know what to do with it. The first thing to check would be to sniff the traffic to see whether the data is being sent and what it looks like.
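For example, on the receiving box (assuming plain syslog over TCP port 514; adjust the port and interface to your setup):

tcpdump -i any -nn -A 'tcp port 514'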
I checked it, but unfortunately it does not seem to work. Now I can't seem to find logs that contain any metadata, so I assume they are being dropped due to some problem. Where should I look for clues?
Valid point. No, it's quite limited and logs only GUI edits (still could be useful in cloud). I just tested editing authorize.conf in system and via a new and an existing app. Everything can be seen in _configtracker with the following simple search:

index="_configtracker" data.path="*authorize.conf"

GUI edits to an existing role are logged under _configtracker as expected.
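If you want to see what exactly changed, something like this should work too (field names from memory, so verify against your data):

index="_configtracker" data.path="*authorize.conf"
| rename "data.changes{}.properties{}.name" as property, "data.changes{}.properties{}.new_value" as new_value
| table _time data.action data.path property new_value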
1. You're posting this one in a "Getting data in" section with "HEC" and "scripted input" labels. Are you sure it's really about getting data into your Splunk?
2. What kind of script are you talking about? A JS code in your browser? A script on the Search Head? Something else?
3. Are you aware of the security implications?
As a side note - pretty much every solution involving Windows and third-party syslog breaks stuff somewhere: either it breaks Splunk's parsing or it breaks the third party's parsing. At some point something is almost sure to break.
Hello, I have a Windows machine with a UF that sends its logs to a HF, which has the SC4S-derived config loaded (see the opening entry's link). That allows reformatting the logs that pass through the HF to IETF RFC 5424 syslog (with framing enabled) and forwarding them to a syslog instance. That reformatting alters most parts of the original message. In the output, the first half of the message (not counting the SDATA part) generally contains the metadata fields in the key::value format. I would like to change that in the syslog output generated by the config on the HF node.
OK. So this is not about Splunk's metadata format as much as rendering it for export. I suppose you can tweak it a little. The key part here is this transform:

[metadata_meta]
SOURCE_KEY = _meta
REGEX = (?ims)(.*)
FORMAT = ~~~SM~~~$1~~~EM~~~$0
DEST_KEY = _raw

It's called as the first one (except for the one manipulating routing) and it exports the whole _meta as-is. So you need to change it to:

[sanitize_metadata]
INGEST_EVAL = escaped_meta=replace(_meta,"::","=")

[metadata_meta]
SOURCE_KEY = escaped_meta
REGEX = (?ims)(.*)
FORMAT = ~~~SM~~~$1~~~EM~~~$0
DEST_KEY = _raw

And of course adjust props to call sanitize_metadata first:

TRANSFORMS-zza-syslog = syslog_canforward, sanitize_metadata, metadata_meta, metadata_source, metadata_sourcetype, metadata_index, metadata_host, metadata_subsecond, metadata_time, syslog_prefix, syslog_drop_zero
What you are referring to is the syslog structured data, or SDATA (see RFC 5424), portion of the message. That consists of only 5 values (the same as the Splunk JSON envelope's 5 top-level fields). And yes, those use the equals sign as a separator. The main part of the message, on the other hand, will look like this:

~~~SM~~~env::env01~~~EM~~~11/29/2024 02:01:55 PM\nLogName=Security\nEventCode=4624\nEventType=0\nComputerName=DESKTOP-OOU0O6E\nSourceName=Microsoft Windows security auditing.\nType=Information\nRecordNumber=49513\nKeywords=Audit Success\nTaskCategory=Logon\nOpCode=Info\nMessage=An account was successfully logged on.\r\n\r\nSubject:\r\n\tSecurity ID:\t\tNT AUTHORITY\\SYSTEM\r\n\tAccount Name:\t\tDESKTOP-OOU0O6E$\r\n\tAccount Domain:\t\tWORKGROUP\r\n\tLogon ID:\t\t0x3E7\r\n\r\nLogon Information:\r\n\tLogon Type:\t\t5\r\n\tRestricted Admin Mode:\t-\r\n\tVirtual Account:\t\tNo\r\n\tElevated Token:\t\tYes\r\n\r\nImpersonation Level:\t\tImpersonation\r\n\r\nNew Logon:\r\n\tSecurity ID:\t\tNT AUTHORITY\\SYSTEM\r\n\tAccount Name:\t\tSYSTEM\r\n\tAccount Domain:\t\tNT AUTHORITY\r\n\tLogon ID:\t\t0x3E7\r\n\tLinked Logon ID:\t\t0x0\r\n\tNetwork Account Name:\t-\r\n\tNetwork Account Domain:\t-\r\n\tLogon GUID:\t\t{00000000-0000-0000-0000-000000000000}\r\n\r\nProcess Information:\r\n\tProcess ID:\t\t0x2d4\r\n\tProcess Name:\t\tC:\\Windows\\System32\\services.exe\r\n\r\nNetwork Information:\r\n\tWorkstation Name:\t-\r\n\tSource Network Address:\t-\r\n\tSource Port:\t\t-\r\n\r\nDetailed Authentication Information:\r\n\tLogon Process:\t\tAdvapi \r\n\tAuthentication Package:\tNegotiate\r\n\tTransited Services:\t-\r\n\tPackage Name (NTLM only):\t-\r\n\tKey Length:\t\t0\r\n\r\nThis event is generated when a logon session is created. It is generated on the computer that was accessed.\r\n\r\nThe subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.\r\n\r\nThe logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).\r\n\r\nThe New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.\r\n\r\nThe network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.\r\n\r\nThe impersonation level field indicates the extent to which a process in the logon session can impersonate.\r\n\r\nThe authentication information fields provide detailed information about this specific logon request.\r\n\t- Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.\r\n\t- Transited services indicate which intermediate services have participated in this logon request.\r\n\t- Package name indicates which sub-protocol was used among the NTLM protocols.\r\n\t- Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

I would like the first part of the syslog message to carry the metadata as env=env01 or env:env01. As I understand it, the SC4S-derived config allows you to modify most parts of the message. But is it possible for the metadata part too? If yes, how do I match the metadata key-value pairs?
Wait a moment. As far as I can read this - https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/Splunk/heavyforwarder/ - the forwarded data will be formatted like st="sourcetype" i="index" and so on. So where's the problem?
Hi. Based on your sample data, and if your props.conf is just what you have shown us, this should work as @PickleRick said. Quite probably you have something else for those events in your input file. Can you find the problematic events, plus the one before and the one after each of them? Then add them inside the editor's </> block, so we can be sure that no editor changes sneaked in when you posted them into this thread. r. Ismo
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<UNKNOWN>

This should do the trick. Of course you need to set your timestamp recognition as well, but that's another story.
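Based on the sample events in this thread, the timestamp part could look something like this (a sketch; double-check the format string and lookahead against your real data):

TIME_PREFIX = ^<UNKNOWN>\s+-\s+
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25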
Hi. I'm not sure if I understand your requirements correctly. Do you want to reformat the syslog feed before it's modified by the HF? Or do you want to use some other metadata separator than ::? You could modify the data before the HF sets it into metadata (and indexed fields), BUT you cannot use your own metadata separator like =. In Splunk, :: is the fixed metadata separator and you must use it in transforms.conf and/or inputs.conf, like _meta = foo::bar (see the example below). r. Ismo
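PS: in inputs.conf that could look like this (the monitor stanza and values here are just an illustration):

[monitor:///var/log/example.log]
index = main
sourcetype = example
_meta = env::env01 team::ops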
I don't want to change how fields are indexed. I just want to reformat the metadata (to use different key-value separators) via the transforms.conf prior to being forwarded to syslog-ng.
Thank you for the reply. I have changed the props.conf to:

[cyberark:snmplogs]
LINE_BREAKER = ([\r\n]+)\<UNKNOWN\>
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false

However, the line breaking is still wrong. Sometimes Splunk even ingests only the first line of an event (the one at 16:04:48). Do you have any idea of the reason behind this?

Actual log file:

<UNKNOWN> - 2025-01-13 16:04:48 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:35:56.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDRServiceNameNotification CYBER-ARK-MIB::osServiceName "CyberArk Vault Disaster Recovery" CYBER-ARK-MIB::osServiceStatus "Stopped" CYBER-ARK-MIB::osServiceTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:17 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDiskFreeSpaceNotification CYBER-ARK-MIB::osDiskDrive "C:\\" CYBER-ARK-MIB::osDiskPercentageFreeSpace "71.56" CYBER-ARK-MIB::osDiskFreeSpace "58183" CYBER-ARK-MIB::osDiskTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:17 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osSwapMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13521168 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3651932 CYBER-ARK-MIB::osMemoryTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:18 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osCpuUsageNotification CYBER-ARK-MIB::osCpuUsage "0.000000" CYBER-ARK-MIB::osCpuTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:18 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13521168 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3651932 CYBER-ARK-MIB::osMemoryTrapState "Alert"
No. Indexed fields are indexed as key::value search terms. That's by design.
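This is also why you can search an indexed field directly by its term, e.g. (assuming an indexed field env with value env01; index name is a placeholder):

index=main env::env01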
Hi @rohithvr19, as I said in my answer to your previous question, it's possible, but think about your requirements: the performance of a script button will be very, very low. The better approach isn't script execution with a button but a near-real-time scheduled search, as described in https://community.splunk.com/t5/Splunk-Search/Export-Logs-from-Zabbix-to-Splunk-Dashboard-via-API-on-Button/m-p/708529#M239598 Ciao. Giuseppe
Don't use SHOULD_LINEMERGE=true. It's very rarely a useful option. In your case it will probably be just:

LINE_BREAKER = ([\r\n]+)<UNKNOWN>

You might need to escape < and > and maybe enclose <UNKNOWN> in a non-capturing group.
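I.e. something along these lines (untested; escaping < and > is harmless in PCRE even where it isn't strictly required):

LINE_BREAKER = ([\r\n]+)(?:\<UNKNOWN\>)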