All Posts

Hi. Based on your sample data, and if your props.conf is exactly what you have shown us, this should work as @PickleRick said. Quite probably there is something else going on with those events in your input file. Can you find the problematic events, plus the one before and the one after each of them? Then paste them inside the editor's </> code block, so we can be sure the editor hasn't altered anything when you post them into this thread. r. Ismo
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)<UNKNOWN>

This should do the trick. Of course, you also need to set up your timestamp recognition, but that's another story.
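As a sketch only, a fuller stanza combining those two settings with timestamp recognition might look like the following. The sourcetype name is taken from elsewhere in this thread, and the TIME_PREFIX/TIME_FORMAT values are assumptions based on the "<UNKNOWN> - 2025-01-13 16:04:48" sample format, so verify them against your own data:

```ini
# props.conf -- hypothetical sketch, not a verified configuration
[cyberark:snmplogs]
SHOULD_LINEMERGE = false
# Only the first capturing group is discarded, so each event starts at "<UNKNOWN>"
LINE_BREAKER = ([\r\n]+)<UNKNOWN>
# Timestamp follows "<UNKNOWN> - " at the start of each event (assumed format)
TIME_PREFIX = ^<UNKNOWN>\s-\s
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 35
```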
Hi, I'm not sure I understand your requirements correctly. Do you want to reformat the syslog feed before it has been modified by the HF? Or do you want to use some metadata separator other than :: ? You can modify the data before the HF puts it into metadata (and indexed fields), BUT you cannot use your own metadata separator such as =. In Splunk, :: is the fixed metadata separator, and you must use it in transforms.conf and/or inputs.conf, e.g. _meta = foo::bar  r. Ismo
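To illustrate the fixed :: separator, here is a minimal sketch; the monitor path, the transform name, and the foo field are made up for illustration:

```ini
# inputs.conf -- attach an indexed field at input time; "::" is mandatory
[monitor:///var/log/example.log]
_meta = foo::bar

# transforms.conf -- writing metadata uses the same "::" separator
[set_foo_indexed_field]
REGEX = .
FORMAT = foo::bar
WRITE_META = true
```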
I don't want to change how fields are indexed. I just want to reformat the metadata (to use different key-value separators) via transforms.conf before it is forwarded to syslog-ng.
Thank you for the reply. I have changed props.conf to:

[cyberark:snmplogs]
LINE_BREAKER = ([\r\n]+)\<UNKNOWN\>
NO_BINARY_CHECK = true
category = Custom
pulldown_type = 1
disabled = false

However, the line breaking is still wrong. Sometimes Splunk even ingests only the first line of an event (16:04:48). Do you have any idea of the reason behind this?

Actual log file:

<UNKNOWN> - 2025-01-13 16:04:48 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:35:56.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDRServiceNameNotification CYBER-ARK-MIB::osServiceName "CyberArk Vault Disaster Recovery" CYBER-ARK-MIB::osServiceStatus "Stopped" CYBER-ARK-MIB::osServiceTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:17 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDiskFreeSpaceNotification CYBER-ARK-MIB::osDiskDrive "C:\\" CYBER-ARK-MIB::osDiskPercentageFreeSpace "71.56" CYBER-ARK-MIB::osDiskFreeSpace "58183" CYBER-ARK-MIB::osDiskTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:17 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osSwapMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13521168 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3651932 CYBER-ARK-MIB::osMemoryTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:18 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osCpuUsageNotification CYBER-ARK-MIB::osCpuUsage "0.000000" CYBER-ARK-MIB::osCpuTrapState "Alert"
<UNKNOWN> - 2025-01-13 16:06:18 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:22:37:25.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13521168 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3651932 CYBER-ARK-MIB::osMemoryTrapState "Alert"
No. Indexed fields are indexed as key::value search terms. That's by design.
Hi @rohithvr19, as I said in my answer to your previous question, it's possible, but think about your requirements: the performance of a script triggered by a button will be very low. The better approach isn't executing a script from a button, but a near-real-time scheduled search, as described in https://community.splunk.com/t5/Splunk-Search/Export-Logs-from-Zabbix-to-Splunk-Dashboard-via-API-on-Button/m-p/708529#M239598 Ciao. Giuseppe
Don't use SHOULD_LINEMERGE=true; it's a very rarely useful option. In your case it will probably be just:

LINE_BREAKER = ([\r\n]+)<UNKNOWN>

You might need to escape < and >, and maybe enclose <UNKNOWN> in a non-capturing group.
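As a plain-regex sketch of where such a pattern places the breaks: Splunk's actual LINE_BREAKER semantics differ slightly (Splunk discards only the first capturing group), so the lookahead below merely illustrates the break positions, and the sample lines are shortened versions of the logs posted in this thread:

```python
import re

# Break on newlines that are immediately followed by the "<UNKNOWN>" marker.
# The lookahead keeps the marker with the following event, similar to Splunk
# discarding only the first capturing group of LINE_BREAKER.
pattern = re.compile(r"[\r\n]+(?=(?:<UNKNOWN>))")

raw = (
    "<UNKNOWN> - 2025-01-13 16:04:48 UDP: first trap payload\n"
    "continuation line of the first trap\n"
    "<UNKNOWN> - 2025-01-13 16:06:17 UDP: second trap payload\n"
)

events = [e for e in pattern.split(raw) if e.strip()]
print(len(events))                                                # 2
print(events[1].startswith("<UNKNOWN> - 2025-01-13 16:06:17"))    # True
```

Note that the continuation line stays attached to the first event, which is exactly the behaviour wanted for these multi-line traps.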
Hi everyone, recently we had a use case where we had to use the scheduled PNG export function of a Dashboard Studio dashboard (Enterprise 9.4). Unfortunately, it only works with some limitations (a bug?). If you change the export's custom input fields for subject and message, they are not taken into account in the mail. In the Dev Tools you will find something like action.email.subject for the subject and action.email.message for the message, with the information written to the export schedule, which seems to be okay so far. If you start the export again, only the following fields are used in the email, and they seem to be predefined and not changeable at all via the GUI:

action.email.message.view: "Splunk Dashboard: api_modify_dashboard_backup"
action.email.subject.view: "A PDF was generated for api_modify_dashboard_backup" (even if you select the PNG export)

Has anyone else experienced this, or even better, found a solution? Thanks to all.
Your problem description is a bit vague. What do you mean by "not working"? What does your ingestion process look like? Have you set proper source types? Does your line breaking work properly? Timestamp recognition? Do your events get any fields extracted, or none at all? Did you configure event export properly on the Check Point side?
Your setup is a bit unusual: you seem to have certain duties but neither the access typically associated with those duties nor the assistance of someone who has it. So, as @isoutamo said, you should check your management processes and work on this issue with the party responsible for administering your environment.
I am having the same problem. The Keycloak server is on ECS and uses https://:8443. I cannot use this app to get logs from my Keycloak server.
@finchgdx Hello, do you have that usage field in your tutorial data or CSV?
Hi, I have a problem with log parsing in the Splunk Add-on for Check Point Log Exporter. I have installed it on both the SH and the HF, but logs from Check Point are not parsed properly. I have tried changing REGEX to ([a-zA-Z0-9_-]+)[:=]+([^|]+) and changing DEPTH_LIMIT to 200000 as the troubleshooting guide says, but it is still not working. Can you give me some advice? Thank you so much!
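For what it's worth, the REGEX quoted above can be exercised outside Splunk to check what it extracts from a sample line; the log line below is a made-up illustration in the pipe-delimited Log Exporter style, not real Check Point output:

```python
import re

# The extraction pattern from the post: a key, then ':' or '=', then a value
# running up to the next '|' delimiter.
kv_pattern = re.compile(r"([a-zA-Z0-9_-]+)[:=]+([^|]+)")

# Hypothetical pipe-delimited sample line (field names are assumptions)
line = "time=1736755200|action=Accept|src=10.0.0.5|dst=10.0.0.9"

fields = dict(kv_pattern.findall(line))
print(fields["action"])  # Accept
print(fields["src"])     # 10.0.0.5
```

If the regex extracts the expected pairs here but not in Splunk, the problem is more likely elsewhere in the pipeline (line breaking, sourcetype assignment, or where the props/transforms are deployed) than in the regex itself.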
Hello, I was trying to ingest snmptrapd logs with self file monitoring (only one Splunk instance in my environment). Here is the log format:

<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osDiskFreeSpaceNotification CYBER-ARK-MIB::osDiskDrive "C:\\" CYBER-ARK-MIB::osDiskPercentageFreeSpace "71.61" CYBER-ARK-MIB::osDiskFreeSpace "58221" CYBER-ARK-MIB::osDiskTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"
<UNKNOWN> - 2025-01-13 10:55:44 UDP: [10.0.216.39]:53916->[10.0.214.14]:162 SNMPv2-SMI::mib-2.1.3.0 30:17:26:51.00 SNMPv2-SMI::snmpModules.1.1.4.1.0 CYBER-ARK-MIB::osSwapMemoryUsageNotification CYBER-ARK-MIB::osMemoryTotalKbPhysical 16776172 CYBER-ARK-MIB::osMemoryAvailKbPhysical 13524732 CYBER-ARK-MIB::osMemoryTotalKbSwap 19266540 CYBER-ARK-MIB::osMemoryAvailKbSwap 3660968 CYBER-ARK-MIB::osMemoryTrapState "Alert"

I tried to use "<UNKNOWN>" as the line breaker, but it does not work at all and the events are broken in a weird way (sometimes it works; most of the time it doesn't). Please find the props.conf settings below:

[cyberark:snmplogs]
LINE_BREAKER = \<UNKNOWN\>
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
pulldown_type = 1
BREAK_ONLY_BEFORE = \<UNKNOWN\>
MUST_NOT_BREAK_BEFORE = \<UNKNOWN\>
disabled = false
LINE_BREAKER_LOOKBEHIND = 2000

Line breaking result in Splunk: (screenshot not included)
I am watching the training for the core user certification path on STEP and they are using an index that has the usage field.  I have uploaded the tutorial data from the community site but it doesn't... See more...
I am watching the training for the Core User certification path on STEP, and it uses an index that has the usage field. I have uploaded the tutorial data from the community site, but it doesn't have the usage field. I don't know how to rectify this, and I cannot replicate the activity in the learning material. Does anyone have a suggestion? EDIT: I just made up my own CSV and imported that data. ggwp
Your application logs are stored in Splunk Cloud. Splunk Observability Cloud does not store any application logs--it uses the Log Observer Connect integration to read them from Splunk Cloud and display them.
Two problems with the search.

First, in an evaluation function, the deep path payload.status needs to be single-quoted (i.e., 'payload.status') to dereference its value. Otherwise the bare word payload.status evaluates to null.

Second, if you want to use count(is_ok), you should make the "other" value disappear, i.e., make it null, not a "real" value of 0. If you think 0 is a better representation for "other", use sum as @ITWhisperer suggests.

In other words, on this mock event sequence:

_raw | payload.status | seq
{"seq":1,"payload":{"status":"ok"}} | ok | 1
{"seq":2,"payload":{"status":"degraded"}} | degraded | 2
{"seq":3,"payload":{"status":"ok"}} | ok | 3

either

| eval is_ok=if('payload.status'=="ok", 1, null())
| stats count as total, count(is_ok) as ok_count

or

| eval is_ok=if('payload.status'=="ok", 1, 0)
| stats count as total, sum(is_ok) as ok_count

or even

| eval is_ok=if('payload.status'=="ok", 1, 0)
| stats count as total, count(eval(is_ok = 1)) as ok_count

should give you

total | ok_count
3 | 2

This is an emulation you can play with and compare with real data:

| makeresults format=json data="[
  { \"seq\": 1, \"payload\": { \"status\": \"ok\" } },
  { \"seq\": 2, \"payload\": { \"status\": \"degraded\" } },
  { \"seq\": 3, \"payload\": { \"status\": \"ok\" } } ]"
| fields - payload, seq, _time
| spath
``` data emulation above ```
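The null-vs-zero distinction in that answer can be mimicked in plain Python; this is only a sketch, but Splunk's stats count(field) skips null values in the same way the generator below skips None:

```python
# Mimic the mock events from the answer above
events = [
    {"seq": 1, "payload": {"status": "ok"}},
    {"seq": 2, "payload": {"status": "degraded"}},
    {"seq": 3, "payload": {"status": "ok"}},
]

# Like: eval is_ok=if('payload.status'=="ok", 1, null())
is_ok = [1 if e["payload"]["status"] == "ok" else None for e in events]

total = len(is_ok)                                  # like: stats count
ok_count = sum(1 for v in is_ok if v is not None)   # like: stats count(is_ok)
print(total, ok_count)  # 3 2
```

Had is_ok been 0 instead of None for "degraded" events, a count over the field would return 3, which is exactly the mismatch described in the thread.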
Worked like a charm. This line seems to be making all the difference: | spath path=payload.status output=status.
It still fails: the if(payload.status==...) apparently always evaluates to false, despite there being both "ok" and "degraded" events, so the sum equals the count of all events.