All Posts



Thanks. I will try to update the HEC URL to /raw instead and test with the new line breaker configuration.
Trying to use time tokens in Dashboard Studio inside a subsearch: $time.earliest$ and $time.latest$ work for the Presets "Today" and "Yesterday", but not when a date range is selected. Can someone kindly help?

| inputlookup daily_distinct_count.csv
| rename avg_dc_count as avg_val
| search Page="Application"
| eval _time=relative_time(now(), "-1d@d"), value=avg_val, Page="Application"
| append
    [ search index="143576" earliest=$token.earliest$ latest=$token.latest$
      | eval Page=case( match(URI, "Auth"), "Application", true(), "UNKNOWN" )
      | where Page="Application"
      | stats dc(user) as value
      | eval _time=now(), Page="Application" ]
| table _time Page value
| timechart span=1d latest(value) as value by Page
I'm attempting to suppress an alert if a follow-up event (condition) is received within 60 seconds of the initial event (condition) from the same host. This is a network switch alerting on a BFD neighbor down event. I want to suppress the alert if a BFD neighbor up event is received within 60 seconds.

This is the event data received.

Initial BFD down:
2025-05-07T07:20:40.482713-04:00 "switch_name" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 has gone down. Reason: Administratively Down. host = "switch_name"

Second event, which should nullify the alert:
2025-05-07T07:20:41.482771-04:00 "switch_name" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 is up. host = "switch_name"
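One way to reason about this is to pair each DOWN with any UP from the same host inside the 60-second window and only alert on unpaired DOWNs (in SPL, the transaction command with startswith/endswith and maxspan=60s is a common shape for this). A minimal Python sketch of the pairing logic, using hypothetical simplified events rather than the raw syslog lines:

```python
from datetime import datetime, timedelta

# Hypothetical simplified events: (timestamp, host, state) tuples standing in
# for the %BFD-5-SESSION_STATE_DOWN / _UP syslog messages above.
events = [
    (datetime(2025, 5, 7, 7, 20, 40), "switch_name", "down"),
    (datetime(2025, 5, 7, 7, 20, 41), "switch_name", "up"),     # clears the down
    (datetime(2025, 5, 7, 8, 0, 0), "other_switch", "down"),    # never cleared
]

def unsuppressed_downs(events, window=timedelta(seconds=60)):
    """Return DOWN events with no UP from the same host within the window."""
    alerts = []
    for ts, host, state in events:
        if state != "down":
            continue
        cleared = any(
            h == host and s == "up" and ts <= t2 <= ts + window
            for t2, h, s in events
        )
        if not cleared:
            alerts.append((ts, host))
    return alerts
```

With the sample data above, only the other_switch DOWN survives, because the switch_name DOWN is followed one second later by an UP from the same host.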
Here is a reference on indexing pipelines by HEC endpoint: https://www.aplura.com/assets/pdf/hec_pipelines.pdf
Hello everybody! The problem I have is that when I try to make a backup of the KV Store on my Search Head, it fails during or after dumping the data. Splunk tells me to look into the logs, but besides some basic info that the backup has failed, I can't find anything in the splunkd and mongod logs.

From my understanding, since I'm using the point_in_time option, it is important that I make sure no searches are writing into the KV Store when I start the backup. Since Splunk makes a snapshot at the moment I start the backup, searches that modify the KV Store afterwards shouldn't impact the backup, right? I made sure no searches had the running status when starting the backup.

Does anybody have tips or threads about this topic? I thought about stopping the scheduler during the backup, but since there are important searches running, I want to look into all the options I have before taking drastic measures. Thanks in advance for any tips and hints!
Agree 100%.  Hope they consider implementing a self-updating feature if they expect to have the frequency of updates that come along with postgresql.
It's not fixed in upcoming releases. However, the fix (whenever it becomes part of a release) will be the same as the workaround:

[prometheus]
disabled = true
Hi Team, We are getting Dynatrace metrics and log4j logs into Splunk ITSI. Currently we created the universal correlation search manually (which needs fine-tuning whenever needed) for grouping notable events.

So, does Splunk ITSI or any other Splunk product provide its own AI model to perform automatic event correlation without any manual intervention? Any inputs are much appreciated. Please let me know if any additional details are required. Thank you.
I do not see any fix for this in the just-released 9.4.2, which came out a month after the issue was discovered in 9.4.0. There is a setting in the next Splunk beta, so maybe it will come in 9.4.3.

It is also strange that this setting is not mentioned in the latest documentation: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf

[prometheus]
disabled = true
Thank you. This worked perfectly!
Thank you ... both posted solutions worked perfectly. Much appreciated.
Is this issue likely to be fixed in an upcoming version release?
Hi @marcohirschmann  I haven't seen any apps on Splunkbase for Cisco Contact Center, and looking at the documentation, it really looks like pulling the logs/metrics isn't as easy as it could be! There's an option to download specific logs ad hoc - which isn't suitable - or there is a Monitoring API which does look more promising; details at https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/WebexCCE/End_User_Guides/Guide/wxcce_b_monitoring-user-guide/webexcce_m_api_endpoint.html.

You would probably need to look at creating a custom Python modular input to collect the metrics from these endpoints and write them into Splunk. Probably the easiest way to make a start with this would be using the UCC framework (https://splunk.github.io/addonfactory-ucc-generator/)

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
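To illustrate the shape of such an input: a modular/scripted input ultimately polls the API and writes one event per line to stdout. A minimal sketch, in which the URL, endpoint path, and auth scheme are all hypothetical placeholders - substitute the real Monitoring API details from the Cisco documentation above:

```python
import json
from urllib.request import Request, urlopen

# Hypothetical endpoint and auth -- NOT the real Monitoring API; substitute
# the URL and auth scheme from the Cisco docs linked above.
API_URL = "https://example.invalid/webexcce/monitoring/metrics"

def fetch_metrics(url, token):
    """Pull one batch of metric dicts from the (hypothetical) endpoint."""
    req = Request(url, headers={"Authorization": "Bearer " + token})
    with urlopen(req) as resp:
        return json.load(resp)

def to_splunk_events(metrics):
    """Serialize each metric dict as one JSON line -- a scripted/modular
    input hands events to Splunk by writing lines like these to stdout."""
    return [json.dumps(m, sort_keys=True) for m in metrics]
```

The UCC framework mentioned above generates the add-on scaffolding (setup UI, credential storage, checkpointing) around polling logic of roughly this shape.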
Hi @gazoscreek  I think the line breaker regex is too complicated for what you're receiving. I would expect the following to work for you:

[azure:keyvault]
LINE_BREAKER = ([\r\n]+)\{
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = \"time\":\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30

Did this answer help you? If so, please consider marking it as the solution if it resolved your issue, or commenting if you need any clarification.
@gazoscreek  I took sample events and tried this in my lab; please have a look.

Sample events (mixed metric and audit events):

{ "count": 3, "total": 3, "minimum": 3, "maximum": 3, "average": 3, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiError", "timeGrain": "PT1M"}
{ "time": "2025-05-07T14:08:04.9876543Z", "category": "AuditEvent", "operationName": "DeleteSecret", "status": "Succeeded", "callerIpAddress": "52.191.18.74", "clientRequestId": "abcdef-12345-67890", "correlationId": "67890-abcdef-12345", "resourceId": "/SUBSCRIPTIONS/blah/blah" }

props.conf:

[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\s]*)(?=\{)
TIME_PREFIX = "time":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7QZ
MAX_TIMESTAMP_LOOKAHEAD = 50
TRUNCATE = 0
CHARSET = UTF-8
INDEXED_EXTRACTIONS = JSON
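One thing worth checking when testing a TIME_FORMAT like this offline: the Azure timestamps carry seven fractional digits ("...04.9876543Z"). The TIME_FORMAT above consumes them in Splunk, but if you sanity-check parsing in Python first, note that Python only accepts up to six, so you have to truncate. A small sketch:

```python
import re
from datetime import datetime, timezone

# Azure emits 7 fractional digits; Python's fromisoformat/%f accept at most
# 6, so drop the extra digit and normalize the trailing Z before parsing.
def parse_azure_ts(ts):
    ts = re.sub(r"(\.\d{6})\d*Z$", r"\1+00:00", ts)
    return datetime.fromisoformat(ts)
```

For example, parse_azure_ts("2025-05-07T14:08:04.9876543Z") yields an aware UTC datetime with microseconds 987654.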
For example: if one log event has uniqueId=abc123 with "Wonder Exist here", but no event for that uniqueId with "Message=Limit the occurrence" AND "FinderField=ZEOUS" exists, then that uniqueId should not be in the result. The reverse should also hold: a uniqueId that only has the "Message=Limit the occurrence" AND "FinderField=ZEOUS" event should not appear in the result either.
I have multiple formats of JSON data coming in from Azure Key Vault. I can't seem to get the line breaking to work properly (multiple matching lines per ingested event), and the Splunk Add-on for Microsoft Cloud Services doesn't provide any props for many of these JSON blobs.

Some events look like this:

{ "count": 1, "total": 1, "minimum": 1, "maximum": 1, "average": 1, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiHit", "timeGrain": "PT1M"}
{ "count": 1, "total": 14, "minimum": 14, "maximum": 14, "average": 14, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiLatency", "timeGrain": "PT1M"}

And some look like this:

{ "time": "2025-05-07T14:07:58.7286344Z", "category": "AuditEvent", ....... "13"}
{ "time": "2025-05-07T14:08:02.8617508Z", "category": "AuditEvent", ....... "13"}

I've tried numerous combinations of regexes ... nothing's working. My latest attempt:

LINE_BREAKER = (\}([\r\n]\s*,[\r\n]\s*){|\{\s+\"(count|time)\")

Any suggestions would be greatly helpful.
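A quick way to sanity-check a line-breaking idea outside Splunk is to simulate it with re.split. Splunk discards capture group 1 of LINE_BREAKER as the event boundary, so a lookahead keeps the opening brace inside the event. A sketch using abridged versions of the sample events above:

```python
import re

# Abridged versions of the sample events, concatenated the way they arrive.
raw = (
    '{ "count": 1, "metricName": "ServiceApiHit", "timeGrain": "PT1M"}\n'
    '{ "count": 1, "metricName": "ServiceApiLatency", "timeGrain": "PT1M"}\n'
    '{ "time": "2025-05-07T14:07:58.7286344Z", "category": "AuditEvent"}'
)

# Break before every "{" that follows a newline; the lookahead keeps the
# brace in the event. The analogous props.conf setting would be
#   LINE_BREAKER = ([\r\n]+)(?=\{)
# since Splunk treats capture group 1 as the discarded boundary text.
events = re.split(r"[\r\n]+(?=\{)", raw)
```

This splits the stream into three events, each starting with "{", which covers both the metric and the audit shapes since it keys only on the brace, not on specific field names like count or time.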
Hello Team,    Is there a way to use Splunk with Cisco Contact Centers and real-time data?
@abhi  Edit deploymentclient.conf:

[deployment-client]

[target-broker:deploymentServer]
targetUri = https://10.128.0.5:8089

NOTE: You can run the commands below directly on the UF and they will create deploymentclient.conf for you:

/opt/splunk/bin/splunk set deploy-poll <IP_address/hostname>:<management_port>
/opt/splunk/bin/splunk restart

Refer: https://docs.splunk.com/Documentation/Splunk/9.4.2/Updating/Configuredeploymentclients#Use_the_CLI

You could always look in forwarder management on the deployment server, or use this REST command on the deployment server:

| rest splunk_server=local /services/deployment/server/clients

Try running this command to check the clients:

/opt/splunk/bin/splunk list deploy-clients -auth <username>:<password>

If your DS is 9.2.x, read this: https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225#M27893
Yes, the query below extracts the log events from which I am expecting the final list of data:

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))

This query gives two log events per uniqueId, and both events include the uniqueId. For the final result I want a table of uniqueId and FinderField, where a uniqueId is listed only when both log events exist for it, i.e. both strings above occur with the same uniqueId.
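The core requirement - keep a uniqueId only when both event types were seen for it - is the kind of thing SPL typically does with stats grouped by uniqueId followed by a where filter. The grouping logic can be sketched in Python with hypothetical simplified events (the "kind" field is an illustrative stand-in for which of the two search patterns an event matched):

```python
# Hypothetical parsed events; "kind" marks which pattern the event matched.
events = [
    {"uniqueId": "abc123", "kind": "wonder"},                         # "Wonder Exist here"
    {"uniqueId": "abc123", "kind": "limit", "FinderField": "ZEOUS"},  # Message=Limit... AND FinderField=ZEOUS
    {"uniqueId": "xyz789", "kind": "wonder"},                         # no paired "limit" event
]

def ids_with_both(events):
    """Keep only uniqueIds that appear with BOTH event kinds -- the same
    idea as grouping with stats by uniqueId and filtering in SPL."""
    kinds = {}
    for e in events:
        kinds.setdefault(e["uniqueId"], set()).add(e["kind"])
    return sorted(i for i, k in kinds.items() if {"wonder", "limit"} <= k)
```

Here abc123 survives (both kinds present) while xyz789 is dropped, matching the "reverse should also hold" requirement stated above.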