

All Posts

Hi Team, We are getting Dynatrace metrics and log4j logs into Splunk ITSI. Currently we have created a universal correlation search manually (which needs fine-tuning whenever required) for grouping notable events. Does Splunk ITSI, or any other Splunk product, provide its own AI model to perform automatic event correlation without any manual intervention? Any inputs are much appreciated. Please let me know if any additional details are required. Thank you.
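For context, a hand-maintained correlation search like the one described is essentially a scheduled SPL search whose results become notable events. A purely illustrative sketch of the kind of search being tuned by hand - index and field names such as dynatrace_metrics, log4j, and severity are hypothetical, not taken from the post:

index=dynatrace_metrics OR (index=log4j sourcetype=log4j)
| stats count as event_count max(severity) as max_severity values(sourcetype) as sources by host
| where max_severity >= 4 OR event_count > 100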
I do not see any fix for this in the just-released 9.4.2, which came out a month after the issue was discovered in 9.4.0. There is a setting in the next Splunk beta, so maybe it will come in 9.4.3. It is also strange that this setting is not mentioned in the latest documentation: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf

[prometheus]
disabled = true
Thank you. This worked perfectly!
Thank you ... both posted solutions worked perfectly. Much appreciated.
Is this issue likely to be fixed in an upcoming version release?
Hi @marcohirschmann  I haven't seen any apps on Splunkbase for Cisco Contact Center, and looking at the documentation, it really looks like pulling the logs/metrics isn't as easy as it could be! There's an option to download specific logs ad hoc - which isn't suitable - or there is a Monitoring API which does look more promising; details at https://www.cisco.com/c/en/us/td/docs/voice_ip_comm/cust_contact/contact_center/WebexCCE/End_User_Guides/Guide/wxcce_b_monitoring-user-guide/webexcce_m_api_endpoint.html. You would probably need to look at creating a custom Python modular input to collect the metrics from these endpoints and write them into Splunk. Probably the easiest way to make a start with this would be using the UCC framework (https://splunk.github.io/addonfactory-ucc-generator/).

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @gazoscreek  I think the line breaker regex is more complicated than it needs to be for what you're receiving. I would expect the following to work for you:

[azure:keyvault]
LINE_BREAKER = ([\r\n]+)\{
SHOULD_LINEMERGE = false
TRUNCATE = 0
TIME_PREFIX = \"time\":\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%N
MAX_TIMESTAMP_LOOKAHEAD = 30

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
@gazoscreek  I took sample events and tried this in my lab; please have a look.

Sample events (mixed metric and audit events):

{ "count": 3, "total": 3, "minimum": 3, "maximum": 3, "average": 3, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiError", "timeGrain": "PT1M"}
{ "time": "2025-05-07T14:08:04.9876543Z", "category": "AuditEvent", "operationName": "DeleteSecret", "status": "Succeeded", "callerIpAddress": "52.191.18.74", "clientRequestId": "abcdef-12345-67890", "correlationId": "67890-abcdef-12345", "resourceId": "/SUBSCRIPTIONS/blah/blah" }

props.conf:

[sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n\s]*)(?=\{)
TIME_PREFIX = "time":\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7QZ
MAX_TIMESTAMP_LOOKAHEAD = 50
TRUNCATE = 0
CHARSET = UTF-8
INDEXED_EXTRACTIONS = JSON
For example: if one log event has uniqueId=abc123 with "Wonder Exist here", but no event for that same uniqueId contains both "Message=Limit the occurrence" AND "FinderField=ZEOUS", then that uniqueId should not be in the result. The same applies in reverse: a uniqueId that only has the "Message=Limit the occurrence" AND "FinderField=ZEOUS" event should not appear in the result either.
I have multiple formats of JSON data coming in from Azure Key Vault. I can't seem to get the line breaking to work properly, and the Splunk Add-on for Microsoft Cloud Services doesn't provide any props for many of these JSON blobs (multiple matching lines per ingested event):

{ "count": 1, "total": 1, "minimum": 1, "maximum": 1, "average": 1, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiHit", "timeGrain": "PT1M"}
{ "count": 1, "total": 14, "minimum": 14, "maximum": 14, "average": 14, "resourceId": "/SUBSCRIPTIONS/blah/blah", "time": "2025-05-07T14:08:00.0000000Z", "metricName": "ServiceApiLatency", "timeGrain": "PT1M"}

And some look like this:

{ "time": "2025-05-07T14:07:58.7286344Z", "category": "AuditEvent", ....... "13"}
{ "time": "2025-05-07T14:08:02.8617508Z", "category": "AuditEvent", ....... "13"}

I've tried numerous combinations of regexes ... nothing's working.

LINE_BREAKER = (\}([\r\n]\s*,[\r\n]\s*){|\{\s+\"(count|time)\")

Any suggestions would be greatly appreciated.
Hello Team, Is there a way to use Splunk with Cisco Contact Centers and real-time data?
@abhi  Edit deploymentclient.conf:

[deployment-client]

[target-broker:deploymentServer]
targetUri = https://10.128.0.5:8089

NOTE: You can run the commands below directly on the UF and they will create deploymentclient.conf for you:

/opt/splunk/bin/splunk set deploy-poll <IP_address/hostname>:<management_port>
/opt/splunk/bin/splunk restart

Refer: https://docs.splunk.com/Documentation/Splunk/9.4.2/Updating/Configuredeploymentclients#Use_the_CLI

You could always look in forwarder management on the deployment server, or use this REST command on the deployment server:

| rest splunk_server=local /services/deployment/server/clients

Try running this command to check the clients:

/opt/splunk/bin/splunk list deploy-clients -auth <username>:<password>

If your DS is 9.2.x, read this: https://community.splunk.com/t5/Deployment-Architecture/The-Client-forwarder-management-not-showing-the-clients/m-p/677225#M27893
Yes, the query below would extract the log events from which I am expecting the final list of data:

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))

This query will give 2 log events, and both events will include uniqueId in them. For the final result I want a table of uniqueId and FinderField, where a uniqueId is listed when both log events have it and the strings above exist with the same uniqueId.
I am tempted to switch this post to the "solution"; it is not "exactly" what I asked for, but it did achieve the effect I was looking for. As per Use the deployer to distribute apps and configuration updates - Splunk Documentation, after an initial push of PSC and MLTK from the deployment server we added a local/app.conf file containing only:

[shclustering]
deployer_push_mode = local_only

The push time dropped from 167 seconds to 47 seconds with push mode set in the PSC app, then to 24 seconds when also changing push mode in the MLTK app. So this did have the effect I was after, even though it is not strictly speaking a "local only install" of an app. All the best
Hello @bowesmana Yes, the outer query is for 1 hr and the inner one is for 5 hrs. Are you saying to put these two in separate panels and use the results of both in a third one and append it?
Hi @Crabbok  You shouldn't need to specify a hard-coded list of users; that was just me creating some test data. Assuming the Message field is available to run the regex against, you should just be able to do:

| rex field=Message "has (?<action>[a-zA-Z]+) the event session"
| eval {action}_time=_time
| sort UserXXID
| streamstats count as userEventNum min(joined_time) as session_joined_time, max(left_time) as session_left_time by UserXXID reset_after="action=\"left\""
| eval action_time=strptime(_time, "%Y-%m-%d %H:%M:%S")
| stats range(action_time) as sessionDurationSeconds, values(action) as actions, max(_time) as session_left_time by UserXXID session_joined_time

If UserXXID is not a field then you can use rex to get this too. If you want to stick to using transactions, try adding keepevicted=true to your transaction command, as I think this might keep the non-completed transaction events. See https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Transaction#:~:text=Syntax%3A%20keepevicted%3D%3Cbool%3E

Did this answer help you? If so, please consider:
Adding karma to show it was useful
Marking it as the solution if it resolved your issue
Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Essentially, you need to do the hard work! First untable the stats from the timechart results, find each user's maximum, sort the results by maximum and user, then count the users and find the "middle 20", then convert back to chart format.

index=os sourcetype=ps (tag=dcv-na-himem) NOT tag::USER="LNX_SYSTEM_USER"
| timechart span=1m sum(eval(RSZ_KB/1024/1024)) as Mem_Used_GB by USER useother=false limit=0
| untable _time USER Mem_Used_GB
| eventstats max(Mem_Used_GB) as max by USER
| sort 0 max USER desc
| streamstats dc(USER) as user_number
| eventstats dc(USER) as total
| where user_number > (total - 20)/2 and user_number < 20+((total - 20)/2)
| xyseries _time USER Mem_Used_GB
OK. Let's start at the start:

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))

This will select the events for further processing. But the question is whether you're extracting any fields from those events. Before we go any further, we need to know whether:
1) The uniqueId field (which you refer to in subsequent posts in a case-inconsistent manner) is extracted.
2) The "data" field(s) which you want to "merge" are extracted.
Generally, field extraction should be (actually, should already have been) handled at the data onboarding stage. When you have this covered, you can get to the second part - handling the logic behind "joining" your events.
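Purely as an illustration of that second part - assuming uniqueId and FinderField are both already extracted - the "both event types must exist for the same uniqueId" logic could be sketched like this (untested):

index=finder_db (host="host1" OR host="host2") (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))
| eval has_wonder=if(match(_raw, "Wonder Exist here"), 1, 0)
| eval has_limit=if(match(_raw, "Message=Limit the occurrence") AND match(_raw, "FinderField=ZEOUS"), 1, 0)
| stats max(has_wonder) as has_wonder max(has_limit) as has_limit values(FinderField) as FinderField by uniqueId
| where has_wonder=1 AND has_limit=1
| table uniqueId FinderField

The stats-by-uniqueId approach avoids join/transaction entirely: each uniqueId is kept only when at least one event of each kind was seen for it.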
I have used rex field=msgTxt but I keep getting errors. I'm sorry, but I've worked on this for hours and nothing seems to work. I'm still pretty new to Splunk and this is not in my skill set. Maybe I should start over. However, the results I'm looking for have slightly changed. The field that contains my results is msgTxt, and I would like to pull both Latitude/Longitude value pairs and the WarningMessages. The field has Latitude and Longitude listed twice; most of the time the first set will return 0's, and the log will always be in this format. The log looks like this:

StandardizedAddressService SUCCEEDED - FROM: {"Address1":"63 Somewhere NW ST","Address2":null,"City":"OKLAND CITY","County":null,"State":"OK","ZipCode":"99999-1111","Latitude":97.999,"Longitude":-97.999,"IsStandardized":false,"AddressStandardizationStatus":0,"AddressStandardizationType":0} RESULT: 1 | {"AddressDetails":[{"AssociatedName":"","HouseNumber":"63","Predirection":"NW","StreetName":"Somewhere","Suffix":"ST","Postdirection":"","SuiteName":"","SuiteRange":"","City":"OKLAND CITY","CityAbbreviation":"OKLAND CITY","State":"OK","ZipCode":"99999","Zip4":"1111","County":"Oklahoma","CountyFips":"40109","CoastalCounty":0,"Latitude":97.999,"Longitude":-97.999"Fulladdress1":"63 Somewhere NW ST","Fulladdress2":"","HighRiseDefault":false}],"WarningMessages":[],"ErrorMessages":[],"GeoErrorMessages":[],"Succeeded":true,"ErrorMessage":null}

I'm hoping to see the following results:

Latitude    Longitude    Latitude    Longitude    WarningMessages
99.2541     -25.214      99.254      -25.214      NULL
00.0000     -00.000      99.254      -21.218      NULL
00.0000     -00.000      00.000      -00.000      Error message with something

The results for all of the phrases will be different, and I will be searching through 1000's of logs. If it's too much work to show both sets of the Latitude/Longitude values, then the second set alone would work. Your help is greatly appreciated. Thanks
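For illustration, one possible starting point - assuming msgTxt holds the full JSON text and the key names appear exactly as in the sample above (untested sketch, not a confirmed solution):

| rex field=msgTxt max_match=2 "\"Latitude\":(?<Latitude>-?[0-9.]+),\"Longitude\":(?<Longitude>-?[0-9.]+)"
| rex field=msgTxt "\"WarningMessages\":\[(?<WarningMessages>[^\]]*)\]"
| eval WarningMessages=if(WarningMessages=="", "NULL", WarningMessages)
| eval Latitude_1=mvindex(Latitude,0), Longitude_1=mvindex(Longitude,0), Latitude_2=mvindex(Latitude,1), Longitude_2=mvindex(Longitude,1)
| table Latitude_1 Longitude_1 Latitude_2 Longitude_2 WarningMessages

max_match=2 makes Latitude and Longitude multivalue fields holding both pairs, and mvindex splits them into separate columns. Note the sample log is missing a comma before "Fulladdress1", which is why a character-class regex is used here rather than parsing the blob as JSON.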
Thank you for the quick response! This looks like a great solution - however, the reason I was using transaction is that we have new users all the time and I can't pre-add every possible username. I know people say that transaction is awful, but for the life of me I've never had a single problem using it.