All Posts

Hi @shawngsharp

So I think I know what you're looking for - you don't want it to match if *only* my_field_42 contains the string? So it must be in one of the other fields? You could try this - I'm not sure how performant it will be at scale, but it is working for me:

index=YourIndex *widget*
| tojson
| eval orig_field_42=json_extract(_raw,"my_field_42")
| eval _raw=json_delete(_raw,"my_field_42")
| search *widget*
| eval _raw=json_set(_raw,"my_field_42",orig_field_42)

This works by temporarily removing my_field_42 from the results before applying a secondary search - I've gone with "tojson", which converts all the fields into a JSON object in _raw. Below is a sample query if it helps:

| makeresults format=csv data="my_field1, my_field_2, my_field_23, my_field_42
\"hello world\",\"AwesomeWidget69\",\"\",\"your mom\"
\"hello world\",\"\",\"Widgets are cool\",\"Look, a widget!\"
\"hello world\",\"\",\"Some value here\",\"your widget\""
| tojson
| eval orig_field_42=json_extract(_raw,"my_field_42")
| eval _raw=json_delete(_raw,"my_field_42")
| search *widget*

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
I am trying to do a query that will search for arbitrary strings, but will ignore whether the string is or isn't in one specific field. I still want to see the results from that field, though. Example:

index = my_index AND *widget* | <ignore> my_field_42

Whether my_field_42 contains the word "widget" or not should not matter to the search, but its field values should still show in the results.

Result 1:
my_field_1 = "hello world"
my_field_2 = "AwesomeWidget69"
...
my_field_42 = "your mom"

Result 2:
my_field_1 = "hello world"
my_field_23 = "Widgets are cool"
...
my_field_42 = "Look, a widget!"
Apps can be archived due to inactivity or per developer request. Since the last update of that app was 5 years ago, I'm assuming it was the former.  Note that most of the suggestions in this (16-year-old) topic are considered dangerous in modern Splunk Enterprise and Splunk Cloud. The supported solution is to bundle your app's dependencies in the /bin directory of your app. Do not modify the version of Python shipped with Splunk; do not do on-stack compilation of assets your app needs; do not attempt to establish virtualization environments; etc. 
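As a concrete sketch of that supported approach (the app name, folder name, and package below are only illustrative examples, not taken from the app in question):

# Vendor the third-party Python packages your scripts need inside the app itself,
# leaving Splunk's bundled Python untouched.
pip install --target "$SPLUNK_HOME/etc/apps/my_app/bin/lib" requests

# Then, at the top of bin/my_script.py, prepend that folder to sys.path so the
# bundled copies are found first:
#   import os, sys
#   sys.path.insert(0, os.path.join(os.path.dirname(__file__), "lib"))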
Uploaded screenshot of ldd command run.
Remember that the /raw endpoint accepts just raw data, whereas the /event endpoint requires a specific format which it then "unpacks". So posting the same data to the /raw endpoint will result in differently represented events.
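As a quick illustration (host, port, token value, and sourcetype below are placeholders): the /event endpoint expects the HEC JSON envelope with an "event" key, while the /raw endpoint takes the body verbatim and runs it through normal line breaking for the sourcetype.

# /services/collector/event - payload wrapped in the HEC JSON envelope
curl -k https://splunk.example.com:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "2025-05-07 07:20:40 user=alice action=login", "sourcetype": "my_sourcetype"}'

# /services/collector/raw - body is indexed as-is, broken into events per props.conf
curl -k "https://splunk.example.com:8088/services/collector/raw?sourcetype=my_sourcetype" \
  -H "Authorization: Splunk <hec-token>" \
  -d '2025-05-07 07:20:40 user=alice action=login'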
I'm not asking whether the right events are selected. I'm asking whether the fields are extracted. If you do

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))
| table uniqueId FinderField Message

is your table populated with field values, or are they empty?
Hi @Glasses2

I'm glad you managed to fix your mongo issue - not the first time SSL expiry has caught people out!

There is an app, "SSL Certificate Checker", on Splunkbase at https://splunkbase.splunk.com/app/3172 which looks to solve this. You configure it with the path of the certificates you wish to monitor on a Splunk host, and it reports the expiry into Splunk for you to create an alert on. Setting it up for /opt/splunk/etc/auth/ should capture most things unless you have other custom certs in use. A DIY alternative is sketched after the sign-off below.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
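If you would rather roll your own, a small scripted input along these lines (the path and output format are just examples) can emit each certificate's notAfter date for Splunk to index and alert on:

#!/bin/sh
# Sketch only: print the expiry date of every PEM under the Splunk auth directory.
for cert in /opt/splunk/etc/auth/*.pem; do
  # "openssl x509 -noout -enddate" prints a line like: notAfter=Jun  1 12:00:00 2026 GMT
  enddate=$(openssl x509 -in "$cert" -noout -enddate 2>/dev/null | cut -d= -f2)
  [ -n "$enddate" ] && echo "cert=\"$cert\" not_after=\"$enddate\""
done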
Hi,

I recently had an issue where my SHCluster was throwing KVStore errors. The KVStore status was abnormal. The resolution was checking the server.pem expiration date in /opt/splunk/etc/auth using:

openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -text

After removing the server.pem and restarting, KVStore was back up. Does anyone have a way to monitor the expiration dates for all the server.pem(s) in the deployment? Thanks
Search for up/down events and take the most recent for each host (switch). Discard all of the up events and anything newer than 60 seconds. The remainder will be down events at least a minute old without a following up event.

index=foo ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP")
| dedup host
| where match(_raw, "SESSION_STATE_DOWN") AND _time<relative_time(now(), "-60s")
Hi @dflynn235

Does the following do what you are looking for?

| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface
| where downTime>60

Here is a working example with sample data; just add the | where to limit as required.

| makeresults count=1
| eval _raw="2025-05-07T07:20:40.482713-04:00 \"switch_name\" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 has gone down. Reason: Administratively Down."
| eval host="switch_name"
| append [| makeresults count=1
| eval _raw="2025-05-07T07:20:41.482771-04:00 \"switch_name\" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 is up."
| eval host="switch_name"]
| rex "^(?<timeStr>[^\s]+)"
| eval _time=strptime(timeStr,"%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @abhishekP

This is an interesting one. When selecting a relative time window, the earliest/latest values are strings like "-1d@d", which are valid for the earliest/latest fields in a search - however, when you select specific dates/between dates etc., it returns the full date string such as "2025-05-07T18:47:22.565Z". Such a value is not supported by the earliest/latest fields in a Splunk search. To get around this I have put together a table off the side of the display with a search which converts the dates into epoch where required. You can then use "$timetoken:result.earliest_epoch$" and "$timetoken:result.latest_epoch$" as tokens in your other searches.

Below is the full JSON of the dashboard so you can have a play around with it - hopefully this helps!

{ "title": "testing", "description": "", "inputs": { "input_global_trp": { "options": { "defaultValue": "-24h@h,now", "token": "global_time" }, "title": "Global Time Range", "type": "input.timerange" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_2FDRkepv": { "dataSources": { "primary": "ds_IPGx8Y5Y" }, "options": {}, "type": "splunk.events" }, "viz_V1oldcrB": { "options": { "markdown": "earliest: $global_time.earliest$ \nlatest: $global_time.latest$ \nearliest_epoch: $timetoken:result.earliest_epoch$ \nlatest_epoch:$timetoken:result.latest_epoch$" }, "type": "splunk.markdown" }, "viz_bhZcZ5Cz": { "containerOptions": {}, "context": {}, "dataSources": { "primary": "ds_KXR2SF6V" }, "options": {}, "showLastUpdated": false, "showProgressBar": false, "type": "splunk.table" } }, "dataSources": { "ds_IPGx8Y5Y": { "name": "timetoken", "options": { "enableSmartSources": true, "query": "| makeresults \n| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$\n| eval earliest_epoch = IF(match(earliest,\"[0-9]T[0-9]\"),strptime(earliest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),earliest), latest_epoch = IF(match(latest,\"[0-9]T[0-9]\"),strptime(latest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),latest)" }, "type": "ds.search" }, "ds_KXR2SF6V": { "name": "Search_1", "options": { "query": "index=_internal earliest=$timetoken:result.earliest_epoch$ latest=$timetoken:result.latest_epoch$\n| stats count by host" }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_global_trp" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { "item": "viz_V1oldcrB", "position": { "h": 80, "w": 310, "x": 20, "y": 20 }, "type": "block" }, { "item": "viz_2FDRkepv", "position": { "h": 260, "w": 460, "x": 1500, "y": 20 }, "type": "block" }, { "item": "viz_bhZcZ5Cz", "position": { "h": 380, "w": 1420, "x": 10, "y": 140 }, "type": "block" } ], "type": "absolute" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } }

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Thanks. I will try to update the HEC URL to /raw instead and test with the new line breaker configuration.
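For anyone following along: line breaking for data sent to the /raw endpoint is controlled by props.conf for the sourcetype on the indexing tier. A minimal stanza looks roughly like this (the sourcetype name is only a placeholder, and the values are common defaults rather than anything specific to this thread):

[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TRUNCATE = 10000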
Trying to use time tokens in Dashboard Studio under a sub search: $time.earliest$ and $time.latest$ work for the Presets Today & Yesterday, but don't if a date range is selected. Can someone kindly help?

| inputlookup daily_distinct_count.csv
| rename avg_dc_count as avg_val
| search Page="Application"
| eval _time=relative_time(now(), "-1d@d"), value=avg_val, Page="Application"
| append [ search index="143576" earliest=$token.earliest$ latest=$token.latest$
| eval Page=case( match(URI, "Auth"), "Application", true(), "UNKNOWN" )
| where Page="Application"
| stats dc(user) as value
| eval _time=now(), Page="Application" ]
| table _time Page value
| timechart span=1d latest(value) as value by Page
I'm attempting to suppress an alert if a follow-up event (condition) is received within 60 seconds of the initial event (condition) from the same host. This is a network switch alerting on a BFD neighbor down event. I want to suppress the alert if a BFD neighbor up event is received within 60 seconds.

This is the event data received:

Initial BFD Down:
2025-05-07T07:20:40.482713-04:00 "switch_name" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 has gone down. Reason: Administratively Down.
host = "switch_name"

Second event to nullify the alert:
2025-05-07T07:20:41.482771-04:00 "switch_name" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 is up.
host = "switch_name"
Here is a breakdown of indexing pipelines by HEC endpoint: https://www.aplura.com/assets/pdf/hec_pipelines.pdf
Hello everybody!

The problem that I have is that when I try to make a backup of the KVStore on my Search Head, it fails after it is done dumping or while dumping the data. Splunk tells me to look into the logs, but besides some basic info that the backup has failed I can't find anything in the splunkd and mongod logs.

From my understanding, since I'm using the point_in_time option, it is important to make sure no searches are writing into the KVStore when I start the backup. Since Splunk makes a snapshot of the moment I start the backup, searches that modify the KVStore afterwards shouldn't impact the backup, right? I made sure no searches had the running status when starting the backup.

Does anybody have tips or threads that are about this topic? I thought about stopping the scheduler during the backup, but since there are important searches running I want to look into all the options I have before taking drastic measures. Thanks for any tips and hints in advance!
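For reference, the kind of check I run to confirm nothing is still in flight before kicking off the backup looks roughly like this (a sketch only; adjust the filtering to your needs):

| rest /services/search/jobs splunk_server=local
| search dispatchState="RUNNING"
| table sid label dispatchState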
Agree 100%. Hope they consider implementing a self-updating feature if they expect to have the frequency of updates that comes along with PostgreSQL.
It's not fixed in upcoming releases. However, the fix (whenever it lands in a release) will be the same as the workaround:

[prometheus]
disabled = true
Hi Team,

We are getting Dynatrace metrics and log4j logs into Splunk ITSI. Currently we have created the universal correlation search manually (which needs fine tuning whenever needed) for grouping notable events. So, does Splunk ITSI or any other Splunk product provide its own AI model to perform automatic event correlation without any manual intervention?

Any inputs are much appreciated. Please let me know if any additional details are required. Thank you.
I do not see any fix for this in the just-released 9.4.2, which came out months after the issue was discovered in 9.4.0. There is a setting in the next Splunk beta, so maybe it will come in 9.4.3. Also strange that this setting is not mentioned in the latest documentation: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf

[prometheus]
disabled = true