All Posts

It is working fine, but when I refresh the entire dashboard the colors become reversed: the panel that was showing green shows red, and the other panels show green.
I would go with foreach as @livehybrid does, but the code could be simpler.

| foreach * [eval <<FIELD>> = if(match(<<FIELD>>, "(?i)widget") OR "<<FIELD>>" == "my_field_42", <<FIELD>>, null())]

Using the same emulation, you get

my_field1   my_field_2        my_field_23        my_field_42
            AwesomeWidget69                      your mom
                              Widgets are cool   Look, a widget!
                                                 your widget
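For reference, here is that one-liner run end-to-end against the same sample data @livehybrid uses elsewhere in this thread (a self-contained sketch - the makeresults rows are test data, not real events):

| makeresults format=csv data="my_field1, my_field_2, my_field_23, my_field_42
\"hello world\",\"AwesomeWidget69\",\"\",\"your mom\"
\"hello world\",\"\",\"Widgets are cool\",\"Look, a widget!\"
\"hello world\",\"\",\"Some value here\",\"your widget\""
``` keep a value only if it matches "widget" or it lives in my_field_42; null out everything else ```
| foreach * [eval <<FIELD>> = if(match(<<FIELD>>, "(?i)widget") OR "<<FIELD>>" == "my_field_42", <<FIELD>>, null())]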
What should happen if the data is

my_field_1 = "hello world"
my_field_23 = "goodbye my friend"
...
my_field_42 = "Look, a widget!"

i.e. "widget" ONLY appears in the field you want to ignore?
It seems like things are moving under your feet - the syntax of your log message has changed from your original example, which had the text StandardizedAddressService; now it's StandardizedAddress. Note that if you create a regex to extract the fields and the message changes, it will break the extraction.

When you say you have errors, it would be useful to show what you tried and what the result was; otherwise it's almost impossible to come up with a solution. So, on these assumptions:

a) you have a JSON object after FROM: {}
b) another JSON object after RESULT: 1 | {} - is "1" a fixed value or a variable?

Note that your example does NOT show valid JSON for the result. It is missing a comma after the Longitude value before the F - not sure if that is a typo or in your data:

97.999,"Longitude":-97.999"F

Assuming it is a typo, then your search should be this:

Your base data search goes here...
``` This line extracts the from and result JSON objects from your msgTxt field ```
| rex field=msgTxt "FROM:\s*(?<from>.*) RESULT:[^{]*(?<result>.*)"
``` This extracts the JSON from each of those objects ```
| spath input=from
| spath input=result
``` and this makes the field names a bit more sensible ```
| rename AddressDetails{}.* as Result.*, WarningMessages{} as Result.WarningMessages
| table Latitude Longitude *.Latitude *.Longitude Result.WarningMessages

If you reply to these, please post your code in code blocks, so that it's easy to read.
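If it helps to experiment, here is that pipeline run against a single made-up event (the msgTxt value below is hypothetical sample data following your format, with the missing comma fixed - substitute your real base search):

| makeresults
| eval msgTxt="StandardizedAddress FROM: {\"Latitude\":30.111,\"Longitude\":-97.111} RESULT: 1 | {\"AddressDetails\":[{\"Latitude\":30.222,\"Longitude\":-97.999}],\"WarningMessages\":[\"Partial match\"]}"
``` pull out the two JSON objects, then let spath do the parsing ```
| rex field=msgTxt "FROM:\s*(?<from>.*) RESULT:[^{]*(?<result>.*)"
| spath input=from
| spath input=result
| rename AddressDetails{}.* as Result.*, WarningMessages{} as Result.WarningMessages
| table Latitude Longitude Result.*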
You didn't answer how long your search is running for - I didn't mean the time range, I mean the amount of time the search takes to run. Also, see the other questions. I'm suggesting you split out the searches just to experiment whether both are giving the correct count when run individually in the dashboard AND in a manual search. If you shorten the time window, do the results then work? You will need to provide more detail. Look at the search job properties, in particular resultCount and scanCount.
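If it's easier than opening the Job Inspector for each run, you can also pull those properties over REST (a sketch - it assumes your role is allowed to list search jobs, and only recently dispatched jobs are retained):

| rest /services/search/jobs
``` one row per job; title holds the search string ```
| table sid title dispatchState runDuration scanCount resultCount
| sort - runDuration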
Hi @shawngsharp

Further to my last post, you could also use:

| foreach * [eval field_matches = mvappend(field_matches, if(match(<<FIELD>>, "(?i)widget"), "<<FIELD>>", null()))]
| eval field_matches=mvfilter(NOT match(field_matches,"my_field_42"))
| where field_matches!=""

Here your string match is inside the match statement. This works by looking in each field and building a multi-value field of all the fields which match, then removing my_field_42 and keeping events where one or more fields still match.

| makeresults format=csv data="my_field1, my_field_2, my_field_23, my_field_42
\"hello world\",\"AwesomeWidget69\",\"\",\"your mom\"
\"hello world\",\"\",\"Widgets are cool\",\"Look, a widget!\"
\"hello world\",\"\",\"Some value here\",\"your widget\""
| foreach * [eval field_matches = mvappend(field_matches, if(match(<<FIELD>>, "(?i)widget"), "<<FIELD>>", null()))]
| eval field_matches=mvfilter(NOT match(field_matches,"my_field_42"))
| where field_matches!=""

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @shawngsharp

So I think I know what you're looking for - you don't want it to match if *only* my_field_42 contains the string? So it must be in one of the other fields? You could try this - I'm not sure how performant it will be at scale, but it's working for me:

index=YourIndex *widget*
| tojson
| eval orig_field_42=json_extract(_raw,"my_field_42")
| eval _raw=json_delete(_raw,"my_field_42")
| search *widget*
| eval _raw=json_set(_raw,"my_field_42",orig_field_42)

This works by temporarily removing my_field_42 from the results before applying a secondary search - I've gone with "tojson", which converts all the fields into a JSON object in _raw. Below is a sample query if it helps:

| makeresults format=csv data="my_field1, my_field_2, my_field_23, my_field_42
\"hello world\",\"AwesomeWidget69\",\"\",\"your mom\"
\"hello world\",\"\",\"Widgets are cool\",\"Look, a widget!\"
\"hello world\",\"\",\"Some value here\",\"your widget\""
| tojson
| eval orig_field_42=json_extract(_raw,"my_field_42")
| eval _raw=json_delete(_raw,"my_field_42")
| search *widget*

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
I am trying to do a query that will search for arbitrary strings, but will ignore whether the string is/isn't in a specific field. I still want to see the results from that field, though. Example:

index = my_index AND *widget* | <ignore> my_field_42

Whether my_field_42 contains the word "widget" or not should not matter to the search, but it should still show its field values in the results.

Result 1:
my_field_1 = "hello world"
my_field_2 = "AwesomeWidget69"
...
my_field_42 = "your mom"

Result 2:
my_field_1 = "hello world"
my_field_23 = "Widgets are cool"
...
my_field_42 = "Look, a widget!"
Apps can be archived due to inactivity or per developer request. Since the last update of that app was 5 years ago, I'm assuming it was the former.  Note that most of the suggestions in this (16-year-old) topic are considered dangerous in modern Splunk Enterprise and Splunk Cloud. The supported solution is to bundle your app's dependencies in the /bin directory of your app. Do not modify the version of Python shipped with Splunk; do not do on-stack compilation of assets your app needs; do not attempt to establish virtualization environments; etc. 
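For instance, a bundled layout typically looks something like this (hypothetical app and file names, purely for illustration) - the vendored packages live inside the app, and the script prepends its own lib directory to sys.path before importing them:

my_app/
    bin/
        my_script.py    (prepends bin/lib to sys.path, then imports)
        lib/            (vendored third-party dependencies)
    default/
        app.conf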
Uploaded screenshot of ldd command run.
Remember that the /raw endpoint accepts just raw data, whereas the /event endpoint requires a specific format which it then "unpacks". So posting the same data to the /raw endpoint will result in differently represented events.
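For illustration (the values are placeholders): the /event endpoint expects a JSON envelope with the payload under an "event" key, plus optional metadata fields such as time, host, source and sourcetype:

{"time": 1715000000, "sourcetype": "my_sourcetype", "event": "the raw message text"}

The /raw endpoint instead takes the request body verbatim as event data and applies line breaking to it, with metadata supplied via query parameters (e.g. ?sourcetype=my_sourcetype).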
I'm not asking whether the right events are selected. I'm asking whether the fields are extracted. If you do

index=finder_db AND (host="host1" OR host="host2") AND (("Wonder Exist here") OR ("Message=Limit the occurrence" AND "FinderField=ZEOUS"))
| table uniqueId FinderField Message

is your table populated with field values or are they empty?
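If the columns come back empty, the key=value pairs are not being extracted at search time. As a quick test you could extract them inline (a sketch - the patterns assume the literal Message=... and FinderField=... text shown above; adjust the delimiters to your actual format):

index=finder_db AND (host="host1" OR host="host2") "Message=Limit the occurrence"
| rex "Message=(?<Message>[^,]+)"
| rex "FinderField=(?<FinderField>\w+)"
| table uniqueId FinderField Message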
Hi @Glasses2

I'm glad you managed to fix your mongo issue - not the first time SSL expiry has caught people out!

There is an app "SSL Certificate Checker" on Splunkbase at https://splunkbase.splunk.com/app/3172 which looks to solve this. You configure it with the path of the certificates you wish to monitor on a Splunk host, and it reports the expiry into Splunk for you to create an alert on. Setting it up for /opt/splunk/etc/auth/ should capture most things unless you have other custom certs in use.

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi, I recently had an issue where my SHCluster was throwing Kvstore errors. The Kvstore status was abnormal. The resolution was checking the server.pem expiration date in /opt/splunk/etc/auth using:

openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -text

After removing the server.pem and restarting, the Kvstore was back up. Does anyone have a way to monitor the expiration dates for all the server.pem(s) in the deployment? Thanks
Search for up/down events and take the most recent for each host (switch). Discard all of the up events and anything newer than 60 seconds. The remainder will be down events at least a minute old without a following up event.

index=foo ("SESSION_STATE_DOWN" OR "SESSION_STATE_UP")
| dedup host
| where match(_raw, "SESSION_STATE_DOWN") AND _time<relative_time(now(), "-60s")
Hi @dflynn235

Does the following do what you are looking for?

| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface
| where downTime>60

Here is a working example with sample data; just add the | where to limit as required.

| makeresults count=1
| eval _raw="2025-05-07T07:20:40.482713-04:00 \"switch_name\" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 has gone down. Reason: Administratively Down."
| eval host="switch_name"
| append [| makeresults count=1
    | eval _raw="2025-05-07T07:20:41.482771-04:00 \"switch_name\" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor \"IP Address\" on interface Vlan43 is up."
    | eval host="switch_name"]
| rex "^(?<timeStr>[^\s]+)"
| eval _time=strptime(timeStr,"%Y-%m-%dT%H:%M:%S.%6N%Z")
| eval status=case(searchmatch("has gone down"),"Down",searchmatch("is up"),"Up",true(),"Unknown")
| rex "on interface (?<iface>[a-zA-Z0-9]+)"
| stats range(_time) as downTime latest(status) as latestStatus by iface

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
Hi @abhishekP

This is an interesting one. When selecting a relative time window, the earliest/latest are values like "-1d@d", which are valid for the earliest/latest fields in a search. However, when you select specific dates/between dates etc., it returns the full date string, such as "2025-05-07T18:47:22.565Z". Such a value is not supported by the earliest/latest fields in a Splunk search. To get around this, I have put together a table off to the side of the display with a search which converts dates into epoch where required. You can then use "$timetoken:result.earliest_epoch$" and "$timetoken:result.latest_epoch$" as tokens in your other searches.

Below is the full JSON of the dashboard so you can have a play around with it - hopefully this helps!

{
  "title": "testing",
  "description": "",
  "inputs": {
    "input_global_trp": {
      "options": {
        "defaultValue": "-24h@h,now",
        "token": "global_time"
      },
      "title": "Global Time Range",
      "type": "input.timerange"
    }
  },
  "defaults": {
    "dataSources": {
      "ds.search": {
        "options": {
          "queryParameters": {
            "earliest": "$global_time.earliest$",
            "latest": "$global_time.latest$"
          }
        }
      }
    }
  },
  "visualizations": {
    "viz_2FDRkepv": {
      "dataSources": {
        "primary": "ds_IPGx8Y5Y"
      },
      "options": {},
      "type": "splunk.events"
    },
    "viz_V1oldcrB": {
      "options": {
        "markdown": "earliest: $global_time.earliest$ \nlatest: $global_time.latest$ \nearliest_epoch: $timetoken:result.earliest_epoch$ \nlatest_epoch:$timetoken:result.latest_epoch$"
      },
      "type": "splunk.markdown"
    },
    "viz_bhZcZ5Cz": {
      "containerOptions": {},
      "context": {},
      "dataSources": {
        "primary": "ds_KXR2SF6V"
      },
      "options": {},
      "showLastUpdated": false,
      "showProgressBar": false,
      "type": "splunk.table"
    }
  },
  "dataSources": {
    "ds_IPGx8Y5Y": {
      "name": "timetoken",
      "options": {
        "enableSmartSources": true,
        "query": "| makeresults \n| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$\n| eval earliest_epoch = IF(match(earliest,\"[0-9]T[0-9]\"),strptime(earliest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),earliest), latest_epoch = IF(match(latest,\"[0-9]T[0-9]\"),strptime(latest, \"%Y-%m-%dT%H:%M:%S.%3N%Z\"),latest)"
      },
      "type": "ds.search"
    },
    "ds_KXR2SF6V": {
      "name": "Search_1",
      "options": {
        "query": "index=_internal earliest=$timetoken:result.earliest_epoch$ latest=$timetoken:result.latest_epoch$\n| stats count by host"
      },
      "type": "ds.search"
    }
  },
  "layout": {
    "globalInputs": [
      "input_global_trp"
    ],
    "layoutDefinitions": {
      "layout_1": {
        "options": {
          "display": "auto",
          "height": 960,
          "width": 1440
        },
        "structure": [
          {
            "item": "viz_V1oldcrB",
            "position": { "h": 80, "w": 310, "x": 20, "y": 20 },
            "type": "block"
          },
          {
            "item": "viz_2FDRkepv",
            "position": { "h": 260, "w": 460, "x": 1500, "y": 20 },
            "type": "block"
          },
          {
            "item": "viz_bhZcZ5Cz",
            "position": { "h": 380, "w": 1420, "x": 10, "y": 140 },
            "type": "block"
          }
        ],
        "type": "absolute"
      }
    },
    "tabs": {
      "items": [
        {
          "label": "New tab",
          "layoutId": "layout_1"
        }
      ]
    }
  }
}

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification

Your feedback encourages the volunteers in this community to continue contributing
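For reference, the piece doing the conversion is the "timetoken" data source; unescaped for readability, its query is:

| makeresults
| eval earliest=$global_time.earliest|s$, latest=$global_time.latest|s$
``` if the token looks like an ISO date (digit, T, digit), convert it to epoch; otherwise pass the relative value straight through ```
| eval earliest_epoch = IF(match(earliest,"[0-9]T[0-9]"), strptime(earliest, "%Y-%m-%dT%H:%M:%S.%3N%Z"), earliest),
       latest_epoch = IF(match(latest,"[0-9]T[0-9]"), strptime(latest, "%Y-%m-%dT%H:%M:%S.%3N%Z"), latest)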
Thanks. I will try to update the HEC URL to /raw instead and test with the new line breaker configuration.
Trying to use time tokens in Dashboard Studio under a sub search. $time.earliest$ and $time.latest$ work for the Presets Today & Yesterday, but don't if a date range is selected. Can someone kindly help?

| inputlookup daily_distinct_count.csv
| rename avg_dc_count as avg_val
| search Page="Application"
| eval _time=relative_time(now(), "-1d@d"), value=avg_val, Page="Application"
| append [ search index="143576" earliest=$token.earliest$ latest=$token.latest$
    | eval Page=case( match(URI, "Auth"), "Application", true(), "UNKNOWN" )
    | where Page="Application"
    | stats dc(user) as value
    | eval _time=now(), Page="Application" ]
| table _time Page value
| timechart span=1d latest(value) as value by Page
I'm attempting to suppress an alert if a follow-up event (condition) is received within 60 seconds of the initial event (condition) from the same host. This is a network switch alerting on a BFD neighbor down event. I want to suppress the alert if a BFD neighbor up event is received within 60 seconds. This is the event data received:

Initial BFD Down:
2025-05-07T07:20:40.482713-04:00 "switch_name" : 2025 May 7 07:20:40 EDT: %BFD-5-SESSION_STATE_DOWN: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 has gone down. Reason: Administratively Down.
host = "switch_name"

Second event to nullify the alert:
2025-05-07T07:20:41.482771-04:00 "switch_name" : 2025 May 7 07:20:41 EDT: %BFD-5-SESSION_STATE_UP: BFD session 1124073489 to neighbor "IP Address" on interface Vlan43 is up.
host = "switch_name"