All Posts

In your explanations, the kind of error/exception messages you want to capture aren't completely consistent, but I finally grasped that you want to count certain such messages based on the actual message. Note that the sample data contain three "description" fields that indicate a failure:

content.payload.description
content.payload.errorMetadata.description
content.payload.errorMetadata.exception.description

with these values:

HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).
HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/898 failed: bad request (400).
{"code":"ghgj","message":"CTT failed items. ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets"}

I'll take content.payload.errorMetadata.exception.description as the one you most likely want to capture, because it seems to be the closest match to what you said here.

But before that, you still need to explain more about your data. If this illustration is the full raw log, it seems that the developer placed a plain-text message in front of a conformant JSON object, so the raw log is not JSON, even though the plain text looks like an excerpt from the JSON object itself, plus some metadata such as a partial timestamp and the log level. To Splunk, this means that you do not directly get a field named content.payload.errorMetadata.exception.description. Is this correct? In other words, your "blocker" is not so much the count by error, but how to extract that error message.

If this is the case, the problem is easily solved by removing the leading plain text to get to the JSON, then using spath to extract fields from the JSON.

| rex "[^{](?<json>{.+})"
| spath input=json
| stats count by content.payload.errorMetadata.exception.description

Your sample data will give

content.payload.errorMetadata.exception.description | count
{"code":"ghgj","message":"CTT failed items. ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets"} | 1

(The count is 1 because there is only one sample.) Is this something you are looking for?
Here is an emulation you can play with and compare with real data   | makeresults | eval _raw = "logger:integration-fabrics-exp-api.put:\\orders\\submit\\(storeid).Exception message: [10-12 05:36:03] INFO Exception [[MuleRuntime].uber.12973: [integration-fabrics-exp-api-prod].util:logger-exception/processors/0.ps.BLOCKING @8989]: { \"correlationId\" : \"787979-50ac-4b6f-90bd-64f1b6f79985\", \"message\" : \"Exception\", \"tracePoint\" : \"EXCEPTION\", \"priority\" : \"INFO\", \"category\" : \"kfc-integration-fabrics-exp-api.put:\\\\orders\\\\submit\\\\(storeid).Exception\", \"elapsed\" : 3806, \"locationInfo\" : { \"lineInFile\" : \"69\", \"component\" : \"json-logger:logger\", \"fileName\" : \"common/common-logger-flow.xml\", \"rootContainer\" : \"util:logger-exception\" }, \"timestamp\" : \"2023-10-12T05:36:03.317Z\", \"content\" : { \"payload\" : { \"api\" : \"integration-fabrics-exp-api-prod\", \"message\" : \"{\\n \\\"externalOrderId\\\": \\\"275769403\\\",\\n \\\"instruction\\\": \\\"275769403\\\",\\n \\\"items\\\": [\\n {\\n \\\"id\\\": \\\"I-30995\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 445,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-30057\\\",\\n \\\"name\\\": \\\"Regular Potato \\\\u0026 Gravy\\\",\\n \\\"unitPrice\\\": 545,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-30017\\\",\\n \\\"name\\\": \\\"3 Wicked Wings®\\\",\\n \\\"unitPrice\\\": 695,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-898-0\\\",\\n \\\"name\\\": \\\"Kids Meal with Nuggets\\\",\\n \\\"unitPrice\\\": 875,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": [\\n {\\n \\\"id\\\": \\\"M-41687-0\\\",\\n \\\"name\\\": \\\"4 Nuggets\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40976-0\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40931-0\\\",\\n \\\"name\\\": \\\"Regular 7Up\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n }\\n ]\\n },\\n {\\n \\\"id\\\": \\\"I-32368-0\\\",\\n \\\"name\\\": \\\"Kids Meal with Nuggets\\\",\\n \\\"unitPrice\\\": 875,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": [\\n {\\n \\\"id\\\": \\\"M-41687-0\\\",\\n \\\"name\\\": \\\"4 Nuggets\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40976-0\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40931-0\\\",\\n \\\"name\\\": \\\"Regular 7Up\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n }\\n ]\\n }\\n ],\\n \\\"customer\\\": {\\n \\\"firstName\\\": \\\"9403\\\",\\n \\\"lastName\\\": \\\"ML\\\",\\n \\\"email\\\": \\\"ghgjhgj@hotmail.com\\\",\\n \\\"phoneNumber\\\": \\\"897987\\\"\\n },\\n \\\"tenders\\\": [\\n {\\n \\\"type\\\": \\\"credit-card\\\",\\n \\\"amount\\\": 3435\\n }\\n ],\\n \\\"discountLines\\\": []\\n}\", \"description\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).\", \"correlationId\" : \"1cb22ac0-50ac-4b6f-0988-64f1b6f79985\", \"category\" : \"integration-fabrics-exp-api.put:\\\\orders\\\\submit\\\\(storeid)\", \"timeStamp\" : \"2023-10-12T16:36:03:316000Z\", \"incomingMessage\" : { \"externalOrderId\" : \"9898\", 
\"instruction\" : \"275769403\", \"items\" : [ { \"id\" : \"I-30995\", \"name\" : \"Regular Chips\", \"unitPrice\" : 445, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"I-30057\", \"name\" : \"Regular Potato & Gravy\", \"unitPrice\" : 545, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"I-30017\", \"name\" : \"3 Wicked Wings®\", \"unitPrice\" : 695, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"I-32368-0\", \"name\" : \"Kids Meal with Nuggets\", \"unitPrice\" : 875, \"quantity\" : 1, \"subItems\" : [ { \"id\" : \"M-41687-0\", \"name\" : \"4 Nuggets\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40976-0\", \"name\" : \"Regular Chips\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40931-0\", \"name\" : \"Regular 7Up\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] } ] }, { \"id\" : \"I-32368-0\", \"name\" : \"Kids Meal with Nuggets\", \"unitPrice\" : 875, \"quantity\" : 1, \"subItems\" : [ { \"id\" : \"M-41687-0\", \"name\" : \"4 Nuggets\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40976-0\", \"name\" : \"Regular Chips\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40931-0\", \"name\" : \"Regular 7Up\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] } ] } ], \"customer\" : { \"firstName\" : \"9403\", \"lastName\" : \"ML\", \"email\" : \"ns@hotmail.com\", \"phoneNumber\" : \"98908\" }, \"tenders\" : [ { \"type\" : \"credit-card\", \"amount\" : 3435 } ], \"discountLines\" : [ ] }, \"errorMetadata\" : { \"errorType\" : { \"parentErrorType\" : { \"identifier\" : \"ANY\", \"namespace\" : \"MULE\" }, \"identifier\" : \"BAD_REQUEST\", \"namespace\" : \"HTTP\" }, \"description\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/898 failed: bad request (400).\", \"additionalDetails\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).\", \"exception\" : { \"correlationId\" : \"1cb22ac0-50ac-4b6f-90bd-78979\", \"timestamp\" : \"2023-10-12T16:36:03:273000Z\", \"errorType\" : \"400 HTTP:BAD_REQUEST\", \"description\" : \"{\\\"code\\\":\\\"ghgj\\\",\\\"message\\\":\\\"CTT failed items. ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets\\\"}\" } } } }, \"applicationName\" : \"integration-fabrics-exp-api-prod\", \"applicationVersion\" : \"\", \"environment\" : \"prod\", \"threadName\" : \"[MuleRuntime].uber.12973: [integration-fabrics-exp-api-prod].util:logger-exception/processors/0.ps.BLOCKING @64c03d54\" }" ``` data emulation above ```    
For the third panel, set a panel-level time range: click the Date & Time Range dropdown and choose "Between", using the start and end of the other 2 ranges you previously set. (Dashboard Studio)
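If you prefer editing the dashboard definition directly, here is a minimal sketch of what that could look like in the Dashboard Studio source; the data source ID ds_third_panel and the earliest/latest values are placeholders, assuming a standard ds.search data source:

"dataSources": {
    "ds_third_panel": {
        "type": "ds.search",
        "options": {
            "query": "| tstats count where index=main",
            "queryParameters": {
                "earliest": "-7d@d",
                "latest": "@d"
            }
        }
    }
}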
Let's say I'm running a search where I want to look at domains traveled to:

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")

I want to count the domains that have appeared fewer than 5 times over the search period. How can I accomplish this? I know I could do a stats count by domain, but after that I'm unable to grab the rest of the results in the index, such as time, etc.
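A minimal sketch of one way to do this while keeping the raw events (and their _time and other fields), assuming the index, sourcetype, and domain field shown above: eventstats adds the per-domain count to every event instead of collapsing the results, so you can filter on it afterwards.

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")
| eventstats count AS domain_count BY domain
| where domain_count < 5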
I am looking to create a chart showing the average daily total ingest by month in Terabytes excluding weekends over the past year. For some reason I am struggling with this. Any help getting me started would be appreciated.
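A rough sketch of one way to get started, assuming you can search the license usage logs in index=_internal (the field b and type=Usage come from license_usage.log on the license manager); adjust if you measure ingest differently:

index=_internal source=*license_usage.log type=Usage earliest=-1y@d latest=@d
| bin _time span=1d
| stats sum(b) AS daily_bytes BY _time
| eval dow=strftime(_time, "%a")
| where dow!="Sat" AND dow!="Sun"
| eval daily_tb=round(daily_bytes/1024/1024/1024/1024, 3)
| timechart span=1mon avg(daily_tb) AS avg_daily_ingest_tb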
Edit: in the configuration menu, under View Options, use the toggle for "Show Open In Search Button".
Afternoon, we are currently having issues with duplicate JSON fields on our search heads, which operate in a clustered setup. I understand this is due to the data being extracted at index time and again at search time, hence the duplicated fields. I have read many other forum posts with similar issues. The suggested fix is to set the following in props.conf on the search heads, which we have deployed via an app:

KV_MODE = none
AUTO_KV_JSON = false

while keeping just the following in props.conf on the forwarder:

INDEXED_EXTRACTIONS = JSON

We have successfully tested this in a non-clustered environment and it seems to work, but in the clustered setup we are still seeing the duplicate values. Any help or guidance would be greatly appreciated.
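For reference, a minimal sketch of the search-time side of that fix (the sourcetype name here is a placeholder). In a search head cluster the app has to be pushed to every member from the deployer with splunk apply shcluster-bundle, not copied onto a single search head, and you can check what actually took effect on a member with splunk btool props list <your_sourcetype> --debug.

# props.conf in an app under $SPLUNK_HOME/etc/shcluster/apps/<app_name>/local on the deployer,
# then pushed to all search head cluster members
# (my_json_sourcetype is a placeholder for your actual sourcetype)
[my_json_sourcetype]
KV_MODE = none
AUTO_KV_JSON = false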
@gcusello thank you, yes I am keeping it on the indexer. Regarding that, a quick query: it's a cloud environment (Classic) and I am keeping props & transforms on the Splunk Cloud indexers. If we drop these events on the Splunk Cloud indexers using props & transforms, would they still count against SVCs? I am asking this because the null queue happens after parsing, so the processing is still happening. On-prem, as far as I know, it won't count against licensing because indexing won't happen; how does it work in Splunk Cloud?
Below is the complete XML. Here I am not getting how to add the token values to the other panels in the dashboard. Can you help me with that?

<dashboard>
  <label>Dashboard title</label>
  <row>
    <panel>
      <table depends="$hide$">
        <title>$Time_Period_Start$ $Time_Period_End$</title>
        <search>
          <query>| makeresults | addinfo | eval SearchStart = strftime(info_min_time, "%Y-%m-%d %H:%M:%S"), SearchEnd = strftime(info_max_time, "%Y-%m-%d %H:%M:%S") | table SearchStart, SearchEnd</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <done>
            <set token="Time_Period_Start">$result.SearchStart$</set>
            <set token="Time_Period_End">$result.SearchEnd$</set>
          </done>
        </search>
      </table>
    </panel>
  </row>
  <row>
    <panel>
      <title>first panel</title>
      <single>
        <search>
          <query>| tstats count as internal_logs where index=_internal</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>second panel</title>
      <single>
        <search>
          <query>| tstats count as audit_logs where index=_audit</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <title>Third panel</title>
      <single>
        <search>
          <query>| tstats count as main_logs where index=main</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </single>
    </panel>
  </row>
</dashboard>
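Assuming the tokens are being set correctly by the <done> handler, a minimal sketch of reusing them in one of the other panels: the $Time_Period_Start$ / $Time_Period_End$ tokens can be referenced anywhere Simple XML substitutes tokens, for example in a panel title or inside the <query> element.

  <row>
    <panel>
      <!-- tokens set in the <done> handler reused in this panel's title -->
      <title>first panel ($Time_Period_Start$ - $Time_Period_End$)</title>
      <single>
        <search>
          <query>| tstats count as internal_logs where index=_internal</query>
          <earliest>-7d@d</earliest>
          <latest>@d</latest>
        </search>
      </single>
    </panel>
  </row>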
Use the mvfind function.

| eval present=if(isnotnull(mvfind(DNS_Matched, DNS)),"yes", "no")
I have two fields: DNS and DNS_Matched. The latter is a multi-value field. How can I see if the field value in DNS is one of the values of the multi-value field DNS_Matched? Example:

DNS    DNS_Matched
host1  host1, host1-a, host1-r
host2  host2, host2-a, host2-r
Hi @Sid, first of all, if you use a sourcetype in the stanza header, you don't need the sourcetype:: prefix:

[risktrac_log]
TRANSFORMS-null=setnull

then use a simpler regex:

[setnull]
REGEX = DEBUG
DEST_KEY = queue
FORMAT = nullQueue

Finally, where did you locate these conf files? They must be on the first full Splunk instance that the logs pass through, in other words on the first Heavy Forwarders or, if not present, on the Indexers, not on the Universal Forwarders. Ciao. Giuseppe
I am trying to set up props & transforms to send DEBUG events to the null queue. I tried the regex below, but that doesn't seem to work.

transforms.conf:

[setnull]
REGEX = .+(DEBUG...).+$
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[sourcetype::risktrac_log]
TRANSFORMS-null=setnull

I used REGEX=\[\d{2}\/\d{2}\/\d{2}\s\d{2}:\d{2}:\d{2}:\d{3}\sEDT]\s+DEBUG\s.* as well, but that too doesn't drop DEBUG messages. I just tried DEBUG alone in the regex too, no help. Can someone help me here please?

Sample event:

[10/13/23 03:46:48:551 EDT] DEBUG DocumentCleanup.run 117 : /_documents document cleanup complete.

How does REGEX pick the pattern? I can see both REGEXes are able to match the whole event. We can't turn DEBUG off for the application.
That worked. Thank you very much!! 
As a follow-up, I tried using the following timestamp settings instead. This regex matches the JSON up to the record.time.timestamp field, and in Settings -> Add Data it also correctly sets the _time field for all my test data:

TIME_PREFIX = \"time\":\s*{.*\"timestamp\":\s
TIME_FORMAT = %s.%6N

This also fails to properly parse the data when ingested through the Universal Forwarder.
We are using Splunk Cloud 9.0.2303.201 and have version 9.0.4 of the Splunk Universal Forwarder installed on a RHEL 7.9 server. The UF is configured to monitor a log file that outputs JSON in this format:

{"text": "Ending run - duration 0:00:00.249782\n", "record": {"elapsed": {"repr": "0:00:00.264696", "seconds": 0.264696}, "exception": null, "extra": {"run_id": "b20xlqbi", "action": "status"}, "file": {"name": "alb-handler.py", "path": "scripts/alb-handler.py"}, "function": "exit_handler", "level": {"icon": "", "name": "INFO", "no": 20}, "line": 79, "message": "Ending run - duration 0:00:00.249782", "module": "alb-handler", "name": "__main__", "process": {"id": 28342, "name": "MainProcess"}, "thread": {"id": 140068303431488, "name": "MainThread"}, "time": {"repr": "2023-10-13 10:09:54.452713-04:00", "timestamp": 1697206194.452713}}}

Long story short, it seems that Splunk is getting confused by the multiple fields in the JSON that look like timestamps. The timestamp that should be used is the very last field in the JSON. I first set up a custom sourcetype that's a clone of the _json sourcetype by manually inputting some of these records via Settings -> Add Data. Using that tool I was able to get Splunk to recognize the correct timestamp via the following settings:

TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N

When I load the above record by hand via Settings -> Add Data and use my custom sourcetype with the above settings, Splunk shows the _time field being set properly, so in this case it's 10/13/23 10:09:54.452 AM. The exact same record, when loaded through the Universal Forwarder, appears to ignore the TIMESTAMP_FIELDS parameter. It ends up with a date/time of 10/13/23 12:00:00.249 AM, which indicates that it's trying to extract the date/time from the "text" field at the very beginning of the JSON (the string "duration 0:00:00.249782"). The inputs.conf on the Universal Forwarder is quite simple:

[monitor:///app/appman/logs/combined_log.json]
sourcetype = python-loguru
index = test
disabled = 0

Why is the date/time parsing working properly when I manually load these logs via the UI but not when being imported via the Universal Forwarder?
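One hypothesis to check, offered as an assumption rather than a confirmed diagnosis: TIMESTAMP_FIELDS only applies to structured data (it works together with INDEXED_EXTRACTIONS), and structured-data parsing happens on the Universal Forwarder itself, not on the indexers. So the sourcetype stanza would need to exist in props.conf on the UF, roughly like this:

# props.conf on the Universal Forwarder (system/local or a deployed app),
# matching the sourcetype set in inputs.conf
[python-loguru]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = record.time.timestamp
TIME_FORMAT = %s.%6N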
I am attempting to set up an INGEST_EVAL for the _time field. My goal is to check whether _time is in the future and prevent any future timestamps from being indexed. The INGEST_EVAL is configured correctly in props.conf, fields.conf and transforms.conf, but it fails when I attempt to use a conditional statement. My goal is to do something like this in my transforms.conf:

[ingest_time_timestamp]
INGEST_EVAL = ingest_time_stamp:=if(_time > time(), time(), _time)

If _time is in the future, I want it set to the current time; otherwise I want to leave it alone. Anyone have any ideas?
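If the end goal is simply to clamp future timestamps rather than to create a separate indexed field, one possible sketch would be to overwrite _time itself in the INGEST_EVAL; the sourcetype name below is a placeholder, and time() in an ingest-time eval returns the current wall-clock time:

# transforms.conf - clamp event times that are in the future to the ingest time
[clamp_future_time]
INGEST_EVAL = _time := if(_time > time(), time(), _time)

# props.conf (your_sourcetype is a placeholder)
[your_sourcetype]
TRANSFORMS-clamp_future_time = clamp_future_time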
What you need is everything between the quotation marks. Try this:

| rex "Sample ID\\\":\\\"(?<SampleID>[^\"]+)"
Hi @jbanAtSplunk, this means that you require more Indexers: at least 5. About storage, if RF and SF are 2, you have 5 Indexers, and Contingency is 10%, you'll have:

Total_Storage = (License * Retention * 0.5 * SF) * (1 + Contingency) + License * 3.4 = (500 * 30 * 0.5 * 2) * 1.1 + 1700 = 18200 GB

Storage per Indexer = 18200 / 5 = 3640 GB per Indexer

(License * 3.4 is the datamodels' storage for ES.) Ciao. Giuseppe
I want to extract the Sample ID field value: "Sample ID":"020ab888-a7ce-4e25-z8h8-a658bf21ech9"
Yes, the latest version definitely fixes this, and AFAIK it is a good, stable version too, with lots of other bug fixes.