All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


That's interesting, though, because my whole config for ingesting Suricata's eve.json boils down to this:

[monitor:///var/log/suricata/eve.json]
disabled = false
host = backup
index = net
sourcetype = suricata

I don't even have anything configured for the suricata sourcetype; it just automatically gets parsed as JSON. I should configure it more reasonably, but it's my home lab server, so I don't mind.
Matched=if(match(DNS,Identified_Host_Formatted) OR match(DNS,DNS_Matched),1,0)

I would like to add the search you created to this. These existing match calls only work on single-valued fields.
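One possible sketch for handling a multi-valued DNS field, using the field names from the snippet above: mvfind returns the index of the first matching value (or null if none), so wrapping it in isnotnull gives a 1/0 flag similar to the existing match-based eval.

```
| eval Matched=if(isnotnull(mvfind(DNS, Identified_Host_Formatted)) OR isnotnull(mvfind(DNS, DNS_Matched)), 1, 0)
```

Caveat: mvfind treats its second argument as a regular expression, so if Identified_Host_Formatted or DNS_Matched contain regex metacharacters (dots in hostnames, for example), matching will be looser than a literal comparison.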
My question is how do we confirm it in Splunk?
Thank you so much for the help. Can you explain to me what the follow line means?    values(*) as *
In a search over All Time in the GUI nothing came up. I checked splunkd.log on the server and it has "Failed to parse timestamp in first MAX_TIMESTAMP_LOOKAHEAD ... defaulting to timestamp of previous event ... context: source=var/log/suricata.eve". It also complains about too many events with the same timestamp. So do we need to add json_no_timestamp somewhere, maybe in a props file? Wouldn't the app tell it how to parse it?
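If no app is supplying parsing rules, one hedged props.conf sketch for the indexing tier, assuming the default eve.json timestamp format (e.g. "timestamp": "2023-10-12T05:36:03.000000+0000"), would be:

```
[suricata]
# Each eve.json line is one complete JSON event
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# Extract JSON fields at search time
KV_MODE = json
# Anchor timestamp parsing to the leading "timestamp" key
TIME_PREFIX = \"timestamp\"\s*:\s*\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%z
MAX_TIMESTAMP_LOOKAHEAD = 40
```

The TIME_FORMAT shown is an assumption; check a raw event from your eve.json and adjust the fractional-seconds and timezone specifiers to match before deploying.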
Please share the existing eval statement so someone can figure out how to add mvfind.
Blacklisted events are not logged nor is there a log message when an event is blacklisted.  Therefore, there is nothing to search.  If the event exists on your Windows server and doesn't exist in Splunk then the blacklisting is successful.
Rather than head 1, which returns the first of all results, try dedup _time, which will return the first result from each hour (because of the bucket and sort commands).
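Putting that advice together with the original search, one sketch of the corrected pipeline (field names assumed from the question) is:

```
index=*
| fields Request_URL _time
| bin span=1h _time
| stats count as hits by _time Request_URL
| sort 0 _time -hits
| dedup _time
```

Note that bin (bucket) must come before stats so the counts are per hour, and sort 0 avoids the default 10,000-result sort limit; dedup _time then keeps the highest-hit URL in each hour.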
I mean, how can we query on the search head to check for the blacklisted events, like index=foo parentprocessname="c:\\program file\\......"? Thanks
This worked in a vacuum, but I get an error saying it's expecting IN when I try adding it to the existing eval statement.
The stats command is transforming, which means only the fields referenced in it are available to subsequent commands.  In this case, they would be count and domain.  To make other fields available, include them in stats.

| stats count, values(*) as * by domain

Note that fields other than count and domain may be multi-valued and so may require special handling using mv* functions.
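As an illustration of that pattern, using the index, sourcetype, and a hypothetical src_ip field (assumed for this example, not from the original post): values(*) as * keeps the distinct values of every other field, grouped by domain, and mv* functions then pick out individual values.

```
index=web_traffic sourcetype=domains
| stats count values(*) as * by domain
| eval first_src=mvindex(src_ip, 0)
```

Here src_ip becomes a multi-valued field after stats, and mvindex(src_ip, 0) pulls out its first value.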
Hello, I am trying to get, for each hour, the top URL by hit count. I am using the search below but am not getting results for each hour.

index=* | fields Request_URL _time | stats count as hits by Request_URL _time | bucket span=1h _time | sort by hits desc | head 1

Thanks in advance!
The Deployment Server knows whether the app containing the settings has been downloaded by each client.  Go to Settings->Forwarder management and switch to the Apps tab.
I am looking for a query that can help me list or audit systems that are using default passwords or any other method you think I can use to audit my environment for default passwords.
Hi @richgalloway , Thanks. How can we verify whether the logs are being ingested or not? We've deployed the configuration to approximately 3,000 clients. Is there a way to check them all at once?
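One common way to check ingestion across many hosts at once is a tstats search over the target index (index=net is assumed here; substitute whichever index the deployed config writes to), then compare the host count against the roughly 3,000 expected clients.

```
| tstats latest(_time) as last_event where index=net by host
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")
| sort last_event
```

Hosts missing from the output, or with stale last_event times, are the ones to investigate.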
In your explanations, the kinds of error/exception messages you want to capture aren't completely consistent, but I finally grasped that you want to count certain messages based on the actual message.  Note the sample data contain three "description" fields that indicate a failure:

content.payload.description
content.payload.errorMetadata.description
content.payload.errorMetadata.exception.description

with these values:

HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).
HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/898 failed: bad request (400).
{"code":"ghgj","message":"CTT failed items. ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets"}

I'll take content.payload.errorMetadata.exception.description as the one you want to capture, because this seems to be the closest match to what you said here.

But before that, you still need to explain more about your data.  If this illustration is the full raw log, it seems that the developer placed some plain-text message in front of a conformant JSON object, so the raw log is not JSON, even though the plain text looks like an excerpt from the JSON object itself, plus some metadata such as a partial timestamp and log level.  To Splunk, this means that you do not directly get a field named content.payload.errorMetadata.exception.description.  Is this correct?  In other words, your "blocker" is not so much count by error, but how to extract that error message.

If this is the case, the problem is easily solved by removing the leading plain text to get to the JSON, then using spath to extract fields from it.

| rex "[^{](?<json>{.+})"
| spath input=json
| stats count by content.payload.errorMetadata.exception.description

Your sample data will give

content.payload.errorMetadata.exception.description | count
{"code":"ghgj","message":"CTT failed items. 
ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets"} 1 (The count is 1 because there is only one sample) Is this something you are looking for? Here is an emulation you can play with and compare with real data   | makeresults | eval _raw = "logger:integration-fabrics-exp-api.put:\\orders\\submit\\(storeid).Exception message: [10-12 05:36:03] INFO Exception [[MuleRuntime].uber.12973: [integration-fabrics-exp-api-prod].util:logger-exception/processors/0.ps.BLOCKING @8989]: { \"correlationId\" : \"787979-50ac-4b6f-90bd-64f1b6f79985\", \"message\" : \"Exception\", \"tracePoint\" : \"EXCEPTION\", \"priority\" : \"INFO\", \"category\" : \"kfc-integration-fabrics-exp-api.put:\\\\orders\\\\submit\\\\(storeid).Exception\", \"elapsed\" : 3806, \"locationInfo\" : { \"lineInFile\" : \"69\", \"component\" : \"json-logger:logger\", \"fileName\" : \"common/common-logger-flow.xml\", \"rootContainer\" : \"util:logger-exception\" }, \"timestamp\" : \"2023-10-12T05:36:03.317Z\", \"content\" : { \"payload\" : { \"api\" : \"integration-fabrics-exp-api-prod\", \"message\" : \"{\\n \\\"externalOrderId\\\": \\\"275769403\\\",\\n \\\"instruction\\\": \\\"275769403\\\",\\n \\\"items\\\": [\\n {\\n \\\"id\\\": \\\"I-30995\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 445,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-30057\\\",\\n \\\"name\\\": \\\"Regular Potato \\\\u0026 Gravy\\\",\\n \\\"unitPrice\\\": 545,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-30017\\\",\\n \\\"name\\\": \\\"3 Wicked Wings®\\\",\\n \\\"unitPrice\\\": 695,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"I-898-0\\\",\\n \\\"name\\\": \\\"Kids Meal with Nuggets\\\",\\n \\\"unitPrice\\\": 875,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": [\\n {\\n \\\"id\\\": \\\"M-41687-0\\\",\\n \\\"name\\\": \\\"4 Nuggets\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n 
\\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40976-0\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40931-0\\\",\\n \\\"name\\\": \\\"Regular 7Up\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n }\\n ]\\n },\\n {\\n \\\"id\\\": \\\"I-32368-0\\\",\\n \\\"name\\\": \\\"Kids Meal with Nuggets\\\",\\n \\\"unitPrice\\\": 875,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": [\\n {\\n \\\"id\\\": \\\"M-41687-0\\\",\\n \\\"name\\\": \\\"4 Nuggets\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40976-0\\\",\\n \\\"name\\\": \\\"Regular Chips\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n },\\n {\\n \\\"id\\\": \\\"M-40931-0\\\",\\n \\\"name\\\": \\\"Regular 7Up\\\",\\n \\\"unitPrice\\\": 0,\\n \\\"quantity\\\": 1,\\n \\\"subItems\\\": []\\n }\\n ]\\n }\\n ],\\n \\\"customer\\\": {\\n \\\"firstName\\\": \\\"9403\\\",\\n \\\"lastName\\\": \\\"ML\\\",\\n \\\"email\\\": \\\"ghgjhgj@hotmail.com\\\",\\n \\\"phoneNumber\\\": \\\"897987\\\"\\n },\\n \\\"tenders\\\": [\\n {\\n \\\"type\\\": \\\"credit-card\\\",\\n \\\"amount\\\": 3435\\n }\\n ],\\n \\\"discountLines\\\": []\\n}\", \"description\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).\", \"correlationId\" : \"1cb22ac0-50ac-4b6f-0988-64f1b6f79985\", \"category\" : \"integration-fabrics-exp-api.put:\\\\orders\\\\submit\\\\(storeid)\", \"timeStamp\" : \"2023-10-12T16:36:03:316000Z\", \"incomingMessage\" : { \"externalOrderId\" : \"9898\", \"instruction\" : \"275769403\", \"items\" : [ { \"id\" : \"I-30995\", \"name\" : \"Regular Chips\", \"unitPrice\" : 445, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"I-30057\", \"name\" : \"Regular Potato & Gravy\", \"unitPrice\" : 545, \"quantity\" : 1, 
\"subItems\" : [ ] }, { \"id\" : \"I-30017\", \"name\" : \"3 Wicked Wings®\", \"unitPrice\" : 695, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"I-32368-0\", \"name\" : \"Kids Meal with Nuggets\", \"unitPrice\" : 875, \"quantity\" : 1, \"subItems\" : [ { \"id\" : \"M-41687-0\", \"name\" : \"4 Nuggets\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40976-0\", \"name\" : \"Regular Chips\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40931-0\", \"name\" : \"Regular 7Up\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] } ] }, { \"id\" : \"I-32368-0\", \"name\" : \"Kids Meal with Nuggets\", \"unitPrice\" : 875, \"quantity\" : 1, \"subItems\" : [ { \"id\" : \"M-41687-0\", \"name\" : \"4 Nuggets\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40976-0\", \"name\" : \"Regular Chips\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] }, { \"id\" : \"M-40931-0\", \"name\" : \"Regular 7Up\", \"unitPrice\" : 0, \"quantity\" : 1, \"subItems\" : [ ] } ] } ], \"customer\" : { \"firstName\" : \"9403\", \"lastName\" : \"ML\", \"email\" : \"ns@hotmail.com\", \"phoneNumber\" : \"98908\" }, \"tenders\" : [ { \"type\" : \"credit-card\", \"amount\" : 3435 } ], \"discountLines\" : [ ] }, \"errorMetadata\" : { \"errorType\" : { \"parentErrorType\" : { \"identifier\" : \"ANY\", \"namespace\" : \"MULE\" }, \"identifier\" : \"BAD_REQUEST\", \"namespace\" : \"HTTP\" }, \"description\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/898 failed: bad request (400).\", \"additionalDetails\" : \"HTTP PUT on resource http://mule-worker-internal-order-sys-api-prod.au-s1.cloudhub.io:8091/orders/submit/716 failed: bad request (400).\", \"exception\" : { \"correlationId\" : \"1cb22ac0-50ac-4b6f-90bd-78979\", \"timestamp\" : \"2023-10-12T16:36:03:273000Z\", \"errorType\" : \"400 HTTP:BAD_REQUEST\", \"description\" : 
\"{\\\"code\\\":\\\"ghgj\\\",\\\"message\\\":\\\"CTT failed items. ModifierRequirementNotMet - 4 Nuggets,ModifierRequirementNotMet - 4 Nuggets\\\"}\" } } } }, \"applicationName\" : \"integration-fabrics-exp-api-prod\", \"applicationVersion\" : \"\", \"environment\" : \"prod\", \"threadName\" : \"[MuleRuntime].uber.12973: [integration-fabrics-exp-api-prod].util:logger-exception/processors/0.ps.BLOCKING @64c03d54\" }" ``` data emulation above ```    
Set a time range for the third panel, then click the Date & Time Range dropdown and set a "between" time range covering the other two you previously set (Dashboard Studio).
Let's say I'm running a search where I want to look at domains traveled to.

index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")

I want to count the domains that have appeared fewer than 5 times over the search period. How can I accomplish this? I know I could do a stats count by domain, but after that I'm unable to grab the rest of the fields in the events, such as time, etc.
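One sketch of an approach, using the search from the question: eventstats computes the per-domain count and attaches it as a new field without discarding any of the original event fields, so _time and everything else survive the filter.

```
index=web_traffic sourcetype=domains domain IN ("*.com", "*.org*", "*.edu*")
| eventstats count as domain_count by domain
| where domain_count < 5
```

This returns the full events for every domain seen fewer than 5 times in the search window.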
I am looking to create a chart showing the average daily total ingest per month, in terabytes, excluding weekends, over the past year. For some reason I am struggling with this. Any help getting me started would be appreciated.
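One possible starting point, assuming you have access to _internal on the license master: the type=RolloverSummary events in license_usage.log record each day's total licensed bytes in the b field, so filtering out weekend days and averaging by month gives the shape you describe.

```
index=_internal source=*license_usage.log* type=RolloverSummary earliest=-1y
| eval dow=strftime(_time, "%a")
| search NOT dow IN ("Sat", "Sun")
| eval TB=round(b/1024/1024/1024/1024, 4)
| eval month=strftime(_time, "%Y-%m")
| chart avg(TB) as avg_daily_TB by month
```

Caveat: _internal retention is often only about 30 days by default, so a full year of license_usage.log may not be available; you may need a summary index or the Monitoring Console's license usage reports for older data.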
Edit: in the configuration menu, under view options, toggle "Show Open In Search Button".