All Posts


Unparsed or incorrectly broken? If they are incorrectly broken, you might want to tweak that line breaker. Use https://regex101.com to test your ideas against your data. If they are not parsed, or parsed incorrectly, either the events are malformed or you might be hitting extraction limits (if I remember correctly, there are limits on the size of the data and on the number of fields that are automatically extracted).
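As a sketch of the settings involved (the sourcetype name and the timestamp regex below are assumptions; adapt them to your data, and check the props.conf spec for your version since defaults can change):

# props.conf on the indexer or heavy forwarder that parses this sourcetype
[my_sourcetype]
# preferred: explicit line breaking, no line merging
SHOULD_LINEMERGE = false
# break before any line starting with an ISO-8601 timestamp (assumption about the data)
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2})
# raise if whole events can exceed the default 10000 characters
TRUNCATE = 50000
# only applies if you keep SHOULD_LINEMERGE = true: the default MAX_EVENTS of 256
# lines is a common reason very large multi-line events stop merging
#MAX_EVENTS = 1000

The field-count side of it is a search-time limit: automatic KV extraction is capped by the [kv] stanza in limits.conf (settings such as limit and maxchars), which is worth checking if events arrive complete but some fields are missing.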
Thanks @PrewinThomas -  it worked as expected and was fast enough.
Thank you my friend, that worked - most events are now being parsed properly. However, I am still seeing some very large 200+ line events not getting parsed, with many of them being exactly 257 lines. Any idea what could be causing these not to parse?
Hi @super_edition, you could try something like this (see my approach and adapt it to your data):

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
| eval url=if(searchmatch("/my_service/profile-retrieval"),"/my_service/profile-retrieval","/my_service/user-registration")
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

Ciao. Giuseppe
@super_edition You can either use append or an eval match condition to combine both for your scenario.

Using append:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)
| append [ search index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
    | eval url="/my_service/profile-retrieval"
    | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
    | eval avgResponse=round(avgResponse,2)
    | eval nintyPerc=round(nintyPerc,2) ]
| table url method kubernetes_cluster hits avgResponse nintyPerc

Combined:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
| eval url=if(match(url, "^/my_service/user-registration"), "/my_service/user-registration", if(match(url, "^/my_service/profile-retrieval"), "/my_service/profile-retrieval", url))
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)
| table url method kubernetes_cluster hits avgResponse nintyPerc

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Sorry, I'm not sure I get it: "Splunk doesn't index fields as indexed fields, unless they are explicitly extracted as indexed fields". How is that possible with Splunk Cloud? If I understand correctly, with kv_mode=json, even if our logs are JSON formatted, I will have to extract all the fields I need one by one, using the field extractions feature. The fields will then be extracted at search time, and not indexed. Right? Then, wouldn't there be a risk to search performance, if all fields are extracted at search time? Also, the usage of tstats will need to be reviewed for all our saved searches/dashboards, etc. Am I right? Thanks BR Nordine
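On the tstats point raised above, the practical difference looks roughly like this (my_index and status are placeholder names): tstats can only group by indexed fields or by fields in an accelerated data model, so searches built on indexed extractions would indeed need review. With search-time extraction (KV_MODE=json) the equivalent is a plain search:

index=my_index | stats count by status

whereas tstats needs either an indexed field or an accelerated data model:

| tstats count where index=my_index by status
| tstats count from datamodel=My_Hypothetical_DM by My_Hypothetical_DM.status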
Hello Everyone, I have 2 Splunk search queries.

query-1

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

output

url  method  kubernetes_cluster  hits  avgResponse  nintyPerc
/my_service/user-registration  POST  LON  11254  112  535

query-2

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
| eval normalized_url="/my_service/profile-retrieval"
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by normalized_url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

output

url  method  kubernetes_cluster  hits  avgResponse  nintyPerc
/my_service/profile-retrieval  GET  LON  55477  698  3423

query-2 returns multiple urls like the ones below, but they all belong to the same endpoint:

/my_service/profile-retrieval/324524352
/my_service/profile-retrieval/453453?displayOptions=ADDRESS%2CCONTACT&programCode=SKW
/my_service/profile-retrieval/?displayOptions=PREFERENCES&programCode=SKW&ssfMembershipId=00408521260

Hence I used an eval to normalize them:

eval normalized_url="/my_service/profile-retrieval"

How do I combine both queries to return a simplified output like this?

url  method  kubernetes_cluster  hits  avgResponse  nintyPerc
/my_service/user-registration  POST  LON  11254  112  535
/my_service/profile-retrieval  GET  LON  55477  698  3423

Highly appreciate your help!!
I mean that if you're using indexed extractions you can't selectively choose which fields get indexed as indexed fields and which do not. With indexed extractions, Splunk extracts and indexes all fields from your json/csv/xml/whatever as indexed fields. With KV_MODE=json (or KV_MODE=auto, but it's better to be precise here so that Splunk doesn't have to guess), Splunk doesn't index fields as indexed fields unless they are explicitly extracted as indexed fields (which will be difficult/impossible with structured data). Anyway, the best practice for handling json data, unless you have a very very good reason to do otherwise, is to use search-time extractions, not indexed extractions.
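As a props.conf sketch of the two alternatives being contrasted (my_json_sourcetype is a made-up name; pick one stanza, they are not meant to coexist):

# Option A - indexed extractions: every JSON field is written to the index
# as an indexed field (applied where the structured data is parsed, e.g. the forwarder)
[my_json_sourcetype]
INDEXED_EXTRACTIONS = json
KV_MODE = none

# Option B - search-time extractions (the usual recommendation for JSON):
# fields are extracted only when you search; simply omit INDEXED_EXTRACTIONS
[my_json_sourcetype]
KV_MODE = json

Keep in mind that tstats (outside of accelerated data models) only works against indexed fields, which is where the performance trade-off from the earlier question comes in.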
It's not even that you _should_ remove ES from the deployer in a default installation; rather, you must have done something differently for which the removal of ES was the cure. Normally ES should detect that it's being deployed on a deployer and should _not_ set itself as a "runnable" instance.
Hello PickleRick, for point 3, you mean that by using kv_mode=json, unlike using indexed extractions, I will be able to "selectively not index some fields"? Would you mind giving me some more details, or an example of how I can do that? On my side, I've checked the sourcetype that is used, and indeed: indexed extractions = json, and in the advanced tab: kv_mode = none. So, you recommend setting: indexed extractions = none, and in the advanced tab: kv_mode = json. Can you confirm this is the right way? Then, how can I exclude some specific fields from automatic extraction? Thanks a lot Regards Nordine
Thank you for the clear answer. Removed and working fine. Does Splunk ES documentation state this anywhere? 
I see the useEnglishOnly setting which is known to cause problems. See my thread here https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
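For reference, the setting sits in the perfmon stanza of inputs.conf on the forwarder; the object, counters, interval and index below are just example values. When true it forces English counter names through the PDH API, which is the behaviour the post above flags as problematic, so it is worth testing the input with and without the line:

[perfmon://CPU]
object = Processor
counters = % Processor Time
instances = _Total
interval = 60
index = perfmon
useEnglishOnly = true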
Hi @LAME-Creations  When I send an event to SOAR manually, I see a difference such as user=admin and mode=adhoc, whereas if I wait for the adaptive response from Mission Control, it is mode=saved and user=machinename.
@BraxcBT  Did the issue start after an upgrade or a new app/add-on install? Or did you observe this the first time you logged in? Try from a different browser and see if it's still the same. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
@DarthHerm  Your inputs.conf looks good. Check splunkd.log on the affected forwarder for errors related to perfmon or permissions. If nothing stands out, try upgrading: test the same config on one host with Universal Forwarder 9.3.5. Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
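If the affected host already forwards its internal logs, the splunkd.log check can be done from the search head with something like this (the host value is a placeholder):

index=_internal sourcetype=splunkd host="<affected_forwarder>" (log_level=ERROR OR log_level=WARN) perfmon
| stats count latest(_raw) as latest_message by component
| sort - count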
Hi @LAME-Creations  Thank you for the explanation, but when I tried sending to SOAR manually, everything went well. Since the problem doesn't seem to be there, roughly which part should I check next?
That has to be frustrating, and I don't know if I have ever seen what you are experiencing. I would try a couple of things just to see if by chance a mistake has been made.

1) This is what you said you have already done, but just validate that nothing has changed: try to send events manually to SOAR. If they arrive, in theory the automated path should work as well, but that does not seem to be your case. Which leads me to

1a) I have actually done this: I had set up two connections to my SOAR when I was setting it up. One of the setups had the correct credentials to hit SOAR and the other did not. So when I set up adaptive response and it asked which configuration I wanted to use (can't remember the exact verbiage of the question), I picked the wrong one from the dropdown and it caused a failure to connect.

2) When I started this, I had a clear idea what my second suggestion would be, but it slipped my mind as I wrote 1 and 1a. But basically, just verify that you really can manually send the alerts to SOAR and that you aren't able to send them as an adaptive response.
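One more place that may be worth looking when the manual (adhoc) run works but the scheduled one fails is the adaptive response action log itself. Assuming your ES environment writes modular action activity to the default cim_modactions index (sourcetype names vary, so start broad), something like this shows recent failures and which sourcetype they belong to:

index=cim_modactions ("ERROR" OR "FAILED" OR "failure")
| stats count latest(_raw) as latest_message by sourcetype

Comparing a working adhoc run with a failing saved run there (the mode=adhoc versus mode=saved difference mentioned earlier in the thread) can suggest whether the scheduled run is picking up a different SOAR server configuration or running as an account without the right permissions.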
Just for troubleshooting purposes, can you create a brand new event finding (what used to be called a correlation search before Splunk ES 8)? What I like to do is just check whether this is a problem with just this search or is systemic. So I make my search something generic like

index=_internal | head 1 | table index, sourcetype, _time

Again, the above query is just a query that you know will have results each time it runs. Feel free to make the search anything you want. Then plug in your drilldown using the same values you applied in your question. When the alert fires and you click its drilldown, does it go all time or does it use the time selection that you gave it? Again, this is just to identify if this is a problem for one correlation search or for all of your correlation searches. This will allow us to get a better idea of what is and what is not working.
Hi Everyone, I am experiencing an error when sending events from Mission Control to Splunk SOAR. I always get a failure when the Send to SOAR action is automatically triggered through Adaptive Response. Before I automated it, I tried sending event data from Mission Control to SOAR manually by clicking the three dots and then selecting 'Run Adaptive Response Actions', and everything went smoothly. Has anyone experienced a similar problem? Thanks, Zake