
All Posts

I think what is happening is that one timestamp reflects the end of the roll-up window and the other reflects the beginning. If you need them to align, you may need to subtract the value of the roll-up window to make this happen. To verify this theory, you may want to experiment with different roll-up periods and see if the difference is always equal to the roll-up period.
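A minimal SPL sketch of that verification, where rollup_end_time and other_source_time are hypothetical field names standing in for your two timestamps, and 300 assumes a 5-minute roll-up:

  | eval rollup_seconds=300
  | eval aligned_time=rollup_end_time - rollup_seconds
  | eval delta=rollup_end_time - other_source_time
  | table rollup_end_time other_source_time aligned_time delta

If delta consistently equals rollup_seconds across different roll-up periods, subtracting the window length should align the two.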
Hey @Samiul59, I believe you are looking in the wrong place once the workflow action is set up. You should be able to see your workflow action label as in the following screenshot: [screenshot of the workflow action label] You may also want to review the permissions and scope of the app where you are trying to search for the action and where it has been defined.
Thanks,
Tejas.
---
If the above solution helps, an upvote is appreciated!
Hi @LAME-Creations , I figured out the problem related to writing to the indexers. The issue was that the search head wasn't forwarding its data to the indexers, which is why it wasn't working in my case. After I created an outputs.conf on the SH, the error appeared, but the data was being written.
Thanks,
Pravin
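For reference, a minimal outputs.conf sketch for search head forwarding, with hypothetical indexer hostnames that you would replace with your own:

  # outputs.conf on the search head: forward everything to the indexer tier
  [indexAndForward]
  index = false

  [tcpout]
  defaultGroup = primary_indexers
  forwardedindex.filter.disable = true
  indexAndForward = false

  [tcpout:primary_indexers]
  server = idx1.example.com:9997, idx2.example.com:9997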
Hi, is there any option to add banners to the AppDynamics dashboard for application maintenance or server maintenance notifications? I appreciate your suggestions.
Thanks,
Raj
I am doing load testing for the persistent queue to see how the queue behaves. We have deployed it via the ArgoCD YAML below; how can I do it?

  target:
    kind: Application
    name: splunk
  patch: |-
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: splunk
    spec:
      source:
        helm:
          parameters:
          - name: clusterName
            value: -QA
          - name: distribution
            value: openshift
          - name: splunkObservability.realm
            value: eu0
          - name: splunkPlatform.endpoint
            value: 'https://131.97/services/collector'
          - name: splunkPlatform.index
            value: 122049
          - name: splunkPlatform.insecureSkipVerify
            value: "true"
          - name: splunkPlatform.sendingQueue.persistentQueue.enabled
            value: "true"
          - name: splunkPlatform.sendingQueue.persistentQueue.storagePath
            value: "/var/addon/splunk/exporter_queue"
          - name: agent.resources.limits.memory
            value: "0.5Gi"

Can someone help me with how to do this?
I have done this, but I can't see anything in Event Viewer. What's the problem?
Unparsed or incorrectly broken? If they are incorrectly broken, you might want to tweak that line breaker. Use https://regex101.com to test your ideas against your data. If they are not parsed, or are incorrectly parsed, either the events are malformed or you might be hitting extraction limits (there are limits to the size of the data and the number of fields that are automatically extracted, if I remember correctly).
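A minimal props.conf sketch for tuning event breaking, assuming a hypothetical sourcetype my_sourcetype and events that begin with an ISO-style timestamp (the regex is an illustration to adapt to your data, not a drop-in value):

  # props.conf: break events on a newline followed by a timestamp
  [my_sourcetype]
  SHOULD_LINEMERGE = false
  LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}
  TRUNCATE = 100000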
Thanks @PrewinThomas -  it worked as expected and was fast enough.
Thank you my friend, that worked; most events are now being parsed properly. However, I am still seeing some very large 200+ line events not getting parsed, with many of them being 257 lines. Any idea what could be causing these not to parse?
Hi @super_edition ,
you could try something like this (see my approach and adapt it to your data):

  index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
  | eval url=if(searchmatch("/my_service/profile-retrieval"),"/my_service/profile-retrieval","/my_service/user-registration")
  | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
  | eval avgResponse=round(avgResponse,2)
  | eval nintyPerc=round(nintyPerc,2)

Ciao.
Giuseppe
@super_edition
You can either use append or an eval match condition to combine both for your scenario.

Using append:

  ( index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
  | dedup req_id
  | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
  | eval avgResponse=round(avgResponse,2)
  | eval nintyPerc=round(nintyPerc,2) )
  | append
    [ search index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
    | eval url="/my_service/profile-retrieval"
    | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
    | eval avgResponse=round(avgResponse,2)
    | eval nintyPerc=round(nintyPerc,2) ]
  | table url method kubernetes_cluster hits avgResponse nintyPerc

Combined:

  index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
  | eval url=if(match(url, "^/my_service/user-registration"), "/my_service/user-registration", if(match(url, "^/my_service/profile-retrieval"), "/my_service/profile-retrieval", url))
  | dedup req_id
  | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
  | eval avgResponse=round(avgResponse,2)
  | eval nintyPerc=round(nintyPerc,2)
  | table url method kubernetes_cluster hits avgResponse nintyPerc

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!
Sorry, I'm not sure I get it: "Splunk doesn't index fields as indexed fields, unless they are explicitly extracted as indexed fields". How is that possible with Splunk Cloud? If I understand correctly, with kv_mode=json, even if our logs are JSON-formatted, I will have to extract all the fields I need one by one, using the field extractions feature. The fields will then be extracted at search time, and not indexed. Right? Then, wouldn't there be a risk to search performance if all fields are extracted at search time? Also, the usage of tstats will need to be reviewed for all our saved searches/dashboards, etc. Am I right?
Thanks
BR
Nordine
Hello Everyone,
I have 2 Splunk search queries.

Query 1:

  index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
  | dedup req_id
  | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
  | eval avgResponse=round(avgResponse,2)
  | eval nintyPerc=round(nintyPerc,2)

Output:

  url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
  /my_service/user-registration  POST    LON                 11254  112          535

Query 2:

  index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
  | eval normalized_url="/my_service/profile-retrieval"
  | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by normalized_url method kubernetes_cluster
  | eval avgResponse=round(avgResponse,2)
  | eval nintyPerc=round(nintyPerc,2)

Output:

  url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
  /my_service/profile-retrieval  GET     LON                 55477  698          3423

Query 2 returns multiple URLs like the ones below, which all belong to the same endpoint:

  /my_service/profile-retrieval/324524352
  /my_service/profile-retrieval/453453?displayOptions=ADDRESS%2CCONTACT&programCode=SKW
  /my_service/profile-retrieval/?displayOptions=PREFERENCES&programCode=SKW&ssfMembershipId=00408521260

Hence I used eval to normalize them:

  eval normalized_url="/my_service/profile-retrieval"

How do I combine both queries to return one simplified output?

  url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
  /my_service/user-registration  POST    LON                 11254  112          535
  /my_service/profile-retrieval  GET     LON                 55477  698          3423

Highly appreciate your help!!
I mean that if you're using indexed extractions you can't selectively choose which fields get indexed as indexed fields and which do not. With indexed extractions, Splunk extracts and indexes all fields from your json/csv/xml/whatever as indexed fields. With KV_MODE=json (or KV_MODE=auto, but it's better to be precise here so that Splunk doesn't have to guess), Splunk doesn't index fields as indexed fields unless they are explicitly extracted as indexed fields (which will be difficult/impossible with structured data). Anyway, the best practice for handling JSON data, unless you have a very, very good reason to do otherwise, is to use search-time extractions, not indexed extractions.
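A minimal props.conf sketch of that best practice, assuming a hypothetical sourcetype my_json_sourcetype (a sketch of the search-time approach, not a drop-in config):

  # props.conf: search-time JSON extraction instead of indexed extractions
  [my_json_sourcetype]
  # leave INDEXED_EXTRACTIONS unset so fields are not written as indexed fields
  KV_MODE = json

With this in place, JSON fields are extracted automatically at search time rather than being indexed one by one.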
It's not even that you _should_ remove ES from the deployer in the default installation; rather, you must have done something differently for which the removal of ES was the cure. Normally, ES should detect that it's being deployed on a deployer and should _not_ set itself up as a "runnable" instance.
Hello PickleRick,
For point 3, you mean that by using kv_mode=json, unlike with indexed extractions, I will be able to "selectively not index some fields"? Would you mind giving me some more details, or examples of how I can do that? On my side, I've checked the source type that is used, and indeed: indexed extractions = json, and in the advanced tab: kv_mode = none. So, you recommend setting: indexed extractions = none, and in the advanced tab: kv_mode = json. Can you confirm this is the right way? Then, how can I exclude some specific fields from automatic extraction?
Thanks a lot
Regards
Nordine
Thank you for the clear answer. Removed and working fine. Does Splunk ES documentation state this anywhere? 
I see the useEnglishOnly setting, which is known to cause problems. See my thread here: https://community.splunk.com/t5/Getting-Data-In/Debugging-perfmon-input/m-p/621539#M107042
Hi @LAME-Creations
When I send an event to SOAR manually, I get a difference such as user=admin and mode=adhoc, whereas if I wait for an adaptive response from Mission Control, it is mode=saved and user=machinename.
@BraxcBT
Did the issue start after an upgrade or a new app/add-on install? Or did you observe this the first time you logged in? Try a different browser and see if it's still the same.
Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving Karma. Thanks!