All Posts

The most common approaches I have heard of for notifying users about changes are notification messages and the global banner. Messages are not my favorite method because they are often missed, and if someone dismisses them, they are dismissed for everyone, so I encourage using a global banner.

If you are using the Web interface, the banner can be changed by going to the upper-right Settings menu > Server controls > Global banner. That is probably all you need. We did once run into a situation with standalone search heads geographically dispersed across the world, where we needed a way to change hundreds of search heads at one time, so we used SOAR automation: a single dashboard would send an alert to SOAR (Phantom), and SOAR would then SSH into all of the boxes and issue the necessary changes. It was a really cool process, but I am assuming you don't have that problem and can simply change the banner through the Web interface of one search head.

Here is a video of using SOAR with SSH and curl to send messages: https://youtu.be/gd5xDNGEsoU

I did not make a video of using SOAR and SSH to modify the global-banner.conf file, but that is the file you would modify if you want to change banner notifications.
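For reference, a minimal global-banner.conf sketch is below; the message text and color are placeholders, so check the global-banner.conf spec for your Splunk version before rolling it out:

# $SPLUNK_HOME/etc/system/local/global-banner.conf (sketch; message and color are placeholders)
[BANNER_MESSAGE_SINGLETON]
global_banner.visible = true
global_banner.message = Scheduled maintenance this Saturday, 02:00-04:00 UTC
global_banner.background_color = yellow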
Hi all,

I am trying to develop a custom command. The custom command works as expected, and now I am working on setting up proper logging, but I can't seem to make the Python script log anything, or I'm looking in the wrong place. I built it following what's written here: Create a custom search command | Documentation | Splunk Developer Program

Here's a quick Python code example:

#!/usr/bin/env python
# coding=utf-8
#
# Copyright © 2011-2015 Splunk, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"): you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.

import os, sys, requests, json

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "..", "lib"))
from splunklib.searchcommands import dispatch, StreamingCommand, Configuration, Option, validators, splunklib_logger as logger


@Configuration()
class TestCustomCMD(StreamingCommand):

    def getFieldValue(self, field_name, record):
        return record[field_name] if field_name in record else ""

    def writeFieldValue(self, field_name, field_value, record):
        record[field_name] = field_value

    def stream(self, records):
        for record in records:
            self.writeFieldValue("TEST FIELD", "TEST CUSTOM COMMAND", record)
            logger.fatal("FATAL logging example")
            logger.error("ERROR logging example")
            logger.warning("WARNING logging example")
            logger.info("INFO logging example")
            yield record


dispatch(TestCustomCMD, sys.argv, sys.stdin, sys.stdout, __name__)

commands.conf:

[testcustcmd]
filename = test_custom_command.py
python.version = python3
chunked = true

And the search to test it:

| makeresults count=2
| testcustcmd

The search completes correctly and returns the expected results. However, I can't find the logged lines anywhere. On my Splunk server I ran this:

grep -rni "logging example" "/opt/splunk/var/log/splunk/"

But the result is empty. Can you help me understand what I am doing wrong here?

Thank you in advance,
Tommaso
No, I had checked there. By the way, it was a Splunk glitch; I solved it. Thank you.
Hello @mwmw, I can see on Splunkbase that the JAMF Pro add-on for Splunk is now cloud compatible. The latest version was released just this June. Thanks, Tejas.
Hey @lgsh,

Was this solved? If not, the following is the reason for the behavior you are seeing. The built-in geo_countries lookup does not have any field named latitude or longitude; the country geometry is stored in the geom field.

You'll need to extract latitude and longitude values from the geom field and use mvexpand to turn each coordinate pair for a country into a separate record. You'll then be able to match the latitude and longitude fields from your events against those of the lookup and populate the Country field.

Hope this helps with your use case.

Thanks,
Tejas.

--- If the solution helps, an upvote is appreciated..!!
I think what is happening is that one timestamp reflects the end of the roll-up window and the other reflects the beginning. If you need them to align, you may need to subtract the length of the roll-up window from the end-of-window timestamp; for example, with a 10-minute roll-up, a window reported as 12:10 on one side and 12:00 on the other lines up once you subtract those 10 minutes. To verify this theory, experiment with different roll-up periods and check whether the difference is always equal to the roll-up.
Hey @Samiul59,

I believe you are looking in the wrong place once the workflow action is set up. You should be able to see your workflow action label when you expand an event in the search results, as shown in the screenshot from the original post.

You may also want to review the permissions and sharing scope of the app in which the workflow action is defined, versus the app where you are trying to use it.

Thanks,
Tejas.

--- If the above solution helps, an upvote is appreciated..!!
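As a sanity check, the underlying workflow_actions.conf stanza in the app where you created the action should look roughly like this sketch (the stanza name, label, URI, and field name here are placeholders, not your actual settings):

# $SPLUNK_HOME/etc/apps/<your_app>/local/workflow_actions.conf (sketch; values are placeholders)
[lookup_in_external_tool]
label = Look up $dest$ in external tool
type = link
link.uri = https://example.com/search?q=$dest$
link.method = get
display_location = both
fields = dest

The fields setting controls which events show the action: restrict it to the fields the action needs, or set it to * to have the action appear for every event.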
Hi @LAME-Creations ,

I figured out the problem related to writing to the indexers. The issue was that the search head wasn't forwarding its data to the indexers, which is why it wasn't working in my case. After I created an outputs.conf on the SH, the error appeared, but the data was being written.

Thanks,
Pravin
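For anyone hitting the same issue, a minimal sketch of the outputs.conf that makes a search head forward its own data to the indexers could look like this (the output group name and the indexer host:port values are placeholders for your environment):

# outputs.conf on the search head (sketch; group name and servers are placeholders)
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997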
Hi, is there any option to add banners in the AppDynamics dashboard for application maintenance or server maintenance notifications? I appreciate your suggestions.

Thanks,
Raj
I am doing load testing for the persistent queue to see how the queue behaves. We have deployed it via an Argo CD Application YAML, so how can I do it?

target:
  kind: Application
  name: splunk
patch: |-
  apiVersion: argoproj.io/v1alpha1
  kind: Application
  metadata:
    name: splunk
  spec:
    source:
      helm:
        parameters:
        - name: clusterName
          value: -QA
        - name: distribution
          value: openshift
        - name: splunkObservability.realm
          value: eu0
        - name: splunkPlatform.endpoint
          value: 'https://131.97/services/collector'
        - name: splunkPlatform.index
          value: 122049
        - name: splunkPlatform.insecureSkipVerify
          value: "true"
        - name: splunkPlatform.sendingQueue.persistentQueue.enabled
          value: "true"
        - name: splunkPlatform.sendingQueue.persistentQueue.storagePath
          value: "/var/addon/splunk/exporter_queue"
        - name: agent.resources.limits.memory
          value: "0.5Gi"

Can someone help me with how to do it?
I have done this, but I can't see anything in Event Viewer. What's the problem?
Unparsed or incorrectly broken? If they are incorrectly broken, you might want to tweak that line breaker. Use https://regex101.com to test your ideas against your data. If they are not parsed, or are parsed incorrectly, either the events are malformed or you might be hitting extraction limits (if I remember correctly, there are limits on the size of the data and on the number of fields that are automatically extracted).
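If it helps, here is a minimal props.conf sketch for tuning event breaking; the sourcetype name and the LINE_BREAKER regex are placeholders you would replace to match your data, and MAX_EVENTS (default 256 merged lines) only matters when SHOULD_LINEMERGE is true:

# props.conf sketch -- sourcetype and regex are placeholders for your data
[my:custom:sourcetype]
SHOULD_LINEMERGE = false
# capture group 1 is the delimiter discarded between events
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}
# raise if single events are longer than the default 10000 bytes
TRUNCATE = 100000
# only applies when SHOULD_LINEMERGE = true; default is 256 lines per merged event
MAX_EVENTS = 512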
Thanks @PrewinThomas -  it worked as expected and was fast enough.
Thank you my friend, that worked; most events are now being parsed properly. However, I am still seeing some very large 200+ line events not getting parsed, with many of them being 257 lines. Any idea what could be causing these not to parse?
Hi @super_edition ,

you could try something like this (see my approach and adapt it to your data):

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
| eval url=if(searchmatch("/my_service/profile-retrieval"),"/my_service/profile-retrieval","/my_service/user-registration")
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

Ciao.

Giuseppe
@super_edition  You can either use append or an eval match condition to combine both for your scenario.

Using append:

( index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2) )
| append
    [ search index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
    | eval url="/my_service/profile-retrieval"
    | stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
    | eval avgResponse=round(avgResponse,2)
    | eval nintyPerc=round(nintyPerc,2) ]
| table url method kubernetes_cluster hits avgResponse nintyPerc

Combined:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" ("/my_service/user-registration" OR "/my_service/profile-retrieval")
| eval url=if(match(url, "^/my_service/user-registration"), "/my_service/user-registration", if(match(url, "^/my_service/profile-retrieval"), "/my_service/profile-retrieval", url))
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)
| table url method kubernetes_cluster hits avgResponse nintyPerc

Regards,
Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
Sorry, I'm not sure I get it: "Splunk doesn't index fields as indexed fields, unless they are explicitly extracted as indexed fields". How is that possible with Splunk Cloud? If I understand correctly, with kv_mode=json, even if our logs are JSON formatted, I will have to extract all the fields I need one by one, using the field extractions feature. The fields will then be extracted at search time, and not indexed. Right?

Then, wouldn't there be a risk to search performance if all fields are extracted at search time? Also, the usage of tstats will need to be reviewed for all our saved searches/dashboards, etc. Am I right?

Thanks
BR
Nordine
Hello Everyone,

I have 2 Splunk search queries.

Query 1:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/user-registration"
| dedup req_id
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

Output:

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/user-registration  POST    LON                 11254  112          535

Query 2:

index="my_index" kubernetes_namespace="my_ns" kubernetes_cluster!="bad_cluster" kubernetes_deployment_name="frontend_service" msg="RESPONSE" "/my_service/profile-retrieval"
| eval normalized_url="/my_service/profile-retrieval"
| stats count as hits avg(responseTime) as avgResponse perc90(responseTime) as nintyPerc by normalized_url method kubernetes_cluster
| eval avgResponse=round(avgResponse,2)
| eval nintyPerc=round(nintyPerc,2)

Output:

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/profile-retrieval  GET     LON                 55477  698          3423

Query 2 matches multiple URLs like the ones below, but they all belong to the same endpoint:

/my_service/profile-retrieval/324524352
/my_service/profile-retrieval/453453?displayOptions=ADDRESS%2CCONTACT&programCode=SKW
/my_service/profile-retrieval/?displayOptions=PREFERENCES&programCode=SKW&ssfMembershipId=00408521260

Hence I used an eval to normalize them: eval normalized_url="/my_service/profile-retrieval"

How do I combine both queries to return a single, simplified output like this?

url                            method  kubernetes_cluster  hits   avgResponse  nintyPerc
/my_service/user-registration  POST    LON                 11254  112          535
/my_service/profile-retrieval  GET     LON                 55477  698          3423

Highly appreciate your help!!
I mean that if you're using indexed extractions, you can't selectively choose which fields get indexed as indexed fields and which do not: with indexed extractions, Splunk extracts and indexes all fields from your JSON/CSV/XML/whatever as indexed fields. With KV_MODE=json (or KV_MODE=auto, but it's better to be precise here so that Splunk doesn't have to guess), Splunk doesn't index fields as indexed fields unless they are explicitly extracted as indexed fields (which would be difficult or impossible with structured data).

Anyway, the best practice for handling JSON data, unless you have a very, very good reason to do otherwise, is to use search-time extractions, not indexed extractions.
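To illustrate the difference, a rough props.conf sketch is below; the sourcetype name is a placeholder, the two stanzas are alternatives (don't combine them for the same sourcetype), and INDEXED_EXTRACTIONS has to be set where the structured data is first parsed (e.g. on the universal forwarder), while KV_MODE is a search-time setting on the search head:

# props.conf sketch -- sourcetype name is a placeholder
# Option A: index-time extraction; every JSON field becomes an indexed field
[my:json:sourcetype]
INDEXED_EXTRACTIONS = json

# Option B (usually preferred): search-time extraction only
[my:json:sourcetype]
KV_MODE = json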
It's not even that you _should_ remove ES from the deployer in a default installation; rather, you must have done something differently for which removing ES was the cure. Normally, ES should detect that it's being deployed on a deployer and should _not_ set itself up as a "runnable" instance.