All Posts



Hi @gcusello, what will this command do? We have been running our indexer cluster as a multisite cluster with 3 indexers in our main site for the past year, with the following configuration:

site_replication_factor = origin:2,total:2
site_search_factor = origin:1,total:1

We have now decided to establish a disaster recovery site with an additional 3 indexers. The expected configuration for the new DR site will be as follows:

site_replication_factor = origin:2,total:3
site_search_factor = origin:1,total:2

Will the replication process start syncing all existing data in the hot, warm and cold buckets (approximately 20TB) to the DR indexers, or will it replicate only new hot data in real time?
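For reference, one rough way to watch how buckets spread across peers once the DR site joins is sketched below. This is only a sketch: index=* over a ~20TB cluster can be an expensive scan, so consider scoping it to a single index first, and run it from a search head that can see all peers.

| dbinspect index=*
| stats dc(bucketId) AS buckets BY splunk_server, state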
Is it possible to determine which fields are sent from a heavy forwarder to another system? I'm asking because I have a problem where TrendMicro logs can't be read by QRadar.
Hi @Orange_girl, please check the time format of your timestamps: maybe they are in European format (dd/mm/yyyy) and you didn't configure TIME_FORMAT in your sourcetype definition, so Splunk falls back to the American format (mm/dd/yyyy). Ciao. Giuseppe
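For a quick check at search time, something like the sketch below compares how the same raw timestamp string parses under the two layouts. The field name your_time_field and the format strings are placeholders; adjust them to the actual field and layout in your events.

| eval parsed_eu = strptime(your_time_field, "%d/%m/%Y %H:%M:%S")
| eval parsed_us = strptime(your_time_field, "%m/%d/%Y %H:%M:%S")
| eval readable_eu = strftime(parsed_eu, "%Y-%m-%d %H:%M:%S"), readable_us = strftime(parsed_us, "%Y-%m-%d %H:%M:%S")
| table your_time_field, readable_eu, readable_us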
Hello @deepakc, thanks for your input.
Hi @Jamietriplet, it's a normal dashboard: you have to create a dropdown populated by a search, taking care that the input search and the panel search use the same field name (field names are case sensitive). Ciao. Giuseppe
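As a rough illustration only (the lookup file and field names are the ones mentioned elsewhere in this thread, and the token name tag_token is just an example), the dropdown could be populated with the first search and the panel search could filter on the selected value:

Dropdown population search:
| inputlookup TagDescriptionLookup.csv
| stats count BY TagName
| fields TagName

Panel search:
| inputlookup TagDescriptionLookup.csv
| search TagName="$tag_token$"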
Hello @gcusello, thanks for your input. I need help writing a query that searches through a CSV file, lets the user make a selection from a dropdown, and displays a result from the CSV file based on that selection.
Hi @hazem, I suggest following the Splunk Cluster Administration training. Otherwise, did you follow the steps at https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Migratetomultisite ? If so, try with constrain_singlesite_buckets = false. Ciao. Giuseppe
Hi Joshiro, how did you solve the issue? I'm facing the same problem connecting to Spacebridge to configure Splunk Edge Hub. Marco
Hello Splunk community, one of my indexes doesn't seem to have indexed any data for the last two weeks or so. These are the logs I see when searching index="_internal" for the index name:

05-26-2024 02:19:36.947 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=7940.738, eps=17495.842, kb=246192.784, ev=542437, avg_age=0.039, max_age=1
05-26-2024 02:19:07.804 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/…/db duration=0.013
05-26-2024 02:19:07.799 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - idx=index_name writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/…/db' pendingBucketUpdates=0 innerLockTime=0.009. Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
05-26-2024 02:19:05.944 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=10987.030, eps=24200.033, kb=340566.581, ev=750132, avg_age=0.032, max_age=1
05-26-2024 02:18:59.981 -0400 INFO LicenseUsage - type=Usage s="/opt/splunk/etc/apps/…/…/ABC.csv" st="name" h=host o="" idx="index_name" i="41050380-CA05-4248-AFCA-93E310A1E6A9" pool="auto_generated_pool_enterprise" b=6343129 poolsz=5368709120

What could be a reason for this and how could I address it? Thank you for all your help!
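Two quick checks that may help narrow this down (replace your_index with the actual index name; these are sketches, not a definitive diagnosis). The first reports the timestamp of the most recent event actually in the index; the second charts how much data the indexer reports ingesting for it over time.

| tstats latest(_time) AS last_event WHERE index=your_index
| eval last_event=strftime(last_event, "%Y-%m-%d %H:%M:%S")

index=_internal source=*metrics.log group=per_index_thruput series="your_index"
| timechart span=1h sum(kb) AS indexed_kb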
This should give you the column you want.

On its own, this should return all your data and display it:
| inputlookup TagDescriptionLookup.csv

This should give you just the column data:
| inputlookup TagDescriptionLookup.csv
| fields TagName

For further info look here; there are many examples and syntax details:
https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Inputlookup#Examples
Hi Community, I currently have a cron job that fetches values for today and tomorrow every day. How can I extract the value for "today" or "tomorrow"? This SPL doesn't work and doesn't rename my field to give me a fixed field name:

| eval today=strftime(_time,"%Y-%m-%d")
| rename "result."+'today' AS "result_today"
| stats list(result_today)

Here is my raw data:
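rename cannot build the source field name from another field's value, which is why the attempt above fails. A minimal sketch of one workaround, assuming the daily values arrive in fields named like result.2024-05-26 (adjust the wildcard to your actual field names), uses foreach to copy the field whose suffix matches today's date:

| eval today=strftime(_time, "%Y-%m-%d")
| foreach result.*
    [ eval result_today=if("<<MATCHSTR>>" == today, '<<FIELD>>', result_today) ]
| stats list(result_today)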
Unfortunately, I tried everything but frustratingly got no result. The logs at midnight were no different, so we didn't manage to find what was wrong. I finally found a (non-)solution: I revived an old ELK server and sent the logs through Logstash into Splunk. This way there's no gap in the logs, and it is working right now. We plan to return to it once the other team installs the new version of Sophos and see whether there are any differences.
Oh wait, I missed one last update that was added to the search: a time window inside the search itself. The search uses a time window (earliest=@h-6h latest=@h) as shown below. When I removed this, I found that it is possible to enable acceleration. Is this a known limitation?

index=haproxy (backend="backend1" OR backend="backend2") earliest=@h-6h latest=@h
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability
Hi @Jamietriplet , the "column" field isn't present in the available fields (Site UnitName TagName TagDescription Units). so the where condition isn't verified. What do you want to check? Ciao. ... See more...
Hi @Jamietriplet , the "column" field isn't present in the available fields (Site UnitName TagName TagDescription Units). so the where condition isn't verified. What do you want to check? Ciao. Giuseppe
I'm afraid this is not the case. I have the admin role and I have enabled acceleration for other reports before. Please also note that the error is "*This search* cannot be accelerated", as mentioned in the replies above.
Hi @KhalidAlharthi, did you try to remove defaultGroup? Ciao. Giuseppe
Hi there, I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains | join kind=inner to join the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518. So basically I'd like to join DeviceProcess and DeviceRegistry events from the advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
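One possible shape for this, assuming the Defender advanced hunting events are already searchable in Splunk (the index and sourcetype names below are placeholders, and the exact way to pull the two tables with the app's advhunt command should be checked against the app's documentation), is SPL's join command on the common device field:

index=defender sourcetype="DeviceProcessEvents"
| join type=inner DeviceId
    [ search index=defender sourcetype="DeviceRegistryEvents" ]

Note that join subsearches are subject to result limits, so for large result sets a stats-based correlation by DeviceId may scale better.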
@inventsekar - This is the Python code we use to pull the logs:
#!/usr/bin/env python
# coding=utf-8
from __future__ import print_function
import sys, os
import xml.dom.minidom, xml.sax.saxutils
from pymongo import MongoClient
from datetime import datetime
import base64
import json
import re
import logging
from io import open
import six

logging.root
logging.root.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(levelname)s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.root.addHandler(handler)

SCHEME = """<scheme>
    <title>API Gateway Analytics</title>
    <description>Ingest data from API Gateway mongodb tyk_analytics database</description>
    <streaming_mode>xml</streaming_mode>
    <endpoint>
        <args>
            <arg name="mongodb_uri">
                <title>MongoDB URI</title>
                <description>mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/tyk_analytics?replicaSet=mongo-replica</description>
            </arg>
        </args>
    </endpoint>
</scheme>
"""

def do_scheme():
    print(SCHEME)

# Empty validation routine. This routine is optional.
def validate_arguments():
    pass

def get_config():
    config = {}
    try:
        # read everything from stdin
        config_str = sys.stdin.read()
        # parse the config XML
        doc = xml.dom.minidom.parseString(config_str)
        root = doc.documentElement
        conf_node = root.getElementsByTagName("configuration")[0]
        if conf_node:
            stanza = conf_node.getElementsByTagName("stanza")[0]
            if stanza:
                stanza_name = stanza.getAttribute("name")
                if stanza_name:
                    logging.debug("XML: found stanza " + stanza_name)
                    config["name"] = stanza_name
                    params = stanza.getElementsByTagName("param")
                    for param in params:
                        param_name = param.getAttribute("name")
                        logging.debug("XML: found param '%s'" % param_name)
                        if param_name and param.firstChild and param.firstChild.nodeType == param.firstChild.TEXT_NODE:
                            data = param.firstChild.data
                            config[param_name] = data
                            logging.debug("XML: '%s' -> '%s'" % (param_name, data))
        checkpnt_node = root.getElementsByTagName("checkpoint_dir")[0]
        if checkpnt_node and checkpnt_node.firstChild and checkpnt_node.firstChild.nodeType == checkpnt_node.firstChild.TEXT_NODE:
            config["checkpoint_dir"] = checkpnt_node.firstChild.data
        if not config:
            raise Exception("Invalid configuration received from Splunk.")
    except Exception as e:
        raise Exception("Error getting Splunk configuration via STDIN: %s" % str(e))
    return config

def save_checkpoint(config, checkpoint):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub('[\\/\s]', '', checkpoint_file)
    checkpoint_file_new = checkpoint_file + ".new"
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    chk_file_new = os.path.join(config["checkpoint_dir"], checkpoint_file_new)
    checkpoint_rfc3339 = checkpoint.strftime('%Y-%m-%d %H:%M:%S.%f')
    logging.debug("Saving checkpoint=%s (checkpoint_rfc3339=%s) to file=%s", checkpoint, checkpoint_rfc3339, chk_file)
    f = open(chk_file_new, "w")
    f.write("%s" % checkpoint_rfc3339)
    f.close()
    os.rename(chk_file_new, chk_file)

def load_checkpoint(config):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub('[\\/\s]', '', checkpoint_file)
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    #chk_file = os.path.join(config["checkpoint_dir"], "checkpoint")
    # try to open this file
    try:
        f = open(chk_file, "r")
        checkpoint_rfc3339 = f.readline().split("\n")[0]
        logging.info("Read checkpoint_rfc3339=%s from file=%s", checkpoint_rfc3339, chk_file)
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        f.close()
    except:
        # assume that this means the checkpoint is not there (Use 2000/1/1)
        checkpoint_rfc3339 = '2000-01-01 00:00:00.000000'
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        logging.error("Failed to read checkpoint from file=%s using checkpoint_rfc3339=%s", chk_file, checkpoint_rfc3339)
    logging.debug("Checkpoint value is: checkpoint=%s", checkpoint)
    return checkpoint

# Routine to index data
def run():
    config = get_config()
    mongodb_uri = config["mongodb_uri"]
    checkpoint = load_checkpoint(config)
    client = MongoClient(mongodb_uri)
    database = client["tyk_analytics"]
    collection = database["tyk_analytics"]
    cursor = collection.find({'timestamp': {'$gt': checkpoint}})
    sys.stdout.write("<stream>")
    #logging.info("H: Before Document")
    for document in cursor:
        #logging.info("After1 Before Document")
        new_checkpoint = document['timestamp']
        #logging.info("After2 Before Document")
        # document['timestamp'] is in GMT, so we can do a straight epoc conversion, and not be concerned with timezones
        epoc_timestamp = (new_checkpoint - datetime(1970, 1, 1, 0, 0, 0, 0)).total_seconds()
        #logging.debug("Calculated epoc_timestamp=%s, from ['timestamp']=%s", str(epoc_timestamp), checkpoint)
        outputdata = {}
        outputdata['timestamp'] = six.text_type(document['timestamp'])
        outputdata['apiname'] = document['apiname']
        outputdata['ipaddress'] = document['ipaddress']
        outputdata['id'] = six.text_type(document['_id'])
        outputdata['requesttime'] = str(document['requesttime'])
        outputdata['responsecode'] = str(document['responsecode'])
        outputdata['method'] = document['method']
        outputdata['path'] = document['path']
        request = base64.b64decode(document['rawrequest'])
        try:
            request.decode('utf-8')
            #print "string is UTF-8, length %d bytes" % len(request)
        except UnicodeError:
            request = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        m = re.search('(?s)^(.+)\r\n\r\n(.*)', request.decode('utf-8'))
        if m:
            # Strip any Authorization header
            outputdata['requestheader'] = re.sub('\nAuthorization: [^\n]+', '', m.group(1))
            outputdata['requestbody'] = m.group(2)
        else:
            outputdata['request'] = request
        response = base64.b64decode(document['rawresponse'])
        try:
            response.decode('utf-8')
            #print "string is UTF-8, length %d bytes" % len(response)
        except UnicodeError:
            response = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        if response != "":
            m = re.search('(?s)^(.+)\r\n\r\n(.*)', response.decode('utf-8'))
            if m:
                outputdata['responseheader'] = m.group(1)
                outputdata['responsebody'] = m.group(2)
            else:
                outputdata['response'] = response.decode('utf-8')
        sys.stdout.write("<event>")
        sys.stdout.write("<time>")
        sys.stdout.write(str(epoc_timestamp))
        sys.stdout.write("</time>")
        sys.stdout.write("<data>")
        logging.info("Before Json dumps")
        sys.stdout.write(xml.sax.saxutils.escape(json.dumps(outputdata)))
        logging.info("After1 Json dumps")
        sys.stdout.write("</data><done/></event>\n")
        if new_checkpoint > checkpoint:
            checkpoint_delta = (new_checkpoint - checkpoint).total_seconds()
            checkpoint = new_checkpoint
            if checkpoint_delta > 60:
                save_checkpoint(config, checkpoint)
    # End for block
    save_checkpoint(config, checkpoint)
    sys.stdout.write("</stream>")
    sys.stdout.flush()

# Script must implement these args: scheme, validate-arguments
if __name__ == '__main__':
    if len(sys.argv) > 1:
        if sys.argv[1] == "--scheme":
            do_scheme()
        elif sys.argv[1] == "--validate-arguments":
            validate_arguments()
        else:
            pass
    else:
        run()
    sys.exit(0)