All Posts

Hello Splunk community,

One of my indexes doesn't seem to have indexed any data for the last two weeks or so. These are the logs I see when searching index="_internal" for the index name:

26/05/2024 02:19:36.947 // 05-26-2024 02:19:36.947 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=7940.738, eps=17495.842, kb=246192.784, ev=542437, avg_age=0.039, max_age=1
26/05/2024 02:19:07.804 // 05-26-2024 02:19:07.804 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - Finished writing bucket manifest in hotWarmPath=/opt/splunk/var/lib/splunk/…/db duration=0.013
26/05/2024 02:19:07.799 // 05-26-2024 02:19:07.799 -0400 INFO DatabaseDirectoryManager [12112 IndexerService] - idx=index_name writing a bucket manifest in hotWarmPath='/opt/splunk/var/lib/splunk/…/db' pendingBucketUpdates=0 innerLockTime=0.009. Reason='Buckets were rebuilt or tsidx-minified (bucket_count=1).'
26/05/2024 02:19:05.944 // 05-26-2024 02:19:05.944 -0400 INFO Dashboard - group=per_index_thruput, series="index_name", kbps=10987.030, eps=24200.033, kb=340566.581, ev=750132, avg_age=0.032, max_age=1
26/05/2024 02:18:59.981 // 05-26-2024 02:18:59.981 -0400 INFO LicenseUsage - type=Usage s="/opt/splunk/etc/apps/…/…/ABC.csv" st="name" h=host o="" idx="index_name" i="41050380-CA05-4248-AFCA-93E310A1E6A9" pool="auto_generated_pool_enterprise" b=6343129 poolsz=5368709120

What could be a reason for this and how could I address it? Thank you for all your help!
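For illustration, a quick diagnostic sketch, assuming the index is literally named index_name (substitute the real name): the first search confirms whether anything has been written to the index recently, the second surfaces warnings and errors from splunkd over the same period.

| tstats count, latest(_time) as latest_event where index=index_name
| eval latest_event=strftime(latest_event, "%Y-%m-%d %H:%M:%S")

index=_internal sourcetype=splunkd (log_level=ERROR OR log_level=WARN)
| stats count by component
| sort - count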
This should give you the column you want.

If you run just this, it should return your data and display it:

| inputlookup TagDescriptionLookup.csv

This should give you just the column data:

| inputlookup TagDescriptionLookup.csv
| fields TagName

For further info look here; there are many examples and syntax details:
https://docs.splunk.com/Documentation/Splunk/9.2.1/SearchReference/Inputlookup#Examples
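If the goal is then to filter rows rather than columns, a minimal follow-on sketch (the TagName value is a made-up placeholder):

| inputlookup TagDescriptionLookup.csv
| fields Site UnitName TagName TagDescription Units
| where TagName="SOME_TAG"

And to list which columns the lookup actually contains:

| inputlookup TagDescriptionLookup.csv
| fieldsummary
| table field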
Hi Community,

I have a cron job that fetches values for today and tomorrow every day. How can I extract the value for "today" or "tomorrow"? This SPL doesn't work, and it doesn't rename my field to a fixed field name:

| eval today=strftime(_time,"%Y-%m-%d")
| rename "result."+'today' AS "result_today"
| stats list(result_today)

Here is my raw event:
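For illustration, one common workaround when the field name itself is dynamic (rename cannot build the source field name from an expression) is foreach. This is only a sketch, assuming the extracted fields look like result.2024-05-26:

| eval today=strftime(_time,"%Y-%m-%d")
| foreach result.*
    [ eval result_today=if("<<MATCHSTR>>"==today, '<<FIELD>>', result_today) ]
| stats list(result_today)

Here <<FIELD>> is the full field name (e.g. result.2024-05-26) and <<MATCHSTR>> is the part matched by the wildcard, so only the field whose suffix equals today is copied into result_today.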
Unfortunately, I tried everything but frustratingly got no result. The logs at midnight were no different, so we didn't manage to find what was wrong. I finally found a (non-)solution: I revived an old ELK server and sent the logs through Logstash into Splunk. This way there is no gap in the logs, and it is working right now. We plan to revisit it when the other team installs the new version of Sophos and see whether anything changes.
Oh wait, I forgot to include one last update that was added to the search: a time window inside the search itself. The search uses the time window (earliest=@h-6h latest=@h) as shown below. When I removed this, I found that it is possible to set acceleration. Is this a known limitation?

index=haproxy (backend="backend1" OR backend="backend2") earliest=@h-6h latest=@h
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability
Hi @Jamietriplet,

the "column" field isn't present in the available fields (Site, UnitName, TagName, TagDescription, Units), so the where condition is never satisfied. What do you want to check?

Ciao.
Giuseppe
I'm afraid this is not the case. I have the admin role and I have enabled acceleration for other reports before. Please also note that the error is "*This search* cannot be accelerated", as mentioned in the replies above.
Hi @KhalidAlharthi,

did you try to remove defaultGroup?

Ciao.
Giuseppe
Hi there,

I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains | join kind=inner to combine the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518

So basically I'd like to combine DeviceProcess and DeviceRegistry events from the advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
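For illustration, the usual SPL equivalents of a KQL inner join are the join command or a stats-based correlation on the shared key. A rough sketch, where the index/sourcetype filters and the DeviceId, FileName and RegistryKey fields are placeholder assumptions to be replaced by whatever your advhunt searches actually return:

index=defender sourcetype=process_events
| join type=inner DeviceId
    [ search index=defender sourcetype=registry_events ]
| table DeviceId FileName RegistryKey

For larger result sets, the stats pattern usually scales better than join:

index=defender (sourcetype=process_events OR sourcetype=registry_events)
| stats values(FileName) as FileName values(RegistryKey) as RegistryKey by DeviceId
| where isnotnull(FileName) AND isnotnull(RegistryKey)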
@inventsekar - This is the Python code we use to pull the logs:
#!/usr/bin/env python
# coding=utf-8

from __future__ import print_function
import sys, os
import xml.dom.minidom, xml.sax.saxutils
from pymongo import MongoClient
from datetime import datetime
import base64
import json
import re
import logging
from io import open
import six

logging.root.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(levelname)s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.root.addHandler(handler)

SCHEME = """<scheme>
    <title>API Gateway Analytics</title>
    <description>Ingest data from API Gateway mongodb tyk_analytics database</description>
    <streaming_mode>xml</streaming_mode>
    <endpoint>
        <args>
            <arg name="mongodb_uri">
                <title>MongoDB URI</title>
                <description>mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/tyk_analytics?replicaSet=mongo-replica</description>
            </arg>
        </args>
    </endpoint>
</scheme>
"""

def do_scheme():
    print(SCHEME)

# Empty validation routine. This routine is optional.
def validate_arguments():
    pass

def get_config():
    config = {}
    try:
        # read everything from stdin
        config_str = sys.stdin.read()
        # parse the config XML
        doc = xml.dom.minidom.parseString(config_str)
        root = doc.documentElement
        conf_node = root.getElementsByTagName("configuration")[0]
        if conf_node:
            stanza = conf_node.getElementsByTagName("stanza")[0]
            if stanza:
                stanza_name = stanza.getAttribute("name")
                if stanza_name:
                    logging.debug("XML: found stanza " + stanza_name)
                    config["name"] = stanza_name
                    params = stanza.getElementsByTagName("param")
                    for param in params:
                        param_name = param.getAttribute("name")
                        logging.debug("XML: found param '%s'" % param_name)
                        if param_name and param.firstChild and param.firstChild.nodeType == param.firstChild.TEXT_NODE:
                            data = param.firstChild.data
                            config[param_name] = data
                            logging.debug("XML: '%s' -> '%s'" % (param_name, data))
        checkpnt_node = root.getElementsByTagName("checkpoint_dir")[0]
        if checkpnt_node and checkpnt_node.firstChild and checkpnt_node.firstChild.nodeType == checkpnt_node.firstChild.TEXT_NODE:
            config["checkpoint_dir"] = checkpnt_node.firstChild.data
        if not config:
            raise Exception("Invalid configuration received from Splunk.")
    except Exception as e:
        raise Exception("Error getting Splunk configuration via STDIN: %s" % str(e))
    return config

def save_checkpoint(config, checkpoint):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub('[\\/\s]', '', checkpoint_file)
    checkpoint_file_new = checkpoint_file + ".new"
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    chk_file_new = os.path.join(config["checkpoint_dir"], checkpoint_file_new)
    checkpoint_rfc3339 = checkpoint.strftime('%Y-%m-%d %H:%M:%S.%f')
    logging.debug("Saving checkpoint=%s (checkpoint_rfc3339=%s) to file=%s", checkpoint, checkpoint_rfc3339, chk_file)
    f = open(chk_file_new, "w")
    f.write("%s" % checkpoint_rfc3339)
    f.close()
    os.rename(chk_file_new, chk_file)

def load_checkpoint(config):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub('[\\/\s]', '', checkpoint_file)
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    #chk_file = os.path.join(config["checkpoint_dir"], "checkpoint")
    # try to open this file
    try:
        f = open(chk_file, "r")
        checkpoint_rfc3339 = f.readline().split("\n")[0]
        logging.info("Read checkpoint_rfc3339=%s from file=%s", checkpoint_rfc3339, chk_file)
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        f.close()
    except:
        # assume that this means the checkpoint is not there (Use 2000/1/1)
        checkpoint_rfc3339 = '2000-01-01 00:00:00.000000'
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        logging.error("Failed to read checkpoint from file=%s using checkpoint_rfc3339=%s", chk_file, checkpoint_rfc3339)
    logging.debug("Checkpoint value is: checkpoint=%s", checkpoint)
    return checkpoint

# Routine to index data
def run():
    config = get_config()
    mongodb_uri = config["mongodb_uri"]
    checkpoint = load_checkpoint(config)

    client = MongoClient(mongodb_uri)
    database = client["tyk_analytics"]
    collection = database["tyk_analytics"]
    cursor = collection.find({'timestamp': {'$gt': checkpoint}})

    sys.stdout.write("<stream>")
    #logging.info("H: Before Document")
    for document in cursor:
        #logging.info("After1 Before Document")
        new_checkpoint = document['timestamp']
        #logging.info("After2 Before Document")
        # document['timestamp'] is in GMT, so we can do a straight epoc conversion, and not be concerned with timezones
        epoc_timestamp = (new_checkpoint - datetime(1970, 1, 1, 0, 0, 0, 0)).total_seconds()
        #logging.debug("Calculated epoc_timestamp=%s, from ['timestamp']=%s", str(epoc_timestamp), checkpoint)

        outputdata = {}
        outputdata['timestamp'] = six.text_type(document['timestamp'])
        outputdata['apiname'] = document['apiname']
        outputdata['ipaddress'] = document['ipaddress']
        outputdata['id'] = six.text_type(document['_id'])
        outputdata['requesttime'] = str(document['requesttime'])
        outputdata['responsecode'] = str(document['responsecode'])
        outputdata['method'] = document['method']
        outputdata['path'] = document['path']

        request = base64.b64decode(document['rawrequest'])
        try:
            request.decode('utf-8')
            #print "string is UTF-8, length %d bytes" % len(request)
        except UnicodeError:
            request = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        m = re.search('(?s)^(.+)\r\n\r\n(.*)', request.decode('utf-8'))
        if m:
            # Strip any Authorization header
            outputdata['requestheader'] = re.sub('\nAuthorization: [^\n]+', '', m.group(1))
            outputdata['requestbody'] = m.group(2)
        else:
            outputdata['request'] = request

        response = base64.b64decode(document['rawresponse'])
        try:
            response.decode('utf-8')
            #print "string is UTF-8, length %d bytes" % len(response)
        except UnicodeError:
            response = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        if response != "":
            m = re.search('(?s)^(.+)\r\n\r\n(.*)', response.decode('utf-8'))
            if m:
                outputdata['responseheader'] = m.group(1)
                outputdata['responsebody'] = m.group(2)
            else:
                outputdata['response'] = response.decode('utf-8')

        sys.stdout.write("<event>")
        sys.stdout.write("<time>")
        sys.stdout.write(str(epoc_timestamp))
        sys.stdout.write("</time>")
        sys.stdout.write("<data>")
        logging.info("Before Json dumps")
        sys.stdout.write(xml.sax.saxutils.escape(json.dumps(outputdata)))
        logging.info("After1 Json dumps")
        sys.stdout.write("</data><done/></event>\n")

        if new_checkpoint > checkpoint:
            checkpoint_delta = (new_checkpoint - checkpoint).total_seconds()
            checkpoint = new_checkpoint
            if checkpoint_delta > 60:
                save_checkpoint(config, checkpoint)
    # End for block
    save_checkpoint(config, checkpoint)
    sys.stdout.write("</stream>")
    sys.stdout.flush()

# Script must implement these args: scheme, validate-arguments
if __name__ == '__main__':
    if len(sys.argv) > 1:
        if sys.argv[1] == "--scheme":
            do_scheme()
        elif sys.argv[1] == "--validate-arguments":
            validate_arguments()
        else:
            pass
    else:
        run()
    sys.exit(0)
Right, change the index. For license:

index=summary
| stats sum(b) as totalBytes by host, index, source, sourcetype
| eval host=lower(host)
| eval MB=totalBytes/1024/1024
| eval GB=round(MB/1024,2)
| sort - GB
| head 100
| table host index source sourcetype GB

Best,
Giulia
Hi @WilmarMeyer - this is on the Splunk side: Server Settings -> IP Allow List Management -> "Search Head API Access".
@gcasaldi Here it just pulls the cluster master server information with GB when I ran the query for last month, and no other results. Refer to the screenshot for reference.
Hello All,

Please, I need help with the below. I am trying to display a particular column with the query below, but I get a 'No results found' output:

| inputlookup TagDescriptionLookup.csv
| fields Site UnitName TagName TagDescription Units
| where column = "TagName"
| rename column AS ColumnName
| table ColumnName

Thanks
Try like this (select the time range from the search):

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as totalBytes by host, index, source, sourcetype
| eval host=lower(host)
| eval MB=totalBytes/1024/1024
| eval GB=round(MB/1024,2)
| sort - GB
| head 100
| table host index source sourcetype GB

Let me know.

Best,
Giulia
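As a hedged side note: license_usage.log type=Usage records its own fields for the consuming host, index, source and sourcetype (h, idx, s, st, as visible in the raw events), so if the search above only reflects the license master's own metadata, a variant keyed on those fields may be closer to the intent:

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as totalBytes by h, idx, s, st
| eval GB=round(totalBytes/1024/1024/1024,2)
| sort - GB
| head 100
| rename h as host, idx as index, s as source, st as sourcetype
| table host index source sourcetype GB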
I installed a new Splunk pre-production (pprod) platform and I would like to migrate all the prod data to the new platform. I restored the prod search head cluster onto the pprod cluster using the .bundle backup and restore procedure described in this link: https://docs.splunk.com/Documentation/Splunk/8.2.12/DistSearch/BackuprestoreSHC The problem I have is a difference in the number of lookups between prod and pprod (pprod contains 1240 lookups and 58 data models, while prod contains 1270 lookups and 59 data models). Why do I have this difference even though I restored the pprod cluster with the prod .bundle? What can I do to have the same number on both platforms?
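For illustration, one hedged way to inventory both environments and diff the results (standard REST endpoints, but app and user scoping may need adjusting for your deployment):

| rest /servicesNS/-/-/data/lookup-table-files
| stats count by eai:acl.app

| rest /servicesNS/-/-/datamodel/model
| stats count by eai:acl.app

Running these on prod and pprod and comparing the per-app counts should at least show which apps the missing lookups and data models belong to.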
any help?
I have added the following to a dashboard to try and set the token when it loads:

<init>
  <condition match="$env:instance_type$ == cloud">
    <set token="URL">Chips</set>
  </condition>
  <condition>
    <set token="URL">Fish</set>
  </condition>
</init>

While $env:instance_type$ works to bring back the details of the environment when I map it directly in an HTML block, it does not seem to want to evaluate inside the init tags. I do not really want to push this into the dashboard's input parameters, as this will confuse users and end up with mistakes happening. Is there anywhere else I can set this seamlessly as the dashboard loads?