All Posts

Oh wait, I forgot to include one last update that was added to the search: a time window inside the search itself. The search uses the window (earliest=@h-6h latest=@h) as shown below. When I removed this, I found that it is possible to set acceleration. Is this a known limitation?

index=haproxy (backend="backend1" OR backend="backend2") earliest=@h-6h latest=@h
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability
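For reference, a sketch of the same search with the inline window removed — the form the post above found to be accelerable. The 6-hour window would instead be set via the report's time range picker:

index=haproxy (backend="backend1" OR backend="backend2")
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability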
Hi @Jamietriplet, the "column" field isn't present in the available fields (Site UnitName TagName TagDescription Units), so the where condition is never satisfied. What do you want to check? Ciao. Giuseppe
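If the intent is simply to display the TagName column itself (an assumption — see the question below), a sketch like this avoids filtering on the non-existent "column" field:

| inputlookup TagDescriptionLookup.csv
| fields TagName
| rename TagName AS ColumnName
| table ColumnName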
I'm afraid this is not the case. I have the admin role and I have enabled acceleration for other reports before. Please also note that the error is "*This search* can not be accelerated", as mentioned in the replies above.
Hi @KhalidAlharthi, did you try to remove defaultGroup? Ciao. Giuseppe
Hi there, I am trying to get some data from MS Defender into a Splunk query. My original KQL query in Azure contains | join kind=inner to combine the DeviceProcess and DeviceRegistry tables. The Splunk app I am using: https://splunkbase.splunk.com/app/5518 So basically I'd like to join DeviceProcess and DeviceRegistry events from the advanced hunting query (| advhunt) in Splunk SPL. Is there a suitable Splunk query for this kind of purpose?
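As a rough sketch only: SPL's join command can emulate KQL's join kind=inner between two result sets. The advhunt invocation below is hypothetical (the command's actual parameters depend on the app), and DeviceId as the common key is an assumption:

| advhunt query="DeviceProcessEvents"
| join type=inner DeviceId
    [| advhunt query="DeviceRegistryEvents"]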
@inventsekar - This is the Python code we use to pull the logs:
#!/usr/bin/env python
# coding=utf-8
from __future__ import print_function
import sys, os
import xml.dom.minidom, xml.sax.saxutils
from pymongo import MongoClient
from datetime import datetime
import base64
import json
import re
import logging
from io import open
import six

# Log to stderr so messages end up in splunkd.log
logging.root.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(levelname)s %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logging.root.addHandler(handler)

SCHEME = """<scheme>
    <title>API Gateway Analytics</title>
    <description>Ingest data from API Gateway mongodb tyk_analytics database</description>
    <streaming_mode>xml</streaming_mode>
    <endpoint>
        <args>
            <arg name="mongodb_uri">
                <title>MongoDB URI</title>
                <description>mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/tyk_analytics?replicaSet=mongo-replica</description>
            </arg>
        </args>
    </endpoint>
</scheme>
"""

def do_scheme():
    print(SCHEME)

# Empty validation routine. This routine is optional.
def validate_arguments():
    pass

def get_config():
    """Read the runtime configuration XML that Splunk passes on stdin."""
    config = {}
    try:
        config_str = sys.stdin.read()
        doc = xml.dom.minidom.parseString(config_str)
        root = doc.documentElement
        conf_node = root.getElementsByTagName("configuration")[0]
        if conf_node:
            stanza = conf_node.getElementsByTagName("stanza")[0]
            if stanza:
                stanza_name = stanza.getAttribute("name")
                if stanza_name:
                    logging.debug("XML: found stanza " + stanza_name)
                    config["name"] = stanza_name
                    for param in stanza.getElementsByTagName("param"):
                        param_name = param.getAttribute("name")
                        logging.debug("XML: found param '%s'" % param_name)
                        if param_name and param.firstChild and \
                           param.firstChild.nodeType == param.firstChild.TEXT_NODE:
                            data = param.firstChild.data
                            config[param_name] = data
                            logging.debug("XML: '%s' -> '%s'" % (param_name, data))
        checkpnt_node = root.getElementsByTagName("checkpoint_dir")[0]
        if checkpnt_node and checkpnt_node.firstChild and \
           checkpnt_node.firstChild.nodeType == checkpnt_node.firstChild.TEXT_NODE:
            config["checkpoint_dir"] = checkpnt_node.firstChild.data
        if not config:
            raise Exception("Invalid configuration received from Splunk.")
    except Exception as e:
        raise Exception("Error getting Splunk configuration via STDIN: %s" % str(e))
    return config

def save_checkpoint(config, checkpoint):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub(r'[\\/\s]', '', checkpoint_file)
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    chk_file_new = chk_file + ".new"
    checkpoint_rfc3339 = checkpoint.strftime('%Y-%m-%d %H:%M:%S.%f')
    logging.debug("Saving checkpoint=%s (checkpoint_rfc3339=%s) to file=%s",
                  checkpoint, checkpoint_rfc3339, chk_file)
    # Write to a temporary file, then rename, so the update is atomic
    f = open(chk_file_new, "w")
    f.write("%s" % checkpoint_rfc3339)
    f.close()
    os.rename(chk_file_new, chk_file)

def load_checkpoint(config):
    checkpoint_file = "checkpoint-" + config["name"]
    checkpoint_file = re.sub(r'[\\/\s]', '', checkpoint_file)
    chk_file = os.path.join(config["checkpoint_dir"], checkpoint_file)
    # try to open this file
    try:
        f = open(chk_file, "r")
        checkpoint_rfc3339 = f.readline().split("\n")[0]
        logging.info("Read checkpoint_rfc3339=%s from file=%s", checkpoint_rfc3339, chk_file)
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        f.close()
    except Exception:
        # Assume this means the checkpoint is not there yet (use 2000/1/1)
        checkpoint_rfc3339 = '2000-01-01 00:00:00.000000'
        checkpoint = datetime.strptime(checkpoint_rfc3339, '%Y-%m-%d %H:%M:%S.%f')
        logging.error("Failed to read checkpoint from file=%s, using checkpoint_rfc3339=%s",
                      chk_file, checkpoint_rfc3339)
    logging.debug("Checkpoint value is: checkpoint=%s", checkpoint)
    return checkpoint

# Routine to index data
def run():
    config = get_config()
    mongodb_uri = config["mongodb_uri"]
    checkpoint = load_checkpoint(config)
    client = MongoClient(mongodb_uri)
    collection = client["tyk_analytics"]["tyk_analytics"]
    # Only fetch documents newer than the last checkpoint
    cursor = collection.find({'timestamp': {'$gt': checkpoint}})
    sys.stdout.write("<stream>")
    for document in cursor:
        new_checkpoint = document['timestamp']
        # document['timestamp'] is in GMT, so a straight epoch conversion is safe;
        # no timezone handling is needed
        epoc_timestamp = (new_checkpoint - datetime(1970, 1, 1, 0, 0, 0, 0)).total_seconds()
        outputdata = {}
        outputdata['timestamp'] = six.text_type(document['timestamp'])
        outputdata['apiname'] = document['apiname']
        outputdata['ipaddress'] = document['ipaddress']
        outputdata['id'] = six.text_type(document['_id'])
        outputdata['requesttime'] = str(document['requesttime'])
        outputdata['responsecode'] = str(document['responsecode'])
        outputdata['method'] = document['method']
        outputdata['path'] = document['path']

        request = base64.b64decode(document['rawrequest'])
        try:
            request = request.decode('utf-8')
        except UnicodeError:
            request = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        m = re.search('(?s)^(.+)\r\n\r\n(.*)', request)
        if m:
            # Strip any Authorization header before indexing
            outputdata['requestheader'] = re.sub('\nAuthorization: [^\n]+', '', m.group(1))
            outputdata['requestbody'] = m.group(2)
        else:
            outputdata['request'] = request

        response = base64.b64decode(document['rawresponse'])
        try:
            response = response.decode('utf-8')
        except UnicodeError:
            response = "(SPLUNK SCRIPTED INPUT ERROR) input is not UTF-8 unicode"
        if response != "":
            m = re.search('(?s)^(.+)\r\n\r\n(.*)', response)
            if m:
                outputdata['responseheader'] = m.group(1)
                outputdata['responsebody'] = m.group(2)
            else:
                outputdata['response'] = response

        sys.stdout.write("<event>")
        sys.stdout.write("<time>")
        sys.stdout.write(str(epoc_timestamp))
        sys.stdout.write("</time>")
        sys.stdout.write("<data>")
        sys.stdout.write(xml.sax.saxutils.escape(json.dumps(outputdata)))
        sys.stdout.write("</data><done/></event>\n")

        # Persist the checkpoint at most once a minute to limit disk writes
        if new_checkpoint > checkpoint:
            checkpoint_delta = (new_checkpoint - checkpoint).total_seconds()
            checkpoint = new_checkpoint
            if checkpoint_delta > 60:
                save_checkpoint(config, checkpoint)
    save_checkpoint(config, checkpoint)
    sys.stdout.write("</stream>")
    sys.stdout.flush()

# Script must implement these args: --scheme, --validate-arguments
if __name__ == '__main__':
    if len(sys.argv) > 1:
        if sys.argv[1] == "--scheme":
            do_scheme()
        elif sys.argv[1] == "--validate-arguments":
            validate_arguments()
    else:
        run()
    sys.exit(0)
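For context, the script implements Splunk's modular input protocol (--scheme, --validate-arguments, and a default run mode that reads XML configuration from stdin). A sketch of how such an input might be enabled in inputs.conf — the scheme name "tyk_analytics", the instance name, and the interval here are all assumptions (the actual scheme name comes from the app's script name):

[tyk_analytics://default]
mongodb_uri = mongodb://USER:PASS@SERVER1:27017,SERVER2:27017/tyk_analytics?replicaSet=mongo-replica
interval = 60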
Right, change the index. For license:

index=summary
| stats sum(b) as totalBytes by host, index, source, sourcetype
| eval host=lower(host)
| eval MB=totalBytes/1024/1024
| eval GB=round(MB/1024,2)
| sort - GB
| head 100
| table host index source sourcetype GB

Best, Giulia
Hi @WilmarMeyer - this is on the Splunk side: Server Settings -> IP Allow List Management -> "Search Head API Access".
@gcasaldi Here it just pulls the cluster master server information with GB when I run the query for last month, and no other results. Refer to the screenshot for reference.
Hello All

Please, I need help with the below. I am trying to display a particular column with the query below, but I got a 'no results found' output:

| inputlookup TagDescriptionLookup.csv
| fields Site UnitName TagName TagDescription Units
| where column = "TagName"
| rename column AS ColumnName
| table ColumnName

Thanks
Try like this (select the time range from the search):

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) as totalBytes by host, index, source, sourcetype
| eval host=lower(host)
| eval MB=totalBytes/1024/1024
| eval GB=round(MB/1024,2)
| sort - GB
| head 100
| table host index source sourcetype GB

Let me know.

Best, Giulia
I installed a new Splunk pprod platform and I would like to migrate all the prod data to the new platform. I restored the prod search head cluster onto the pprod cluster using the .bundle backup-and-restore procedure described in this link: https://docs.splunk.com/Documentation/Splunk/8.2.12/DistSearch/BackuprestoreSHC The problem I have is a difference in the number of lookups between prod and pprod (pprod contains 1240 lookups and 58 datamodels while prod contains 1270 lookups and 59 datamodels). Why do I have this difference even though I restored the pprod cluster from the prod .bundle? What can I do to have the same numbers on both platforms?
any help?
I have added the following to a dashboard to try and set the variable when it loads:

<init>
  <condition match="$env:instance_type$ == cloud">
    <set token="URL">Chips</set>
  </condition>
  <condition>
    <set token="URL">Fish</set>
  </condition>
</init>

While $env:instance_type$ works to bring back the details of the environment when I map it directly in an HTML block, it does not seem to be evaluated inside the init tags. I do not really want to push this into the dashboard's input parameters, as this will confuse users and end up with mistakes happening. Is there anywhere else I can set this seamlessly as the dashboard loads?
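One possible workaround, sketched under the assumption that the environment token is already populated at load time: Simple XML's <init> element accepts <set>, <eval>, and <unset> children (not <condition> blocks), so the branching could be expressed as a single <eval> instead. The $...|s$ quoting follows the usual token-filter convention and may need adjusting:

<init>
  <eval token="URL">if($env:instance_type|s$ == "cloud", "Chips", "Fish")</eval>
</init>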
@gcasaldi, Thank you for your prompt response. For example, I want to pull the report for the entire month of May (from May 1st to May 31st, 2024) for the top 100 hosts by license usage, along with their index, host, source, and sourcetype. I used the following query:

index=_internal source=*license_usage.log type="Usage"
| eval host=lower(host)
| eval MB=b/1024/1024
| eval GB=round(MB/1024,2)
| search earliest=-1mo@d latest=now@d
| sort - GB
| head 100
| table host index source sourcetype GB

However, the query seems to run continuously and does not produce any results. It is still running when I search for the previous month in the Search and Reporting app. Could you please let me know where I might have made a mistake?
You can achieve this by leveraging internal indexes and configuring a report. Here's how. The search query below retrieves license usage data by host for a specific time range:

index=_internal source=*license_usage.log type="Usage"
| eval host=lower(host)                  # Standardize hostname (optional)
| eval MB=b/1024/1024                    # Convert bytes to Megabytes
| eval GB=round(MB/1024,2)               # Convert Megabytes to Gigabytes (round to 2 decimals)
| search earliest=-1mo@d latest=now@d    # Adjust timeframe as needed (e.g., -3mo@d for past 3 months)
| sort - GB                              # Sort by license usage in descending order
| head 100                               # Limit results to top 100 hosts
| table host GB source sourcetype
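To produce this on a monthly cadence, the search can be saved as a scheduled report. A sketch of the equivalent savedsearches.conf stanza — the stanza name and cron schedule are assumptions, and the same can be configured in the UI via Save As > Report and then adding a schedule:

[Top 100 Hosts by License Usage]
search = index=_internal source=*license_usage.log type="Usage" | eval host=lower(host) | eval MB=b/1024/1024 | eval GB=round(MB/1024,2) | sort - GB | head 100 | table host GB source sourcetype
dispatch.earliest_time = -1mon@mon
dispatch.latest_time = @mon
cron_schedule = 0 6 1 * *
enableSched = 1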
Need to pull the license usage in GB for the top 100 hosts, along with their respective index, source, and sourcetype information, on a monthly basis for reports. Kindly help with the query.
I am following the documentation to log events using JavaScript: https://dev.splunk.com/enterprise/docs/devtools/javascript/logging-javascript/loggingjavascripthowtos/howtologhttpjs I am sending the data as below, but I can't see any of the keys in the Splunk log.

var payload = {
    message: {
        temperature: "70F",
        chickenCount: 500
    }
};
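For reference, the linked how-to sends that payload through the splunk-logging package's Logger; a minimal sketch, where the token and url are placeholders you'd replace with your own HEC values:

var SplunkLogger = require("splunk-logging").Logger;

// HEC token and endpoint are placeholders
var config = {
    token: "your-hec-token-here",
    url: "https://localhost:8088"
};

var Logger = new SplunkLogger(config);

var payload = {
    message: {
        temperature: "70F",
        chickenCount: 500
    }
};

// send() posts the payload to HEC; the payload object is serialized as the event body
Logger.send(payload, function(err, resp, body) {
    console.log("Response from Splunk", body);
});

Note that with this structure the keys should appear nested under the event's message field (e.g. message.temperature), not as top-level fields.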