All Posts



Hi jconger, would it be possible for me to reach out to you via email? Is there a way I can contact you directly? I am experiencing the same issue and require some assistance. Cheers!
Hi Giuseppe, Thank you for your response. Indeed, it has become necessary because the customer's services reside in an air-gapped OT network. They aim to manage both the HF and DS roles on a single virtual machine and are willing to allocate additional CPU and RAM resources to that server. If I were to increase the CPU and RAM, would that suffice, or are there other challenges to consider when consolidating both roles on one server? Additionally, I would like to clarify what you mean when you mention the possibility of them being a bottleneck. DS clients use port 8089, and forwarders transmit data over port 9997. Could you explain the potential network bottleneck in this context?
Hello All, We are setting up a Splunk Enterprise multisite cluster, and I was wondering: can we have 2 license masters, 1 master in each datacenter, using the same license file? The goal is to have the license server accessible all the time; even if one license master is down, the other one should work. Is it possible to do it? Please advise. Thanks, Dhana
Hello, Your assistance is very much appreciated. I am performing a subsearch and need the data to be recognized as a single set (statistic). For some reason the quotes are messing with the output of the subsearch, i.e.:  <search> | dedup src_ip | rename src_ip as search | format </search>   ......results in the following: ( ( "8.8.8.8" ) OR ( "1.1.1.1" ) OR ( "4.4.4.4" ) )  and I need the results to be: ( ( 8.8.8.8 ) OR ( 1.1.1.1 ) OR ( 4.4.4.4 ) ) Thank you in advance.
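One possible workaround (a sketch, untested against your data): let format build the clause as usual, then strip the quotation marks from the generated search field with a sed-mode rex:

```
| dedup src_ip
| rename src_ip as search
| format
| rex mode=sed field=search "s/\"//g"
```

The format command emits its result in a field literally named search, so the rex can post-process it before the subsearch returns.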
index=summary report=group_ip

How do I delete report data from a summary index? For example: the report is generated and placed in the summary index hourly. How do I delete the report data of Hour 1 and Hour 2 from the summary index? Thank you for your help.

Hour 1: Empty - because the query was incorrect, resulting in empty data; I corrected the query.
Hour 2: The query was partially correct, resulting in partial data; I corrected the query.
company ip
companyA
companyB
companyC
Hour 3: The query is correct.
company ip
companyA 1.1.1.1
companyB 1.1.1.2
companyC 1.1.1.3
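One common approach (a sketch; note that the delete command requires the can_delete role and only masks events from search rather than reclaiming disk space): constrain the search to exactly the bad hours, pipe to delete, then backfill with the corrected saved search. The time modifiers below are examples; adjust them to cover Hour 1 and Hour 2:

```
index=summary report=group_ip earliest=-4h@h latest=-2h@h
| delete
```

After deleting, the collect command (or the fill_summary_index.py backfill script) can re-populate those hours from the corrected query.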
The value in a Single Value viz scales automatically based on its size.  If you need a bigger image, make the panel bigger.
Hello Splunkers, I can use stats count and visualize the output as a single value so it's nice and big in that panel in my dashboard. Is there a way to visualize the output from stats sum in a similar way? Or just make the single value in a field big and prominent in the dashboard?

| fields total
| fieldformat "total" = round(total, 2)
| appendpipe [ stats sum(total) as Grand_Total ]
| fields Grand_Total

output
------------------------------------
Grand_Total
103
Hello there! I'm trying to use <change> and <eval> inside of my time input to create a token that takes in $time.earliest$ and converts it to a unix timestamp, however, my <eval> is not working how I expect. When I use $start_time$ in my dashboard panels, I get a literal copy/paste of the "relative_time(now() ..." statement (i.e., it's not actually evaluating).  I've seen multiple examples in Splunk documentation and it seems like <eval> is supposed to evaluate the function you're trying to use. Help me, Splunk Community.  You're my only hope.
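For reference, a minimal Simple XML sketch of the usual pattern (token names taken from the description above; untested): <eval> is only evaluated inside a <change> (or <condition>) handler of the input, and token references inside the eval expression use single quotes. Since time.earliest can arrive either as an epoch value or as a relative modifier like -24h, an if() guard handles both cases:

```xml
<input type="time" token="time">
  <label>Time range</label>
  <change>
    <!-- start_time becomes a unix timestamp whether time.earliest
         is already epoch or a relative modifier such as -24h -->
    <eval token="start_time">if(match('time.earliest', "^\d+(\.\d+)?$"), 'time.earliest', relative_time(now(), 'time.earliest'))</eval>
  </change>
</input>
```

If $start_time$ still renders as the literal expression, check that the <eval> sits inside <change> rather than directly under the input element.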
Hi, I have an existing search as follows:     | eval tempTime=strptime(due_at."-0000","%Y-%m-%d %H:%M:%S.%3N%z")     | eval dueDateCompact = strftime(tempTime, "%m-%d-%y") which I have used to successfully convert a string field ('due_at') representing a UTC value (although formatted without the time-zone designation at the end), to an abbreviated notation (month-day-year) displayed in local time. So, for example, if "due_at" has a value of "2023-09-30 04:59:59.000", then the resulting "dueDateCompact" field ends up with "09-29-23" in there, correctly representing "due_at" but in Chicago local time (5 hours behind UTC).  However, my current requirements are such that "due_at" comes formatted as "2023-09-30T04:59:59.000Z" (proper ISO 8601) instead of the original "2023-09-30 04:59:59.000" (note: only the intermediate T and ending Z are the differences between the original and updated formats).  Therefore, I updated the first part of my original search to read:        | eval tempTime=strptime(due_at,"%Y-%m-%d %H:%M:%S.%3QZ")     (so I am not appending '-0000' anymore to "due_at", since the 'Z' is present in the format string) but this is NOT producing the correct local time in 'dueDateCompact' anymore (it produces "09-30-23" instead of "09-29-23").   Is there a logical explanation for this? 
I got the same error even when using the standard password and app-password, using TLS or SSL.
Hi Team, I am trying to create a custom alert action using the Splunk Add-on Builder. This alert action will have 2 inputs, for a REST URL and a token, and it also takes the payload from the output of an alert. Here is the code I am using, but the alert action is not working and there are no errors in the code either.

# encoding = utf-8
import os

def process_event(helper, *args, **kwargs):
    """
    # IMPORTANT
    # Do not remove the anchor macro:start and macro:end lines.
    # These lines are used to generate sample code. If they are
    # removed, the sample code will not be updated when configurations
    # are updated.

    [sample_code_macro:start]
    # The following example gets the alert action parameters and prints them to the log
    rest_url = helper.get_param("rest_url")
    helper.log_info("rest_url={}".format(rest_url))
    token = helper.get_param("token")
    helper.log_info("token={}".format(token))

    # The following example adds two sample events ("hello", "world")
    # and writes them to Splunk
    # NOTE: Call helper.writeevents() only once after all events
    # have been added
    helper.addevent("hello", sourcetype="sample_sourcetype")
    helper.addevent("world", sourcetype="sample_sourcetype")
    helper.writeevents(index="summary", host="localhost", source="localhost")

    # The following example gets the events that trigger the alert
    events = helper.get_events()
    for event in events:
        helper.log_info("event={}".format(event))

    # helper.settings is a dict that includes environment configuration
    # Example usage: helper.settings["server_uri"]
    helper.log_info("server_uri={}".format(helper.settings["server_uri"]))
    [sample_code_macro:end]
    """
    helper.log_info("Alert action test started.")
    helper.log_debug("debug message")
    os.system("echo end of action")
    # TODO: Implement your alert action logic here

import requests
import sys
import json
import logging
import logging.handlers

ERROR_CODE_VALIDATION_FAILED = 3  # exit code used when payload validation fails

def setup_logger(level):
    logger = logging.getLogger("maintenance_window_logger")
    logger.propagate = False  # Prevent the log messages from being duplicated in the python.log file
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(
        os.environ['SPLUNK_HOME'] + '/var/log/splunk/maintenance_window_alert.log',
        maxBytes=25000000, backupCount=5)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger

logger = setup_logger(logging.DEBUG)

def create_maintenance_window(title, entity_key, start, end):
    logger.debug("calling create_maintenance_window()")
    url = "https://xxxxx:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar"
    headers = {'Authorization': 'Bearer xxxxxxxxxxxx'}
    data = {"title": title, "start_time": start, "end_time": end,
            "objects": [{"object_type": "entity", "_key": entity_key}]}
    logger.debug(data)
    response = requests.post(url, headers=headers, json=data, verify=True)
    logger.debug(response)
    data = response.json()
    logger.debug(data)
    logger.debug("completing create_maintenance_window()")
    return data

def validate_payload(payload):
    if 'configuration' not in payload:
        logger.error("FATAL Invalid payload, missing 'configuration'")
        return False
    config = payload.get('configuration')
    # validate the required alert parameters
    for param in ('title', 'entity_key', 'start', 'end'):
        if not config.get(param):
            logger.error("FATAL Validation error: Parameter `{}` is missing or empty".format(param))
            return False
    return True

def main():
    logger.debug("calling main()")
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        logger.debug(sys.argv)
        payload = json.loads(sys.stdin.read())
        if not validate_payload(payload):
            sys.exit(ERROR_CODE_VALIDATION_FAILED)
        logger.info(payload)
        config = payload.get('configuration')
        title = config.get('title')
        entity_key = config.get('entity_key')
        start = config.get('start')
        end = config.get('end')
        logger.debug(title)
        logger.debug(start)
        logger.debug(end)
        logger.debug(entity_key)
        data = create_maintenance_window(title, entity_key, start, end)
    logger.debug("completing main()")

if __name__ == "__main__":
    main()
Hi @smanojkumar, I think you're almost there! The last step is to prevent url encoding of the token. In my example I put the link directly in an <html> block so this step wasn't necessary. But if you're using the drilldown option from a visualisation, you'll need to make sure Splunk doesn't encode the URL for you. To do that you can use this format: $office_filter_drilldown|n$ Change your drilldown section to resemble something like this:   <drilldown> <link target="_blank">antivirus_details?form.compliance_filter=$click.value$&amp;$office_filter_drilldown|n$&amp;form.machine=$machine$&amp;form.origin=$origin$&amp;form.country=$country$&amp;form.cacp=$cacp$&amp;form.scope=$scope$</link> </drilldown>   That way Splunk won't try to encode any characters in the token, giving you the correct URL.

There are a few different filters available for tokens:

$token_name|s$ (wrap value in quotes): Ensures that quotation marks surround the value referenced by the token. Escapes all quotation characters, ", within the quoted value.
$token_name|h$ (HTML format): Ensures that the token value is valid for HTML formatting. Token values for the <HTML> element use this filter by default.
$token_name|u$ (URL format): Ensures that the token value is valid to use as a URL. Token values for the <link> element use this filter by default.
$token_name|n$ (no character escaping): Prevents the default token filter from running. No characters in the token are escaped.

See more info on Splunk docs. Give that a go and see how you get on. Cheers, Daniel
As I understand the documentation, ANDs are implied, so "eventtype=A eventtype=B" is the same as "eventtype=A AND eventtype=B". Also, if the tag names are the same, as in this case, it is interpreted as "eventtype=A OR eventtype=B". So my question is: is "eventtype" a tag name or not? If I do a search, the search results say eventtype is both A and B! I don't understand.
  We're trying to utilize the IT Essentials Work app for AWS monitoring. I have installed the AWS Content Pack. However, it looks like the majority of the app is paywalled with "xyz is a Premium Feature" banners. What is the purpose of the IT Essentials Work app? Please reply if you are successfully using this app to monitor AWS. Thanks, Farhan
I have a few thousand universal forwarders, managed by a deployment server, and we're sending all logs (internal and non-internal) to index cluster A. In addition, I would like to send all internal Splunk logs to index cluster B. What's the simplest app package I can deploy via the deployment server to send a 2nd set of all internal logs from universal forwarders to index cluster B?
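A minimal sketch of one way to do this (server addresses and group names are hypothetical, and this is untested): deploy an app whose outputs.conf defines a second tcpout group, and whose inputs.conf overrides the forwarder's internal-log monitor stanza with _TCP_ROUTING so that only those events are cloned to both clusters:

```ini
# outputs.conf (in the deployed app)
[tcpout]
defaultGroup = clusterA

[tcpout:clusterA]
server = a1.example.com:9997, a2.example.com:9997

[tcpout:clusterB]
server = b1.example.com:9997, b2.example.com:9997

# inputs.conf -- route only the UF's own internal logs to both clusters
[monitor://$SPLUNK_HOME/var/log/splunk]
_TCP_ROUTING = clusterA, clusterB
```

All other inputs keep following defaultGroup (cluster A only); the _TCP_ROUTING override applies per input stanza.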
Was just going through the ‘Masa diagrams’ link: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774 If you look at the "Detail Diagram - Standalone Splunk", the queues are laid out like this (one example): (persistentQueue) + udp_queue --> parsingQueue --> aggQueue --> typingQueue --> indexQueue. So let's say we have a UDP input configured and some congestion occurs in typingQueue; the persistentQueue should still be able to hold the data until the congestion is cleared up. This should be able to prevent data loss, right? Sorry for this loaded, assumption-based question. I am trying to figure out what we can do to stop UDP input data from getting dropped due to the typingQueue being filled. (P.S. Adding an extra HF is not an option right now.) Thanks in advance!
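Broadly yes: the persistent queue buffers input data on disk in front of the parsing pipeline while a downstream queue (such as typingQueue) is blocked. A hedged inputs.conf sketch (port number and sizes are examples, not a recommendation):

```ini
# inputs.conf -- enable a disk-backed queue for a UDP input;
# data spills to disk when the in-memory queue fills
[udp://514]
queueSize = 10MB
persistentQueueSize = 5GB
```

One caveat: this protects against pipeline congestion inside Splunk, not against datagrams dropped at the OS socket buffer before Splunk reads them, so raising the OS receive buffer (and queueSize) may also be needed.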
I recently downloaded VT4Splunk and everything was working fine with our API key; then a few days later I received a warning to enter the API key. However, when I entered the key back in I received the following error message: “Unexpected error when Validating VirusTotal API Key: 'ta_virustotal_app_settings'”. We currently have Splunk Cloud 9.0.2303.201 and VT4Splunk 1.6.2. Any assistance you all can provide will be greatly appreciated!
I have multiple strings as below in various log files. The intention is to retrieve them in a table and apply a group by.

Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc

then the output shall be:

Condition  Count
XYZ        3
ABC        3
abc        4
bcd        2
123        3
456        1

I am almost there, retrieving the data column-wise, but not able to get it. Any inputs here would be helpful.
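One possible approach in SPL (a sketch, untested, assuming the conditions are comma-separated in _raw): extract the list, split it into a multivalue field, expand to one row per condition, then count:

```
| rex field=_raw "Satisfied Conditions: (?<conditions>.+)"
| makemv delim=", " conditions
| mvexpand conditions
| stats count by conditions
| sort - count
```

The mvexpand turns each extracted list into individual rows so that stats count by gives the per-condition totals shown above.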
@_JP on current setting part i am kind of good with below query  | rest splunk_server=local /services/authentication/users | fields title, roles | mvexpand roles | append [ | rest splunk_server=local /services/authorization/roles | fields title srchDiskQuota rtSrchJobsQuota srchJobsQuota cumulativeSrchJobsQuota cumulativeRTSrchJobsQuota | rename title as roles] | stats values(srchDiskQuota) as srchDiskQuota, values(rtSrchJobsQuota) as rtSrchJobsQuota, values(srchJobsQuota) as srchJobsQuota, values(cumulativeSrchJobsQuota) as cumulativeSrchJobsQuota, values(title) as userid, values(cumulativeRTSrchJobsQuota) AS cumulativeRTSrchJobsQuota by roles | mvexpand userid | stats values(srchDiskQuota) as srchDiskQuota, values(rtSrchJobsQuota) as rtSrchJobsQuota, values(srchJobsQuota) as srchJobsQuota, values(cumulativeSrchJobsQuota) as cumulativeSrchJobsQuota,values(cumulativeRTSrchJobsQuota) AS cumulativeRTSrchJobsQuota by userid roles just want to get details on current utilization by user/role & more of search concurrency settings (resource utilization etc)
The ODBC driver to enable PowerBI to connect with Splunk on SplunkBase is only the Mac OS version. Can the Windows version be made available?