Hello, currently my search looks for the list of containers whose logs include an "initialised successfully" message and lists them. The alert I have set looks at the number of containers under the Total Connections column; if it is less than 28, then some of them did not connect successfully.

How can I pass the list of expected containers into my search, compare it with the result produced, and state if the result is missing a container, please?

Sorry, I cannot provide the exact search on a public forum, so I am sharing something similar.

Example (my example just shows 4 containers): the search should return container A, container B, container C, container D.

Current search:

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| addcoltotals labelfield="Total Connections"

The current result is:

Container_Name | Count | Total_Connections
Container A    |   1   |
Container B    |   1   |
Container C    |   1   |
Container D    |   1   |
               |       | 4

How can I tweak the above search so that, if container D is missing from the result, the search compares the result with the values passed in and states which container is missing as the last line of the table above, i.e. preserves the existing result but also states which container is missing?

Regards
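One hedged way to sketch this (the inline container list and the field name name are assumptions; swap in a lookup if the real list is long): append the expected names with zero counts and let stats merge them, so missing containers surface with Connections=0:

```
index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\=\'(MY REGEX)"
| chart count AS Connections by name
| append
    [| makeresults
     | eval name=split("Container A,Container B,Container C,Container D", ",")
     | mvexpand name
     | eval Connections=0
     | table name Connections]
| stats sum(Connections) as Connections by name
| eval status=if(Connections=0, "MISSING", "connected")
```

The addcoltotals from the original search can be re-appended after the eval if the grand total row is still wanted.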
Hello everyone, I'm currently setting up a lot of alarms in Splunk, and a question has arisen regarding what is better in terms of time. We've considered two scenarios: latest=-5m@m and latest=now. If I run the alarm every 5 minutes, I naturally want to avoid having "gaps" in my search. I think both options will work, but I am not 100% sure. I appreciate any insights on this. Thank you.
I want the output in the below format.

Input as below:

host | sql instance | db name
abc  | sql1         | db1
abc  | sql1         | db2
abc  | sql2         | db123
abc  | sql2         | db1234
xyz  | xyzsql1      | db11
xyz  | xyzsql2      | db321
xyz  | xyzsql2      | db123
xyz  | xyzsql2      | db1234
www  | wwwsql1      | db123
www  | wwwsql1      | db1234

Output as below:

host | sql instance | db name
abc  | sql1         | db1
     |              | db2
abc  | sql2         | db123
     |              | db1234
xyz  | xyzsql1      | db11
xyz  | xyzsql2      | db321
     |              | db123
     |              | db1234
www  | wwwsql1      | db123
     |              | db1234
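A minimal sketch of the usual way to collapse rows like this (the field names host, sql_instance, and db_name are assumptions about how the input is extracted): stats with values() yields one row per host/instance with the databases stacked as a multivalue cell, which renders very close to the desired output:

```
... | stats values(db_name) as db_name by host, sql_instance
```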
Hi team,

In my project, Zone 1 has a deployment server, an HF, and a combined SH+indexer. Zone 2 also has a deployment server, an HF, and a SH+indexer, and we don't have a cluster master.

My requirement is to set up a high-availability server configuration across Zone 1 and Zone 2. My plan is to go to the Zone 2 SH+indexer server, then Settings, then Indexer clustering, and there set the master node to the deployment server of Zone 1, because I don't have a cluster master in my project. Please guide me on my requirement.

Vijreddy
Hello All, we are setting up a Splunk Enterprise multisite cluster, and I was wondering: can we have 2 license masters, 1 in each datacenter, using the same license file? The goal is to have the license server accessible all the time; even if one license master is down, the other one should work. Is it possible to do this? Please advise. Thanks, Dhana
Hello, your assistance is very much appreciated. I am performing a subsearch and need the data to be recognized as a single set (statistic). For some reason the quotes are messing with the output of the subsearch. I.e.:

<search> | dedup src_ip | rename src_ip as search | format

...results in the following: ( ( "8.8.8.8" ) OR ( "1.1.1.1" ) OR ( "4.4.4.4" ) ), and I need the results to be: ( ( 8.8.8.8 ) OR ( 1.1.1.1 ) OR ( 4.4.4.4 ) ). Thank you in advance.
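A minimal sketch of one way around the quoting, assuming the quotes genuinely have to go: build the string by hand with mvjoin instead of format. A subsearch that returns a field literally named search has its value substituted into the outer search verbatim:

```
| dedup src_ip
| stats values(src_ip) as search
| eval search="( ".mvjoin(search," OR ")." )"
```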
index=summary report=group_ip

How do I delete report data from a summary index? For example: the report is generated and placed in the summary index hourly. How do I delete the report data of Hour 1 and Hour 2 from the summary index? Thank you for your help.

Hour 1: empty, because the query was incorrect, resulting in empty data. I corrected the query.

Hour 2: the query was partially correct, resulting in partial data. I corrected the query.

company  | ip
companyA |
companyB |
companyC |

Hour 3: the query is correct.

company  | ip
companyA | 1.1.1.1
companyB | 1.1.1.2
companyC | 1.1.1.3
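For reference, the usual mechanism (a sketch, assuming a role with the can_delete capability; the time bounds are placeholders for Hour 1 and Hour 2) is to isolate the bad events by time range and pipe them to delete:

```
index=summary report=group_ip earliest="-3h@h" latest="-1h@h"
| delete
```

Note that delete only marks events as unsearchable; it does not free disk space, and the corrected report results still need to be re-generated for those hours.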
Hello Splunkers, I can use stats count and visualize the output as a single value, so it's nice and big in that panel in my dashboard. Is there a way to visualize the output from stats sum() in a similar way, or just make the single value in a field big and prominent in the dashboard?

| fields total | fieldformat "total" = round(total, 2) | appendpipe [ stats sum(total) as Grand_Total ] | fields Grand_Total

output:

Grand_Total
103
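For what it's worth, a single-value visualization only needs the search to end in one row and one column, so a sketch like this (reusing the post's total field) can back the same big-number panel that stats count does:

```
... | stats sum(total) as Grand_Total
| eval Grand_Total = round(Grand_Total, 2)
```

In the dashboard panel, pick the Single Value visualization; its format menu handles captions, units, and number formatting.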
Hello there! I'm trying to use <change> and <eval> inside of my time input to create a token that takes in $time.earliest$ and converts it to a unix timestamp, however, my <eval> is not working how I expect. When I use $start_time$ in my dashboard panels, I get a literal copy/paste of the "relative_time(now() ..." statement (i.e., it's not actually evaluating).  I've seen multiple examples in Splunk documentation and it seems like <eval> is supposed to evaluate the function you're trying to use. Help me, Splunk Community.  You're my only hope.
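For what it's worth, the pattern that appears in community examples puts the <eval> inside the time input's <change> block and references tokens inside the eval expression in single quotes rather than with $...$ delimiters (the token names time and start_time mirror the post; the surrounding input is a sketch):

```xml
<input type="time" token="time" searchWhenChanged="true">
  <label>Time Range</label>
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
  <change>
    <!-- If earliest is already an epoch number, keep it; otherwise resolve
         the relative specifier (e.g. "-24h@h") against now() -->
    <eval token="start_time">if(match('time.earliest', "^\d+(\.\d+)?$"), 'time.earliest', relative_time(now(), 'time.earliest'))</eval>
  </change>
</input>
```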
Hi, I have an existing search as follows:

| eval tempTime=strptime(due_at."-0000","%Y-%m-%d %H:%M:%S.%3N%z")
| eval dueDateCompact = strftime(tempTime, "%m-%d-%y")

which I have used to successfully convert a string field ('due_at') representing a UTC value (although formatted without the time-zone designation at the end) to an abbreviated notation (month-day-year) displayed in local time. So, for example, if "due_at" has a value of "2023-09-30 04:59:59.000", then the resulting "dueDateCompact" field ends up with "09-29-23", correctly representing "due_at" but in Chicago local time (5 hours behind UTC). However, my current requirements are such that "due_at" comes formatted as "2023-09-30T04:59:59.000Z" (proper ISO 8601) instead of the original "2023-09-30 04:59:59.000" (note: only the intermediate T and ending Z are the differences between the original and updated formats). Therefore, I updated the first part of my original search to read:

| eval tempTime=strptime(due_at,"%Y-%m-%d %H:%M:%S.%3QZ")

(so I am not appending '-0000' to "due_at" anymore, since the 'Z' is present in the format string), but this is NOT producing the correct local time in 'dueDateCompact' anymore (it produces "09-30-23" instead of "09-29-23"). Is there a logical explanation for this?
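For intuition, here is a Python sketch (not Splunk SPL, though Splunk's strptime behaves similarly on this point): a literal 'Z' in the format string merely matches the character, it does not parse a UTC offset, so the timestamp comes back with no timezone attached and is then interpreted as local time; parsing the 'Z' with %z keeps it anchored to UTC.

```python
from datetime import datetime

iso = "2023-09-30T04:59:59.000Z"

# %z (Python 3.7+) parses the trailing "Z" as a +00:00 offset -> timezone-aware
aware = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%f%z")
print(aware.utcoffset())  # 0:00:00 -> still anchored to UTC

# A literal "Z" in the format just matches the character -> naive datetime,
# no offset, which a consumer then interprets in the local timezone
naive = datetime.strptime(iso, "%Y-%m-%dT%H:%M:%S.%fZ")
print(naive.tzinfo)  # None
```

That is consistent with the symptom in the post: dropping the parsed offset makes the value land on 09-30 instead of shifting back to 09-29 Chicago time.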
Hi Team, I am trying to create a custom alert action using Splunk Add-on Builder. This alert action will have 2 inputs, for a REST URL and a token, and it also takes its payload from the output of an alert. Here is the code I am using, but the alert action is not working and there are no errors in the code either.

# encoding = utf-8
import os

def process_event(helper, *args, **kwargs):
    """
    # IMPORTANT
    # Do not remove the anchor macro:start and macro:end lines.
    # These lines are used to generate sample code. If they are
    # removed, the sample code will not be updated when configurations
    # are updated.
    [sample_code_macro:start]
    # The following example gets the alert action parameters and prints them to the log
    rest_url = helper.get_param("rest_url")
    helper.log_info("rest_url={}".format(rest_url))
    token = helper.get_param("token")
    helper.log_info("token={}".format(token))

    # The following example adds two sample events ("hello", "world")
    # and writes them to Splunk
    # NOTE: Call helper.writeevents() only once after all events
    # have been added
    helper.addevent("hello", sourcetype="sample_sourcetype")
    helper.addevent("world", sourcetype="sample_sourcetype")
    helper.writeevents(index="summary", host="localhost", source="localhost")

    # The following example gets the events that trigger the alert
    events = helper.get_events()
    for event in events:
        helper.log_info("event={}".format(event))

    # helper.settings is a dict that includes environment configuration
    # Example usage: helper.settings["server_uri"]
    helper.log_info("server_uri={}".format(helper.settings["server_uri"]))
    [sample_code_macro:end]
    """
    helper.log_info("Alert action test started.")
    helper.log_debug("debug message")
    os.system("echo end of action")
    # TODO: Implement your alert action logic here

import requests
import sys, os
import json
import logging
import logging.handlers

def setup_logger(level):
    logger = logging.getLogger("maintenance_window_logger")
    logger.propagate = False  # Prevent the log messages from being duplicated in the python.log file
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(
        os.environ['SPLUNK_HOME'] + '/var/log/splunk/maintenance_window_alert.log',
        maxBytes=25000000, backupCount=5)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger

logger = setup_logger(logging.DEBUG)

def create_maintenance_window(title, entity_key, start, end):
    logger.debug("calling create_maintenance_window()")
    url = "https://xxxxx:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar"
    headers = {'Authorization': 'Bearer xxxxxxxxxxxx'}
    data = {"title": title, "start_time": start, "end_time": end,
            "objects": [{"object_type": "entity", "_key": entity_key}]}
    logger.debug(data)
    response = requests.post(url, headers=headers, json=data, verify=True)
    logger.debug(response)
    data = response.json()
    logger.debug(data)
    logger.debug("completing create_maintenance_window()")
    return data

def validate_payload(payload):
    if not 'configuration' in payload:
        log("FATAL Invalid payload, missing 'configuration'")
        return False
    config = payload.get('configuration')
    title = config.get('title')
    if not title:
        log("FATAL Validation error: Parameter `title` is missing or empty")
        return False
    entity_key = config.get('entity_key')
    if not entity_key:
        log("FATAL Validation error: Parameter `entity_key` is missing or empty")
        return False
    start = config.get('start')
    if not start:
        log("FATAL Validation error: Parameter `start` is missing or empty")
        return False
    end = config.get('end')
    if not end:
        log("FATAL Validation error: Parameter `end` is missing or empty")
        return False
    return True

def main():
    logger.debug("calling main()")
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        logger.debug(sys.argv)
        payload = json.loads(sys.stdin.read())
        if not validate_payload(payload):
            sys.exit(ERROR_CODE_VALIDATION_FAILED)
        logger.info(payload)
        config = payload.get('configuration')
        title = config.get('title')
        entity_key = config.get('entity_key')
        start = config.get('start')
        end = config.get('end')
        logger.debug(title)
        logger.debug(start)
        logger.debug(end)
        logger.debug(entity_key)
        data = create_maintenance_window(title, entity_key, start, end)
    logger.debug("completing main()")

if __name__ == "__main__":
    main()  # return 0
As I understand the documentation, ANDs are implied, so "eventtype=A eventtype=B" is the same as "eventtype=A AND eventtype=B". Also, if the tag names are the same, as in this case, it is interpreted as "eventtype=A OR eventtype=B". So my question is: is "eventtype" a tag name or not? If I do a search, the search results say eventtype is both A and B! I don't understand.
  We're trying to utilize the IT Essentials Work app for AWS monitoring. I have installed the AWS Content Pack. However, it looks like the majority of the app is paywalled with "xyz is a Premium Feature" banners. What is the purpose of the IT Essentials Work app? Please reply if you are successfully using this app to monitor AWS. Thanks, Farhan
I have a few thousand universal forwarders, managed by a deployment server, and we're sending all logs (internal and non-internal) to index cluster A. In addition, I would like to send all internal Splunk logs to index cluster B. What's the simplest app package I can deploy via the deployment server to send a 2nd set of all internal logs from universal forwarders to index cluster B?
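One common pattern (a sketch, not a verified package; the group names, server names, and ports are placeholders) is a small deployment app whose outputs.conf defines a second tcpout group and whose inputs.conf routes only the forwarder's own internal logs to both groups via _TCP_ROUTING:

```
# outputs.conf
[tcpout]
defaultGroup = clusterA

[tcpout:clusterA]
server = idxA1:9997, idxA2:9997

[tcpout:clusterB]
server = idxB1:9997, idxB2:9997

# inputs.conf -- override the built-in internal-log monitor stanza
[monitor://$SPLUNK_HOME/var/log/splunk]
_TCP_ROUTING = clusterA, clusterB
```

Everything else keeps flowing to the default group (cluster A) only; the internal-log monitor is cloned to both.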
Was just going through the 'Masa diagrams' link: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

If you look at the "Detail Diagram - Standalone Splunk", the queues are laid out like this (one example):

(persistentQueue) + udp_queue --> parsingQueue --> aggQueue --> typingQueue --> indexQueue

So let's say we have a UDP input configured and some congestion occurs in the typingQueue; the persistentQueue should still be able to hold the data until the congestion is cleared up. This should be able to prevent data loss, right? Sorry for this loaded, assumption-based question. I am trying to figure out what we can do to stop UDP input data from getting dropped due to the typingQueue being filled. (P.S. Adding an extra HF is not an option right now.)

Thanks in advance!
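For reference, a persistent queue on a UDP input is enabled per input stanza; a minimal sketch (the port and size are placeholders):

```
# inputs.conf on the instance receiving the UDP feed
[udp://514]
persistentQueueSize = 100MB
```

With persistentQueueSize set, data that cannot move into the in-memory pipeline is spilled to disk instead of being dropped, up to the configured size.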
I recently downloaded VT4Splunk and everything was working fine with our API key; then a few days later I received a warning to enter the API key. However, when I entered the key back in, I received the following error message: "Unexpected error when Validating VirusTotal API Key: 'ta_virustotal_app_settings'". We currently have Splunk Cloud 9.0.2303.201 and VT4Splunk 1.6.2.

Any assistance you all can provide will be greatly appreciated!
I have multiple strings as below in various log files. The intention is to retrieve them in a table and apply a group by.

Satisfied Conditions: XYZ, ABC, 123, abc
Satisfied Conditions: XYZ, bcd, 123, abc
Satisfied Conditions: bcd, ABC, 123, abc
Satisfied Conditions: XYZ, ABC, 456, abc

Then the output shall be:

Condition | Count
XYZ       | 3
ABC       | 3
abc       | 4
bcd       | 2
123       | 3
456       | 1

I am almost there, up to retrieving the data column-wise, but not able to get it. Any inputs here would be helpful.
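A minimal sketch of one way to do this (the field name conds is an assumption): split the comma-separated list into a multivalue field, expand it to one row per condition, and count:

```
... | rex "Satisfied Conditions:\s*(?<conds>.+)"
| eval conds=split(conds, ", ")
| mvexpand conds
| stats count by conds
| sort -count
```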
The ODBC driver to enable PowerBI to connect with Splunk on SplunkBase is only the Mac OS version. Can the Windows version be made available?
I upgraded my Splunk Enterprise from 7.2.4 to 8.2.8; afterwards I upgraded my apps and add-ons as per compatibility. But some add-ons stopped working, and the SolarWinds add-on is one of them. I am getting the errors below:

10-26-2023 18:10:04.720 +0000 ERROR AdminManagerExternal [20948 TcpChannelThread] - Stack trace from python handler:\nTraceback (most recent call last):\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 117, in wrapper\n for name, data, acl in meth(self, *args, **kwargs):\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 179, in all\n **query\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 289, in wrapper\n return request_fun(self, *args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 71, in new_f\n val = f(*args, **kwargs)\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 679, in get\n response = self.http.get(path, all_headers, **query)\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1183, in get\n return self.request(url, { 'method': "GET", 'headers': headers })\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/solnlib/packages/splunklib/binding.py", line 1244, in request\n raise HTTPError(response)\nsolnlib.packages.splunklib.binding.HTTPError: HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 150, in init\n hand.execute(info)\n File 
"/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 636, in execute\n if self.requestedAction == ACTION_LIST: self.handleList(confInfo)\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunk_aoblib/rest_migration.py", line 39, in handleList\n AdminExternalHandler.handleList(self, confInfo)\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/admin_external.py", line 40, in wrapper\n for entity in result:\n File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/splunktaucclib/rest_handler/handler.py", line 122, in wrapper\n raise RestError(exc.status, str(exc))\nsplunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- {"messages":[{"type":"ERROR","text":"Not Found"}]}\n 10-20-2023 18:45:23.444 +0000 ERROR ModularInputs [15755 MainThread] - Unable to initialize modular input "solwarwinds_query" defined in the app "Splunk_TA_SolarWinds": Introspecting scheme=solwarwinds_query: script running failed (PID 15889 exited with code 1).   10-20-2023 18:45:23.443 +0000 ERROR ModularInputs [15755 MainThread] - <stderr> Introspecting scheme=solwarwinds_query: File "/opt/splunk/etc/apps/Splunk_TA_SolarWinds/bin/splunk_ta_solarwinds/aob_py3/cloudconnectlib/core/cacerts/ca_certs_locater.py", line 59, in _fallback