All Topics

Hi, I need an SPL search to find the threshold for the respective domains.

index=ss group="Threat Intelligence" | stats values(attacker_score) as attacker_score by domain

e.g. admin.com 110 120 135 145 160 170 185 195 210 220 235 245 270 345 360 370 395 410 420 435 445 45 470 495 520 570 60 645 70 85 920 95

Thanks.
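
A minimal sketch of one common reading of "threshold", assuming it means a statistical cutoff derived from each domain's score distribution (both the 95th percentile and the three-sigma rule below are assumptions, not from the original post):

index=ss group="Threat Intelligence"
| stats perc95(attacker_score) as p95_threshold avg(attacker_score) as avg_score stdev(attacker_score) as stdev_score by domain
| eval sigma_threshold = round(avg_score + 3 * stdev_score, 2)

perc95 gives a per-domain 95th-percentile cutoff, and the eval adds a mean-plus-three-standard-deviations alternative; both operate numerically, unlike values(), whose output above is sorted lexicographically (hence 45 appearing after 435).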
Hi, I have created two different timecharts:

Timechart 1 (cable connection on/off):
index=cisco_asa dest_interface=outside | timechart span=10m dc(count) by count

Timechart 2 (logged-in users listed):
host=10.1.1.1 src_sg_info=* | timechart span=10m dc(src_sg_info) by src_sg_info

Individually the display is perfect, but it would be even better if we could combine them into one graph with common timestamps. I searched through the Splunk documentation and also tried different setups, without success. I hope someone can help me with this.
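
A minimal sketch of one way to merge them, assuming both base searches stay as written (multisearch requires each subsearch to be streaming, which plain search plus eval is):

| multisearch
    [ search index=cisco_asa dest_interface=outside | eval series="cable_connection" ]
    [ search host=10.1.1.1 src_sg_info=* | eval series="login_".src_sg_info ]
| timechart span=10m count by series

Tagging each event with a series field before a single timechart puts both datasets on one shared time axis; swap count for the dc() aggregates from the original searches if distinct counts are what you actually need.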
Hello, I have one more beginner's question regarding reports and dashboards. I am trying to build an overview of the most used services, using this query:

index=notable | top limit=15 app

When I put this report into Dashboard Studio, both the count and the percentage appear in the chart. I would like to remove the percentages completely. Can you tell me how to do that, please? And one more option just coming to mind: if I wanted to use both count and percentages, is it possible to adapt the x axis so that the percentages use a separate scale, like 0-100 percent?
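
A minimal sketch: top accepts a showperc option, so the percent column can be dropped at search time before the chart ever sees it:

index=notable
| top limit=15 app showperc=false

With the percent field gone, Dashboard Studio only has count left to plot. For the dual-scale variant, keeping the percent field and assigning it to a second axis is a chart-configuration choice in Dashboard Studio rather than an SPL change.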
Good afternoon, I have a Salesforce org in which I need an audit trail to monitor changes in standard and custom object fields, a setup audit trail to also monitor changes to the setup of the org, and event monitoring so I can see who views and does what. I have this need in order to respect compliance and auditing rules, and I need this information to be accessible 24/7 in a database, with the data retrievable for at least 10 years. I would also like an export capability that would allow me to set up an export to an external database. Does Splunk have such capabilities in its add-ons, and if so, which one? Thank you all, Paulo
Hello, currently my search looks for the list of containers whose logs include an "initialised successfully" message and lists them. The alert I have set looks at the number of containers under the Total Connections column; if it is less than 28, then some of them did not connect successfully.

How can I pass the list of expected containers into my search, compare it with the result produced, and state whether the result is missing a container? Sorry, I cannot provide the exact search on a public forum, so I am sharing something similar.

Example (my example just shows 4 containers). The search should return container A, container B, container C, container D.

Current search:

index=* Initialised xxxxxxxxxxxx xxxxxx | rex "\{consumerName\=\'(MY REGEX)" | chart count AS Connections by name | addcoltotals labelfield="Total Connections"

The current result is:

Container_Name    Count    Total_Connections
Container A       1
Container B       1
Container C       1
Container D       1
                           4

How can I tweak the above search to include containers A, B, C, and D, so that if container D is missing from the result, the search compares the result with the values passed in and states which container is missing as the last line of the table, i.e. preserve the existing result but also state which container is missing? Regards
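
A minimal sketch of one approach, shown below with the example above (the rex capture group name and the container list are assumptions): append a zero-count row for every expected container, then flag anything that never reported in:

index=* Initialised xxxxxxxxxxxx xxxxxx
| rex "\{consumerName\='(?<name>[^']+)"
| stats count as Connections by name
| append
    [| makeresults
     | eval name=split("Container A,Container B,Container C,Container D", ",")
     | mvexpand name
     | eval Connections=0
     | fields name Connections ]
| stats max(Connections) as Connections by name
| eval status=if(Connections=0, "MISSING", "OK")
| addcoltotals Connections labelfield="status" label="Total Connections"

The appended subsearch guarantees every expected container has a row; stats max collapses the duplicates, so containers that did report keep their real count while absentees stay at 0 with status MISSING.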
Hello everyone, I'm currently setting up a lot of alerts in Splunk, and a question has arisen regarding which is better in terms of time. We've considered two scenarios:

latest=-5m@m
latest=now

If I run the alert every 5 minutes, I naturally want to avoid having "gaps" in my search. I think both options will work, but I am not 100% sure. I appreciate any insights on this. Thank you.
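
A minimal sketch in savedsearches.conf terms (the key names are real, the values illustrative), snapping both edges so consecutive runs tile the timeline without gaps or overlap:

# runs every five minutes
cron_schedule = */5 * * * *
# search the window that ended five minutes ago, allowing for indexing lag
dispatch.earliest_time = -10m@m
dispatch.latest_time = -5m@m

With latest=now the upper edge is unsnapped, so each run's window depends on the exact second the search starts; snapped @m boundaries make the windows deterministic and contiguous, and the five-minute offset gives late-arriving events time to be indexed before their window is searched.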
I want the output in the below format.

Input:

host    sql instance    db name
abc     sql1            db1
abc     sql1            db2
abc     sql2            db123
abc     sql2            db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
xyz     xyzsql2         db123
xyz     xyzsql2         db1234
www     wwwsql1         db123
www     wwwsql1         db1234

Output:

host    sql instance    db name
abc     sql1            db1
                        db2
abc     sql2            db123
                        db1234
xyz     xyzsql1         db11
xyz     xyzsql2         db321
                        db123
                        db1234
www     wwwsql1         db123
                        db1234
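
A minimal sketch, assuming the field names are host, sql_instance, and db_name: values() collects every database under its host/instance pair as a multivalue cell, which renders exactly like the stacked column above:

<your base search>
| stats values(db_name) as db_name by host, sql_instance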
Hi team, in my project, Zone 1 has a deployment server, a HF, and a combined SH+indexer; Zone 2 likewise has a deployment server, a HF, and a SH+indexer, and there is no cluster master. My requirement is to set up a high-availability server configuration across Zone 1 and Zone 2. My plan is to go to the Zone 2 SH+indexer server, then Settings -> Indexer clustering, and set the master node to the Zone 1 deployment server, because I don't have a cluster master in my project. Please guide me on this requirement. Vijreddy
Hello all, we are setting up a Splunk Enterprise multisite cluster, and I was wondering: can we have two license masters, one in each datacenter, using the same license file? The goal is to have the license server accessible all the time, so that even if one license master is down the other one still works. Is it possible to do this? Please advise. Thanks, Dhana
Hello, your assistance is very much appreciated. I am performing a subsearch and need the data to be recognized as a single set (statistic). For some reason the quotes are messing with the output of the subsearch. I.e.:

<base search> | dedup src_ip | rename src_ip as search | format

results in the following:

( ( "8.8.8.8" ) OR ( "1.1.1.1" ) OR ( "4.4.4.4" ) )

and I need the results to be:

( ( 8.8.8.8 ) OR ( 1.1.1.1 ) OR ( 4.4.4.4 ) )

Thank you in advance.
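
A minimal sketch: format writes its result into a field literally named search, so a final eval can strip the quote characters before the subsearch returns:

<base search>
| dedup src_ip
| rename src_ip as search
| format
| eval search=replace(search, "\"", "")

The replace rewrites the generated query string in place, leaving the parentheses and ORs intact but unquoted.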
index=summary report=group_ip

How do I delete report data from the summary index? For example: the report is generated and placed in the summary index hourly. How do I delete the report data of hour 1 and hour 2 from the summary index? Thank you for your help.

Hour 1: empty, because the query was incorrect, resulting in empty data; I corrected the query.

Hour 2: the query was partially correct, resulting in partial data; I corrected the query.

company     ip
companyA
companyB
companyC

Hour 3: the query is correct.

company     ip
companyA    1.1.1.1
companyB    1.1.1.2
companyC    1.1.1.3
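
A minimal sketch, with hypothetical timestamps for the two bad hours: scope the search tightly to the summary events you want gone, verify the result set, then pipe to delete (this needs the can_delete capability, and delete only masks events from search, it does not reclaim disk space):

index=summary report=group_ip earliest="10/01/2023:01:00:00" latest="10/01/2023:03:00:00"
| delete

After that, the corrected report can be re-run over the same window (for example with the fill_summary_index.py backfill script, or a manual | collect) to repopulate hours 1 and 2.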
Hello Splunkers, I can use stats count and visualize the output as a single value, so it's nice and big in its panel in my dashboard. Is there a way to visualize the output from stats sum() in a similar way, or just make the single value in a field big and prominent in the dashboard?

| fields total | fieldformat "total" = round(total, 2) | appendpipe [ stats sum(total) as Grand_Total ] | fields Grand_Total

Output:

Grand_Total
103
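
A minimal sketch: the single value visualization shows the first field of the first row, so collapsing the result to exactly one row with a plain stats (instead of appendpipe, which keeps the original rows too) lets sum render just like count does:

| fields total
| stats sum(total) as Grand_Total
| eval Grand_Total=round(Grand_Total, 2)

With one row and one field, choosing the Single Value visualization for the panel gives the sum the same big-number treatment; note the rounding moves to eval, since fieldformat applied before stats would not survive the aggregation.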
Hello there! I'm trying to use <change> and <eval> inside my time input to create a token that takes $time.earliest$ and converts it to a Unix timestamp; however, my <eval> is not working as I expect. When I use $start_time$ in my dashboard panels, I get a literal copy/paste of the "relative_time(now() ..." statement (i.e., it's not actually evaluating). I've seen multiple examples in the Splunk documentation, and it seems like <eval> is supposed to evaluate the function you're trying to use. Help me, Splunk Community. You're my only hope.
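
A minimal sketch of the pattern that usually works in Simple XML (the isnum() guard is an assumption, there to handle both epoch values and relative-time strings): the <eval> must live inside an event handler such as <change>, and the $time.earliest$ references must be quoted so a value like -24h@h is treated as a string rather than raw expression text, which is the literal copy/paste symptom described above:

<input type="time" token="time">
  <label>Time Range</label>
  <change>
    <eval token="start_time">if(isnum("$time.earliest$"), "$time.earliest$", relative_time(now(), "$time.earliest$"))</eval>
  </change>
</input>

If the expression is placed in a <set> element instead, the text is substituted verbatim and never evaluated.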
Hi, I have an existing search as follows:

| eval tempTime=strptime(due_at."-0000","%Y-%m-%d %H:%M:%S.%3N%z")
| eval dueDateCompact = strftime(tempTime, "%m-%d-%y")

which I have used to successfully convert a string field ('due_at') representing a UTC value (although formatted without the time-zone designation at the end) to an abbreviated notation (month-day-year) displayed in local time. So, for example, if "due_at" has a value of "2023-09-30 04:59:59.000", then the resulting "dueDateCompact" field ends up with "09-29-23", correctly representing "due_at" but in Chicago local time (5 hours behind UTC). However, my current requirements are such that "due_at" comes formatted as "2023-09-30T04:59:59.000Z" (proper ISO 8601) instead of the original "2023-09-30 04:59:59.000" (note: only the intermediate T and ending Z differ between the original and updated formats). Therefore, I updated the first part of my original search to read:

| eval tempTime=strptime(due_at,"%Y-%m-%d %H:%M:%S.%3QZ")

(so I am not appending '-0000' to "due_at" anymore, since the 'Z' is present in the format string), but this is NOT producing the correct local time in 'dueDateCompact' anymore (it produces "09-30-23" instead of "09-29-23"). Is there a logical explanation for this?
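
A minimal sketch of the likely explanation and a fix: a literal Z in a strptime format only matches the character, it does not tell the parser the time is UTC, so the value gets interpreted as local time. Restoring an explicit numeric offset and parsing it with %z (and matching the T) brings back the original behavior:

| eval tempTime=strptime(replace(due_at, "Z$", "+0000"), "%Y-%m-%dT%H:%M:%S.%3N%z")
| eval dueDateCompact=strftime(tempTime, "%m-%d-%y")

The replace() swaps the trailing Z for +0000, which %z then consumes as the UTC offset, exactly as the old ."-0000" concatenation did.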
Hi Team, I am trying to create a custom alert action using the Splunk Add-on Builder. This alert action will have two inputs, for a REST URL and a token, and will also take its payload from the output of an alert. Here is the code I am using, but the alert action is not working and there are no errors in the code either.

# encoding = utf-8
import os

def process_event(helper, *args, **kwargs):
    """
    # IMPORTANT
    # Do not remove the anchor macro:start and macro:end lines.
    # These lines are used to generate sample code. If they are
    # removed, the sample code will not be updated when configurations
    # are updated.
    [sample_code_macro:start]

    # The following example gets the alert action parameters and prints them to the log
    rest_url = helper.get_param("rest_url")
    helper.log_info("rest_url={}".format(rest_url))
    token = helper.get_param("token")
    helper.log_info("token={}".format(token))

    # The following example adds two sample events ("hello", "world")
    # and writes them to Splunk
    # NOTE: Call helper.writeevents() only once after all events
    # have been added
    helper.addevent("hello", sourcetype="sample_sourcetype")
    helper.addevent("world", sourcetype="sample_sourcetype")
    helper.writeevents(index="summary", host="localhost", source="localhost")

    # The following example gets the events that trigger the alert
    events = helper.get_events()
    for event in events:
        helper.log_info("event={}".format(event))

    # helper.settings is a dict that includes environment configuration
    # Example usage: helper.settings["server_uri"]
    helper.log_info("server_uri={}".format(helper.settings["server_uri"]))
    [sample_code_macro:end]
    """
    helper.log_info("Alert action test started.")
    helper.log_debug("debug message")
    os.system("echo end of action")
    # TODO: Implement your alert action logic here

import requests
import sys, os
import json
import logging
import logging.handlers

ERROR_CODE_VALIDATION_FAILED = 3  # was referenced but never defined in the original

def setup_logger(level):
    logger = logging.getLogger("maintenance_window_logger")
    logger.propagate = False  # prevent the log messages from being duplicated in the python.log file
    logger.setLevel(level)
    file_handler = logging.handlers.RotatingFileHandler(
        os.environ['SPLUNK_HOME'] + '/var/log/splunk/maintenance_window_alert.log',
        maxBytes=25000000, backupCount=5)
    formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s')
    file_handler.setFormatter(formatter)
    logger.addHandler(file_handler)
    return logger

logger = setup_logger(logging.DEBUG)

def create_maintenance_window(title, entity_key, start, end):
    logger.debug("calling create_maintenance_window()")
    url = "https://xxxxx:8089/servicesNS/nobody/SA-ITOA/maintenance_services_interface/maintenance_calendar"
    headers = {'Authorization': 'Bearer xxxxxxxxxxxx'}
    data = {"title": title, "start_time": start, "end_time": end,
            "objects": [{"object_type": "entity", "_key": entity_key}]}
    logger.debug(data)
    response = requests.post(url, headers=headers, json=data, verify=True)
    logger.debug(response)
    data = response.json()
    logger.debug(data)
    logger.debug("completing create_maintenance_window()")
    return data

def validate_payload(payload):
    # the original called an undefined log(); logger.error is used instead
    if 'configuration' not in payload:
        logger.error("FATAL Invalid payload, missing 'configuration'")
        return False
    config = payload.get('configuration')
    for param in ('title', 'entity_key', 'start', 'end'):
        if not config.get(param):
            logger.error("FATAL Validation error: Parameter `%s` is missing or empty" % param)
            return False
    return True

def main():
    logger.debug("calling main()")
    if len(sys.argv) > 1 and sys.argv[1] == "--execute":
        logger.debug(sys.argv)
        payload = json.loads(sys.stdin.read())
        if not validate_payload(payload):
            sys.exit(ERROR_CODE_VALIDATION_FAILED)
        logger.info(payload)
        config = payload.get('configuration')
        title = config.get('title')
        entity_key = config.get('entity_key')
        start = config.get('start')
        end = config.get('end')
        logger.debug(title)
        logger.debug(start)
        logger.debug(end)
        logger.debug(entity_key)
        create_maintenance_window(title, entity_key, start, end)
    logger.debug("completing main()")

if __name__ == "__main__":
    main()
As I understand the documentation, ANDs are implied, so "eventtype=A eventtype=B" is the same as "eventtype=A AND eventtype=B". Also, if the tag names are the same, as in this case, it is interpreted as "eventtype=A OR eventtype=B". So my question is: is "eventtype" a tag name or not? If I do a search, the search results say eventtype is both A and B! I don't understand.
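
A minimal sketch that makes one plausible explanation visible (index and eventtype names are placeholders): eventtype is a multivalue field, so a single event can carry both values at once, which is why the ANDed form can still match:

index=* eventtype=A eventtype=B
| eval types=mvjoin(eventtype, ", ")
| table _time types

Each matching event shows both A and B in types; the OR-like behavior comes from multivalue field matching rather than from eventtype being a tag.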
  We're trying to utilize the IT Essentials Work app for AWS monitoring. I have installed the AWS Content Pack. However, it looks like the majority of the app is paywalled with "xyz is a Premium Feature" banners. What is the purpose of the IT Essentials Work app? Please reply if you are successfully using this app to monitor AWS. Thanks, Farhan
I have a few thousand universal forwarders, managed by a deployment server, and we're sending all logs (internal and non-internal) to index cluster A. In addition, I would like to send all internal Splunk logs to index cluster B. What's the simplest app package I can deploy via the deployment server to send a 2nd set of all internal logs from universal forwarders to index cluster B?
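
A minimal sketch of such a deployment app (the app name, hostnames, and ports are all assumptions; if the UFs already receive a clusterA outputs app, only the clusterB group and the inputs override are new):

# send_internal_to_clusterB/local/outputs.conf
[tcpout]
defaultGroup = clusterA

[tcpout:clusterA]
server = idxA1.example.com:9997, idxA2.example.com:9997

[tcpout:clusterB]
server = idxB1.example.com:9997, idxB2.example.com:9997

# send_internal_to_clusterB/local/inputs.conf
# The UF's own logs come from this monitor stanza; overriding its
# _TCP_ROUTING clones just those events to both target groups.
[monitor://$SPLUNK_HOME/var/log/splunk]
_TCP_ROUTING = clusterA, clusterB

Non-internal inputs keep the defaultGroup and go only to cluster A, while the internal logs are sent to both clusters.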
I was just going through the 'Masa diagrams' link: https://community.splunk.com/t5/Getting-Data-In/Diagrams-of-how-indexing-works-in-the-Splunk-platform-the-Masa/m-p/590774

If you look at the "Detail Diagram - Standalone Splunk", the queues are laid out like this (one example):

(persistentQueue) + udp_queue --> parsingQueue --> aggQueue --> typingQueue --> indexQueue

So let's say we have a UDP input configured and some congestion occurs in the typingQueue; the persistentQueue should still be able to hold the data until the congestion clears up, which should prevent data loss, right? Sorry for this loaded, assumption-based question. I am trying to figure out what we can do to stop UDP input data from getting dropped due to the typingQueue filling up. (P.S. adding an extra HF is not an option right now.) Thanks in advance!
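
A minimal sketch of the relevant input settings (the port and sizes are assumptions): a persistent queue is enabled per input and spills to disk when the in-memory input queue fills, so it buffers through downstream congestion, but only up to its configured size, after which UDP packets are dropped again:

# inputs.conf on the instance that receives the UDP feed
[udp://514]
queueSize = 10MB
persistentQueueSize = 500MB

So yes, within that disk budget the assumption holds: data survives a temporarily blocked typingQueue and drains once the congestion clears.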