All Topics

I have set up an indexer cluster and joined search heads and peer nodes to the cluster master. I can see all the peers, indexes, and search heads from the cluster master web interface (Settings -> Indexer Clustering), but I am looking for a CLI command that will list all the search heads that have joined this cluster master. I have tried the following, but none of them show search head information.

$ splunk list cluster-generation
$ splunk list cluster-config
$ splunk show cluster-status
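One approach that may work, assuming you can reach the cluster master's management port (8089 below) and that your version exposes the cluster/master/searchheads REST endpoint: query that endpoint from the command line. The host name and admin:changeme credentials are placeholders.

    $ splunk _internal call /services/cluster/master/searchheads -auth admin:changeme

    $ curl -k -u admin:changeme "https://cm-host:8089/services/cluster/master/searchheads?output_mode=json"

Either form returns one entry per search head known to the cluster master; there does not appear to be a dedicated "splunk list" subcommand for this, so the REST endpoint is the usual route.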
Hi, I want to check for a string in a field, and if the string is not found in that field, I need to print the remaining data (last 15 minutes of data). For example:

Field1      Field2
9/2/10      successful
9/2/10      creating the file
9/2/10      created

From the above table, I want to check Field2 for the last 15 minutes for the string "successful". If no value of Field2 contains "successful", then I need to trigger an alert with the remaining data, like below:

Field1      Field2
9/2/10      creating the file
9/2/10      created

Is this possible in Splunk?
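A minimal sketch of one way to do this, assuming an index and sourcetype of your own and that Field1/Field2 are already extracted: count the events whose Field2 contains "successful" and only keep rows when that count is zero. The alert would then be set to trigger when the search returns results.

    index=your_index sourcetype=your_sourcetype earliest=-15m
    | eventstats count(eval(if(like(Field2, "%successful%"), 1, null()))) as success_count
    | where success_count=0
    | table Field1 Field2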
Hi, is there a way to group the applications that have been installed so that the menu is multi-level, similar to how dashboard navigation works with collections (in default.xml)? When I click on the list of applications, it shows all 100 applications that we have installed. I was hoping to be able to group them by category:

Support Apps -> Support App 1
                Support App 2
                Support App 3
                etc.
Manager Apps -> Manager App 1
                ...
DevOps Apps  -> DevOps App 1
                ...

There will still be 100 apps, but if we have 10 categories with 10 apps per category, then the main application menu will only show the 10 categories, and hovering over a category will expand it. It just means the menu will be less cluttered and more organised.

regards
-brett
Hi, we want to create a playbook for Splunk with Ansible. We are having an issue configuring the AWS add-on proxy settings from the CLI or Ansible. When you configure the proxy via the Web UI, it generates a passwords.conf file with the proxy configuration hashed. I tried to find a way to configure the proxy via CLI so that it creates the hashed passwords.conf and actually shows the config change in the Web UI, without success. Has anyone been able to configure the proxy via CLI/Ansible? I'm not sure if there is a way at all.

I tried to work around it and found the Python script that runs in the background when you configure the proxy via the Web UI, under /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py:

from __future__ import absolute_import
import aws_bootstrap_env
import re
import logging
import splunk.admin as admin
from splunktalib.rest_manager import util, error_ctl
from splunk_ta_aws.common.proxy_conf import ProxyManager

KEY_NAMESPACE = util.getBaseAppName()
KEY_OWNER = '-'
AWS_PROXY = 'aws_proxy'
POSSIBLE_KEYS = ('host', 'port', 'username', 'password', 'proxy_enabled')

class ProxyRestHandler(admin.MConfigHandler):
    def __init__(self, scriptMode, ctxInfo):
        admin.MConfigHandler.__init__(self, scriptMode, ctxInfo)
        if self.callerArgs.id and self.callerArgs.id != 'aws_proxy':
            error_ctl.RestHandlerError.ctl(1202, msgx='aws_proxy', logLevel=logging.INFO)

    def setup(self):
        if self.requestedAction in (admin.ACTION_CREATE, admin.ACTION_EDIT):
            for arg in POSSIBLE_KEYS:
                self.supportedArgs.addOptArg(arg)
        return

    def handleCreate(self, confInfo):
        try:
            args = self.validate(self.callerArgs.data)
            args_dict = {}
            for arg in POSSIBLE_KEYS:
                if arg in args:
                    args_dict[arg] = args[arg][0]
                else:
                    args_dict[arg] = ''
            proxy_str = '%s:%s@%s:%s' % (args_dict['username'], args_dict['password'],
                                         args_dict['host'], args_dict['port'])
            if 'proxy_enabled' in args:
                enable = True if args_dict['proxy_enabled'] == '1' else False
            else:
                proxy = self.get()
                enable = True if (proxy and proxy.get_enable()) else False
            self.update(proxy_str, enable)
        except Exception as exc:
            error_ctl.RestHandlerError.ctl(400, msgx=exc, logLevel=logging.INFO)

    def handleList(self, confInfo):
        try:
            proxy = self.get()
            if not proxy:
                confInfo[AWS_PROXY].append('proxy_enabled', '0')
                return
            m = re.match('^(?P<username>\S*):(?P<password>\S*)@(?P<host>\S+):(?P<port>\d+$)',
                         proxy.get_proxy())
            if not m:
                confInfo[AWS_PROXY].append('proxy_enabled', '0')
                return

However, I couldn't find the correct way to run the script and pass it the correct parameters. I created a details.txt file with the proxy config as:

['1.1.1.1', '1111', 'username', 'password', '1']

and ran the script:

/opt/splunk/bin/splunk cmd python3 /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py setup details.txt

error:

^CTraceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py", line 95, in <module>
    admin.init(ProxyRestHandler, admin.CONTEXT_NONE)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init
    hand = handler(mode, ctxInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py", line 20, in __init__
    admin.MConfigHandler.__init__(self, scriptMode, ctxInfo)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 475, in __init__
    dataFromSplunkd = sys.stdin.buffer.read()
KeyboardInterrupt

Can someone try to help? Thanks,
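For what it's worth, a hedged observation and sketch: scripts named *_rh.py under the add-on's bin directory are splunkd REST handlers (admin.init / MConfigHandler), which expect splunkd to feed them a request on stdin, so running one directly just blocks on sys.stdin.buffer.read() until you press Ctrl-C. The usual way to drive them non-interactively is through splunkd's REST API. The endpoint path below is a guess, so confirm it against the handler's entry in Splunk_TA_aws/default/restmap.conf; credentials and proxy values are placeholders.

    # endpoint path is hypothetical -- check restmap.conf in the add-on for the real one
    curl -k -u admin:changeme -X POST \
        "https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/<endpoint_from_restmap>/aws_proxy" \
        -d proxy_enabled=1 -d host=1.1.1.1 -d port=1111 \
        -d username=myuser -d password=mypass

If a call like this works from curl, the same request can be made from Ansible's uri module, and since it goes through the same handler as the Web UI, the resulting passwords.conf entry should match what the UI generates.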
Hi, if possible I would like to combine the two eval statements below so I can optimise it for my datamodel:

| eval uri=if(like('metric.uri_path', "/as/%/resume/as/authorization"), "resume/as/authorization.ping", uri)
| eval url_path=mvappend(metric.uri_path, uri)
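A minimal sketch of one way to collapse these into a single eval, assuming the intent is to append the rewritten value only when the pattern matches (mvappend ignores null arguments, so non-matching events keep just the original path):

    | eval url_path=mvappend('metric.uri_path', if(like('metric.uri_path', "/as/%/resume/as/authorization"), "resume/as/authorization.ping", null()))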
Hello, I have some issues writing the props.conf configuration for the sample data/events given below. I have given 4 events, and each event starts with CONNECT, but the word CONNECT has 2 or 4 "-" characters before it, and the first line has the timestamp. How would I write the following parameters in the props.conf configuration? Any help will be highly appreciated. Thank you so much.

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX =
BREAK_ONLY_BEFORE=
MAX_TIMESTAMP_LOOKAHEAD=20
TIME_FORMAT=%Y-%m-%d %H:%M

Sample Events:

----CONNECT-1007-036807981618-SYS-2021-09-18 09:39
----CHECKPOINT-0000-036807981629-2021-09-18 08:39:07.010344
--ROLLBACK-1007-036807981689DF
--ROLLBACK WORK
--CHECKPOINT-0000-036807981670-2021-09-18 09:39:37.056758
--COMMIT-1001-036807983530-2021-09-18 09:57:33.200259
--COMMIT WORK
--CHECKPOINT-0000-sa2036807983541-er2021-09-145 09:57:4462.998011
--CHECKPOINT-0000-qa4036807983512aa7-21aa021-09-18 09:58:17.469411
--CONNECT-1027-036807981700-dbo-2021-09-18 09:42
----ROLLBACK-1027-036807981723CD
--ROLLBACK WORK
---CONNECT-1029-036807981725-dbo-2021-09-18 09:42
----CHECKPOINT-0000-036807981736-2021-09-18 09:42:26.201026
--ROLLBACK-1029-0368079817AB
--ROLLBACK WORK
--CONNECT-1031-036807981780-dbo-2021-09-18 09:42
----COMMIT-1031-036807981791-2021-09-18 09:42:27.981158
--COMMIT WORK
--ROLLBACK-1031-036807981800
--ROLLBACK WORK
--COMMIT-1001-036807983530-2021-09-18 09:57:33.200259
--COMMIT WORK
--CHECKPOINT-0000-036807983541-2021-09-18 09:57:42.998011
--CHECKPOINT-0000-036807983577-2021-09-18 09:58:17.469411
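A minimal props.conf sketch, assuming the goal is to break a new event only where the leading dashes are followed by CONNECT, and to read the timestamp from the end of that first line. The sourcetype name is a placeholder, and note that BREAK_ONLY_BEFORE is ignored when SHOULD_LINEMERGE=false, so it can be left out. With LINE_BREAKER only the captured newlines are discarded, so the dashes and CONNECT remain part of the next event.

    [my_connect_log]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)-{2,4}CONNECT
    TIME_PREFIX = CONNECT-\d+-\d+-\w+-
    MAX_TIMESTAMP_LOOKAHEAD = 20
    TIME_FORMAT = %Y-%m-%d %H:%M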
Need your help please to set up / configure 2 apps: SplunkConf Backup and Gemini KV Store Tools. I have been searching for instructions for over 2 months now, but I cannot find any instructions to make these apps work; no luck. I appreciate your help in advance.
Hello there, I have spent a good amount of time researching lateral movement in Splunk, but unfortunately I have not found much. I have only seen answers suggesting to review the use cases in the Splunk Security Essentials app, but that use case is based on Sysmon logs, and I am only collecting the Security and Application logs using the agent. I also see very old responses that reference fields such as "user" where the field is now called "Account_Name". I would appreciate it if someone could give me suggestions to try to identify possible lateral movement. I found this:

index=main sourcetype=WinEventLog:Security (EventCode=4624 OR EventCode=4672) Logon_Type=3 NOT Account_Name="*$" NOT Account_Name="ANONYMOUS LOGON"
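As a rough starting point (not a complete detection), a hedged sketch extending that search: look for a single account making network logons to an unusually large number of hosts in a short window. The one-hour span and the threshold of 5 are arbitrary assumptions to tune for your environment.

    index=main sourcetype=WinEventLog:Security EventCode=4624 Logon_Type=3 NOT Account_Name="*$" NOT Account_Name="ANONYMOUS LOGON"
    | bucket _time span=1h
    | stats dc(host) as distinct_hosts values(host) as hosts by _time Account_Name
    | where distinct_hosts > 5

One caveat: Account_Name can be multivalued on 4624 events (subject plus target account), so you may want to extract the target account specifically before running stats.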
Hi, I have a uri_path that I want to combine into a single value and put the combined value back into the original field, and I have achieved that with the search below:

index=ping_sandbox uri_path=/as/*/resume/as/authorization
| eval uri=if(like(uri_path, "/as/%/resume/as/authorization"), "resume/as/authorization", uri)
| eval uri_path=mvappend(uri, url_path)

However, not every uri_path is /as/*/resume/as/authorization, and when I remove the uri_path search value, all the other uri_path values are gone. For example, if there are 3 values /1, /2, /3 and I run the above eval statements for /as/*/resume/as/authorization, I don't have /1, /2 or /3 anymore. Does anyone have any advice on how to apply the above eval statements while still retaining the rest of the field values? I only want the eval statements applied when /as/*/resume/as/authorization is present.
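A minimal sketch, assuming the goal is to normalise only the matching paths and leave every other uri_path value untouched (the index name is taken from the example above): rewrite the field in place, with the non-matching branch returning the original value.

    index=ping_sandbox
    | eval uri_path=if(like(uri_path, "/as/%/resume/as/authorization"), "resume/as/authorization", uri_path)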
We have a very small test environment, with a single-instance Splunk server (running on Linux) and a handful of Windows servers with UFs installed. I'm attempting to use Splunk Stream to monitor NIC traffic on the Windows UFs. Following the Splunk Stream docs precisely is confusing (and in many cases just wrong): https://docs.splunk.com/Documentation/StreamApp/7.4.0/DeployStreamApp/AboutSplunkStream

I'm at the point where I want to use the Splunk server's deployment server functionality to distribute Splunk_TA_stream to the Windows UFs, but I'm confused about how to properly configure the Splunk_TA_stream app before deploying it. (The docs say Splunk_TA_stream will be installed in SPLUNK_HOME/etc/deployment-apps preconfigured... this is certainly not true in my case.) I'm at a loss as to how to configure Splunk_TA_stream before deploying it (via deployment server) to the Windows UFs. Any insight is greatly appreciated. Thanks
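In case it helps, a hedged sketch of what Splunk_TA_stream/local/inputs.conf is commonly set to before deployment: the streamfwd input needs to point back at the instance running splunk_app_stream so the forwarders can fetch their capture configuration. The hostname and port below are placeholders for your single-instance server.

    [streamfwd://streamfwd]
    splunk_stream_app_location = https://your-splunk-server:8000/en-us/custom/splunk_app_stream/
    disabled = 0

After that, copying the whole Splunk_TA_stream directory into SPLUNK_HOME/etc/deployment-apps and mapping it to a server class for the Windows UFs is the usual deployment-server route.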
I am trying to speed up a search on Splunk. The search looks through millions of logs for matches to around 100 event types (each event type has multiple strings to match), so it has ended up being very slow. The original search I have is:

eventtype=fail_type* source="*console" host = $jenkins_server$
| timechart count by eventtype

which plots a timechart of the different types of fails in the console logs of Jenkins, which is what I want. I tried to speed up the job by getting it to only look through logs from failing jobs. I can get a table of failing console logs using the search below, but if I try to use those console paths for a new search by adding "| search source=console_path" it doesn't work:

event_tag="job_event" host = $jenkins_server$
| eval job_result=if(type="started", "INPROGRESS", job_result) `utc_to_local_time(job_started_at)`
| search (job_result=FAILURE OR job_result=UNSTABLE OR job_result=ABORTED)
| eval console_path= "*" + build_url + "console*"
| table console_path build_url job_result

Appreciate any help or suggestions for other ways to speed up the search.
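A hedged sketch of the subsearch approach, assuming the console path can be rebuilt from build_url: the trick is to name the evaluated field "source" so the outer search filters on it directly (the usual subsearch row and time limits still apply, so this only helps if the list of failing builds is reasonably small).

    eventtype=fail_type* source="*console" host=$jenkins_server$
        [ search event_tag="job_event" host=$jenkins_server$
          | search job_result=FAILURE OR job_result=UNSTABLE OR job_result=ABORTED
          | eval source="*" + build_url + "console*"
          | dedup source
          | fields source ]
    | timechart count by eventtype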
How do I pull together a chart of all our user accounts, with the last time each user logged in?

I currently have:

eventtype=wineventlog_security (EventCode=4776 OR EventCode=4777 OR EventCode=680 OR EventCode=681)
| stats max(Time) by Logon_Account

I am getting the time but also need to display the date. I am also getting a lot of service accounts; is there an easy way to filter those out?
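A hedged sketch of one approach, assuming the event timestamp (_time) is what you want rather than the extracted Time field, and that service accounts in your environment can be excluded by naming convention (the "*$" and "svc_*" patterns below are assumptions to adjust):

    eventtype=wineventlog_security (EventCode=4776 OR EventCode=4777 OR EventCode=680 OR EventCode=681) NOT Logon_Account="*$" NOT Logon_Account="svc_*"
    | stats latest(_time) as last_logon by Logon_Account
    | eval last_logon=strftime(last_logon, "%Y-%m-%d %H:%M:%S")

The strftime format string controls how the date and time are displayed together.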
Hey guys, I'm having trouble updating Splunk from version 8.1.0 to version 8.2. When running the command "rpm -i --replacepkgs splunk-8.2.2.1-ae6821b7c64b-linux-2.6-x86_64.rpm", it displays several alerts like the one below (the same alert occurs for several files):

file /opt/splunk/share/splunk/search_mrsparkle/exposed/pcss/version-5-and-earlier/admin_lite.pcss from install of splunk-8.2.2.1-ae6821b7c64b.x86_64 conflicts with file from package splunk-8.1.0.1-24fd52428b5a.x86_64

How should I go about resolving the problem?
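In case it's useful, a hedged sketch of the usual upgrade path: "rpm -i" performs a fresh install, so the files still owned by the 8.1 package conflict, whereas "rpm -U" upgrades the installed package in place. Something like the following, after taking a backup and assuming the default /opt/splunk location:

    /opt/splunk/bin/splunk stop
    rpm -U splunk-8.2.2.1-ae6821b7c64b-linux-2.6-x86_64.rpm
    /opt/splunk/bin/splunk start --accept-license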
I work for a utility company and, among many things, we have an index for some environmental and system totals. This index is used to compute yesterday's sales and compare them to the same day last year; we also do some calculations for the year to date compared to the previous year to date. This means that the dashboards may access events two years old. The data is a single event per day, going back to 1995. After loading the data (which is via DB Connect, from a SQL table) everything is great for a while, and then one day the data up until about 18 months ago is gone. I am guessing it is being rolled to frozen via some kind of default. What setting should I use to keep all the data in the index and searchable?
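A hedged sketch of the settings usually involved, in indexes.conf on the indexers (the stanza name is a placeholder for your index). Data is frozen either when it is older than frozenTimePeriodInSecs (default roughly 6 years) or when the index exceeds maxTotalDataSizeMB (default 500000 MB), so with data back to 1995 both are worth checking.

    [your_index]
    # keep events searchable for ~30 years before they are eligible to freeze
    frozenTimePeriodInSecs = 946080000
    # raise or monitor the total size cap as well, since size limits also trigger freezing
    maxTotalDataSizeMB = 500000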
Fly through instrumenting serverless AWS Lambda functions using the AppDynamics Lambda Extension for Node.js and Python, and an API for Java. Check out this short video featured in The Full-Stack Monthly newsletter to get you inspired, and read more in the AppDynamics AWS Lambda Extension Is Now Generally Available blog. For more detailed instructions, check out our documentation. Know someone who needs The Full-Stack Monthly? Help them sign up: Sign up here >>
Hi, the data below is dynamic; a sample input table is given below. The order of the rows may vary (for simplicity I have put the data in order, to make it easy to understand).

Input:

Feature Name   Browser Name   Result
Feature 1      B1             Pass
Feature 1      B1             Pass
Feature 1      B1             Pass
Feature 1      B1             Pass
Feature 1      B2             Fail
Feature 1      B2             Pass
Feature 1      B2             Pass
Feature 1      B2             Pass
Feature 1      B3             Pass
Feature 1      B3             Pass
Feature 1      B3             Pass
Feature 1      B3             Fail
Feature 1      B4             Pass
Feature 1      B4             Pass
Feature 1      B4             Fail
Feature 1      B4             Pass

Based on the above input table, the output needs to be generated as listed below. A cumulative result needs to be generated based on the browser name and result for each feature. If any one result fails on a particular browser, the feature is considered failed on that browser.

Output:

Feature 1      B1             Pass
Feature 1      B2             Fail
Feature 1      B3             Fail
Feature 1      B4             Fail

Would you please help me generate the expected output as listed?
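A minimal sketch, assuming the fields are extracted as FeatureName, BrowserName, and Result (adjust to your real field names): gather every Result per feature/browser pair, and mark the pair Fail if any Fail is present.

    index=your_index
    | stats values(Result) as results by FeatureName BrowserName
    | eval CumulativeResult=if(isnotnull(mvfind(results, "Fail")), "Fail", "Pass")
    | table FeatureName BrowserName CumulativeResult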
When I create an ITSM alert and use $result.Activity$ the correct value for the "Activity" field appears in ITSM.  How do I represent a field called "Start Time UTC{}"? 
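A hedged workaround, assuming the trouble is that the $result.<field>$ token does not cope well with spaces and braces in the field name: rename the field to something token-friendly at the end of the alert's search, then reference the new name.

    ... your alert search ...
    | rename "Start Time UTC{}" as start_time_utc

and then use $result.start_time_utc$ in the ITSM alert action.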
Hi, I'm trying to filter the results of one search based on the results of another search. Example: consider the following table of data

user   eventId
Joe    1
Joe    2
Bob    3

I have created a search that returns only the eventIds generated by user Joe and creates a token with the result:

<search>
  <query>
    "event created" user=Joe | table eventId
  </query>
  <done>
    <set token="eventId">$result.eventId$</set>
  </done>
</search>

I have another table with the following data

eventId   eventName
1         myEvent_1
2         myEvent_2
3         myEvent_3

What I would like to do is create a search that returns just the eventId and eventName values that were generated by user Joe, using the token created in the first search. So far I have this query:

"event names" eventId=$eventId$ | table eventId eventName

This query is only returning the first result from the token rather than every result. Is there a way to use the token this way to return results for all values in the token? I would like to avoid using JOIN or subsearches, as I will need to create multiple tables with the same token filter and those methods would start to get very slow. Thanks in advance!
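A hedged sketch of one common workaround, assuming it is acceptable for the driving search to build the whole filter clause itself: $result.<field>$ only ever exposes the first row, so the search condenses all the eventIds into one field and the token carries a ready-made search clause. The token name eventFilter is a placeholder.

<search>
  <query>
    "event created" user=Joe
    | stats values(eventId) as eventId
    | eval filter="eventId IN (" . mvjoin(eventId, ",") . ")"
  </query>
  <done>
    <set token="eventFilter">$result.filter$</set>
  </done>
</search>

The panel searches would then use: "event names" $eventFilter$ | table eventId eventName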
This is mostly just a curiosity, motivated by this post on how to compare a particular time interval across multiple larger time periods. Effectively the solution seems to be to generate a list of time intervals and run map subsearches on each entry. When I have multiple time periods that I'd like to run stats on, I typically use a multisearch command followed by a chart, as follows:

| multisearch
    [ index=potato et=<et1> lt=<lt1> | eval series=1 ]
    [ index=potato et=<et2> lt=<lt2> | eval series=2 ]
    ...
    [ index=potato et=<etn> lt=<ltn> | eval series=n ]
| timechart count by series

I suppose you could make it work by substituting the et's and lt's via subsearch, but it won't work if the number of time intervals, n, is also dynamically generated by some prior search. I know you can use a number of different techniques, but they all have different drawbacks. You could use map, which offers pretty much all the flexibility/dynamic-ness you need (I've abused it plenty of times doing things like map search=`searchstring($searchstring$)`), but there are performance issues with this, as subsearches can time out and map doesn't offer the same optimization as multisearch does when you just need to string multiple streams together. You could just search the entire time range and use some eval logic to filter out the time intervals you need, but isn't that suboptimal since you're searching more events than you need? Multisearch seems to be great at streaming multiple different time intervals together, and I'd love to have that optimization without the subsearch limitations. At this point, would you just have to resort to REST to schedule searches? How would we tie the data together? I'm not very familiar with what is possible with REST, as all of my experience is with plain SPL. In short, how do we stream events across multiple, dynamically generated time intervals without running into subsearch limitations?
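For what it's worth, a hedged sketch of pushing a dynamically generated set of intervals into a single search without map: the interval list (here a hypothetical lookup intervals.csv with epoch fields et and lt) is turned into a literal clause of OR'd earliest/latest pairs via the special "search" field returned by a subsearch. I believe earliest/latest terms OR'd inside parentheses constrain each clause independently, but that behaviour is worth verifying on your version before relying on it.

    index=potato
        [ | inputlookup intervals.csv
          | eval search="(earliest=" . et . " latest=" . lt . ")"
          | stats values(search) as search
          | eval search=mvjoin(search, " OR ")
          | fields search ]
    | timechart count

Labelling each event with its series would still need a separate step (for example a case() expression built the same way), so this sketch only addresses the time-scoping half of the problem.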