All Topics

I am new to Splunk and did some fundamentals courses to understand the platform. I have a question and would like to know if this is possible: I want to monitor Linux servers (CPU usage, disk usage, RAM usage, and network metrics) with Splunk. I know there are a lot of apps available on Splunkbase, but I want to know whether there is a way to use just Splunk, without needing any other apps from Splunkbase, to accomplish this objective.
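For what it's worth, a minimal sketch of the app-free route, assuming a Universal Forwarder on each Linux host: scripted inputs in inputs.conf that run small shell wrappers around standard OS commands (paths, intervals, index, and sourcetypes here are all illustrative):

    # inputs.conf -- each script just prints command output to stdout,
    # e.g. vmstat 1 1, df -k, sar -n DEV 1 1, free -m
    [script://./bin/vmstat.sh]
    interval = 60
    sourcetype = linux:vmstat
    index = linux

    [script://./bin/df.sh]
    interval = 300
    sourcetype = linux:df
    index = linux

Splunk indexes whatever the scripts print, so no Splunkbase app is strictly required, though you then own the field extractions yourself.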
Hi,

I have a table created by:

eval Actor=actor
| eval "Total Time (max/avg/p50/p99)"=maxT + ", " + avgT + ", " + p50T + ", " + p99T
| eval "Thread Execution Time (max/avg/p50/p99)"=maxE + ", " + avgE + ", " + p50E + ", " + p99E
| eval "Time On Queue (max/avg/p50/p99)"=maxOnQ + ", " + avgOnQ + ", " + p50OnQ + ", " + p99OnQ
| eval "Queue Depth (max/avg/p50/p99)"=maxqUsed + ", " + avgqUsed + ", " + p50qUsed + ", " + p99qUsed
| eval "TPS (max/avg/p50/p99)"=maxTPS + ", " + avgTPS + ", " + p50TPS + ", " + p99TPS
| table Actor, "Total Time (max/avg/p50/p99)", "Thread Execution Time (max/avg/p50/p99)", "Time On Queue (max/avg/p50/p99)", "Queue Depth (max/avg/p50/p99)", "TPS (max/avg/p50/p99)"

which looks like the attached screenshot.

I want to change the color of an entire cell based on the max value: say, if the max value is greater than 10000, color the cell red, otherwise some other color.

I've tried the following: https://community.splunk.com/t5/Dashboards-Visualizations/change-the-color-of-row-based-on-cell-value-in-splunk-without/m-p/525075

But I can't seem to get it to work with numbers. Any help is appreciated, thanks!
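One JavaScript-free sketch, under the assumption that you also keep a hidden numeric column (for example, the raw maxT) alongside the comma-joined string: Simple XML's <format> coloring only evaluates numeric cell values, which is likely why it fails on "max, avg, p50, p99" strings:

    <format type="color" field="maxT">
      <colorPalette type="list">[#53A051,#DC4E41]</colorPalette>
      <scale type="threshold">10000</scale>
    </format>

With a single threshold of 10000, cells below it get the first color and cells at or above it get the second (red).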
Hi all, I am using Splunk again after a while and have lost touch with SPL. Please help me with the below. I have about 40 fields to extract using an SPL query, and I am able to get all the required fields using the interesting fields. The issue I am facing is that I am getting duplicate records in my result set (possibly due to the multiple source types I am using in my query). I am wondering what the correct way to write the SPL is so that all the records I retrieve are unique. I don't think writing dedup on all 40 fields is a good idea. Also, if I use the stats function, do I have to write values(empno) as empno, values(empstartdate) as startdate, ... for all 40 fields? (My data set has employee details, as an example.)

Thanks in advance!
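A sketch of the wildcard shorthand, assuming something like empno uniquely identifies a record, so the 40 fields don't need to be spelled out one by one:

    ... your base search over both source types ...
    | stats values(*) as * by empno

If one event per employee is all you need, | dedup empno on the key field is the simpler alternative.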
After failing over from the active cluster master to the redundant node (which holds the same configuration), 15 buckets now report:

slave bucket=XXXXXX has unexpected mask=11 expected=0

This results in the search factor not being met for the corresponding indexes. I can see the cluster master running the "CMChangeMasksJob" at regular intervals, but it looks to me like it just can't handle those 15 buckets. I am looking for any hints on how to tackle this. First and foremost, what is a bucket mask? Are my buckets corrupted? Can I try to update the mask manually?
Hi there, I have a problem splitting transactions using request data with a custom expression on HttpRequest. My application is .NET Core running in an ECS task. The custom expression described here (https://docs.appdynamics.com/21.9/en/application-monitoring/configure-instrumentation/transaction-detection-rules/uri-based-entry-points) does not work. When using "${Url.ToString().Split(Char[]/-).[2]}-${UserAgent}", no transactions get instrumented, while it returns "Microsoft.AspNetCore.Http.DefaultHttpRequest" when using "${Url.ToString()}". Any ideas? Thank you
Hi, I've got a lookup with a number of records, and not all of them have all columns populated. Is there a way to append only those columns that are not empty? Something like:

| lookup mylookup lookup_key OUTPUTNEW list of columns to append
| <some SPL here to hide columns which are empty>

I'd be grateful for any tips. I was experimenting with foreach, but with no results. Regards
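A sketch of the foreach idea for the display side (col1..col3 are placeholders for the appended columns): it nulls out empty strings so those cells render blank, though SPL cannot drop a column for only some rows, since a table's columns are shared by all rows:

    | lookup mylookup lookup_key OUTPUTNEW col1 col2 col3
    | foreach col1 col2 col3
        [ eval <<FIELD>> = if('<<FIELD>>'="", null(), '<<FIELD>>') ]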
Is there any way to transfer log files using the Universal Forwarder? I have to use a Heavy Forwarder to extract fields from complicated log text, so it's necessary to send the logs as whole files from the machines that generate them to the Heavy Forwarder. If it's possible, could you tell me how, and which directory I should check on the Heavy Forwarder machine? The setup is shown in the attached photo.
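A minimal sketch of the usual pattern, assuming the Heavy Forwarder listens on the default receiving port 9997: the UF monitors the files and streams their contents onward, and the HF applies the props/transforms parsing. Note the data is not reassembled into files in any directory on the HF; it flows through the parsing pipeline and on to the indexers:

    # inputs.conf on the Universal Forwarder (path is illustrative)
    [monitor:///var/log/myapp/*.log]
    sourcetype = myapp:log

    # outputs.conf on the Universal Forwarder
    [tcpout]
    defaultGroup = hf_group

    [tcpout:hf_group]
    server = heavy-forwarder.example.com:9997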
Hello All, I have a search query that performs lookups against a CSV file and outputs only those hosts that are in the CSV file. The CSV file has the following 4 columns, and notice that the IP Address column has a white space in its name. I have verified that the following command correctly displays the values of all hosts with their IPs in a table:

| inputlookup linux_servers.csv | table host "IP Address"

Now, if I put the same thing in a tstats command, it does not show any results. Any ideas why it does not accept "IP Address" even though I have used double quotes?

| tstats max(_time) as lastSeen_epoch WHERE index=linux [| inputlookup linux_servers.csv | table host "IP Address" ] by host

The following search works fine if I take out "IP Address"; it displays the table with the host column:

| tstats max(_time) as lastSeen_epoch WHERE index=linux [| inputlookup linux_servers.csv | table host ] by host
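One likely explanation, hedged: the subsearch expands into a filter like (host="x" "IP Address"="1.2.3.4"), and tstats can only filter on indexed fields, which "IP Address" is not. A sketch that keeps only host in the subsearch and joins the IP back afterwards:

    | tstats max(_time) as lastSeen_epoch WHERE index=linux
        [| inputlookup linux_servers.csv | fields host ]
        by host
    | lookup linux_servers.csv host OUTPUT "IP Address"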
Hi All, I have logs in Splunk, and I need to create field values and build a table with the values present in the logs. Example:

Caused by: org.apache.kafka.connect.errors.ConnectException: Failed to start new JMS session connection 1: JMSWMQ2013: The security authentication was not valid that was supplied for queue manager 'EVT302' with connection mode 'Client' and host name '10.37.84.12,10.37.100.13(1442)'.

The above is an example log; I need to extract the value after "Caused by" as a description, the queue manager number, and also the host name. Can anyone help me with this? Thanks in advance.
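A sketch using rex, assuming your events all follow the quoted layout above; the field names are illustrative and the pattern will need tuning against the real events:

    ... base search ...
    | rex "Caused by:\s+(?<description>.+?)\s+for queue manager '(?<queue_manager>[^']+)'.+?host name '(?<host_name>[^']+)'"
    | table description queue_manager host_name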
I would like to install the github3 module for a Phantom custom function using pip. How do I do it?
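A hedged sketch, assuming an on-prem Phantom/SOAR install at the default path: the platform ships its own Python environment, so the install has to go through phenv rather than the system pip. Note the PyPI package name for the github3 module is github3.py:

    /opt/phantom/bin/phenv python -m pip install github3.py

After that, import github3 inside the custom function should resolve; on cloud-hosted instances with no shell access, this route is not available.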
Hi Splunkers! I hope you are all doing well. This is my indexes.conf (screenshot attached). My problem is that the COLD volume became full. This is the output of the df command (screenshot attached); the filesystem of the COLD volume is xfs. Do you know whether the total max size of both COLD and splunk_summaries must not exceed the total space, or whether setting just the COLD volume is enough because the splunk_summaries volume is part of it? I mean, in my case, does Splunk treat the sum of volume:COLD and volume:_splunk_summaries as the total space for storing buckets, or just the maxVolumeDataSizeMB of the volume:COLD config? Thanks in advance for any advice.

PS: I know Splunk recommends that summaries be stored on the HOT volume!
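For what it's worth, maxVolumeDataSizeMB is enforced per volume stanza, so if both volumes sit on the same filesystem the two caps have to be sized so that their sum stays under the disk size. A hedged sketch of the arithmetic for an illustrative 1 TB mount (paths and numbers are assumptions):

    # indexes.conf -- illustrative sizes; the two caps together stay below 1 TB
    [volume:COLD]
    path = /splunk/cold
    maxVolumeDataSizeMB = 900000

    [volume:_splunk_summaries]
    path = /splunk/cold/summaries
    maxVolumeDataSizeMB = 50000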
I need assistance configuring and forwarding McAfee DLP logs to Splunk. I already tried to send the logs to Splunk on port 8089, but the logs are encrypted. I intend to forward the logs to Splunk on port 6514, but the port is not responding. Can anyone help us with this? Thank you.
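A hedged sketch of the receiving side, assuming you want Splunk itself to listen for syslog on 6514 (port 8089 is splunkd's management/REST port, which speaks TLS, and that is likely the "encrypted" traffic you saw; it is not a data input). Port 6514 will only respond once an input like this is enabled and the instance restarted:

    # inputs.conf on the receiving Splunk instance -- sourcetype/index illustrative
    [tcp://6514]
    sourcetype = mcafee:dlp:syslog
    index = mcafee

If the DLP appliance sends TLS-wrapped syslog, the stanza would be [tcp-ssl://6514] plus an [SSL] stanza pointing at server certificates.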
I have set up an Indexer Cluster and joined Search Heads and peer nodes to the Cluster Master. I am able to see all the peers, indexes, and Search Heads from the Cluster Master web interface (Settings -> Indexer Clustering). But I am looking for a CLI command that will list all the Search Heads that have joined this Cluster Master. I have tried these, but none of them show Search Head node information:

$ splunk list cluster-generation
$ splunk list cluster-config
$ splunk show cluster-status
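One possibility, sketched here: the clustering page in the UI is backed by a REST endpoint on the cluster master that lists connected search heads, and the CLI can call REST endpoints directly (credentials are placeholders):

    splunk _internal call /services/cluster/master/searchheads -auth admin:changeme

The same endpoint can also be queried with curl against the management port, e.g. https://<cluster-master>:8089/services/cluster/master/searchheads.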
Hi, I want to check for a string in a field, and if the string is not found in the field, I need to output the remaining data (from the last 15 minutes). For example:

Field1    Field2
9/2/10    successful
9/2/10    creating the file
9/2/10    created

From the above table, I want to check Field2 over the last 15 minutes for the string "successful". If no "successful" is found in Field2, then I need to trigger an alert with the remaining data, like below:

Field1    Field2
9/2/10    creating the file
9/2/10    created

Is this possible in Splunk?
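A sketch of one way to express this, assuming the alert is scheduled every 15 minutes (index and sourcetype are placeholders): count how many events in the window have Field2 exactly "successful", and only return rows when that count is zero, so the alert fires on results:

    index=your_index sourcetype=your_sourcetype earliest=-15m
    | eventstats count(eval(Field2=="successful")) as success_count
    | where success_count=0
    | table Field1 Field2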
Hi, Is there a way to group the applications that have been installed, so that the menu is multi-level? Similar to how dashboard navigation works with collections (in default.xml). When I click on the list of applications, it shows all 100 applications that we have installed. I was hoping to be able to group them by category:

Support Apps -> Support App 1
                Support App 2
                Support App 3
                etc.
Manager Apps -> Manager App 1
                ...
DevOps Apps  -> DevOps App 1
                ...

Sure, there will still be 100 apps, but if we have 10 categories with 10 apps per category, then the main application menu will only show the 10 categories, and hovering over a category will expand it. This just means the menu will be less cluttered and more organised. regards -brett
Hi, We want to create a playbook for Splunk with Ansible. We are having an issue configuring the AWS add-on proxy with the CLI or Ansible. When you configure the proxy via the Web UI, it generates a passwords.conf file with the proxy configuration hashed. I tried to find a way to configure the proxy via the CLI so that it creates the hashed passwords.conf file and the config change actually shows in the Web UI, without success. Has someone managed to configure the proxy via the CLI or Ansible? I'm not sure there is a way. Trying to work around it, I found the Python script that runs in the background when you configure the proxy via the Web UI, under /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py:

from __future__ import absolute_import
import aws_bootstrap_env
import re
import logging
import splunk.admin as admin
from splunktalib.rest_manager import util, error_ctl
from splunk_ta_aws.common.proxy_conf import ProxyManager

KEY_NAMESPACE = util.getBaseAppName()
KEY_OWNER = '-'
AWS_PROXY = 'aws_proxy'
POSSIBLE_KEYS = ('host', 'port', 'username', 'password', 'proxy_enabled')

class ProxyRestHandler(admin.MConfigHandler):
    def __init__(self, scriptMode, ctxInfo):
        admin.MConfigHandler.__init__(self, scriptMode, ctxInfo)
        if self.callerArgs.id and self.callerArgs.id != 'aws_proxy':
            error_ctl.RestHandlerError.ctl(1202, msgx='aws_proxy', logLevel=logging.INFO)

    def setup(self):
        if self.requestedAction in (admin.ACTION_CREATE, admin.ACTION_EDIT):
            for arg in POSSIBLE_KEYS:
                self.supportedArgs.addOptArg(arg)
        return

    def handleCreate(self, confInfo):
        try:
            args = self.validate(self.callerArgs.data)
            args_dict = {}
            for arg in POSSIBLE_KEYS:
                if arg in args:
                    args_dict[arg] = args[arg][0]
                else:
                    args_dict[arg] = ''
            proxy_str = '%s:%s@%s:%s' % (args_dict['username'], args_dict['password'],
                                         args_dict['host'], args_dict['port'])
            if 'proxy_enabled' in args:
                enable = True if args_dict['proxy_enabled'] == '1' else False
            else:
                proxy = self.get()
                enable = True if (proxy and proxy.get_enable()) else False
            self.update(proxy_str, enable)
        except Exception as exc:
            error_ctl.RestHandlerError.ctl(400, msgx=exc, logLevel=logging.INFO)

    def handleList(self, confInfo):
        try:
            proxy = self.get()
            if not proxy:
                confInfo[AWS_PROXY].append('proxy_enabled', '0')
                return
            m = re.match('^(?P<username>\S*):(?P<password>\S*)@(?P<host>\S+):(?P<port>\d+$)', proxy.get_proxy())
            if not m:
                confInfo[AWS_PROXY].append('proxy_enabled', '0')
                return

However, I couldn't find the correct way to run the script and pass it the correct parameters. I created a details.txt file with the proxy config as:

['1.1.1.1', '1111', 'username', 'password', '1']

and ran the script:

/opt/splunk/bin/splunk cmd python3 /opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py setup details.txt

Error:

^CTraceback (most recent call last):
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py", line 95, in <module>
    admin.init(ProxyRestHandler, admin.CONTEXT_NONE)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 151, in init
    hand = handler(mode, ctxInfo)
  File "/opt/splunk/etc/apps/Splunk_TA_aws/bin/aws_proxy_settings_rh.py", line 20, in __init__
    admin.MConfigHandler.__init__(self, scriptMode, ctxInfo)
  File "/opt/splunk/lib/python3.7/site-packages/splunk/admin.py", line 475, in __init__
    dataFromSplunkd = sys.stdin.buffer.read()
KeyboardInterrupt

Can someone help? Thanks,
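The traceback actually points at the problem: ProxyRestHandler inherits admin.MConfigHandler, which reads its request from stdin as fed to it by splunkd, so these *_rh.py scripts are meant to be invoked through splunkd's REST API rather than run directly from the shell. A hedged sketch of calling it over REST instead; the endpoint path below is a guess (check restmap.conf in Splunk_TA_aws for the real handler name), while the object name aws_proxy and the field names come from the script itself:

    # Endpoint path is hypothetical -- look up the actual stanza in
    # $SPLUNK_HOME/etc/apps/Splunk_TA_aws/default/restmap.conf
    curl -k -u admin:changeme \
        https://localhost:8089/servicesNS/nobody/Splunk_TA_aws/splunk_ta_aws_aws_proxy_settings/aws_proxy \
        -d host=1.1.1.1 -d port=1111 -d username=user -d password=secret -d proxy_enabled=1

Since this goes through splunkd, the password should land hashed in passwords.conf and the change should show up in the Web UI, which is what an Ansible uri task could then automate.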
Hi, if possible I would like to combine the two eval statements below so I can optimise them for my data model:

| eval uri=if(like('metric.uri_path', "/as/%/resume/as/authorization"), "resume/as/authorization.ping", uri)
| eval url_path=mvappend('metric.uri_path', uri)
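A sketch of the folded version, moving the conditional directly into mvappend (assuming uri is already defined upstream for the non-matching case, as in the original):

    | eval url_path=mvappend('metric.uri_path', if(like('metric.uri_path', "/as/%/resume/as/authorization"), "resume/as/authorization.ping", uri))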
Hello, I am having some issues writing a props configuration file for the sample data/events given below. I have given 4 events, and each event starts with CONNECT, but the word CONNECT has 2 or 4 "-" characters before it, and the first line has the timestamp. How would I write the following parameters in the props configuration file? Any help will be highly appreciated. Thank you so much.

SHOULD_LINEMERGE=false
LINE_BREAKER=([\r\n]+)
TIME_PREFIX =
BREAK_ONLY_BEFORE=
MAX_TIMESTAMP_LOOKAHEAD=20
TIME_FORMAT=%Y-%m-%d %H:%M

Sample Events:

----CONNECT-1007-036807981618-SYS-2021-09-18 09:39
----CHECKPOINT-0000-036807981629-2021-09-18 08:39:07.010344
--ROLLBACK-1007-036807981689DF
--ROLLBACK WORK
--CHECKPOINT-0000-036807981670-2021-09-18 09:39:37.056758
--COMMIT-1001-036807983530-2021-09-18 09:57:33.200259
--COMMIT WORK
--CHECKPOINT-0000-sa2036807983541-er2021-09-145 09:57:4462.998011
--CHECKPOINT-0000-qa4036807983512aa7-21aa021-09-18 09:58:17.469411
--CONNECT-1027-036807981700-dbo-2021-09-18 09:42
----ROLLBACK-1027-036807981723CD
--ROLLBACK WORK
---CONNECT-1029-036807981725-dbo-2021-09-18 09:42
----CHECKPOINT-0000-036807981736-2021-09-18 09:42:26.201026
--ROLLBACK-1029-0368079817AB
--ROLLBACK WORK
--CONNECT-1031-036807981780-dbo-2021-09-18 09:42
----COMMIT-1031-036807981791-2021-09-18 09:42:27.981158
--COMMIT WORK
--ROLLBACK-1031-036807981800
--ROLLBACK WORK
--COMMIT-1001-036807983530-2021-09-18 09:57:33.200259
--COMMIT WORK
--CHECKPOINT-0000-036807983541-2021-09-18 09:57:42.998011
--CHECKPOINT-0000-036807983577-2021-09-18 09:58:17.469411
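A hedged sketch of a stanza matching the sample above (the sourcetype name is a placeholder, and BREAK_ONLY_BEFORE can be dropped since it is ignored when SHOULD_LINEMERGE=false): break before any run of 2 to 4 dashes followed by CONNECT, and anchor the timestamp after the fourth dash-separated token:

    [sample:sourcetype]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=-{2,4}CONNECT-)
    TIME_PREFIX = CONNECT-\d+-\w+-\w+-
    MAX_TIMESTAMP_LOOKAHEAD = 20
    TIME_FORMAT = %Y-%m-%d %H:%M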
I need your help, please, to set up and configure 2 apps: SplunkConf Backup and Gemini KV Store Tools. I have been searching for instructions for over 2 months now and cannot find any instructions to make these apps work. No luck so far. I appreciate your help in advance.
Hello there, I have spent a good amount of time researching lateral movement in Splunk; unfortunately, I have not found much. I have only seen answers suggesting reviewing the use cases in the Splunk Security Essentials app, but that use case is based on Sysmon logs, and I am only collecting the Security and Application logs using the agent. I also see very old responses that mention fields such as "user" which are now called "Account_Name". I would appreciate it if someone could give me suggestions for trying to identify possible lateral movement. I found this:

index=main sourcetype=WinEventLog:Security (EventCode=4624 OR EventCode=4672) Logon_Type=3 NOT Account_Name="*$" NOT Account_Name="ANONYMOUS LOGON"
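Building on that search, a sketch that surfaces accounts authenticating over the network to an unusual number of distinct hosts per hour; the threshold of 5 is an arbitrary starting point to tune, not a vetted detection:

    index=main sourcetype=WinEventLog:Security EventCode=4624 Logon_Type=3
        NOT Account_Name="*$" NOT Account_Name="ANONYMOUS LOGON"
    | bin _time span=1h
    | stats dc(host) as distinct_hosts values(host) as hosts by Account_Name, _time
    | where distinct_hosts > 5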