All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I have a query that returns the list of filesystems and their respective disk usage details, as below:

File_System   Total in GB   Used in GB   Available in GB   Disk_Usage in %
/var          10            9.2         0.8               92
/opt          10            8.1         1.9               81
/logs         10            8.7         1.3               87
/apps         10            8.4         1.6               84
/pcvs         10            9.4         0.6               94

I need to create a dropdown of disk usage values so that the table above can be filtered by a range of values. For example, if I select 80 in the dropdown it should show the rows with disk usage in the range 80-84, if I select 85 it should show the range 85-89, and so on. I created the dropdown with the token "DU" and wrote the search for the table as:

.... | search Disk_Usage=$DU$
| table File_System,Total,Used,Available,Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

With this query I can only get the table for a single disk usage value. Please help me build a query so that selecting an option in the dropdown returns the table for a range of disk usage values.

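A minimal sketch of one way to do the range filter, assuming the dropdown token $DU$ carries the lower bound of a 5-point band and Disk_Usage is numeric (field names are taken from the post; the base search is left as the same placeholder):

.... | where Disk_Usage >= $DU$ AND Disk_Usage < ($DU$ + 5)
| table File_System,Total,Used,Available,Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

An alternative is to give each dropdown choice both bounds (for example a value of "80 84") and substitute them into the where clause as two separate tokens.
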
Hi all, I'm trying to find the queries for the SH to build a Splunk dashboard showing the following info (example):

USER   PID   %MEM   %CPU
root   40    5.6    0.4
root   12    4.2    0.2

I have the index named "stats" and sourcetype "top". I need the dashboard to show the info from just the last 5 minutes. Any idea? Thanks, Max

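A minimal sketch of the panel search, assuming the fields extracted from the top output match the column headers (in the Splunk Add-on for Unix and Linux the percentage fields are typically named pctMEM and pctCPU, so the exact names should be checked against the actual events):

index=stats sourcetype=top earliest=-5m
| stats latest(pctMEM) as "%MEM" latest(pctCPU) as "%CPU" by USER PID
| sort - "%MEM"

Setting the panel's time range to "Last 5 minutes" (or hard-coding earliest=-5m as above) keeps the dashboard to the most recent data.
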
Hello, I'm attempting to use the regex command to filter out any records where the "user" field does not match the expression below:

index=myindex sourcetype=traceability_log4net earliest=-10d
| regex _raw="user=[a-z]{3,6}[a-z1-9]{1,2}"
| table user

Per the expression, there should not be any "user" values longer than 8 characters. However, the non-matching values are not being filtered out. "sstevenson6111", for example, should theoretically be listed as "sstevens" per the expression. I'm sure there's something blindingly obvious that I'm missing!

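One thing worth noting when reading the search above: the regex command only keeps or drops whole events, it never rewrites a field value, so an event containing "sstevenson6111" still passes because the first part of the value satisfies the unanchored pattern. A sketch of extracting just the matching portion instead, assuming the goal is a truncated value (the field name user_short is made up for illustration):

index=myindex sourcetype=traceability_log4net earliest=-10d
| rex field=_raw "user=(?<user_short>[a-z]{3,6}[a-z1-9]{1,2})"
| table user user_short

If the intent really is to drop events whose user value is longer than 8 characters, the pattern would also need an anchor (such as a trailing negative lookahead) so it cannot match just the first characters of a longer value.
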
Log4J query:

index=* | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| eval action=coalesce(action_taken, elb_status_code, status)
| where NOT (cidrmatch("192.168.0.0/16",src_ip) OR cidrmatch("10.0.0.0/8",src_ip) OR cidrmatch("172.16.0.0/12",src_ip)) OR Country="United States"
| iplocation src_ip
| eval notNULL=""
| fillnull value="unknown" notNULL, src_ip, dest_ip, action, url, Country
| stats count by src_ip, Country, dest_ip, url, action, sourcetype
| sort - count

This checks anywhere there is a sign of the Log4J exploit being used. I've done field extraction on any sourcetypes returned by my previous query:

index=* | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| stats count as "exploit attempts" by sourcetype
| sort - "exploit attempts"

I extracted fields so that I can get a table with src_ip, Country, dest_ip, url, action, sourcetype, and count. I then want to use this query in subsequent queries to determine whether the exploit was successful and whether any other communication follows.

The query works and I get results like this (fake results):

src_ip           Country   dest_ip         url              action    sourcetype   count
248.216.243.59   Unknown   192.168.1.148   192.168.1.148/   blocked   firewall     3
207.191.80.208   US        192.168.1.216   192.168.1.216/   allowed   firewall     2

The problem: the query runs really slowly after a few minutes. It starts out processing millions of events every few seconds and slows down to thousands every few seconds. Some info from the logs:

- command.search, command.search.kv, and dispatch.stream.remote take up the most time of the run.
- I'm getting warnings in search.log like "Max bucket size is larger than the index size limit" and "Invalid field alias specification in stanza"; however, these don't seem to be the reason for the slowdown.
- Using high_perf and Fast Mode.

If there is any more information I can add, feel free to ask and I will edit.

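Not a fix for the slowdown by itself, but since most of the cost in a search like this is scanning index=* and running the regex over every raw event, one comparison worth making is the same check scoped to only the sourcetypes the second query already identified (the sourcetype names below are placeholders, not values from this environment):

index=* sourcetype IN (firewall, proxy)
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"

with the rest of the pipeline unchanged. Naming the relevant indexes instead of index=* narrows the scan further.
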
Hi all, how can I set the Universal Forwarder to run a script every 5 minutes on a cron schedule? The output of the script should show up when searching from the Search Head. Thanks in advance, Max.

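A minimal sketch of a scripted input in inputs.conf on the forwarder, assuming a shell script shipped in an app's bin directory (the app, script, index, and sourcetype names are made up for illustration; interval accepts either a number of seconds or a cron expression):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/collect_stats.sh]
interval = */5 * * * *
index = main
sourcetype = my:script:output
disabled = 0

After a forwarder restart (or a deployment server push of the app), the script's stdout is forwarded like any other input and becomes searchable on the search head under the chosen index and sourcetype.
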
Need help with Enterprise Security. Is there a way to create a standard TAXII parser that can drive correlation searches over logs coming from the Maritime Transportation System ISAC and logs coming from Stash? I am new to ES and have no idea what this is all about. See the issue below, if it helps. Please advise on what needs to be done. Thanks.

"A shipping company gets intelligence feeds/reports from MTS-ISAC (Maritime Transportation System ISAC). The MTS-ISAC provides proactive cyber threat intelligence, alerts, warnings, and vulnerability information cultivated from maritime stakeholders, public and private sector shares, open-source intelligence, and cybersecurity news. So it's just a matter of parsing that information so Matson can run correlation searches (correlate it with logs) that are currently coming from Stash."

Hi Team, our Splunk instance is hosted in the Cloud and maintained by Splunk Support. We recently got an email from Splunk Support stating that our Universal Forwarder & Associated Certificate Package has been upgraded to the latest version because the current one expires in a couple of days. They requested that we download the UF credentials package from the Search Head, install it, and roll it out to all our client machines, since they plan to upgrade the package at the indexer level in a couple of days.

Our architecture: we have 1 Deployment Master (DM) server and 4 HF servers; the Search Heads, Cluster Master, Indexers, etc. are managed by Splunk Support. We usually push both customized apps and forwarder apps from our Deployment Master server to all client machines, and all our Splunk servers (DM and HFs) run Linux.

https://docs.splunk.com/Documentation/Forwarder/8.2.4/Forwarder/ConfigSCUFCredentials#Install_the_forwarder_credentials_on_a_deployment_server

Following that documentation, I downloaded the "splunkclouduf.spl" credentials package from our Search Head and placed it in /opt/splunk/etc/deployment-apps on our DM server. After untarring the file I can see a new folder, "100_xxxx_splunkcloud". The next step is to install the credentials package, and the documentation says to choose the path of splunkclouduf.spl. Which path should I choose for the install?

/opt/splunk/etc/deployment-apps/splunkclouduf.spl
(OR)
/opt/splunk/etc/deployment-apps/100_xxxx_splunkcloud

I am not sure, so I am stuck here and have not installed the credentials yet; kindly help check and advise. The documentation also says to restart the Splunk instance on the DM server after installing the credentials package.

After installation on my DM server, how do I push the package to all client machines? Do I need to edit the existing forwarder outputs app (which is already pushed to all client machines and HFs)? We already have an app, "forwarder_outputs", pushed to all client machines; it has local and metadata folders, with limits.conf, outputs.conf, xxx_cacert.pem & xxx_server.pem in local, and local.meta in metadata. Which files do I need to modify after installing the credentials package on the DM server, and then push to all client machines, so that the UF package runs with the latest version? Kindly help with my request.

I assume that I need to install Splunk Enterprise Security. 1. Is my assumption correct? 2. It says "Contact Sales" when I try to download the Enterprise Security app, but I have a developer license. Thanks in advance.

Hello, I want to feed data directly into Excel, but I do not have API access, nor can I install custom connectors. Is there any other solution? I seem to be able to run a search like the one below; would that work directly in Excel with MS authentication?

| rest splunk_server=local servicesNS/-/-/data/ui/views/

Thanks!

Hello, I am monitoring a CSV file using a universal forwarder, and the first column in the CSV file is Last_Updated_Date. The file is indexed based on this field (_time = Last_Updated_Date). The file also has a column called Created_Date. In my search I want to use Created_Date as _time to filter the data, and the search I have written is given below:

index="tickets" host="host_1"
| foreach * [ eval newFieldName=replace("<<FIELD>>", "\s+", "_"), {newFieldName}='<<FIELD>>' ]
| fields - "* *", newFieldName
| eval _time=strptime(Created_Date, "%Y-%m-%d %H:%M:%S")
| sort 0 -_time
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| dedup ID
| where Status!="Closed"
| eval min_time=strftime(info_min_time, "%Y-%m-%d %H:%M:%S")
| eval max_time=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| rename Created_Date as Created, Last_Updated_Date as "Last Updated"
| table ID Type Created "Last Updated" _time min_time info_min_time max_time info_max_time index_time
| sort 0 Created

When I run this search for a period such as 1st Feb 2021 - 31st Jul 2021, it gives results as in the first screenshot. When I run it for a longer period, say All Time, it gives results as in the second screenshot. There are many open tickets created between Feb and Jul, not just the two shown in the first screenshot, so it seems the time picker is still using Last_Updated_Date to filter the events rather than Created_Date. Can you please suggest how I can fix this? Thank you.

Hi, I have 3 panels displaying single values, with a condition that a panel should not be displayed on the dashboard if its result count is zero. However, it is not working properly: the panels are still hidden even though the result count is > 0 for all of them. How can this be fixed?

<dashboard>
  <init>
    <set token="eduration">-24h@m</set>
    <set token="lduration">now</set>
  </init>
  <row>
    <panel depends="$show$">
      <single>
        <title>Panel</title>
        <search>
          <query>sarch_queryt</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show">true</set>
            </condition>
            <condition>
              <unset token="show"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
    <panel depends="$show1$">
      <single>
        <title>Panel1</title>
        <search>
          <query>sarch_queryt</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show1">true</set>
            </condition>
            <condition>
              <unset token="show1"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
    <panel depends="$show2$">
      <single>
        <title>Panel3</title>
        <search>
          <query>sarch_queryt</query>
          <earliest>$eduration$</earliest>
          <latest>$lduration$</latest>
          <sampleRatio>1</sampleRatio>
          <progress>
            <condition match="'job.resultCount' == 0">
              <set token="show2">true</set>
            </condition>
            <condition>
              <unset token="show2"></unset>
            </condition>
          </progress>
        </search>
        <option name="colorBy">value</option>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="numberPrecision">0</option>
        <option name="rangeColors">["0x53a051","0xf8be34","0xdc4e41"]</option>
        <option name="rangeValues">[0,10]</option>
        <option name="showSparkline">1</option>
        <option name="showTrendIndicator">1</option>
        <option name="trellis.enabled">0</option>
        <option name="trellis.scales.shared">1</option>
        <option name="trellis.size">medium</option>
        <option name="trendColorInterpretation">standard</option>
        <option name="trendDisplayMode">absolute</option>
        <option name="unitPosition">after</option>
        <option name="useColors">1</option>
        <option name="useThousandSeparators">1</option>
      </single>
    </panel>
  </row>
</dashboard>

Provide details about client purchases: 1. Total purchase amount split by product ID. 2. Total products split by product ID, with the raw data.

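A minimal sketch, assuming an index of purchase events with product_id and purchase_amount fields (all names here are hypothetical, since the request does not say where the data lives or what the fields are called):

index=purchases
| stats sum(purchase_amount) as total_purchase count as total_products by product_id

To keep the raw data alongside the per-product totals, eventstats can be used instead of stats so that each raw event retains its product's totals:

index=purchases
| eventstats sum(purchase_amount) as total_purchase count as total_products by product_id
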
We are trying to configure Azure Storage Blob modular inputs for the Splunk Add-on for Microsoft Cloud Services to pull reports that come in CSV format. We have created a props.conf in the TA's /opt/splunk/etc/apps/Splunk_TA_microsoft-cloudservices/local folder with the following sourcetype stanza, but field extraction is still not working. Any advice?

[mscs:storage:blob:csv]
BREAK_ONLY_BEFORE_DATE =
DATETIME_CONFIG = CURRENT
INDEXED_EXTRACTIONS = csv
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false

Thank you!

Hello, if it is now 30/12/2021 22:30, how can I search for timestamps from 29/12/2021 00:00:00 onwards (i.e. the beginning of 29/12/2021, or dynamically "the beginning of yesterday")? I need this in the search code rather than the GUI presets, etc. Thanks!

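A minimal sketch using snap-to-day time modifiers directly in the search string, which is how "beginning of yesterday" is usually expressed in SPL (the index name is a placeholder):

index=my_index earliest=-1d@d

Adding latest=@d as well restricts the search to yesterday only; leaving latest off keeps everything from the start of yesterday up to now. The same snap can be computed inside a search with eval, e.g. | eval start_of_yesterday=relative_time(now(), "-1d@d").
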
Our DNS logs are sent via syslog to a HF through an Epilog agent. The Epilog agent reads the DNS log file line by line, and each line is sent as a separate event to the HF, looking something like this:

Dec 24 04:05:11 192.####### MSDNSLog 0 12/24/2021 12:02:06 AM 04B4 PACKET 000000####### UDP Rcv 142.####### 3f94 R Q [8281 DR SERVFAIL] PTR (2)87in-addr(4)arpa(0)
Dec 24 04:05:11 192.####### MSDNSLog 0 UDP response info at 000000EE456861F0
Dec 24 04:05:11 192.####### MSDNSLog 0 Socket = 1244
Dec 24 04:05:11 192.####### MSDNSLog 0 Remote addr 142.1#######, port 53
Dec 24 04:05:11 192.####### MSDNSLog 0 Time Query=1220313, Queued=0, Expire=0
Dec 24 04:05:11 192.####### MSDNSLog 0 Buf length = 0x0fa0 (4000)
Dec 24 04:05:11 192.####### MSDNSLog 0 Msg length = 0x0037 (55)
Dec 24 04:05:11 192.####### MSDNSLog 0 Message:
Dec 24 04:05:11 192.####### MSDNSLog 0 XID 0x3f94
Dec 24 04:05:11 192.####### MSDNSLog 0 Flags 0x8182
Dec 24 04:05:11 192.####### MSDNSLog 0 QR 1 (RESPONSE)
Dec 24 04:05:11 192.####### MSDNSLog 0 OPCODE 0 (QUERY)
Dec 24 04:05:11 192.####### MSDNSLog 0 AA 0
Dec 24 04:05:11 192.####### MSDNSLog 0 TC 0
Dec 24 04:05:11 192.####### MSDNSLog 0 RD 1
Dec 24 04:05:11 192.####### MSDNSLog 0 RA 1
Dec 24 04:05:11 192.####### MSDNSLog 0 Z 0
Dec 24 04:05:11 192.####### MSDNSLog 0 CD 0
Dec 24 04:05:11 192.####### MSDNSLog 0 AD 0
Dec 24 04:05:11 192.####### MSDNSLog 0 RCODE 2 (SERVFAIL)
Dec 24 04:05:11 192.####### MSDNSLog 0 QCOUNT 1
Dec 24 04:05:11 192.1####### MSDNSLog 0 ACOUNT 0
Dec 24 04:05:11 192.####### MSDNSLog 0 NSCOUNT 0
Dec 24 04:05:11 192.1###### MSDNSLog 0 ARCOUNT 1
Dec 24 04:05:11 192.1##### MSDNSLog 0 QUESTION SECTION:
Dec 24 04:05:11 192.1##### MSDNSLog 0 Offset = 0x000c, RR count = 0

So originally each of those lines was indexed as a separate event in Splunk. I played around with the props.conf file for that specific sourcetype and set the parameters as follows:

- SHOULD_LINEMERGE = TRUE
- TIME_PREFIX set to match "Dec 24 04:05:11 192.###### MSDNSLog 0"
- TIME_FORMAT = %m/%d/%Y %l:%M:%S %p
- BREAK_ONLY_BEFORE = PACKET (every event starts with a line that contains PACKET)
- LINE_BREAKER = ([\r\n]+)
- TRUNCATE = 0
- MAX_EVENTS = 500000 (I've seen some events be very long)
- MAX_TIMESTAMP_LOOKAHEAD = 100
- SEDCMD-null = a regex to remove "Dec 24 04:05:11 192.####### MSDNSLog 0" at the beginning of every line

Based on my understanding (and I tested the above parameters with Add Data on a search head, where it works), the following should happen: the lines are broken on each newline, then merged, with a new event starting whenever a line contains PACKET; the timestamp is extracted; and then the MSDNSLog prefix at the beginning of each line is removed. However, I'm not seeing the timestamp being extracted properly, and some (not all) of the DNS events get split into separate events, like below:

What could I be missing to get all events merged correctly? Please keep in mind that using Sysmon/a network tap/Stream is not an option at the moment, so I'm stuck with trying to get the data ingested properly using the conf files.

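A sketch of how those settings usually sit together in a single props.conf stanza, assuming the sourcetype name, TIME_PREFIX regex, and SEDCMD pattern below are placeholders to be adapted (the post does not give the exact expressions used):

[msdns:debug]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = PACKET
TIME_PREFIX = MSDNSLog\s+0\s+
TIME_FORMAT = %m/%d/%Y %l:%M:%S %p
MAX_TIMESTAMP_LOOKAHEAD = 100
TRUNCATE = 0
MAX_EVENTS = 500000
SEDCMD-strip_syslog_prefix = s/\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2}\s+\S+\s*MSDNSLog 0 //g

Two things worth verifying: the stanza has to live on the instance that actually parses the data (the HF in this path, not the search head where Add Data was tested), and the TIME_PREFIX regex needs to consume everything up to the 12/24/2021-style timestamp inside the message rather than stopping at the syslog header, otherwise the %m/%d/%Y format will not line up with what follows it.
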
Hello, I am new to Splunk. I have successfully set up our SC4S server and it is sending data to Splunk. I am now working on getting data in from our Barracuda Web Filter. The data is coming in but is getting assigned a sourcetype of nix:syslog. I have installed the BarracudaWebFilter app in Splunk, but from what I've read the sourcetype needs to be "barracuda" for it to work. I believe I need to add a line to the splunk_metadata.csv file on the SC4S server, but I'm not sure what it should be. Has anybody else set this up who can share any info? Thanks,

Hi Team, we are frequently seeing dispatch directory messages in the Splunk GUI. Please help me handle this the right way, with a permanent solution. We also understand that we can increase the threshold limit, so please explain how to increase the threshold correctly so that we stop seeing these messages in the near future. Regards,

I have looked for solutions, but I have mostly found results about comparing current and past time ranges, which is not what I need. I have a query that bins _time into 24h spans over the previous 7 days and calculates a numeric value for each span. What I need is to compare each day's value to the rest of the week's and find any period (so 48h) where the number jumped significantly.

An example of something similar to my code:

index=sandwiches saved_search_name="yum" earliest=-7d
| bin span=24h _time
| search sandwich_type="PB&J"
| stats count by total_bread_type _time
| stats sum(total_bread_type) as bread by _time
| eval bread = round(bread / 10000, 2)

Currently the results look like this:

_time              bread
2021-12-22 18:00   22
2021-12-23 18:00   23
2021-12-24 18:00   21
2021-12-25 18:00   47
2021-12-26 18:00   48
2021-12-27 18:00   46
2021-12-28 18:00   47

Basically, I am looking to compare the 'bread' values across _time, figure out if/where there is a jump of 10 or more, and return that data. Any insight would be appreciated. Thanks!

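A minimal sketch of the day-over-day comparison, appended to the existing pipeline, using streamstats to pull in the previous day's value (the threshold of 10 comes from the post; field names are reused from it):

| sort 0 _time
| streamstats current=f window=1 last(bread) as prev_bread
| eval jump = bread - prev_bread
| where jump >= 10

On the sample data this would return the 2021-12-25 row (21 -> 47). Comparing each day against a whole-week baseline instead could be done with eventstats, e.g. | eventstats median(bread) as weekly_median | where bread - weekly_median >= 10.
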
How do I pair events 4778 and 4779 for the same Logon_ID when I have multiple 4778s and multiple 4779s? I would like to pair the first 4779 event (disconnect) with the first 4778 event (reconnect), then do the same for the second 4779 event with the second 4778 event, and so on.

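A minimal sketch using transaction, assuming Windows Security events with EventCode and Logon_ID already extracted, and that each 4779 (disconnect) is followed by its matching 4778 (reconnect) for the same Logon_ID (the index name is a placeholder):

index=wineventlog (EventCode=4778 OR EventCode=4779)
| transaction Logon_ID startswith="EventCode=4779" endswith="EventCode=4778" maxevents=2

Each resulting transaction holds one disconnect/reconnect pair, and the duration field gives the time between them. If transaction is too heavy, numbering the 4779s and 4778s per Logon_ID with streamstats and joining on that sequence number achieves the same first-with-first pairing.
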