All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


| savedsearch cbp_inc_base | eval _time=strftime(opened_time, "%Y/%m/%d") | bin _time span=1d

Here _time is returning the complete data set; I want to filter it to one month, i.e. the last 30 days. I tried relative_time, but it only gives results for a specific day.
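A minimal sketch of one way to restrict this to the last 30 days, assuming opened_time is an epoch timestamp produced by the saved search (the saved-search name comes from the question); keeping _time numeric instead of a strftime string lets where and bin operate on it:

| savedsearch cbp_inc_base
| eval _time=opened_time
| where _time >= relative_time(now(), "-30d@d")
| bin _time span=1d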
I have syslog-pushed events which behave... weirdly around the end of the year. As we all know, there might be some delay between the source emitting an event and the HF receiving it (add to that possible slight clock drift and the time needed to pass the event through the various stages of my syslog environment - it can accumulate to several seconds). So if I have an event which is sent with a timestamp slightly before midnight on Dec 31st (from the source's point of view) and it gets received by the HF just after midnight on Jan 1st, and the event itself doesn't contain the year, we get a very uncomfortable situation. The parts of the date that the input can make out from the event are filled in correctly, so we do get the "Dec 31st" part. But since there is no year in the syslog header, the HF fills in the year from the current (again, from its point of view) date. So I end up getting events indexed at "Dec 31st 2022". Is there some setting I'm missing that could prevent that? (I can't modify the timestamp format at the source.)
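Two levers that do exist in props.conf on the parsing tier are the timestamp sanity checks MAX_DAYS_AGO / MAX_DAYS_HENCE and DATETIME_CONFIG for a custom datetime.xml with its own year-inference rules. Whether either one catches this particular year-rollover case is not confirmed, so treat this as a hedged sketch to test; the sourcetype name and file path are placeholders:

# props.conf on the heavy forwarder, for the syslog sourcetype (placeholder name)
[my_syslog_sourcetype]
MAX_DAYS_AGO = 7
MAX_DAYS_HENCE = 1
# or point at a customised datetime.xml (path is an example):
# DATETIME_CONFIG = /etc/apps/my_app/default/datetime.xml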
In Dashboard Studio I want to use dynamic dropdowns. The data source is a metrics search like: | mcatalog values("extracted_host") AS devices WHERE index=spec_metric extracted_host="*PI*" | sort devices | table devices. In a normal search I get all devices, but in the dashboard my dropdown is empty. Is the problem that this is metrics data? Does anybody have a solution? Thanks in advance.
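A hedged sketch of a dropdown data source that returns one row per device, in case the multivalue result of values() is what the dropdown cannot handle (index and host filter are taken from the question; the label/value field names are an assumption about how the dropdown is mapped):

| mcatalog values("extracted_host") AS devices WHERE index=spec_metric extracted_host="*PI*"
| mvexpand devices
| sort devices
| rename devices AS label
| eval value=label
| table label value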
Error: java.lang.RuntimeException: java.lang.RuntimeException: java.sql.SQLException: Missing defines
We need to run a profiler on Windows IIS and need to track PHP, MySQL, and Redis. Could you please check and provide more information? If it is suitable, we will go with the premium plan.
Hi Community, Is there a way to get specific data out of your log strings and put it in tabular format? We have logs like: activity xxxx failed for account yyyy and for user zzzz. We need xxxx, yyyy, and zzzz as fields in tabular format for our alerts. Any help is appreciated! Thanking you in anticipation!
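A minimal sketch using rex, assuming the index name and the exact log phrasing shown in the question (adjust the pattern to the real messages):

index=my_index "failed for account"
| rex field=_raw "activity (?<activity>\S+) failed for account (?<account>\S+) and for user (?<user>\S+)"
| table activity account user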
Hello guys, Splunk newbie here. Hope someone can assist with my case: index=*_whatever is expected to be filled with data on a monthly basis. I want to create a dashboard that tracks which indexes are filled and which are not, so I can keep track of which ones are empty and which ones have data. Thank you!!
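A hedged sketch of a panel search that counts events per index over the last month with tstats (the index wildcard comes from the question); note that indexes which received no data at all will simply not appear, so comparing against the expected index list may still be needed:

| tstats count WHERE index=*_whatever earliest=-30d@d BY index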
Hello! Can someone tell me the difference between /services and /servicesNS when using the Splunk REST API, please?
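For illustration, the same saved-searches endpoint reached both ways; /servicesNS adds a user and app namespace to the path, which controls which knowledge objects are visible. Hostname and credentials here are placeholders:

curl -k -u admin:changeme https://localhost:8089/services/saved/searches
curl -k -u admin:changeme https://localhost:8089/servicesNS/admin/search/saved/searches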
I noticed in our environment that, from many UFs, the internal logs were being indexed under a different index name. After investigation, I found it is related to some settings in transforms.conf:

[test_windows_index]
REGEX = .*
DEST_KEY = _MetaData:Index
FORMAT = rexall_windows

In props.conf, for certain hosts, there are settings like:

[host::testserver1]
TRANSFORMS-Microsoft_AD_1 = test_windows_index, Routing_testCloud

I believe I should try to exclude indexes like _internal and _audit, so I changed REGEX = .* to REGEX = [a-zA-Z0-9]+, but it doesn't seem to work. I would appreciate it if somebody here could help or provide suggestions.
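One possible direction, sketched under the assumption that the events you actually want to re-route can be identified by sourcetype: key the transform off the sourcetype metadata instead of matching every event from the host, so internal sourcetypes (splunkd, audittrail, etc.) are left alone. The sourcetype names in the regex are examples, not confirmed values from your environment:

# transforms.conf - route only the intended sourcetypes instead of every event from the host
[test_windows_index]
SOURCE_KEY = MetaData:Sourcetype
REGEX = ^sourcetype::(WinEventLog|XmlWinEventLog)
DEST_KEY = _MetaData:Index
FORMAT = rexall_windows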
Hi All, I have a query that returns the list of filesystems and their respective disk usage details, as below:

File_System   Total in GB   Used in GB   Available in GB   Disk_Usage in %
/var          10            9.2          0.8               92
/opt          10            8.1          1.9               81
/logs         10            8.7          1.3               87
/apps         10            8.4          1.6               84
/pcvs         10            9.4          0.6               94

I need to create a dropdown with the disk usage values so the table above can be filtered by a range of values. For example, if I select 80 in the dropdown it should show the rows with disk usage in the range 80-84; if I select 85 it should show the range 85-89, and so on. I created the dropdown with the token "DU" and the search query for the table is:

.... | search Disk_Usage=$DU$ | table File_System,Total,Used,Available,Disk_Usage | rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"

But with this query I can only get the table for a single disk usage value. Please help me create a query so that selecting an option in the dropdown returns the table for a range of disk usage values.
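A minimal sketch, assuming the dropdown values are the lower bounds of 5-point buckets (80, 85, 90, ...) and that Disk_Usage is numeric; the token is substituted literally into the search, so the range arithmetic can live in a where clause (the leading .... is the unchanged base search from the question):

.... | where Disk_Usage >= $DU$ AND Disk_Usage < ($DU$ + 5)
| table File_System, Total, Used, Available, Disk_Usage
| rename Total as "Total in GB" Used as "Used in GB" Available as "Available in GB" Disk_Usage as "Disk_Usage in %"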
Hi all, I'm trying to find the specific queries for the SH to create a Splunk dashboard showing the following info (example):

USER   PID   %MEM   %CPU
root   40    5.6    0.4
root   12    4.2    0.2

I have the index named "stats" and sourcetype "top". I need the dashboard to show the info from just the last 5 minutes. Any idea? Thanks, Max
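A hedged sketch for the panel search, assuming the pctMEM and pctCPU field names that the Splunk Add-on for Unix and Linux typically extracts from top output (verify the actual field names in your events):

index=stats sourcetype=top earliest=-5m
| stats latest(pctMEM) AS "%MEM" latest(pctCPU) AS "%CPU" BY USER PID
| sort - "%MEM"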
Hello, I'm attempting to use the regex command to filter out any records where the "user" field does not match the expression below:

index=myindex sourcetype=traceability_log4net earliest=-10d | regex _raw="user=[a-z]{3,6}[a-z1-9]{1,2}" | table user

Per the expression, there should not be any "user" values that exceed 8 characters. However, the non-matching values are not being filtered out. "sstevenson6111", for example, should theoretically be listed as "sstevens" per the expression. I'm sure there's something blindingly obvious that I'm missing!
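Worth noting: regex only keeps or drops whole events, it never trims field values, and without anchors the pattern happily matches a substring of a longer username. A hedged sketch that matches against the extracted user field and anchors the pattern (index, sourcetype, and field name are from the question):

index=myindex sourcetype=traceability_log4net earliest=-10d
| regex user="^[a-z]{3,6}[a-z1-9]{1,2}$"
| table user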
Log4J Query:

index=* | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)" | eval action=coalesce(action_taken, elb_status_code, status) | where NOT (cidrmatch("192.168.0.0/16",src_ip) OR cidrmatch("10.0.0.0/8",src_ip) OR cidrmatch("172.16.0.0/12",src_ip)) OR Country="United States" | iplocation src_ip | eval notNULL="" | fillnull value="unknown" notNULL, src_ip, dest_ip, action, url, Country | stats count by src_ip, Country, dest_ip, url, action, sourcetype | sort - count

This checks anywhere there is a sign of the Log4J exploit being used. I've done field extraction on any sourcetypes returned by my previous query:

index=* | regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)" | stats count as "exploit attempts" by sourcetype | sort - "exploit attempts"

I extracted fields so that I can get a table with src_ip, Country, dest_ip, url, action, sourcetype, and count. I then want to use this query in subsequent queries to find out whether the exploit was successful and whether any other communication follows.

The query works and I get results like this (fake results):

src_ip           Country   dest_ip         url              action    sourcetype   count
248.216.243.59   Unknown   192.168.1.148   192.168.1.148/   blocked   firewall     3
207.191.80.208   US        192.168.1.216   192.168.1.216/   allowed   firewall     2

The problem: the query runs really slowly after a few minutes. It starts out processing millions of events every few seconds and slows down to thousands every few seconds.

Some info from the logs:
- command.search, command.search.kv, and dispatch.stream.remote take up most of the run time.
- I'm getting warnings in search.log like "Max bucket size is larger than the index size limit" and "Invalid field alias specification in stanza", but these don't seem to be the cause.
- Using high_perf and Fast Mode.

If there is any more information I can add, feel free to ask and I will edit.
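One common way to speed this up, sketched with assumed index names: replace index=* with the handful of indexes that can actually carry the exploit string and bound the time range inline, so the expensive regex runs over far fewer events. The index names and window below are placeholders; the rest of the pipeline is unchanged:

(index=web OR index=proxy OR index=firewall) earliest=-24h
| regex _raw="(\$|%24)(\{|%7B)([^jJ]*[jJ])([^nN]*[nN])([^dD]*[dD])([^iI]*[iI])(:|%3A|\$|%24|}|%7D)"
| eval action=coalesce(action_taken, elb_status_code, status)
| ...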
Hi all, how can I set up the Universal Forwarder to run a script every 5 minutes on a cron schedule? The output of the script should show up when searching from the Search Head. Thanks in advance, Max.
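A minimal sketch of a scripted input in inputs.conf on the UF; the app name, script path, index, and sourcetype are placeholders, and interval accepts either a number of seconds or a cron expression:

# $SPLUNK_HOME/etc/apps/my_app/local/inputs.conf on the UF
[script://$SPLUNK_HOME/etc/apps/my_app/bin/my_script.sh]
interval = */5 * * * *
index = main
sourcetype = my_script_output
disabled = 0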
Need help with Enterprise Security. Is there a way to create a standard TAXII parser that can support correlation searches across logs coming from the Maritime Transportation System ISAC and logs coming from Stash? I'm new to ES and have no idea what it's all about. See the issue below, if it helps. Please advise on what needs to be done. Thanks.

"A shipping company that gets intelligence feeds/reports from MTS-ISAC (Maritime Transportation System ISAC). The MTS-ISAC provides proactive cyber threat intelligence, alerts, warnings, and vulnerability information cultivated from maritime stakeholders, public and private sector shares, open-source intelligence, and cybersecurity news. So it's just a matter of parsing that information so Matson can do correlation searches (correlate it with the logs) that are currently coming from Stash."
Hi Team, our Splunk instance is hosted in the cloud and maintained by Splunk Support. Recently we got an email from Splunk Support stating that our Universal Forwarder & Associated Certificate Package has been upgraded to the latest version, since the current one expires in a couple of days. They asked us to download the UF package from the Search Head, install it, and roll it out to all our client machines, because they plan to upgrade the package at the indexer level in a couple of days.

Our architecture is 1 Deployment master server and 4 HF servers; the Search Heads, Cluster Master, Indexers, etc. are managed by Splunk Support. We usually push customized apps as well as forwarder apps from our Deployment master server to all client machines, and all our Splunk servers (DM & HF) run Linux.

https://docs.splunk.com/Documentation/Forwarder/8.2.4/Forwarder/ConfigSCUFCredentials#Install_the_forwarder_credentials_on_a_deployment_server

As per the documentation, I downloaded the "splunkclouduf.spl" credentials package from our Search Head, placed it in /opt/splunk/etc/deployment-apps on our DM server, and untarred it, which produced a new folder "100_xxxx_splunkcloud".

The next step is to install the credentials package, and it asks for the path of splunkclouduf.spl. Which path should I choose to install it?
/opt/splunk/etc/deployment-apps/splunkclouduf.spl (OR) /opt/splunk/etc/deployment-apps/100_xxxx_splunkcloud
I am not quite sure, hence I am stuck here and have not installed the credentials yet, so kindly help check and advise. Post installation of the credentials package, it says to restart the Splunk instance on the DM server.

After installation on my DM server, how do I push the package to all client machines? Do I need to edit the existing forwarder outputs app (which is pushed to all client machines and HFs)? We already have an app "forwarder_outputs" pushed to all client machines; it has local and metadata folders. The local folder contains limits.conf, outputs.conf, xxx_cacert.pem & xxx_server.pem, and the metadata folder contains local.meta. Which files do I need to modify after installing the credentials package on the DM server, and how do I push them to all client machines so that the UF package runs with the latest version? Kindly help with my request.
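Not an answer on which path is correct, but for the "how do I push it" part: since the .spl has already been untarred into deployment-apps as 100_xxxx_splunkcloud, a hedged sketch of a serverclass.conf stanza on the DM server that would deploy that app to clients (the server class name and whitelist are placeholders; the app folder name comes from the question):

# $SPLUNK_HOME/etc/system/local/serverclass.conf on the DM server
[serverClass:splunkcloud_uf_creds]
whitelist.0 = *

[serverClass:splunkcloud_uf_creds:app:100_xxxx_splunkcloud]
restartSplunkd = true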
I assume that I need to install Splunk Enterprise Security. 1. Is my assumption correct? 2. It says "Contact Sales" when I try to download the Enterprise Security app. I have a developer license. Thanks in advance.
Hello, I want to feed data directly into Excel, but I do not have API access, nor can I install custom connectors. Is there any other solution? I seem to be able to run a search like the one below; would that work directly in Excel with MS authentication?

| rest splunk_server=local servicesNS/-/-/data/ui/views/

Thanks!
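For reference, if the management port (8089) were reachable, Excel's "From Web" query could pull CSV straight from the search export endpoint; this sketch assumes that reachability and basic authentication, which may be exactly what is unavailable here (hostname and search are placeholders):

https://your-splunk-host:8089/services/search/jobs/export?search=search%20index%3Dmain%20%7C%20head%2010&output_mode=csv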
Hello, I am monitoring a CSV file using a universal forwarder, and the first column in the CSV file is Last_Updated_Date. The file is indexed based on this field (_time = Last_Updated_Date). The file also has a column called Created_Date. In my search I want to use Created_Date as _time to filter the data; the search I have written is:

index="tickets" host="host_1"
| foreach * [ eval newFieldName=replace("<<FIELD>>", "\s+", "_"), {newFieldName}='<<FIELD>>' ]
| fields - "* *", newFieldName
| eval _time=strptime(Created_Date, "%Y-%m-%d %H:%M:%S")
| sort 0 -_time
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| dedup ID
| where Status!="Closed"
| eval min_time=strftime(info_min_time, "%Y-%m-%d %H:%M:%S")
| eval max_time=strftime(info_max_time, "%Y-%m-%d %H:%M:%S")
| eval index_time=strftime(_indextime, "%Y-%m-%d %H:%M:%S")
| rename Created_Date as Created, Last_Updated_Date as "Last Updated"
| table ID Type Created "Last Updated" _time min_time info_min_time max_time info_max_time index_time
| sort 0 Created

When I run this search for a period, say 1st Feb 2021 - 31st Jul 2021, it gives the results shown in the first screenshot. When I check it for a longer period, say All Time, it gives the results shown in the second screenshot. There are many open tickets created between Feb and Jul, not just the two shown in the first screenshot, so it seems the time picker is still using Last_Updated_Date to filter the events rather than Created_Date. Can you please suggest how I can fix this? Thank you.
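A hedged sketch of the usual workaround: the time picker bounds the base search on the indexed _time (Last_Updated_Date) before the eval ever runs, so events outside that range never reach the later filters. Widening the base search with inline time modifiers and filtering on Created_Date explicitly avoids that; the dates below are the example period from the question, and in a dashboard they could come from time-picker tokens instead:

index="tickets" host="host_1" earliest=1 latest=now
| eval _time=strptime(Created_Date, "%Y-%m-%d %H:%M:%S")
| where _time>=strptime("2021-02-01", "%Y-%m-%d") AND _time<strptime("2021-08-01", "%Y-%m-%d")
| dedup ID
| where Status!="Closed"
| table ID Type Created_Date Last_Updated_Date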