All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

How do I remove values from fields highlighted in red? index=main | eval description=case(status == 200, "OK", status == 404, "Not found", status == 500, "Internal Server Error", status == 503, "Service Unavailable", status == 406, "Not Acceptable", status == 400, "Bad Request", status == 408, "Request Timeout", status == 505, "HTTP Version Not Supported", status == 403, "Forbidden") | table _time status description | where isnotnull(status) | dedup status
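A common cause of red-highlighted (null) description cells is a status code that matches none of the case() branches. Giving case() a catch-all final branch avoids that; this is a sketch reusing the search above (the "Unknown status" label is my own placeholder):

```spl
index=main
| eval description=case(status == 200, "OK", status == 404, "Not found",
    status == 500, "Internal Server Error", status == 503, "Service Unavailable",
    status == 406, "Not Acceptable", status == 400, "Bad Request",
    status == 408, "Request Timeout", status == 505, "HTTP Version Not Supported",
    status == 403, "Forbidden",
    true(), "Unknown status")
| where isnotnull(status)
| dedup status
| table _time status description
```

Alternatively, to drop the unmatched rows instead of labeling them, add `| where isnotnull(description)` after the eval.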
I have configured an alert to notify Microsoft Teams when the CPU threshold reaches 90%. The alert fires when it reaches 90%, and then the CPU usage comes back down to 80% within 5 minutes. Is there any setting I can configure to recheck the CPU usage after the first alert is raised, and send another alert saying everything is OK now?
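Splunk has no built-in "recovery" notification, but a common pattern is a second scheduled alert that fires when CPU has dropped back below the threshold. A minimal sketch, assuming Windows perfmon data (the index, sourcetype, counter, and field names here are placeholders to adjust to your environment):

```spl
index=perfmon sourcetype="Perfmon:CPU" counter="% Processor Time" earliest=-5m
| stats avg(Value) as avg_cpu by host
| where avg_cpu < 90
```

Schedule this every 5 minutes with the same Teams action, and throttle it per host so the "everything is OK" message is sent only once after an incident rather than every run.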
Hello, I have a line chart with multiple series in my dashboard. The series names are quite long, so by default they get cut off in the legend. Is there any way to display the full series name when hovering over a series in the legend? I know I can hover over the chart itself and the full series name will appear, but I would like the same effect (full series name) when moving my mouse over the legend. How would I do this? Kind regards, Kamil
I have installed V2.02 of the app and configured manual performance metrics inputs for Windows hosts that already have a UF installed. The problem is that the Overview dashboard panels are not working. | inputlookup em_entities returns results for my hosts, but I notice that the metric_name fields are all lowercase, while the dashboard searches look for metric names that are not all lowercase: avg(Processor.%_Privileged_Time). If I change the metric names in the search to all lowercase, the searches run without issues. The metrics index documentation states that you can only use lowercase in metric names. Am I missing something when creating the manual inputs? Field aliases do not seem to work either, and I also can't find where to edit the dashboards to change the metric names. Any suggestions welcome. Short of recreating all the dashboards myself, I'm out of ideas. Thanks, Pieter
Hi. It seems Microsoft has exposed the audit log for Azure DevOps: https://docs.microsoft.com/en-us/rest/api/azure/devops/audit/audit%20log/query?view=azure-devops-rest-5.1 Has anyone tried to index this log, and if so, how did you do it? Kind regards, las
Hi Team, We had been using Splunk Enterprise for the last few years, and in May 2019 we migrated all the data from every index from Splunk Enterprise to Splunk Cloud (i.e., we reconfigured the data inputs in Splunk Cloud). My question: we only started migrating data in May 2019, but if we search for 2015, 2016, 2017 and so on, we can see events in Splunk Cloud for a few of the indexes. The default retention is 90 days, yet it holds data that is very old; we can even see data from 2013. How does the bucketing system work in Splunk Cloud? At the same time, data indexed after the migration stops being searchable after 90 days, as per the retention policy, so how can the much older data still be there? Is there an architecture diagram explaining this mechanism or how it works? Kindly help.
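For context on why old events can outlive the retention period: Splunk retention operates on whole buckets, not individual events. A bucket is only frozen (deleted, by default) once its newest event is older than frozenTimePeriodInSecs, so a bucket that mixes 2013 events with recent ones survives until the newest event in it ages out. In Splunk Enterprise the setting looks like the sketch below (the index name is a placeholder); in Splunk Cloud the same mechanism applies, but you change retention through the UI or Support rather than editing indexes.conf directly:

```
# indexes.conf (Splunk Enterprise; illustrative only)
[my_index]
# Buckets freeze only when their *newest* event exceeds this age.
frozenTimePeriodInSecs = 7776000   # 90 days
```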
After importing DB Connect data, the data is kept for 5 days even though the retention of the index is one day. Does Splunk Cloud keep data longer than the retention period? If so, is there a way to keep it for one day only?
Hi Team, We are using Splunk Enterprise in an AWS environment. A while back, a CloudTrail app was configured on it. Logs are pushed directly to the Splunk indexer through an S3 bucket, based on the inputs configured in the CloudTrail app. Since this app version is old, there is no option to configure the inputs through the GUI; we make changes through the inputs.conf file itself. I have to block the Decrypt logs (.gz) from getting indexed into Splunk. Please suggest a workaround. Also, let us know whether this CloudTrail app has to be upgraded for this, and what the latest version of it is.
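If the goal is to drop the CloudTrail Decrypt events at index time, the standard Splunk approach is a nullQueue transform on the indexer or heavy forwarder. A sketch, assuming the sourcetype is aws:cloudtrail and that Decrypt events carry an eventName field in the raw JSON (adjust both to what your input actually produces):

```
# props.conf (indexer / heavy forwarder)
[aws:cloudtrail]
TRANSFORMS-drop_decrypt = drop_decrypt_events

# transforms.conf
[drop_decrypt_events]
# Route matching raw events to the null queue so they are never indexed
REGEX = "eventName":\s*"Decrypt"
DEST_KEY = queue
FORMAT = nullQueue
```

Note this filters events, not files; skipping whole .gz objects in S3 would instead be done in the input's own allow/deny settings, which vary by add-on version.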
Hi Splunk Team! I recently found filed "dvc_host" in paloalto add-on has no data. I need to get back to that field data Thanks All
I'm doing a new install with 8.0.1 and want to install the version of the Splunk App for Unix and Linux that is compatible with 8.0.1 to collect data. I have HF, SH, Idx and Deployment servers. The documentation doesn't mention 8.0. Does anyone know if it will work?
Hi, I have a scheduled search in Splunk with the following link in the description field [1], and I would like the 'earliest=' part of the URL to match the actual event time AND the 'latest=' part of the URL to be 5 minutes after the event time. Raw scheduled search link: [1] https://splunkserver.blah/en-US/app/search/search?q=$search$&earliest=$trigger_time$&latest=$trigger_time$ Example scenario: Event time: 2/10/20 8:15:13.000 AM Search query: index=windows EventCode=4624 LogonType=3 User=john.smith When the alert triggers, the above scheduled search link turns into something like this: [2] https://splunkserver.blah/en-US/app/search/search?q=index=windows EventCode=4624 LogonType=3 User=john.smith&earliest=1581282963.14079&latest=1581282963.14079 When I open link [2], I get the error 'Invalid latest_time: latest_time must be after earliest_time.' The epoch time captured is the time when the alert triggered. Does anyone know how to capture the actual event time?
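$trigger_time$ resolves to when the alert fired, not when the event occurred. One common workaround is to compute the desired window inside the search itself and reference it with $result.<field>$ tokens, which alert actions generally support (they resolve from the first result row). A sketch, where link_earliest and link_latest are field names I made up:

```spl
index=windows EventCode=4624 LogonType=3 User=john.smith
| eval link_earliest=_time, link_latest=_time+300
```

The link would then use ...&earliest=$result.link_earliest$&latest=$result.link_latest$ so that earliest matches the event time and latest is 5 minutes (300 seconds) after it.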
Hello. After upgrading from 7.3 to 8.0.1, my search heads no longer work: the search head web page will not load. Any ideas? If I restore from a backup back to 7.3, all of my real-time indexed data is there, still working, and I can search it all. If there are no ideas: is there a way to keep the config but apply it to a new search head, effectively rebuilding the search head from scratch on an 8.0.1 build? Either will suffice. Many thanks in advance.
Hi, we just ran a scan on a network, and Tenable found some vulnerabilities for one particular machine (IPv4). Say 10 vulnerabilities were discovered in the Tenable app, but when I checked Splunk, I could only see 8; 2 events (vulnerabilities) were missing in Splunk for the same machine. We have the Tenable App for Splunk installed on our Splunk search head. Is this a truncation issue? Below is the config from transforms.conf:
[tenable:nnm:vuln]
DATETIME_CONFIG = CURRENT
EVAL-vendor_product = "Tenable xxx"
EVAL-product = "xxx"
EVAL-vendor = "Tenable"
TRUNCATE = 68000000
SHOULD_LINEMERGE = 0
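One thing worth checking first: TRUNCATE, SHOULD_LINEMERGE, DATETIME_CONFIG and EVAL- are all props.conf settings, not transforms.conf settings, so in transforms.conf they are silently ignored. The parsing-related ones also only take effect on the indexer or heavy forwarder at parse time, not on the search head. A sketch of where they would normally live, assuming the same sourcetype:

```
# props.conf on the indexer / heavy forwarder (not the search head)
[tenable:nnm:vuln]
SHOULD_LINEMERGE = false
TRUNCATE = 68000000
DATETIME_CONFIG = CURRENT

# props.conf on the search head (search-time evals can stay here)
[tenable:nnm:vuln]
EVAL-vendor_product = "Tenable xxx"
EVAL-product = "xxx"
EVAL-vendor = "Tenable"
```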
Hello, I have Splunk 8.0.1 installed on Ubuntu 18.04.4 LTS. I can connect to port 8000 from the same server with any URL (localhost, 127.0.0.1, server name, server IP address). I can also see the login page if I use SSH tunneling from a remote host with a redirect to localhost:8000. But I cannot connect from a remote host by entering any valid URL in the browser; the connection times out. I have no firewall on my server. All Splunk services are running and all service ports are listening. I can see incoming packets with tcpdump, but no replies. I can connect to other services (SSH and Apache, for example) on the server. There are no errors in the log files, and no events for incoming connections in web_access.log. What else should I check? Best regards, Cyril
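If tcpdump shows the incoming SYNs but web_access.log never records a connection, the packets are likely being dropped before they reach splunkweb. Two things worth verifying: which address Splunk Web is bound to, and whether a non-iptables packet filter (ufw, nftables) is active, since tcpdump captures packets before netfilter drops them. The bind address lives in web.conf; a sketch of the relevant stanza:

```
# web.conf ($SPLUNK_HOME/etc/system/local/)
[settings]
httpport = 8000
# Default is 0.0.0.0 (all interfaces); a value like 127.0.0.1 here
# would produce exactly this symptom for remote clients.
server.socket_host = 0.0.0.0
```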
Hi guys, I'm having trouble with a simple subtraction (well, I thought it would be simple!). Field1 is a number in string format, Field2 is a count of events. What am I doing wrong? index=index_name | convert num(Field1) as Field1Total | stats count(Field2) as Field2Total | eval Difference=Field2Total - Field1Total | table Difference Thanks for your help!
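The likely problem is that stats discards every field it does not aggregate, so Field1Total no longer exists after the stats command and the eval produces null. One fix is to aggregate Field1Total as well; a sketch (latest() is an assumption about which Field1 value is wanted, and could be max(), values(), etc.):

```spl
index=index_name
| eval Field1Total=tonumber(Field1)
| stats count(Field2) as Field2Total, latest(Field1Total) as Field1Total
| eval Difference=Field2Total - Field1Total
| table Difference
```

An alternative is eventstats, which adds the aggregate as a field without collapsing the events.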
The log sources push logs through SFTP, but they are not readable; the logs appear to be in encrypted form when received by the forwarder. How can I change the logs to a readable form when they are received by the forwarder? Suggestions appreciated.
On a fresh Splunk Enterprise install, I cannot log in to the web GUI. When I get the password wrong, I am told it is wrong. When I get the password right, it just reloads the login page. Here are the symptoms: - Fresh install of Splunk 8.0.1 (I have also tried 8.0.0 and 7.3.4). - During startup, I set a strong password (another post mentioned that 7.1+ enforces strong passwords, so I tried this). - No errors on install. - From the web GUI (localhost:8000), when I try my username and password, the login screen simply reloads. When I try a known incorrect password, I get a "Login Failed" notification. So I am led to believe that the password I declared during install is actually recognized, since I am not told the login failed when I enter the correct password, but I can't get past the login page. Here's what I have tried so far: - Uninstalled and reinstalled Splunk Enterprise 8.0.1, 8.0.0, and 7.3.4. - Tried weak and strong passwords. - Tried the default admin / changeme combo. Again, when I get the password wrong, I am told it is wrong; when I get it right, it just reloads the login page. - Tried to reset the admin password by renaming /etc/passwd to /etc/passwd.bak and then creating a new file called user-seed.conf in /etc/system/local with [user_info] PASSWORD = "my strong password". - Also tried to reset the password with "splunk cmd splunkd rest -noauth POST /services/admin/users/admin "password=changeme"". I am running on Windows 10. Of note, I have run Splunk 8.0.0 on this laptop before. I had the 60-day trial and switched to the free license group, but when I did that I had license violations left over from the trial. Since I was not able to search because of the violations, I reinstalled from scratch (I didn't have much data ingested, so I didn't mind), but that is when I stopped being able to log in, and I have had this issue ever since. When I uninstall Splunk, I verify that there are no files left in C:\Program Files (i.e. no Splunk directory anymore). Has anyone seen this issue before? Anyone know a way past it?
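One detail that often makes user-seed.conf appear to do nothing is a missing USERNAME key; the seed is only applied on first start, when $SPLUNK_HOME/etc/passwd is absent and the stanza is complete. A sketch of the full stanza (run with Splunk stopped and passwd renamed away, then start Splunk):

```
# %SPLUNK_HOME%\etc\system\local\user-seed.conf
[user_info]
USERNAME = admin
PASSWORD = YourNewStrongPassword
```

Separately, since the correct password is clearly being accepted, a silent reload can also be a browser session/cookie problem; trying a private window or clearing cookies for localhost:8000 is a cheap check.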
I have a single indexer and a single search head, with the indexer attached as a search peer, and I created one index called "winevent" on the indexer. I don't understand why the search head cannot see this index or autocomplete it when I type it in search. Is there another file I need to modify to make my search head aware of the indexes on an indexer? I haven't seen a clear answer on this, and I am trying to expand my Splunk instance from all-in-one to a distributed architecture.
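For what it's worth, no extra file is needed for the search head to search remote indexes: once the indexer is a search peer, searching index=winevent should return events even though typeahead may not autocomplete the index name, because the index is not defined locally on the search head. The peer relationship itself lives in distsearch.conf; a sketch with a placeholder address:

```
# distsearch.conf on the search head
[distributedSearch]
servers = https://10.0.0.5:8089   # placeholder; the indexer's management port
```

A quick test is to run index=winevent over All Time and open the Job inspector to confirm the peer was actually searched.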
| makeresults | eval _raw="Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"global\", \"origin\": \"dynstats\", \"values\": { } } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"imuxsock\", \"origin\": \"imuxsock\", \"submitted\": 0, \"ratelimit.discarded\": 0, \"ratelimit.numratelimiters\": 0 } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"action 0\", \"origin\": \"core.action\", \"processed\": 50996, \"failed\": 0, \"suspended\": 0, \"suspended.duration\": 0, \"resumed\": 0 } Nov 14 03:23:42 hostname rsyslogd-pstats:{ \"name\": \"action 1\", \"origin\": \"core.action\", \"processed\": 50996, \"failed\": 0, \"suspended\": 0, \"suspended.duration\": 0, \"resumed\": 0 }" | makemv delim=" " _raw | stats count by _raw | rex "(?<json>{.*)" | spath input=json This query works fine. If I want to extract this with props.conf, what settings do I need? TIME_FORMAT = %B %d %T KV_MODE = json LINE_BREAKER = ([\r\n]+) NO_BINARY_CHECK = true I created the above, but I don't know the other settings. If possible, I would like to avoid using SEDCMD. FIELD_HEADER_REGEX = ^.*?(?={) Is this it? cf. Extract fields from files with structured data
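Two notes on the settings above: KV_MODE=json only works when the entire event is JSON, so the syslog prefix defeats it, and FIELD_HEADER_REGEX applies to structured-file (INDEXED_EXTRACTIONS) inputs, not this case. One SEDCMD-free option is a search-time REPORT extraction that pulls the JSON part into a field, which spath can then read. A sketch (the sourcetype and stanza names are placeholders; note %b rather than %B, since syslog months like "Nov" are abbreviated):

```
# props.conf
[rsyslog:pstats]
TIME_FORMAT = %b %d %T
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
REPORT-pstats_json = extract_pstats_json

# transforms.conf
[extract_pstats_json]
# Capture the JSON object that follows the syslog prefix into a field named json
REGEX = (?<json>\{.*\})
```

At search time this reduces the query to sourcetype=rsyslog:pstats | spath input=json. Fully automatic JSON extraction without any search-time step would require stripping the prefix at index time instead.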
I am new to Splunk, and I need to perform arithmetic on some multivalue fields. What is the best way to do this? Here is an example of an event (where the "stuff" field is an array containing any number of key-value pairs with "A" and "B"): event1 { name: foo stuff: [ { A: 10 B: 220.0 } { A: 2 B: 50.0 } ] } event2 { name: foo stuff: [ { A: 2 B: 100.0 } ] } Here is the search I am using: <my search> | mvexpand stuff{} | rename stuff{}.* as * | eval test=B/A | table _time A B test However, test is empty whenever there is more than one "stuff" entry in my event. In the example above: test=null, null, 50 My goal is to calculate "test" so that: test=22, 25, 50
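The reason test goes empty is that mvexpand expands only one multivalue field at a time, so after expanding, stuff{}.A and stuff{}.B remain parallel multivalue fields that eval cannot divide pairwise. The usual pattern is mvzip to pair the values first, then expand the pairs; a sketch using the field names from the example:

```spl
<my search>
| eval pairs=mvzip('stuff{}.A', 'stuff{}.B')
| mvexpand pairs
| eval A=tonumber(mvindex(split(pairs, ","), 0)),
       B=tonumber(mvindex(split(pairs, ","), 1))
| eval test=B/A
| table _time A B test
```

For the sample events this should yield test values of 22, 25, and 50, matching the goal stated above.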