All Topics

I want to update my query to exclude Saturday and Sunday from the attached query, which runs over the last 30 days. Please suggest a solution. The query searches for hosts that generated event code 52 in the last 30 days.
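A minimal sketch of one way to exclude weekends, assuming the events carry the default date_wday field (the index and event-code field names here are placeholders for whatever the attached query uses):

```spl
index=wineventlog EventCode=52 earliest=-30d@d
| where NOT (date_wday="saturday" OR date_wday="sunday")
| stats count by host
```

If date_wday is not reliable in your data (it reflects the event's own timezone), derive the weekday from _time instead: | eval wday=strftime(_time, "%A") | where NOT (wday="Saturday" OR wday="Sunday").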
Today is 10/2/2020. I need to execute 6 searches using relative time for last month (earliest= & latest=) that are each 5 days in length. Specifically:

9/01/2020 00:00:00 - 9/05/2020 23:59:59
9/06/2020 00:00:00 - 9/10/2020 23:59:59
9/11/2020 00:00:00 - 9/15/2020 23:59:59
9/16/2020 00:00:00 - 9/20/2020 23:59:59
9/21/2020 00:00:00 - 9/25/2020 23:59:59
9/26/2020 00:00:00 - 9/30/2020 23:59:59

I'd love to use these exact times as earliest/latest, or even epoch times, but that won't work in my particular situation. How can I represent the 6 spans above in relative time?
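Assuming the search runs during October 2020, -1mon@mon snaps to 9/1/2020 00:00:00, and since latest is exclusive each span below covers through 23:59:59 of its last day (a sketch; the day arithmetic lands on these exact dates only because September has 30 days):

```spl
earliest=-1mon@mon       latest=-1mon@mon+5d     (9/01 - 9/05)
earliest=-1mon@mon+5d    latest=-1mon@mon+10d    (9/06 - 9/10)
earliest=-1mon@mon+10d   latest=-1mon@mon+15d    (9/11 - 9/15)
earliest=-1mon@mon+15d   latest=-1mon@mon+20d    (9/16 - 9/20)
earliest=-1mon@mon+20d   latest=-1mon@mon+25d    (9/21 - 9/25)
earliest=-1mon@mon+25d   latest=-1mon@mon+30d    (9/26 - 9/30)
```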
I am mostly new to Splunk but certainly the most well-versed member of my team. I was recently reprimanded by my company's Big Data leadership for "abusing the system" by running "25 queries with index=*" and told that I should know what indexes I am working within. Here's the deal: I never ran 25 queries with index=*. My team has at most four total indexes, so there wouldn't be much reason to. My question is this: since I created a dashboard for my team, is it possible someone else ran the queries through my dashboard (say, by inspecting a panel) and they registered to my user account? I currently have my group set to read-only permissions. Thoughts?
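One way to check what actually ran under your account is the audit trail (a sketch, assuming you can read the _audit index; substitute your own username):

```spl
index=_audit action=search info=granted user=<your_username>
| table _time user search
```

Note that interactive dashboard panels normally dispatch searches as the user viewing the dashboard; searches would typically register to your account only if the panels are backed by scheduled saved searches you own.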
I have a problem finding syslog from some Juniper devices in Splunk. I did a packet capture on the server and confirmed the syslog packets reached the server. It's a Windows Server 2012 R2 machine, and the port is UDP 514. Syslog from many other devices can be found, some of them the same model and firmware with identical syslog configurations. The Splunk version is 6.3.1. How can I troubleshoot?
Hello all, I have a distributed environment containing the following:

3 x search heads (1 captain)
4 x indexers, clustered
1 x dedicated Linux server for Splunk Stream (UF + TA add-on)
1 x deployment server
1 x SHCD
1 x CM

The problem I am having is that, for unknown reasons, the dedicated Splunk Stream server is now unable to ping the server hosting the Splunk Stream app. This was all working, but I fear I have made a config slip-up somewhere. The Splunk Stream TA is deployed to the dedicated Stream server from the deployment server and contains the following files: inputs.conf and streamfwdlog.conf.

inputs.conf:

[streamfwd://streamfwd]
splunk_stream_app_location = https://<SERVER_IP>:8000/en-us/app/splunk_app_stream/
stream_forwarder_id =
disabled = 0

I am able to successfully navigate to the Stream app location https://<SERVER_IP>:8000/en-us/app/splunk_app_stream/, but the streamfwd logs show the following error message:

stream.CaptureServer - Unable to ping server (d6e0ed72-789a-4044-95f7-7de95ddbb221): /en-us/app/splunk_app_stream/ping/ status=303

If I navigate to the same URL with "ping" appended, it returns a 404. If you require any other info, please let me know. Regards
Hi there, I have a table with 5 fields. Column E is a numeric value, and C is a sub-category of A. I want to sum E by column C AND column A.

A    B    C    D    E
a         x         30
a         y         20
a         x         40
b         y         10
b         x         40

If I do stats sum(E) by A, it gives the sum of the first three rows of E. If I do stats sum(E) by C, it gives output by x and by y. I want the output to be like:

a x 70 (70 is 30 + 40 in this case)

Hope I conveyed what I'm going for.
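The grouping described above is a stats call with two by-fields (field names as in the question):

```spl
... | stats sum(E) as total by A, C
```

This produces one row per distinct (A, C) pair, e.g. separate rows for a/x, a/y, b/x, and b/y.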
I have a scripted input that runs netstat -tupn, and the output shows:

tcp x.x.x.x:38314 x.x.x.x:7075 ESTABLISHED 4144/java
tcp x.x.x.x:22 x.x.x.x:62601 ESTABLISHED 5830/sshd:
tcp x.x.x.x:37032 x.x.x.x:8080 ESTABLISHED 4144/java
tcp x.x.x.x:59344 x.x.x.x:49302 ESTABLISHED 4144/java

In my props.conf I have:

[<sourcetype>]
BREAK_ONLY_BEFORE = (tcp)
SHOULD_LINEMERGE = false

The events are getting indexed, but I only see the first event (tcp x.x.x.x:38314 x.x.x.x:7075 ESTABLISHED 4144/java) and nothing else gets indexed. What am I missing?
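One thing to note: BREAK_ONLY_BEFORE only takes effect when SHOULD_LINEMERGE = true, so the two settings above conflict. With line merging disabled, event breaking is governed by LINE_BREAKER instead. A hedged sketch that starts a new event before each tcp line:

```ini
[<sourcetype>]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=tcp\s)
```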
Team, the search below identifies the source consuming the most license in our environment. Can we stop that source from being indexed?

index=_internal source="*license_usage.lo*" type=Usage s="/rootfs/var/log/journal/"
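Indexing can't be undone retroactively, but if the data passes through a parsing tier (heavy forwarder or indexer), a standard nullQueue route drops the source going forward (a sketch; the stanza pattern assumes the source path shown in the query). The cleaner fix, where possible, is removing or blacklisting the input in inputs.conf on the forwarder.

```ini
# props.conf
[source::/rootfs/var/log/journal/*]
TRANSFORMS-null = drop_journal

# transforms.conf
[drop_journal]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue
```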
Hello, I am having problems approaching this. Say we have a KV store that stores asset information from a few different sources. The KV store has about 20 unique fields; one of them is state. Basically, state is either (new) or (update), referencing how the record was placed in the KV store. I would like to add another potential value, (stale), if the record was last updated over 90 days ago.

At the same time, I now have some logic that goes through the source indexes and pulls out events that are older than 90 days. Right now that list is about 1,000. With that list I want to basically say: "For any event older than 90 days, take that hostname, match any identical hostname entry within the KV store, and update its state field with the value (stale)." Obviously, I would like to do that with the entire list of 1,000. I am hoping someone might have some pseudo code or best-practice tips, as I am having problems getting started.

Thanks, Joe
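A pseudo-code sketch of the stale-marking pass, assuming the collection's lookup definition is named asset_lookup and it tracks a last-updated epoch field (all names here are placeholders):

```spl
| inputlookup asset_lookup
| eval state=if(last_updated < relative_time(now(), "-90d@d"), "stale", state)
| outputlookup asset_lookup
```

Scheduled daily, this rewrites the whole collection, flipping state to (stale) for any record not updated in 90 days while passing the other ~20 fields through unchanged.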
Hey all, I've added the following to props.conf to parse out PRI from _raw, and the Severity/Facility codes from PRI.

props.conf:

[syslog]
EXTRACT-PRI = ^<(?P<PRI>\d+)
LOOKUP-syslog_facility = syslog_facility "Facility-code" AS "Facility-Code" OUTPUTNEW Facility AS Facility
EVAL-Facility-Code = (PRI - (PRI % 8)) / 8
EVAL-Severity-Code = PRI % 8
LOOKUP-syslog_severity = syslog_severity "Sev-code" AS "Severity-Code" OUTPUTNEW Severity AS Severity

Now, we'd like to drop events if the Severity-Code is not above a certain level (for example, we'd like to drop all debug messages at ingest). I know that first we will have to convert the Severity/Facility codes from EVAL to INGEST_EVAL values so that we can operate on them at ingest, but what is the best way to filter all messages with, say, Severity-Code>=6 (dropping info and debug)? I was thinking of applying another transform with a REGEX command that forwards to the nullQueue, but it doesn't look like there are any great boolean evaluators available in transforms. In search I could just use WHERE Severity_Code>6, but that's not available at ingest.
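Since INGEST_EVAL can assign to the queue key, one hedged sketch is to recompute the severity at ingest and route low-severity events to the nullQueue (the PRI regex mirrors the extraction above; this is untested pseudo-config, so verify it before relying on it):

```ini
# transforms.conf
[drop_low_severity]
INGEST_EVAL = queue=if(tonumber(replace(_raw, "(?s)^<(\d+)>.*", "\1")) % 8 >= 6, "nullQueue", queue)

# props.conf
[syslog]
TRANSFORMS-severity_filter = drop_low_severity
```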
I have a query which looks like:

index=test "TestRequest"
| dedup _time
| rex field=_raw "Price\":(?<price>.*?),"
| rex field=_raw REQUEST-ID=(?<REQID>.*?)\s
| rex field=_raw "Amount\":(?<amount>.*?),"
| rex field=_raw "ItemId\":\"(?<itemId>.*?)\"}"
| eval discount=round(exact(price-amount),2) , percent=(discount/price)*100 , time=strftime(_time, "%m-%d-%y %H:%M:%S")
| stats list(time) as Time list(itemId) as "Item" list(REQID) as X-REQUEST-ID list(price) as "Original Price" list(amount) as "Test Price" list(discount) as "Dollar Discount" list(percent) as "Percent Override" by _time
    [search index=test "UserId="
    | rex field=_raw UserId=(?<userId>.*?)#
    | dedup userId
    | rex field=_raw X-REQUEST-ID=(?<REQID>.*?)\s
    | stats list(userId) as "User ID" list(REQID) as X-REQUEST-ID by _time]
| where "Dollar Discount">=500.00 OR "Percent Override">=50.00
| table "User ID" Item "Original Price" "Dollar Discount" "Test Price" "Percent Override" Time

This query throws a type-mismatch error for "Dollar Discount">=500.00 OR "Percent Override">=50.00. Since the field names contain spaces and are written in double quotes, e.g. "Dollar Discount" or "Percent Override", the comparison doesn't work. If I rename these fields without quotes as Dollar_Discount and Percent_Override, it works fine. How do I reference a field whose name needs quoting in a where clause?
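In where (an eval expression), field names containing spaces are wrapped in single quotes; double quotes there denote string literals, which is why the comparison fails as written. So:

```spl
... | where 'Dollar Discount'>=500.00 OR 'Percent Override'>=50.00
```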
Guys, loving the built-in dark theme option in Splunk 8. I was wondering, though, whether it can be extended or modified. Say I want to change a font color: is there a CSS file I can edit? If so, is that a global CSS file or per app (i.e., when you select Dark Theme for an app/dashboard, does it copy a CSS file into the app folder or the like)? Beyond that, is there a reference for creating one's own themes?
Is there a heartbeat from the HF I can monitor and, if it is not detected, alert on?
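One common approach is to alert when the forwarder's own internal logs go quiet (a sketch; the host value is a placeholder for your HF's hostname):

```spl
index=_internal host=<hf_hostname> earliest=-15m
| stats count
| where count=0
```

Scheduled every 15 minutes, this fires when the HF has sent nothing to the indexers recently, which covers both "Splunk process down" and "network path down".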
I have a task to take a list of active indexes and create new configuration file entries in a merged file, using a bunch of other configuration files, taking note of bucket sizes and so on. Can anyone help with that?
I need assistance with converting Avg_Session_Time from seconds to minutes and seconds. Here is my current search:

index=kdol_7 sourcetype=ms:iis:auto dest_ip=10.140.14.228
| transaction CF_Connecting_IP
| stats avg(duration) AS Avg_Session_Time

The column labeled Current is the current result; the column labeled Desired is what I need assistance with. Thanks in advance.

Current:  Avg_Session_Time = 1045.2798365917713
Desired:  Avg_Session_Time = 17:41
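One sketch converts the seconds value with tostring's duration format, which renders as HH:MM:SS (acceptable if the leading hours field is tolerable; the search is taken from the question):

```spl
index=kdol_7 sourcetype=ms:iis:auto dest_ip=10.140.14.228
| transaction CF_Connecting_IP
| stats avg(duration) AS Avg_Session_Time
| eval Avg_Session_Time = tostring(round(Avg_Session_Time), "duration")
```

For a strict MM:SS string you could instead eval m=floor(x/60) and s=round(x%60) and concatenate them.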
Hi All, I am using a drilldown in a dashboard panel to link to a report. The drilldown does not work if the report has '&' in its name. I tried using &amp; and also used CDATA, but no luck. Looking for help to see if anyone has faced a similar issue or knows how to resolve it. Thanks.
I would like to forward data from Elasticsearch to Splunk but have not been able to find a proper solution. I found the "Elasticsearch Data Integrator - Modular Input" add-on on Splunkbase (https://splunkbase.splunk.com/app/4175/). It seems fine, but I want to filter only the important data for each Elasticsearch index before sending it to Splunk. Can you recommend another solution to get data from Elasticsearch? Please let me know the steps or a reference document as well.
I have a search:

index=foobar flashSteamName=foo/bar-moves/12adw320-df21-dasd-124d-12eda234

which displays 0 results, while

index=foobar flashSteamName=*

displays results. Now, Selected Fields on the left side shows my flashSteamName field. When I click on it, it shows my values with a count of 20, but clicking a value there opens a new search with 0 results. Also, when I have the fields list in table format and I click a value of flashSteamName, it shows I have 20 events, yet the search it opens has 0 results. I'm not sure what I can fix or change to be able to click the event data and display the results it states are available. Also, I do not have a problem with searches where the field value does not contain / or -.
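One thing worth trying is quoting the value, since unquoted search values containing characters like / and - can be parsed as multiple terms (this is a guess at the cause, using the value from the question):

```spl
index=foobar flashSteamName="foo/bar-moves/12adw320-df21-dasd-124d-12eda234"
```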
Hello, As the title suggests, I have a column chart with x-axis labels that are too long to fit under each column, resulting in the text being truncated. Here is an example of what I mean. For simplicity I have set the labels for each column to "THIS TEXT IS TOO LONG TO FIT COMFORTABLY AS A LABEL". What I would like to know is whether it is possible to avoid truncating the text and instead, using CSS, have the text wrap and appear on a second line. I know I can set the label rotation, but this is not what I want to do. I want the labels to be horizontal and on multiple lines rather than truncated or rotated. Any help would be much appreciated. Best regards, Andrew
Hi, I am relatively new to SPL. I have a use case to evaluate the time difference between two fields in two different logs, with a common data field shared by query 1 and query 2.

The sample logs look like this:

log1 - field1:: value1 createdOn1:: "9/30/20 10:14 AM" commonfield:: "abds"
log2 - field:: value createdOn2:: "2020-09-30 23:30:00" commonfield:: "abds"

I have to correlate the two by the commonfield value and get the difference createdOn2 - createdOn1 in seconds. Experts, could you help me with this?
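A sketch of the correlation, assuming both logs are searchable together and the field names are as shown (the index and sourcetype names are placeholders; the strptime formats match the two sample timestamps):

```spl
index=myindex (sourcetype=log1 OR sourcetype=log2)
| eval t1=strptime(createdOn1, "%m/%d/%y %I:%M %p")
| eval t2=strptime(createdOn2, "%Y-%m-%d %H:%M:%S")
| stats earliest(t1) as t1, earliest(t2) as t2 by commonfield
| eval diff_seconds = t2 - t1
```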