All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


We are currently monitoring application URLs using the "Website Monitoring" add-on. However, many URLs are returning null values for the response code, indicated as (response_code="" total_time="" request_time="" timed_out=True). This results in "timed_out=True" errors, making it impossible to monitor critical URLs and applications in the production environment. Urgent assistance is required to resolve this issue; prompt support would be greatly appreciated.
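For triage on the question above, a quick search may show whether the timeouts cluster around particular endpoints (a sketch: web_ping is the sourcetype this add-on typically writes, but verify it in your environment; the field names come from the post itself):

sourcetype=web_ping timed_out=True
| stats count by url

If every failing URL shares a proxy or network path, the input's timeout and proxy settings are more likely at fault than the endpoints themselves.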
I am pretty new to Splunk. I have a requirement to create a dashboard panel that relates our JSESSIONIDs to severity, e.g., for a specific JSESSIONID, how many critical or error logs are present. I tried using stats and chart but am not getting the desired result, probably due to my limited Splunk knowledge. I need to present this in a pictorial way. Please suggest the Splunk query and what type of visualization will fit this requirement.
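For the panel described above, a starting point might be the following (a sketch; it assumes fields named JSESSIONID and severity are already extracted and that severity carries values like ERROR and CRITICAL):

index=your_index severity IN ("ERROR","CRITICAL")
| chart count over JSESSIONID by severity

Rendered as a stacked bar chart, this shows, per JSESSIONID, how many error versus critical events occurred.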
I want to create a custom role to manage Splunk risky commands. I looked for configuration files related to risky commands and found that they involve web.conf and commands.conf, and that you can disable a risky command by setting is_risky=false in its [command] stanza in commands.conf. What I want is a role that manages risky commands so that I can get different search results than other users. I wonder if it is possible to create such a role.
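For reference, the per-command setting mentioned above looks like this in commands.conf (a sketch; collect is just one example of a command Splunk flags as risky):

[collect]
is_risky = false

Note that is_risky is a system-wide, per-command setting, which is why scoping it to a single role is the open question here.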
I'm trying to count the unique values of a field by the common ID (session ID), but only once (one event). Each sessionID could have multiples of each unique field value. Initially I was getting the count of every event, which isn't what I want to count, and if I dedup the sessionID then I only get one of the unique field values back. Is it possible to count one event per session ID for each unique field value? "stats values("field") by sessionID" gets me close, but in the table it lists the sessionIDs, whereas I'm hoping to get the number (count) of unique sessionIDs.

Field     sessionID
value1    ABC123 123ABC
value2    ABC123
value3    123ABC
value4    ABC123 123ABC AABBCC 12AB3C
value5    ABC123 123ABC AABBCC 12AB3C CBA321

Hopefully that makes sense. Thanks
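The distinct-count function may be what's missing here (a sketch assuming the columns are literally named Field and sessionID):

... | stats dc(sessionID) as unique_sessions by Field

dc() counts each sessionID once per Field value, so for the sample above value1 returns 2, value4 returns 4, and value5 returns 5.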
We have data in Splunk that is basically DATE/APPLNAME/COUNT. There are about 15 applications, and we would like to create a table that shows, by application, the current day's count, the 7-day average, and the variance of today against the average. I've tried a number of things with different searches like appendcols, but I'm not getting the results. I can produce the count or the average, but can't seem to put them together correctly.
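One shape that avoids appendcols for the request above (a sketch assuming one event per DATE/APPLNAME with a numeric COUNT field):

index=your_index earliest=-7d@d
| bin _time span=1d
| stats sum(COUNT) as daily_count by _time APPLNAME
| stats latest(daily_count) as today avg(daily_count) as avg_7day by APPLNAME
| eval variance=round(today - avg_7day, 2)

Two stats passes replace the appendcols: the first builds a daily series, the second collapses it per application. As written the average includes today; adjust the time range or filter inside the second stats if the 7-day average should cover only prior days.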
Hi, I am trying to download Splunk add-ons onto my standalone system. However, it keeps showing me an incorrect ID and password error. I tried changing the password, but the issue is the same. I am able to log in with the same credentials but unable to download anything.
Our Splunk ingestion for eStreamer events appears to be getting overwhelmed by the amount of data we receive. Currently, our ingestion averages over 10,000 events per second, and Cisco support indicates that the existing Splunk app we've been using cannot handle that volume. Is there an approach we can use to support that interface at that volume? Currently, we are using this app: https://splunkbase.splunk.com/app/3662 Should we be using this app instead? https://splunkbase.splunk.com/app/7404 And if we switch apps, will the new one (7404) be able to keep up with the data transfer volume?
Hello, I am creating an alert and want to make sure that the scheduled or real-time setup sends an email out once the query finds a match. What is the best configuration for an alert to send an email as soon as the criteria of the query match? Thank you!
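For a fast turnaround without a real-time search, a tightly scheduled alert that triggers on any result is the usual pattern. A savedsearches.conf sketch (the stanza name and address are placeholders; the same options map to the alert UI):

[my_urgent_alert]
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
counttype = number of events
relation = greater than
quantity = 0
action.email = 1
action.email.to = you@example.com

In words: run the search every few minutes over the window since the last run, and trigger the email whenever the result count is greater than zero.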
Introspection seems to give me the data.mount_point only for "/" and not for the other file systems that I can see via the Linux "df -kh" command. How come?
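For the question above, the raw partition data can be inspected directly (a sketch; the component and field names below are from the standard _introspection resource-usage data):

index=_introspection sourcetype=splunk_resource_usage component=Partitions
| stats latest(data.available) as available latest(data.capacity) as capacity by data.mount_point

If only "/" appears in this output, splunkd's disk-objects collection is only reporting that partition, rather than something being filtered out at search time.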
Hello, I'm trying to figure out why this eval statement testing for a null value always evaluates to "true", even when the field does contain data. Here is what the data looks like in the results:
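A common pitfall with this symptom, whatever the eval in question turns out to be (a sketch; the field name is illustrative): isnull() is true only when the field is missing entirely, while a field extracted as an empty string is not null. Testing both conditions usually behaves as expected:

| eval is_empty=if(isnull(my_field) OR my_field=="", "true", "false")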
Hi everyone, I got an error when opening Splunk Security Essentials. It says: "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." When I check my browser console, it says: GET http://my.ipaddress:8000/en-US/splunkd/__raw/servicesNS/zake/system/storage/collections/data/RecentlyViewedKO?limit=1&query=%7B%22%24and%22%3A%5B%7B%22type%22%3A1%7D%2C%7B%22id%22%3A%22home%22%7D%2C%7B%22app%22%3A%22Splunk_Security_Essentials%22%7D%5D%7D 503 (Service Unavailable). Since I know a 503 code is an error from a server, is there any website where I can check whether that server is down? I checked StatusGator and everything is okay. Any solution?
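Worth noting for the question above: the failing endpoint (/storage/collections/data/...) is served by the KV store inside your own Splunk instance, not an external site, so there is nothing to check on a public status page. Two local checks (a sketch; the path assumes a default install):

$SPLUNK_HOME/bin/splunk show kvstore-status

index=_internal sourcetype=mongod | tail 50

If the KV store reports anything other than "ready", that would explain the 503.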
I recently had an error message pop up while synchronizing from our on-prem AD servers to Entra about an account issue. I found that the account in question had all attributes correct except for the userPrincipalName. In the UPN, instead of username@mydomain.com, it had been changed to "\"@mydomain.com. I am trying to figure out, in Splunk Cloud, who or which account made that change. I have searched for Event ID 4738 and it shows the UPN with the "\", but it doesn't tell me who made the change. I am also looking in the Windows TA add-on to see if I can find any more info there.
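For Event ID 4738, the account that made the change is recorded in the event's Subject block, which the Splunk Add-on for Microsoft Windows usually maps to src_user (a sketch; the index name is a placeholder and field names can vary by TA version):

index=wineventlog EventCode=4738 "User Principal Name"
| table _time user src_user

If src_user turns out to be the Entra Connect sync account, the change came in through synchronization rather than from an interactive admin.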
It's a bit long, I hope I will not bore you. I made a Splunk graph with two lines; I need to see the values compared to the average of the last 10 days. So: one line is the percentage over a time period, let's say today 28 Jan 14:20 --> 14:25. The second line is the average percentage over the same time period but for the last 10 days, 18-27 Jan 14:20 --> 14:25.

What I can tell by looking at this graph is stuff like: "Today at 14:20 we had x% more/less than the last 10 day average, but at 14:21 we had x% more/less", etc. It's important to always have time snapped to the start of the minute (so if "now" is 17:31:23 then the last minute is 17:30:00.000 --> 17:30:59.999).

To build the search for this graph, I am using earliest= and latest= like this:

index=logs earliest=-5m@m latest=-1m@m | .... | append [search index=logs ( (earliest=-24h-5m@m AND latest=-24h-1m@m) OR (earliest=-48h-5m@m AND latest=-48h-1m@m) OR ... ) | ... ] | ...

The search itself works OK, but my problem is when I try to make a dashboard for it. The dashboard needs to contain a time input with a token I named "thetime". Usually you make the dashboard search use this time input by selecting "Shared Time picker (thetime)". This is not possible for my search, so I need to somehow specify $thetime.earliest$ / $thetime.latest$ in the search query. But I cannot simply do something straightforward like:

index=logs earliest=$thetime.earliest$ latest=$thetime.latest$-24h@m | ...

Depending on what I select in the time picker, I can end up with messages like: Invalid value "now-24h" for time term 'latest'.

I know about | addinfo (https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Addinfo), but it's impossible to use "info_max_time" in the first part of the searches, only after the pipe addinfo. And even if it did somehow work, there would still be the issue of the required minute snap to 00 --> 59 seconds.

My approach was to use the <init> part of the dashboard XML to calculate all the needed earliest/latest values. Currently I am dealing only with relative ranges; I will deal with exact dates (between) later. So in my dashboard XML I have this:

<form version="1.1" theme="light">
  <init>
    <eval token="RSTART">strftime(relative_time(now(), $thetime.earliest$),"%Y-%m-%d %H:%M:00")</eval>
    <eval token="REND">strftime(relative_time(now(), $thetime.latest$),"%Y-%m-%d %H:%M:00")</eval>
  </init>
  ...
  <query>index=logs | eval RRSTART="$RSTART$", RREND="$REND$" | table _time, RRSTART, RREND</query>
  ...
</form>

The following part drives me crazy. Assuming now is 17:55:02, I access the Splunk dashboard at this link: https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME

When I first load the page, I see the time picker and a submit button; there are no results shown until I press submit. I select "Relative", earliest "1 Hours ago", "No snap-to", latest "now", then Apply and Submit. The browser URL changes to https://splunk-self-hosted/en-US/app/search/DASHBOARD_NAME?form.thetime.earliest=-1h&form.thetime.latest=now and the results I get are:

RRSTART = 2025-01-28 17:55:00, RREND = 2025-01-28 17:55:00 (same values, bad)

At this point, I just click the refresh button of the browser, and I get:

RRSTART = 2025-01-28 16:55:00, RREND = 2025-01-28 17:55:00 (correct values)

So basically, if I always click submit and then reload, I get the correct values. From what I understand from https://docs.splunk.com/Documentation/Splunk/9.4.0/Viz/tokens#Set_tokens_on_page_load this should not happen.

As for my questions: can anyone tell me if I am doing something wrong with <init>? Maybe it cannot be used this way with dashboard tokens? Or maybe there is another way to do this without using <init>? Thank you for taking the time to read. Using Splunk Enterprise Version: 9.1.0.2
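One avenue worth testing for the question above (a minimal sketch, not a verified fix: it relies on the documented Simple XML <change> handler on a time input, which re-runs on every picker submission, unlike <init>, which runs once at page load and may fire before the URL form tokens are applied):

<input type="time" token="thetime">
  <change>
    <eval token="RSTART">strftime(relative_time(now(), "$thetime.earliest$"),"%Y-%m-%d %H:%M:00")</eval>
    <eval token="REND">strftime(relative_time(now(), "$thetime.latest$"),"%Y-%m-%d %H:%M:00")</eval>
  </change>
</input>

Quoting the substituted tokens may also matter, since relative_time() expects its second argument as a string; a latest of "now" would still need special-casing.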
Team, I got stats output as below and I need to rearrange the current stats output:

transaction_id  source  count
12345           ABC     1
12345           XYZ     1

Required output:

transaction_id  ABC  XYZ
12345           1    1
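Either of these pivots should produce the required output above (a sketch assuming the column names shown):

... | chart sum(count) over transaction_id by source

or, since the stats output already has one row per transaction_id/source pair:

... | xyseries transaction_id source count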
Hey Splunk Community, I was wondering if anyone has figured out what causes the GUI not to work at all in a new install of Splunk 9.3 or 9.4 on a [CIS Red Hat ver. 9 Level 1] image. I have been trying to manage the Splunk server with the GUI and it just won't come up. I can SSH all day long, but no GUI. I did come to the conclusion that it's only on the [CIS Red Hat 9 level 1] image and not on an original RHEL Red Hat 9 image. This issue does not appear on the [CIS Red Hat 8 level 1] image. If anyone knows which CIS control configuration is causing this, it would be greatly appreciated. I am positive that if anyone in the [Gov. sector] is hardening their server with CIS RHEL 9 control images, they are going to run across this problem. Thanks - Johnny
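A few generic checks that may help narrow down which CIS control is responsible (a sketch only, not a confirmed fix; the paths assume a default /opt/splunk install):

# Is splunkweb actually listening on the web port?
ss -tlnp | grep 8000

# What does Splunk's web log say at startup?
tail -50 /opt/splunk/var/log/splunk/web_service.log

# CIS images often mount /tmp noexec, which can break web components
mount | grep /tmp

# fapolicyd, enabled by some hardening profiles, can silently block binaries
systemctl status fapolicyd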
Hi, how will the scheduled jobs that perform the API requests (input/output) work when deploying, on a Search Head Cluster, a TA package that was created by Add-on Builder? Is there any mechanism similar to DB Connect? (DB Connect provides high availability on Splunk Enterprise with a Search Head Cluster by executing input/output on the captain.) Thank you, José
Hello community, I need help with configuring Splunk to correctly process timestamp information in my UDP messages. When I send messages starting with a pattern like <\d+>, for example:

<777> 2025-01-03T06:12:19.236514-08:00 hello world

Splunk substitutes the original timestamp with the current date and local host address. Consequently, what I see in Splunk is:

Jan 28 14:27:25 127.0.0.1 2025-01-03T06:12:19.236514-08:00 hello world

I would like to know how to disable this behavior so that the actual timestamp from the message is preserved in the event. I have attempted to configure TIME_FORMAT and TIME_PREFIX in the props.conf file, but it seems those settings are applied after Splunk substitutes the timestamp with the current date and local host. As a workaround, I implemented the following in props.conf:

[my_sourcetype]
EXTRACT-HostName = \b(?P<extracted_time>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+(-\d{2}:\d{2})?)
EVAL-_time = strptime(extracted_time, "%Y-%m-%dT%H:%M:%S.%6N%z")

Is there a better way to achieve this? Any guidance would be greatly appreciated! Thank you!
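For reference, UDP inputs have a setting that controls exactly this prepending behavior. An inputs.conf sketch (the port number is illustrative; it assumes the data arrives on a plain [udp://...] input):

[udp://5514]
sourcetype = my_sourcetype
no_appending_timestamp = true

With no_appending_timestamp = true, Splunk should stop prefixing the current time and host onto each packet, letting the TIME_PREFIX and TIME_FORMAT settings in props.conf see the original timestamp.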
Hey, we are currently ingesting wineventlog from some of our Azure VMs via Event Hub. As such, their assigned sourcetype is the Event Hub sourcetype, which means they are not subject to the wineventlog field extractions. These logs do contain a "raw xml" data field, much like xmlwineventlog; however, xmlkv, spath, and xpath don't work as intended and require additional work to extract the data correctly. Unfortunately, because of this we are unable to dump the extra SPL into a field extraction or calculated field. Please find the SPL below:

index=azure sourcetype=mscs:azure:eventhub category="WindowsEventLogsTable"
| fields properties.RawXml
| spath input=properties.RawXml
| eval test=mvzip('Event.EventData.Data{@Name}', 'Event.EventData.Data', "=")
| mvexpand test
| rex field=test max_match=0 "^(?<kv_key>[^=]+)=(?<kv_value>[^=]+)$"
| eval {kv_key}=kv_value

The end goal is to extract the relevant Windows event fields so they can be datamodel mapped. It's not possible to install UFs onto these VMs, so that's unfortunately not the solution here. We'd ideally also like to avoid using "collect" to duplicate the data to a separate index/sourcetype. Has anyone else encountered this and managed to come up with a solution? Thanks
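One possibility for the problem above (a sketch, untested against this exact payload: the regex assumes the raw XML uses standard <Data Name="...">value</Data> elements, and that properties.RawXml is available as a search-time field): a transforms-based extraction can emit dynamic key/value pairs without any SPL, which makes the fields usable for datamodel mapping.

props.conf:
[mscs:azure:eventhub]
REPORT-rawxml_kv = rawxml_kv

transforms.conf:
[rawxml_kv]
SOURCE_KEY = properties.RawXml
REGEX = <Data Name="([^"]+)">([^<]*)</Data>
FORMAT = $1::$2
MV_ADD = true

FORMAT = $1::$2 turns each captured pair into a field name and value at search time, much like the mvzip/rex pipeline does.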
So I have my query working and I have a webhook created in a channel. It says that I can send tokens when I send the alert, and that the message can include tokens that insert text based on the results of the search query. The field/label I created was Total_Count. How do I pass that as a token?
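If Total_Count is a field in the search results, the usual alert-message token syntax (the field name comes from the post; the rest is standard alert-action tokens) is:

$result.Total_Count$

$result.fieldname$ tokens are filled from the first result row, and tokens such as $name$ (the alert name) and $job.resultCount$ are also available in the message payload.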
We need to integrate MSSQL Standard Edition with Splunk, so we tried sending logs to the Windows Event Viewer Application channel. We are now getting logs, but the issue is that the logs are not parsed, and we are getting all logs. My question is: has anyone integrated MSSQL Standard Edition with Splunk? How did you do it, and is the data parsed?
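One common pattern for the situation above (a sketch; it assumes a Universal Forwarder on the SQL host and that SQL Server writes under the MSSQLSERVER source name, which differs for named instances):

[WinEventLog://Application]
whitelist = SourceName=%^MSSQLSERVER$%

This filters the Application channel down to SQL Server events at collection time. For parsed, database-level data (audits, error logs, queries), Splunk DB Connect or the Splunk Add-on for Microsoft SQL Server are the usual routes, since the Application channel only carries what SQL Server chooses to log there.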