All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, I need to create a dashboard with errors and HTTP error codes for our mobile application. I can find the errors per minute, but I also want to add the actual errors to the dashboard widget. Can someone guide me to the path for the HTTP error codes and errors for Mobile Application EUM?
I'm trying to create a dashboard panel that shows my F5 SSL certificates and their expiration dates, with the columns sorted left to right by date, so the leftmost column would be the certificate expiring soonest. Here's what I have for a search:

index=f5_tstream source="f5.telemetry" telemetryEventCategory=systemInfo
| convert timeformat="%m/%d/%Y" ctime(sslCerts.*.expirationDate) AS *c_time
| stats latest(*c_time) by host
| rename host as Host

My results look something like this:

Host      latest(Certificate#1c_time)  latest(Certificate#2c_time)  latest(Certificate#3c_time)  latest(Certificate#4c_time)
Device#1  1/1/2023                                                  7/7/2024
Device#2                               10/10/2022                                                9/9/2023
Device#3  1/1/2023                                                  7/7/2024

So basically I want to sort all columns matching "latest(*c_time)" by the date they're returning. Not sure if this is possible.
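One possible approach (an untested sketch): transpose the result so each certificate column becomes a row, sort those rows by the parsed date, then transpose back. The 'Device#1' style references below are placeholders for whichever actual host values should drive the ordering:

```spl
index=f5_tstream source="f5.telemetry" telemetryEventCategory=systemInfo
| convert timeformat="%m/%d/%Y" ctime(sslCerts.*.expirationDate) AS *c_time
| stats latest(*c_time) by host
| rename host as Host
| transpose 0 header_field=Host column_name=certificate
| eval sort_key=strptime(coalesce('Device#1','Device#2','Device#3'), "%m/%d/%Y")
| sort 0 sort_key
| fields - sort_key
| transpose 0 header_field=certificate
```

After the second transpose the original Host values come back under a default column name and may need a rename.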
Hi, my classifier (SGDClassifier) is allowing only 100 distinct categorical values. I followed the link Configure algorithm performance costs - Splunk Documentation and modified the file mlspl.conf as follows:

[SGDClassifier]
max_distinct_cat_values=2000
max_distinct_cat_values_for_classifiers=2000

However, my Splunk search is giving me an error. I had already changed this value for another classifier (LinearSVC) and everything was fine, so what did I miss here? I just copied and pasted from that stanza and changed the stanza name. I'm using MLTK 5.3.1.
Hello,

I can't receive an email alert despite having configured it correctly. The alert fires on the portal and shows its outcome, but the email never arrives.

1- Mail server configuration: smptm.gmail.com: 587. I added a Gmail address with its password.
2- Alert configuration: I set a destination address (an Outlook address).

Please help me fix it.
Hi, I'm new to Splunk and want to create a search that shows which saved searches are used in a dashboard. This is how far I got:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="test_dashboard"
| rename eai:acl.app AS app, eai:data AS data
| fields title app author data

I have no clue how to get from this data to an actual list of the saved searches used in this dashboard. Can anyone put me on the right track?
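In Simple XML, a panel that uses a saved search references it with a <search ref="..."> element, which ends up inside eai:data. One sketch (untested) extracts those references with rex:

```spl
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search title="test_dashboard"
| rename eai:data AS data
| rex field=data max_match=0 "<search\s+ref=\"(?<saved_search>[^\"]+)\""
| mvexpand saved_search
| table title saved_search
```

Older dashboards may reference saved searches via a <searchName> element instead, which would need a second rex.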
I'm trying to create a table that displays the following result:

Appname  Amount of users with read access  Amount of users that have accessed in the last 2 months  Open Access  Protected Access
AppX     <number>                          <number>                                                 O            P

I know that I can use the REST API for most (maybe all) of this. The following tells me which apps there are and with what roles a user has read access:

| rest /servicesNS/-/-/apps/local splunk_server="local"
| fields label, eai:acl.perms.read
| rename eai:acl.perms.read as roles
| sort by label
| search label!=_searchhead_config

The following tells me what users there are and what roles they have:

| rest /services/authentication/users splunk_server=local
| fields title roles
| mvexpand roles
| rename title as userName

What I want to do now is combine those, match by role which users have access to a certain app, and then count how many there are. I'm a newbie and I've tried all kinds of things with join, append, and appendcols, but it never gives me the results I need. Can someone point me in the right direction?
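Since join keeps only a limited number of matches per key, one alternative sketch (untested) appends the two REST results, groups them by role, and then counts distinct users per app:

```spl
| rest /services/authentication/users splunk_server=local
| fields title roles
| rename title AS userName
| mvexpand roles
| append
    [| rest /servicesNS/-/-/apps/local splunk_server=local
     | fields label eai:acl.perms.read
     | rename label AS app, eai:acl.perms.read AS roles
     | mvexpand roles]
| stats values(userName) AS users, values(app) AS apps by roles
| mvexpand apps
| stats dc(users) AS "Users with read access" by apps
```

The "accessed in the last 2 months" column would come from a separate _audit search appended in the same way.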
Good morning, I am pulling Zeek (Bro) logs into my Splunk to view events. However, some of these events display proper syntax highlighting while others display only raw text, regardless of their log source. The main difference between the two I've noticed is that the events that display proper syntax highlighting have only one timestamp, while events with multiple timestamps display as raw text. Multiple searches have led me to create my own local props.conf and transforms.conf files, which currently contain this information:

transforms.conf:

[TranSON]
SOURCE_KEY = _raw
DEST_KEY = _raw
REGEX = ^([^{]+)({.+})$
FORMAT = $2

props.conf:

[my_source_type]
KV_MODE = JSON
TRANSFORMS-JSON = TranSON
SHOULD_LINEMERGE = false
LINE_BREAKER=([\r\n\s]*)(?=\{\s*"ts":)
TIME_FORMAT=%m-%d%-%Y %H:%M:%S.%4n
TIME_PREFIX="timestamp":\s*"
MAX_TIMESTAMP_LOOKAHEAD=25
TRUNCATE = 0
EVENT_BREAKER_ENABLE = true

Here are also two example events (both written out in raw text), one that displays the syntax highlighting and one that doesn't.

Event that shows syntax highlighting:

{"ts":1659441156.916498,"host":"1.1.1.1","port_num":123,"port_proto":"udp","service":[""]}

Event that does not show syntax highlighting:

{"ts":1659441445.280528,"id.orig_h":"1.1.1.1","id.orig_p":123,"id.resp_h":"1.1.2.2","id.resp_p":456} {"ts":1659441456.795169,"id.orig_h":"1.1.3.4","id.orig_p":789,"id.resp_h":"1.1.7.9","id.resp_p":456}

Any information would be greatly appreciated; I don't know if I'm missing something or approaching this wrong.
Hi all, I have been trying to use an if condition in stats values(), but it is not working properly. I have used if conditions before and got results perfectly.

stats values(eval(if('FAILS'=="0",0,DATA))) as DATA

The field "DATA" is calculated earlier in the search. My requirement is that when there are no FAILS, DATA should be zero; otherwise it should be the calculated value. Am I doing anything wrong here? Even when there are FAILS, it gives me 0 as the result. Please help me.
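Two common causes, both assumptions since the surrounding search isn't shown: FAILS may be a numeric field, so the string comparison with "0" never behaves as expected, and both FAILS and DATA must be present on the same event for the eval() inside stats to see them. A sketch of the numeric form:

```spl
| stats values(eval(if(tonumber(FAILS)==0, 0, DATA))) AS DATA
```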
Hi, I am new to using Splunk and I'm looking for a bit of expertise. I've generated a timechart of CPU statistics for some of our tasks. I have then split this in the dashboard via a search term, which separates the visuals into one per task using Trellis view. However, I can't figure out how to get each of these fields into a different colour. When I tried to use the answers from various pages, I think I might have done it wrong. The search field is called TASK. Is there any way for me to colour by TASK in a timechart?
Hello Splunkers! I'm receiving the below error in splunkd.log on the UFs:

08-02-2022 12:41:53.695 +0200 ERROR TailReader [8108 tailreader0] - Ignoring path="D:\xx\yy\filename" due to: Bug: tried to check/configure STData processing but have no pending metadata.

I checked Splunk community answers (https://community.splunk.com/t5/Getting-Data-In/TailingProcessor-Ignoring-path-quot-path-to-xyz-quot-due-to-Bug/m-p/198762) and found that setting CHARSET = AUTO in the related source/sourcetype stanza of props.conf on the UF works, and it did work fine for some time. But can anyone help me understand why this error occurs on the UF? I'm receiving it intermittently, at times frequently, for a few logs. The fix from the answers works only for a while; then that particular UF throws the same error again. Can anyone please help me with this?

Thanks in advance!
Sarah
Hi guys, when I use Splunk search, it doesn't suggest auto-completion for fields. This is crucial: it is almost impossible to know all the fields, and looking fields up in the list on the left side and then copying them is just a waste of time, especially when the fields are JSON objects ( A.B.C{a:b,c:d,e:[a,b,c]} ). Am I missing something? Is there a feature or an add-on that provides this ability?
Hi everyone, I have a table like below:

_time       status
01/10/2021  inactive
02/10/2021  active
03/10/2021  active
04/10/2021  active
05/10/2021  active
06/10/2021  inactive
07/10/2021  inactive
08/10/2021  inactive
09/10/2021  active
10/10/2021  active
11/10/2021  active
12/10/2021  active
13/10/2021  inactive
14/10/2021  inactive

The requirement is to use Splunk to show the periods when the status is inactive (not day by day like the table). Do you have any ideas, please? Thanks a lot!
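One sketch (untested, assuming _time is a real timestamp field): use streamstats to number the runs of consecutive statuses, keep only the inactive runs, then take each run's start and end:

```spl
| sort 0 _time
| streamstats count(eval(status=="active")) AS run_id
| where status=="inactive"
| stats min(_time) AS period_start, max(_time) AS period_end by run_id
| fieldformat period_start=strftime(period_start, "%d/%m/%Y")
| fieldformat period_end=strftime(period_end, "%d/%m/%Y")
```

The run_id counter only increments on active rows, so consecutive inactive rows share one value and collapse into a single period.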
Hello Splunkers, I would like to have better insight into my license usage, but the default squash_threshold configuration is not enough. I have looked here for answers; sadly there are few, and the rare ones that exist are a little old. The documentation says to ask a Splunk expert, but my contact is on holiday, so I would like to try to move forward anyway. Do you have any recommendations on this setting, and on the possible consequences if I increase it?

Thanks in advance,
Best regards,
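For reference, the setting lives in the [license] stanza of server.conf on the indexers; the value below is only an example, not a recommendation:

```ini
# server.conf on each indexer
[license]
# default is 2000; raising it keeps more distinct source/host
# values in license usage reports at the cost of more memory
# on the indexers and larger usage reports
squash_threshold = 4000
```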
Hi, I have logs in the below format:

X.X.X.X. - - [02/Aug/2022:10:31:18 +0200] "GET /api/mc/v0.1/agendas/view/background-tasks?is-details-required=false HTTP/1.1" 200 20 "-" " "https://XXX.AAA.COM" "Mozilla/5.0 (Windows NT X.O; Win64; x64; rv:98.0)  Firefox/98.0"
X.X.X.X.X - - [02/Aug/2022:10:31:18 +0200] "GET /api/mc/v0.1/agendas/view/background-tasks?is-details-required=false HTTP/1.1" 200 20 "-" " "https://XXX.AAA.COM" "Mozilla/5.0 (Windows NT X.O; Win64; x64; rv:98.0)  Firefox/98.0"
X.X.X.X.- - [02/Aug/2022:10:31:33 +0200] "GET /api/mt/v0.1/tasks/view-count HTTP/1.1" 200 371 "https://XXX.AAA.COM" "Mozilla/5.0 (Windows NT X.O; Win64; x64; rv:98.0)  Firefox/98.0"
X.X.X.X. - - [02/Aug/2022:10:31:33 +0200] "GET /api/mt/v0.1/work-items?start-position=0&number-of-items=11 HTTP/1.1" 200 3084  "https://XXX.AAA.COM" "Mozilla/5.0 (Windows NT X.O; Win64; x64; rv:98.0)  Firefox/98.0"

Out of these logs I want to keep only the events that have /api/mt in them and drop the remaining events.

My configurations:

inputs.conf:

[monitor:///aaa/yyy/xxxx/access_log]
disabled = false
sourcetype = mytask:access_log
index = temp

props.conf:

[mytask:access_log]
TRANSFORMS-set = setnull
TRANSFORMS-set = setparsing

transforms.conf:

[setnull]
REGEX = ^(.*)mc(.*)
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = ^(.*)mt(.*)
DEST_KEY = queue
FORMAT = indexQueue

Do we need to set anything else in the configs?
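One thing worth noting (a sketch, not a tested fix): both props.conf lines use the same setting name, TRANSFORMS-set, so the second definition overrides the first. The documented route-and-filter pattern lists both transforms in one setting, sends everything to the nullQueue first, and then rescues the wanted events:

```ini
# props.conf: one TRANSFORMS-<class> name, ordered list
[mytask:access_log]
TRANSFORMS-set = setnull, setparsing

# transforms.conf: discard everything, then keep /api/mt events
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = /api/mt/
DEST_KEY = queue
FORMAT = indexQueue
```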
This use case was prepared with the help of the Splunk article https://www.splunk.com/en_us/blog/tips-and-tricks/how-to-determine-when-a-host-stops-sending-logs-to-splunk-expeditiously.html

| tstats latest(_time) as latest where index=* earliest=-24h by host
| eval recent = if(latest > relative_time(now(),"-5m"),1,0), realLatest = strftime(latest,"%c")
| where recent=0

However, I'm receiving multiple false-positive alerts for the Windows servers (index=windows). What could be the reason behind this? Is it slow log ingestion, or are there really no events for the mentioned index/sourcetype?
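To help separate slow ingestion from genuinely missing events, one sketch (untested) compares event time with index time via _indextime in the same tstats call; a large lag suggests an ingestion delay rather than a silent host:

```spl
| tstats latest(_time) AS latest, latest(_indextime) AS latest_indexed
    where index=windows earliest=-24h by host
| eval lag_minutes = round((latest_indexed - latest) / 60, 1)
| eval recent = if(latest > relative_time(now(), "-5m"), 1, 0)
| where recent=0
```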
I have a scenario where I'm getting N results from a last-60-minutes Splunk search (5:00 PM to 6:00 PM), like below:

2022-08-02 17:59:45.203   CCL220727468
2022-08-02 17:59:40.555   CCL220711461
2022-08-02 17:59:34.985   CCL220727468
2022-08-02 17:59:22.080   CCL220727468
2022-08-02 17:59:02.638   CCL220727468
2022-08-02 17:14:02.734   CCL220707460
2022-08-02 17:11:29.456   CCL220729470
2022-08-02 17:04:52.780   CCL220729470

I need to exclude the events close to the end time (for example, exclude events with timestamp > 5:55 PM; the events at the edge of the search end time are not required). This is for setting up an alert that shows the number of events in the last 60 minutes.
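Two sketches (untested): either shift the alert's time range itself (for example earliest=-65m latest=-5m), or keep the range and filter the edge inside the search. The index name below is a placeholder for the actual base search:

```spl
index=my_index earliest=-60m latest=now
| where _time <= relative_time(now(), "-5m")
| stats count
```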
Hello, I want to be able to create reports on the disk space and/or memory usage of my machine. How can I set this up?
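If the machine runs a Splunk instance, resource usage is already collected in the _introspection index and a report can be saved from a search over it; a sketch, assuming the standard Hostwide introspection field names:

```spl
index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| timechart avg(data.mem_used) AS avg_mem_used_mb, avg(data.cpu_system_pct) AS avg_cpu_system_pct
```

Per-partition disk space is reported separately (sourcetype=splunk_disk_objects component=Partitions). For hosts without Splunk, the Splunk Add-on for Unix and Linux or for Windows collects comparable metrics.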
Hi all, please suggest a query or solution to achieve the requirements below.
1. A list of searches or queries run by each user (I'm looking for a report that shows searches per user).
2. A list of searches/reports that use one particular index (i.e., use case: "User locked out" uses index=windows).
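For the first requirement, the _audit index records every search together with the user who ran it; a sketch (untested):

```spl
index=_audit action=search info=granted search=*
| stats count by user, search
```

Adding a filter such as search="*index=windows*" before the stats would narrow it toward the second requirement, though finding every report that uses an index generally also means checking saved-search definitions via | rest /servicesNS/-/-/saved/searches.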
I have a list of strings, let's say "abc" "bcd" "def" "efg" "fgh". I want to search each of these strings against a query, for example:

"abc" index=xyz sourcetype=logs host=localhost | table _time, _raw

and I want the search to work like this: if a string occurs in the result set within the last 10 days, it should print "present"; otherwise it should print "absent".
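One sketch (untested): search all the terms at once, tag each event with the terms it matched using searchmatch, then append a zero-count row per term so terms with no hits still show up as "absent":

```spl
index=xyz sourcetype=logs host=localhost earliest=-10d ("abc" OR "bcd" OR "def" OR "efg" OR "fgh")
| eval term=mvappend(if(searchmatch("abc"),"abc",null()), if(searchmatch("bcd"),"bcd",null()),
                     if(searchmatch("def"),"def",null()), if(searchmatch("efg"),"efg",null()),
                     if(searchmatch("fgh"),"fgh",null()))
| mvexpand term
| stats count by term
| append
    [| makeresults
     | eval term=split("abc,bcd,def,efg,fgh", ","), count=0
     | mvexpand term]
| stats max(count) AS count by term
| eval status=if(count > 0, "present", "absent")
| table term status
```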
On Splunkbase it says that the latest version of the "Splunk Add-on for Symantec Endpoint Protection" TA, 3.4.0, is compatible with CIM 4.x, whereas the release notes say the TA is compatible with CIM 5.0.1. I am using CIM 4. Does anyone know if this version of the Symantec add-on is backwards compatible with CIM 4, or is it compatible with CIM 5 only?