All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Can someone please tell me exactly what AppDynamics classes as a "call" in the "calls per minute" section of the Metric Browser? This is for Java app agent metrics. Screenshot attached.
After fixing filters on some fields that don't exist in all the events, I tried to apply these filters on the graphs. The problem is that when Splunk reads a graph's search string, it returns only the events where the fields exist and excludes the other events. As a result, all the statistics and graphs are wrong! Does anyone have a solution, please? Thanks in advance.
Hey Splunk community, I've been getting turned around in the docs, as some things are meant for folks running a single instance and others for a distributed environment. I'm currently running an environment with the following:

Search Head (Windows Server 2019): 48 CPU cores, 4 TB disk; roles: Deployment Server, Cluster Manager, License Manager, Search Head
Indexer 1 (CentOS Stream 9): 24 CPU cores, 500 GB disk (expandable); role: Indexer
Indexer 2 (CentOS Stream 9): 24 CPU cores, 500 GB disk (expandable); role: Indexer
Indexer 3 (CentOS Stream 9): 24 CPU cores, 500 GB disk (expandable); role: Indexer

I also have a Syslog server running syslog-ng (the service won't start, but that's for another post). Now to the main part of the post: I am principally trying to do two things right now. I have forwarders installed on my file servers and on one of my domain controllers, but the documentation is not clear on which route I should take to ingest file data and AD data. Do I deploy an app to my forwarders that will "automagically" ingest the data I am looking for, or do I create an inputs.conf file to monitor the events I am looking for? Specifically, file reads, modifications, and related data. I would also like to monitor my AD for logins and administration data. Any help would be appreciated.
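For what it's worth, a minimal sketch of the inputs.conf route, assuming an app pushed from the deployment server to the forwarders; the index names and monitored path below are placeholders, not values from this post:

# inputs.conf in a deployed app, e.g. deployment-apps/my_inputs/local/inputs.conf

# Domain controller: collect Security events (logons, account management)
[WinEventLog://Security]
index = wineventlog
disabled = 0

# File server: tail application log files under a directory
[monitor://D:\Logs]
index = fileserver_logs
disabled = 0

Note that [monitor://...] tails file contents. To capture file reads and modifications as audit data, the usual approach is instead to enable Windows file auditing (SACLs) on the shares and collect the resulting Security events (such as 4663) through the WinEventLog input above.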
Hey, does anyone know any best practice or clever way of removing orphaned knowledge objects in a search head cluster when it is already too late for reassignment? For each orphaned object we are doing manual work, like checking whether the AD account still exists, emailing the users and asking if they still need Splunk, etc. For non-existing accounts, we delete the /opt/splunk/etc/users/<user_id> directory on each SH separately (there are 4 SHs in our cluster), but we are looking for a smarter solution. Unfortunately, in our case there is no option of being informed by users that they are going to leave the company, so we cannot react in advance and avoid orphaned KOs altogether... Greetings, Justyna
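As a starting point for automating the discovery side, a sketch that lists knowledge objects whose owner no longer exists as a Splunk user; the REST endpoints are standard, but verify the field handling on your version before trusting the output:

| rest splunk_server=local /servicesNS/-/-/directory count=0
| fields title, eai:acl.owner, eai:acl.app, eai:type
| rename eai:acl.owner as owner, eai:acl.app as app, eai:type as type
| search NOT [| rest splunk_server=local /services/authentication/users count=0 | fields title | rename title as owner]
| table app, type, title, owner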
Hello, has anyone managed to collect Windows logs other than the usual Application, System, Security, and Setup? I am being asked whether we can collect Microsoft-Windows-FailoverClustering event ID 1641. If anyone has an inputs.conf for something like that, I would appreciate it.
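A minimal sketch of what that stanza could look like, assuming the events land in the channel's Operational log (verify the exact channel name in Event Viewer or with Get-WinEvent on the host); the index name is a placeholder:

[WinEventLog://Microsoft-Windows-FailoverClustering/Operational]
index = wineventlog
whitelist = 1641
disabled = 0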
Hi all... hope you can help me with two questions.

1) I'm trying to create a query to find whether the target user that was set to "password never expires" is a service user, using ldapsearch. Main search:

index=microsoft-windows-dc EventID=4738 NewUacValue=0x210

I am trying to run this ldapsearch on the results, to remove users with UserTypeName = service:

| ldapsearch domain=default search="(sAMAccountName=user)" attrs="sAMAccountName,displayName,sn,UserTypeName"

How do I run the ldapsearch on all users from the results obtained by the first search?

2) ldapsearch can be run only by admin. How do I set permissions so that other roles can run ldapsearch?

Thanks...
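For question 1, one common pattern is to drive a search per result with the map command; a sketch, assuming the 4738 events expose the target account in a field named TargetUserName (adjust to your actual field extraction):

index=microsoft-windows-dc EventID=4738 NewUacValue=0x210
| dedup TargetUserName
| map maxsearches=1000 search="| ldapsearch domain=default search=\"(sAMAccountName=$TargetUserName$)\" attrs=\"sAMAccountName,displayName,sn,UserTypeName\""
| where UserTypeName!="service"

map substitutes the $TargetUserName$ token from each input row and replaces the row with the mapped search's results, so the final where clause filters on the returned LDAP attributes.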
Hi, I need to plot the time difference between consecutive events, by sourcetype, over the last 7 days. I'm using this search, but it's slow for a dashboard:

index=myindex sourcetype IN (sourcetype1, sourcetype2, sourcetype3)
| streamstats window=2 global=f range(_time) as delta by sourcetype
| timechart max(delta) as "delta [sec]" by sourcetype

Do you have any suggestions for a more efficient search?

Thank you, Marta
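One possible speed-up, sketched under the assumption that moving raw field data through the pipeline is the bottleneck: drop everything except the two fields streamstats actually needs before it runs:

index=myindex sourcetype IN (sourcetype1, sourcetype2, sourcetype3)
| fields _time, sourcetype
| streamstats window=2 global=f range(_time) as delta by sourcetype
| timechart max(delta) as "delta [sec]" by sourcetype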
I have installed the Splunk Add-on for Windows, but I am still unable to see the tag and tag::eventtype fields when searching index=windows; all the other fields are populated. As a result, I am unable to use "| savedsearch DA-ITSI-OS-OS_Hosts_Search" to import the entities, since it requires the tag field. Please help me here.
Hello Splunkers, I am trying to change the "last updated" time on the IT Essentials Work dashboard (screenshot attached). This time zone is not coming from the servers, and it is not the time zone configured in Splunk either. I am trying to change it to GMT+4.
Hello, I can't find any information about integrating Ivanti Neurons data with Splunk. Does anyone have a solution for this?
Hi all, good day! I have 2 indexes with different sourcetypes and different URIs.

Index 1 (has HTTP status codes):
1. For one URI, only 200, 403, and 422 count as success; everything else is a failure.
2. For the remaining URIs, 200 is success and everything else is a failure.

Index 2 (has response codes): for its one URI, 200 is success.

How do I get the success percentage as a timechart by country? Please help with this.
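A hedged sketch of one way to express those rules; the index, URI, field, and country names here are placeholders for whatever the events actually contain:

index=index1 OR index=index2
| eval code=tonumber(coalesce(httpstatuscode, Responsecode))
| eval success=case(uri="/the/special/uri" AND (code=200 OR code=403 OR code=422), 1, code=200, 1, true(), 0)
| bin _time span=1h
| stats count as total, sum(success) as ok by _time, country
| eval success_pct=round(100 * ok / total, 2)
| xyseries _time country success_pct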
Using Splunk Enterprise 9. I'm trying to populate a Dashboard Studio dropdown input from query results. I was testing via a simple query (copied from the Dashboard Studio examples) as follows:

| inputlookup firewall_example.csv | stats count by host

This works fine and the dropdown gets populated with the hosts. However, I'd also expected the following to work:

| inputlookup firewall_example.csv | stats values(host)

but it doesn't, and no dynamic entries appear in the dropdown. So my understanding of the query types that can be used with dropdown inputs is incomplete! Can someone point me in the right direction?
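A plausible explanation, worth verifying: stats values(host) returns a single row containing one multivalue field, while the dropdown expects one row per option. Expanding the multivalue field back into rows restores that shape:

| inputlookup firewall_example.csv
| stats values(host) as host
| mvexpand host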
Hi Team,
1. We have a 50 GB DEV/TEST license file. After configuring this license, can we create knowledge objects like dashboards, reports, and alerts?
2. Do we have any restrictions after configuring the DEV/TEST license, in comparison to Enterprise, that would prevent us from performing these tasks?
3. I want to configure this on a single server that serves as heavy forwarder, search head, and indexer. That makes sense, right?
More specifically: when the incoming events are already in JSON format, just not the HEC-specific JSON structure? In my case, each event is represented by a JSON object with a "flat" structure (no nesting): just a collection of sibling key/value pairs. This "generic" JSON can be ingested by numerous analytics platforms with minimal configuration. I've configured a sourcetype in props.conf and transforms.conf to ingest events in this JSON structure, including timestamp recognition and per-event sourcetype mapping (that is, dynamically mapping each event to a more specific sourcetype based on two values in the event). I use that sourcetype configuration for the following Splunk inputs:

- TCP
- HEC raw endpoint (services/collector/raw)

I could modify this JSON to meet the HEC-specific structure required by the HEC JSON endpoint (services/collector). I understand the HEC-specific structure and the changes that I need to make. However, before I do that, I thought I'd ask: what are the advantages of using the HEC JSON endpoint versus the HEC raw endpoint? I anticipate that answers will make the point that Splunk ingestion is more streamlined, because you don't need to configure, for example:

- Timestamp recognition: you specify time as a metadata key
- Per-event sourcetype mapping: you can specify sourcetype as a metadata key

However, from my perspective, this simply shifts compute costs upstream. That is, I would have to perform additional upstream processing to modify the existing "generic" JSON. Given this context, what do I gain by using the HEC JSON endpoint? I understand that HEC indexer acknowledgment is available via both endpoints. Am I missing something?
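For concreteness, the two payload shapes side by side; the host and token are placeholders. The raw endpoint takes the event body as-is, while the JSON endpoint wraps it in an envelope whose time and sourcetype keys replace the parsing-phase configuration:

# raw endpoint: body is the event itself; sourcetype set via the query string
curl -k -H "Authorization: Splunk <hec_token>" \
  "https://splunk.example.com:8088/services/collector/raw?sourcetype=generic_json" \
  -d '{"status": "ok", "latency_ms": 42}'

# JSON endpoint: body is an envelope; metadata keys carry time and sourcetype
curl -k -H "Authorization: Splunk <hec_token>" \
  "https://splunk.example.com:8088/services/collector" \
  -d '{"time": 1700000000, "sourcetype": "generic_json", "event": {"status": "ok", "latency_ms": 42}}'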
I have a log that documents call results for phone calls as CSV event records. There is a field in the event record for the phone number. The event record may contain a list of sub-events that I want to track:

- If the CSV event record contains a string "MOCK,?,?,1", that counts as a BAD call. The trailing "1" is what determines it's a bad call; we don't care what the "?" numbers are.
- If the event record has any "MOCK,?,?,0" event, but no "MOCK,?,?,1", it is a GOOD call.

I would like a report showing the number of calls to every phone number and the percentage of BAD calls.
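A sketch of one way to compute that report, assuming the phone number is extracted into a field named phone_number and the "?" positions are plain numbers (both assumptions; adjust the regex and names to the real data):

index=call_logs
| eval is_bad=if(match(_raw, "MOCK,\d+,\d+,1"), 1, 0)
| stats count as total_calls, sum(is_bad) as bad_calls by phone_number
| eval bad_pct=round(100 * bad_calls / total_calls, 2)
| table phone_number, total_calls, bad_calls, bad_pct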
Hi, can anyone help me download my exam scorecard and obtained marks? I have recently passed my Splunk Power User exam.
Consider these three searches that end with timechart. The second one skews the time range all the way to year 2038! How do I fix that?

1. Index search.

2. Change to the equivalent tstats:

| tstats count where index=_internal earliest=-7d by _time span=1d
| timechart span=1d sum(count)

Note how the time span magically changes all the way to 2038?

3. Do not use earliest with tstats; use the time selector on the search screen:

| tstats count where index=_internal ```earliest=-7d``` by _time span=1d
| timechart span=1d sum(count)

I have specific reasons to set earliest with a specific token in a dashboard, so the search time selector is not an option.
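One hedged guess at a fix, prompted by 2038 being the 32-bit epoch maximum: with only earliest specified inside tstats, latest appears to default to all time, so buckets from events indexed with far-future timestamps show up. Pinning latest explicitly may be worth trying:

| tstats count where index=_internal earliest=-7d latest=now by _time span=1d
| timechart span=1d sum(count)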
I have a lookup file called prefixes.csv, and it has about 5 headers:

prefix,location,description,owner
"1.0.0.0/8",usa,"corporate things","joe schmoe"

I want to be able to reference this file so that, for example, when I am looking at firewall logs, I can ignore, or alternatively specifically look for, events whose src_ip falls into these ranges. So for example, something like:

index=firewall src_ip=* | search NOT [ | inputlookup prefixes.csv | fields prefix | rename prefix as src_ip ]

I know that I can do something like this if I had every range expanded into single entries per IP, but is there a way to do this with CIDR? I have tried the lookup-definition route, but I think I am missing or misunderstanding something there. Thanks in advance.
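The lookup-definition route does support this via a CIDR match type. A sketch, assuming a lookup definition named prefixes over prefixes.csv; in transforms.conf (or Settings > Lookups > Lookup definitions > Advanced options):

[prefixes]
filename = prefixes.csv
match_type = CIDR(prefix)

Then a search can keep or drop events depending on whether the lookup returns a value:

index=firewall src_ip=*
| lookup prefixes prefix as src_ip OUTPUT location
| where isnull(location)

where isnull(location) keeps events outside the listed ranges; isnotnull(location) selects the opposite.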
Hello. I am trying to take advantage of the free courses with Splunk, but I am unable to view the videos. I've tried turning the VPN off, turning off extensions, clearing the cache, and using incognito mode. Nothing works. Thanks in advance for the responses and for helping out.
We have some services, each of which produces logs. These logs are aggregated and stored in a MinIO bucket (not AWS! just an on-prem MinIO deployment). I want to integrate Splunk with MinIO so that Splunk pulls these logs from the bucket (rather than MinIO pushing the logs).