All Topics



Can you please point me to the start-up screen where I can start a new search?
I've seen this on some older posts, but I am currently battling this issue. For some hosts, restarting it makes the logs start flowing again without the above error message (suggesting a delayed start is the answer). But on some of them a restart does nothing, and there are real security logs for which Splunk is merely reporting the above error message.
Hi, I'm encountering this error when I run btool check:

Invalid key in stanza [email] in /opt/splunk/etc/apps/search/local/alert_actions.conf, line 2: show_password (value: True)

Inside alert_actions.conf:

[email]
show_password = True

Could I just delete or rename the file? And what is this stanza for? I can't see it in the documentation: https://docs.splunk.com/Documentation/Splunk/8.2.6/Admin/Alertactionsconf
I am getting IPv6 with collapsed zeros and an IPv4 quad (e.g. "fe80::192.168.10.100") for source, and I want to parse out the IPv6 part of that field. What do I need to add in my props to parse that part out? Thank you for any and all help.
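A minimal sketch of the kind of search-time extraction the post describes, assuming the field is named source and the value always puts the IPv6 portion before the embedded IPv4 quad (both assumptions, not confirmed by the post):

```spl
... | rex field=source "^(?<ipv6_part>[0-9a-fA-F:]+?::)(?<ipv4_part>\d{1,3}(?:\.\d{1,3}){3})$"
```

For "fe80::192.168.10.100" this would capture ipv6_part="fe80::" and ipv4_part="192.168.10.100". For an always-on extraction, the same regex could go in props.conf as an EXTRACT- setting under the relevant sourcetype stanza.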
I'm currently developing a Splunk query that will query 2 indexes to correlate data by leveraging a user's email, but I'm not having any luck:

index="A" Example="A"
| dedup email
| rename email AS actor
| join actor [search index="B" | table _time, actor, fileName, shared, url ]

I also tried this query as well:

(index="A" Example="A" OR index="B")
| fields email
| where email = actor
| table _time, work_email, fileName, shared, url
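For reference, one common pattern for this kind of cross-index correlation (a sketch only; the index names, field names, and the assumption that index A uses email while index B uses actor are taken from the post and may not match the real data) is to normalize the shared field and group with stats instead of join:

```spl
(index="A" Example="A") OR index="B"
| eval actor=coalesce(actor, email)
| stats values(_time) AS _time values(fileName) AS fileName values(shared) AS shared values(url) AS url by actor
```

stats-based correlation avoids the subsearch row limits that can silently drop results from join.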
Hello, I would like to know if I can monitor IoT with AppDynamics and, if so, what agent should I use. Thanks.

^ Post edited by @Ryan.Paredez. Please have the title of the post be in a digestible question format. This helps others search and find content.
I have the Splunk Add-on for Microsoft Cloud Services (https://splunkbase.splunk.com/app/3110/) installed on my heavy forwarder, ingesting audit data from an event hub input configured as a central repository for our tenant's audit data. This is working like a champ: I see tons of event hub data, and it's all parsing as expected. I'd love to use some existing dashboards to avoid making my own. I saw that the Microsoft Azure App for Splunk (https://splunkbase.splunk.com/app/4882/) contains dashboards for data collected from both the Cloud Services add-on above and the standard Azure add-on. Seems like what I want. However, after deploying the app to my SHC, none of the dashboards work. Digging further into it, it appears that the sourcetype the app is looking for is totally different from the sourcetype the MCS add-on generates: all the events in the index are sourcetype=mscs:azure:eventhub, but the app is looking for sourcetype=azure:eventhub. The question is: is the app actually supposed to work with the MCS add-on, and if so, does anyone have advice on making that work? Or is there a different app that provides dashboards for the data ingested by the MCS add-on? It looks like I could change the sourcetype in the configuration of the app, but that doesn't feel like something I should be changing when the description says it works with the add-on.
Hi guys, I searched through all topics and couldn't find anything relevant to my issue, so I hope someone can help me with my question. We use Splunk Cloud with Enterprise Security. We have an AWS environment, mostly with Linux instances, and what I want to achieve is to search in Splunk for any new files put in /tmp, /sbin, etc. I was googling and searching the documentation, and all I can find is this document, https://docs.splunk.com/Documentation/SplunkCloud/8.2.2202/Data/Monitorfilesanddirectories, which says I should use the CLI. But this doesn't make sense; we use Splunk Cloud, so there is no CLI, right? And I think that article is about accessing and monitoring Splunk's own files, right? So, in short: how do I search for any new files added to instances? Cheers.
I would like to narrow down my results and rename a few fields using an initial search; let's call these results A. Then I want to take A, search on `event_type=event1`, and massage the results to get B. Then take A, search on `event_type=event2`, and massage the results to get C. Then I want to combine the results B and C and use chart to dedup and display the combined result. With my current query, the column for stage1 in my table is all null, but if I do the search with the contents of the append at the root and remove the second search, I get populated results. My current query is the following:

index=... ...
| rename some_field as taskID
| append [search "event.event_type"=event1
    | eval stageDuration='event.payload.total_duration'-'event.payload.provisioning_duration'
    | eval stageID="stage1"]
| search "event.event_type"=event2
| rename event.payload.total_duration as stageDuration event.payload.stageID as stageID
| chart sum(stageDuration) over taskID by stageID
| table taskID, stage1, *
| where isnull(stage4)
| fillnull value=0
Hello, are there any queries I can run from a Splunk search head to find: 1. all configured DB Connect connections and their associated indexes/sourcetypes in Splunk, and 2. all add-ons currently in use in Splunk? Any help will be highly appreciated, thank you!
Hello, we are looking to create a search that will return when two similar events occur within 1 second of each other. Sample log search results:

2022-04-19 18:42:39,210 INFO [stdout] (default task-43) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:39:31,142 INFO [stdout] (default task-43) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,403 INFO [stdout] (default task-41) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,371 INFO [stdout] (default task-42) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:34:01,696 INFO [stdout] (default task-40) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:30:36,450 INFO [stdout] (default task-39) [core.service.RestService] ==============POST Send Family=============
2022-04-19 16:57:39,144 INFO [stdout] (default task-36) [core.service.RestService] ==============POST Send Family=============
2022-04-19 14:01:42,904 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:46:00,629 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:42:39,944 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:32:59,488 INFO [stdout] (default task-145) [core.service.RestService] ==============POST Send Family=============

We would like the query to return results when events occur very close together, like the following two, since they are within a second of each other:

2022-04-19 18:35:38,403 INFO [stdout] (default task-41) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,371 INFO [stdout] (default task-42) [core.service.RestService] ==============POST Send Family=============

Is there a way we can generate a query that would find something like that? Thanks!
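A sketch of one common approach to this kind of proximity detection, using streamstats to compare each event's timestamp to the previous one (assuming the base search is already filtered to the "POST Send Family" lines; the 1-second threshold is the one described in the post):

```spl
"POST Send Family"
| sort 0 _time
| streamstats current=f window=1 last(_time) AS prev_time
| eval gap=_time - prev_time
| where gap < 1
```

As written this flags the later event of each close pair; showing both events of the pair would need a little extra logic.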
I've got a scripted input running on a universal forwarder that generates JSON output to the tune of 18,000+ lines. However, when I query for its events in Splunk search, it shows only 13 lines per event:

{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "apps/v1",
      "kind": "Deployment",
      "metadata": {
        "annotations": {
          "deployment.kubernetes.io/revision": "1",
          "field.cattle.io/publicEndpoints": "myPublicEndpoint1",
          "meta.helm.sh/release-name": "myrelease1",
          "meta.helm.sh/release-namespace": "mynamespace1"
        },

I've tried setting TRUNCATE to both 0 and 1000000 in props.conf for the scripted input's sourcetype ("scriptedinput1") on both the universal forwarder and the search instances and restarted the services, but the truncation remains the same:

[scriptedinput1]
KV_MODE=json
TRUNCATE=1000000

I should also note that I'm not seeing "truncating" anywhere in splunkd.log on the universal forwarder or search instances. Any assistance with configuring Splunk not to truncate my scripted input running on the universal forwarder would be greatly appreciated.
Is Splunk version 7.2.1 available to be downloaded, and where can I find it? It is not listed in the older releases section.
I have this code:

| eval m=case(minute>0 AND minute<15,15,minute>14 AND minute<30,15,minute>29 AND minute<45,30,minute>44,45)
| eval where m = "00"
| eval time1=strftime(_time, "%Y-%m-%d %H:00:00")
| eval where m = "15"
| eval time1=strftime(_time, "%Y-%m-%d %H:15:00")
| eval where m = "30"
| eval time1=strftime(_time, "%Y-%m-%d %H:30:00")
| eval where m = "45"
| eval time1=strftime(_time, "%Y-%m-%d %H:45:00")
| table system SMF30JBN _time SMF30SRV msused hour m time1

How do I get each where statement to run only the eval command that is right under it? Right now every time1 value ends up with 45 in the minutes place.
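For reference, conditional assignment like the one described above is usually written in SPL as a single eval with case(), so each branch applies only when its condition matches (a sketch; it assumes m holds one of 0, 15, 30, or 45, which the post's own case() statement does not quite guarantee):

```spl
| eval time1=strftime(_time, "%Y-%m-%d %H") . ":" . case(m=0,"00", m=15,"15", m=30,"30", m=45,"45") . ":00"
```

Unlike a run of unconditional evals, where each later assignment overwrites the earlier ones, case() evaluates its condition/value pairs in order and returns the first match.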
I need to identify each Active Directory service account that is being used for authentication for my work group. I am trying to create a working list of active and disabled accounts. I am using the following, but not having any luck obtaining all events with a complete list of EventCodes:

index=xxxxxxxxxx inserted group*Svc source="xxxxxxxxx"
| rex field=Account_Name "(?<Account_Name_parsed>[\w]*)\@.*"
| where isnotnull(Account_Name_parsed)
| eval startDate=strftime(strptime(whenCreated,"%Y%m%d%H%M"), "%Y/%m/%d %H:%M")
| eval endDate=strftime(strptime(accountExpires,"%Y-%m-%dT%H:%M:%S%Z"), "%Y/%m/%d %H:%M")
| eval lastDate=strftime(strptime(lastTime,"%Y-%m-%dT%H:%M:%S"), "%Y/%m/%d %H:%M")
| eval Days=floor((now()-strptime(lastDate,"%Y/%m/%d %H:%M"))/(3600*24))
| rex field=userAccountControl "(?<userAccountControl_parsed>[^,]+)"
| eval userAccountControl=lower(replace(mvjoin(userAccountControl_parsed, "|"), " ", "_"))
| eval status=case( match(userAccountControl, "accountdisable") , "disabled", 1==1, "active" )
| stats first(_time) AS Time by Account_Name_parsed
| eval Time=strftime(Time, "%b %d %H:%M:%S")

The only thing showing in Statistics is the account name and time. I need my list to show active and non-active accounts. I would also like the Source_Workstation to be shown as well. I have attached an example of the fields within an event.
Hi Splunk Community, I am currently working with a search and I am trying to filter certain events out. I am trying to remove events with user=unknown and id=123456. When I do | where (id=123456 AND user!="unknown"), it removes both events with an unknown user and events with id=123456. I would like to keep all other events with an unknown user and all other events from 123456, while only dropping events where user is unknown AND id=123456. Thanks in advance!
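For reference, the condition described, dropping only events that match both values at once, is usually expressed by negating the whole conjunction rather than negating one side of it (a sketch; the field names come from the post):

```spl
| where NOT (id=123456 AND user="unknown")
```

This keeps events with an unknown user but a different id, and events with id=123456 but a known user, dropping only events matching both.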
I have a Splunk event as follows:

request-id=123  STOP method TYPE=ABC, ID=[678] --- TIME_TAKEN=1281ms

I have a lot of events like this, and I want to find the maximum time taken. My query is:

****QUERY****
| rex field=_raw "TIME_TAKEN=(?<TIME_TAKEN>\d+)ms"
| table TIME_TAKEN

Now I am able to get all the time values in a table, but I don't want them in a table; I just want the maximum entry. How can I replace the last operation so that I get only the max value of TIME_TAKEN in the output? I don't need anything else.
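A sketch of the usual way to reduce an extracted field to a single maximum in SPL, keeping the base search as the post's ****QUERY**** placeholder:

```spl
****QUERY****
| rex field=_raw "TIME_TAKEN=(?<TIME_TAKEN>\d+)ms"
| stats max(TIME_TAKEN) AS max_time_taken
```

stats replaces the event list with a single-row result, which is exactly the "only the max entry" output described.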
Hello, from the search below I need to display only the result corresponding to the current time. That means if it is 17:15, I need to display only the count value corresponding to 17:15:

`tutu` sourcetype="session"
| bin _time span=15m
| stats dc(s) as count by _time

The time format is 2022-04-27 17:15:00. Could you help please?
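One way to keep only the bucket for the current quarter hour (a sketch; the `tutu` macro and the field s come from the post, and the 900-second arithmetic assumes the same 15m span, which bin aligns to epoch multiples of 900):

```spl
`tutu` sourcetype="session"
| bin _time span=15m
| stats dc(s) as count by _time
| where _time=floor(now()/900)*900
```

floor(now()/900)*900 snaps the current time down to the most recent 15-minute boundary, matching the bucket that bin produces for events in the current window.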
I am learning Splunk (early stages). I have been playing around with this search for the past 2 hours with little success. I am running this query to get the IP address of the workstation this person is using:

index=fortinet* user=XXXX*
| top limit=1 sip
| table sip

I am trying to tie this search in with another index search (index=wineventlog_pc) and use that IP address as the source to find the actual name of the workstation being used. Any help or insights would be awesome. Thank you.
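A sketch of one common way to feed the result of the first search into the second, using a subsearch (the field names src_ip and ComputerName in wineventlog_pc are guesses, not confirmed by the post, and would need to match the actual field names in that index):

```spl
index=wineventlog_pc
    [search index=fortinet* user=XXXX*
     | top limit=1 sip
     | fields sip
     | rename sip AS src_ip]
| stats values(ComputerName) AS workstation by src_ip
```

The subsearch runs first and its result becomes a filter (src_ip=<value>) on the outer search.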
Hello all, first-time post. It's been a great adventure, but boy, there is a lot to learn. I will try to be as clear as possible. I have a dashboard I am making that pulls data from Splunk regarding support tickets (specifically ticket numbers and, supposedly, current status). I am finding that in any date range there can be multiple Splunk entries for the same ticket. It's like Splunk is picking up an event every time there is an update to said ticket. So if I pull any tickets for a particular queue name with the status of Assigned, there may already be a newer event that has come in with a status of Closed. How can I filter my data to pull incidents by queue and be sure I am getting the most recent possible status? Here's a code example (I cut out some of the eval statements to make it easier to read):

((index="wss_desktop_os") (sourcetype="db_itsm" OR sourcetype="wss_itsm_remedy")) earliest=-24h
| search (queuename AND TOTAL_TRANSFERS >= "4" NOT STATUS_TXT="Closed")
| dedup INCIDENT_#
| table ASSIGNED_GROUP, INCIDENT_#, STATUS_TXT, ASSIGNEE, Age-Days, TOTAL_TRANSFERS

It makes an output like this:

ASSIGNED_GROUP INCIDENT_# STATUS_TXT
Group ticket # status

John F
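For reference, one common way to make dedup keep the most recent event per ticket (a sketch; the field names are taken from the query in the post) is to sort by _time first, since dedup keeps the first event it sees, and to dedup before filtering on status so a ticket whose latest event is Closed is excluded rather than matched on an older Assigned event:

```spl
((index="wss_desktop_os") (sourcetype="db_itsm" OR sourcetype="wss_itsm_remedy")) earliest=-24h
| sort 0 - _time
| dedup INCIDENT_#
| search queuename AND TOTAL_TRANSFERS >= "4" NOT STATUS_TXT="Closed"
| table ASSIGNED_GROUP, INCIDENT_#, STATUS_TXT, ASSIGNEE, Age-Days, TOTAL_TRANSFERS
```

An alternative with the same effect is stats latest(STATUS_TXT) and friends grouped by the incident field.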