All Topics


Hello, we are looking to create a search that returns results when two similar events occur within one second of each other. Sample log search results:

2022-04-19 18:42:39,210 INFO [stdout] (default task-43) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:39:31,142 INFO [stdout] (default task-43) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,403 INFO [stdout] (default task-41) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,371 INFO [stdout] (default task-42) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:34:01,696 INFO [stdout] (default task-40) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:30:36,450 INFO [stdout] (default task-39) [core.service.RestService] ==============POST Send Family=============
2022-04-19 16:57:39,144 INFO [stdout] (default task-36) [core.service.RestService] ==============POST Send Family=============
2022-04-19 14:01:42,904 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:46:00,629 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:42:39,944 INFO [stdout] (default task-153) [core.service.RestService] ==============POST Send Family=============
2022-04-19 13:32:59,488 INFO [stdout] (default task-145) [core.service.RestService] ==============POST Send Family=============

We would like the query to return results when events occur this close together, for example:

2022-04-19 18:35:38,403 INFO [stdout] (default task-41) [core.service.RestService] ==============POST Send Family=============
2022-04-19 18:35:38,371 INFO [stdout] (default task-42) [core.service.RestService] ==============POST Send Family=============

Is there a way we can build a query that finds something like that? Thanks!
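A minimal sketch of one approach, using streamstats to compare each event's timestamp with the previous one (the index name and search terms are placeholders for the real search):

index=your_index "POST Send Family"
| sort 0 _time
| streamstats current=f window=1 last(_time) as prev_time
| eval gap=_time-prev_time
| where gap<=1

Since _time carries sub-second precision here, gap<=1 would catch the 18:35:38,403 / 18:35:38,371 pair while skipping the rest.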
I've got a scripted input running on a universal forwarder that generates JSON output of 18,000+ lines. However, when I query for its events in Splunk search, each event shows only 13 lines:

{
"apiVersion": "v1",
"items": [
{
"apiVersion": "apps/v1",
"kind": "Deployment",
"metadata": {
"annotations": {
"deployment.kubernetes.io/revision": "1",
"field.cattle.io/publicEndpoints": "myPublicEndpoint1",
"meta.helm.sh/release-name": "myrelease1",
"meta.helm.sh/release-namespace": "mynamespace1"
},

I've tried setting TRUNCATE to both 0 and 1000000 in props.conf for the scripted input's sourcetype ("scriptedinput1") on both the universal forwarder and the search instances and restarted the services, but the truncation remains the same.

[scriptedinput1]
KV_MODE=json
TRUNCATE=1000000

I should also note that I'm not seeing "truncating" anywhere in splunkd.log on the universal forwarder or search instances. Any assistance with configuring Splunk not to truncate this scripted input would be greatly appreciated.
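For context: TRUNCATE caps bytes per event, while line merging caps how many raw lines are joined into one event, so a line-count symptom often points at the latter. A hedged props.conf sketch for the parsing tier (indexers or a heavy forwarder; a universal forwarder only parses structured sourcetypes), assuming the 13-line cut comes from line merging rather than TRUNCATE:

[scriptedinput1]
KV_MODE = json
TRUNCATE = 1000000
# merge raw lines into one event, and raise the merged-line cap (default MAX_EVENTS is 256)
SHOULD_LINEMERGE = true
MAX_EVENTS = 20000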
Is Splunk version 7.2.1 available for download, and where can I find it? It is not listed in the older releases section.
I have this code:

| eval m=case(minute>0 AND minute<15,15,minute>14 AND minute<30,15,minute>29 AND minute<45,30,minute>44,45)
| eval where m = "00"
| eval time1=strftime(_time, "%Y-%m-%d %H:00:00")
| eval where m = "15"
| eval time1=strftime(_time, "%Y-%m-%d %H:15:00")
| eval where m = "30"
| eval time1=strftime(_time, "%Y-%m-%d %H:30:00")
| eval where m = "45"
| eval time1=strftime(_time, "%Y-%m-%d %H:45:00")
| table system SMF30JBN _time SMF30SRV msused hour m time1

How do I get each where statement to run only the eval command directly under it? Right now, every time1 ends up with 45 in the minute place.
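A sketch of collapsing this into conditional evals instead of sequential overwrites; the quarter-hour bucket boundaries here are an assumption about the intent behind the case() above:

| eval m=case(minute<15,"00", minute<30,"15", minute<45,"30", true(), "45")
| eval time1=strftime(_time, "%Y-%m-%d %H:") . m . ":00"

With a single case() driving m and string concatenation building time1, each event gets exactly one minute bucket rather than the last eval winning every time.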
I need to identify each Active Directory service account that is being used for authentication in my work group, and build a working list of active and disabled accounts. I am using the following, but am not having any luck getting all events with a complete list of EventCodes:

index=xxxxxxxxxx inserted group*Svc source="xxxxxxxxx"
| rex field=Account_Name "(?<Account_Name_parsed>[\w]*)\@.*"
| where isnotnull(Account_Name_parsed)
| eval startDate=strftime(strptime(whenCreated,"%Y%m%d%H%M"), "%Y/%m/%d %H:%M")
| eval endDate=strftime(strptime(accountExpires,"%Y-%m-%dT%H:%M:%S%Z"), "%Y/%m/%d %H:%M")
| eval lastDate=strftime(strptime(lastTime,"%Y-%m-%dT%H:%M:%S"), "%Y/%m/%d %H:%M")
| eval Days=floor((now()-strptime(lastDate,"%Y/%m/%d %H:%M"))/(3600*24))
| rex field=userAccountControl "(?<userAccountControl_parsed>[^,]+)"
| eval userAccountControl=lower(replace(mvjoin(userAccountControl_parsed, "|"), " ", "_"))
| eval status=case(match(userAccountControl, "accountdisable"), "disabled", 1==1, "active")
| stats first(_time) AS Time by Account_Name_parsed
| eval Time=strftime(Time, "%b %d %H:%M:%S")

The only thing showing in Statistics is the account name and time. I need the list to show active and non-active accounts, and I would also like the Source_Workstation to be shown. I have attached an example of fields within an event.
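The final stats keeps only its by-field and aggregate, which is why everything else disappears. A sketch of carrying the extra fields through the aggregation (assuming Source_Workstation is the actual field name in the events):

| stats first(_time) as Time, latest(status) as status, values(Source_Workstation) as Source_Workstation by Account_Name_parsed
| eval Time=strftime(Time, "%b %d %H:%M:%S")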
Hi Splunk Community, I am currently working with a search and trying to filter certain events out: those with user=unknown and id=123456. When I do | where (id=123456 AND user!="unknown"), it removes both events with an unknown user and events with id=123456. I would like to keep all other events with an unknown user and all other events from 123456, dropping only events where user is unknown AND id=123456. Thanks in advance!
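A sketch of negating the whole conjunction, so only the exact combination is dropped:

| where NOT (id=123456 AND user="unknown")

Events that match only one of the two conditions fail the inner AND, so the NOT keeps them.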
I have a Splunk event as follows:

request-id=123  STOP method TYPE=ABC, ID=[678] --- TIME_TAKEN=1281ms

I have a lot of events like this and I want to find the max time taken. My query is:

| rex field=_raw "TIME_TAKEN=(?<TIME_TAKEN>\d+)ms"
| table TIME_TAKEN

Now I am able to get all the time values in a table, but I don't want them in a table; I just want the max entry. How can I replace the last operation so that the output is the max value of TIME_TAKEN and nothing else?
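A sketch replacing the table with a max aggregation (the tonumber guards against string comparison, since rex extracts text):

| rex field=_raw "TIME_TAKEN=(?<TIME_TAKEN>\d+)ms"
| eval TIME_TAKEN=tonumber(TIME_TAKEN)
| stats max(TIME_TAKEN) as max_TIME_TAKEN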
Hello, from the search below I need to display only the result corresponding to the current time. That is, if it is 17:15, I need to display only the count value corresponding to 17:15:

`tutu` sourcetype="session"
| bin _time span=15m
| stats dc(s) as count by _time

The time format is 2022-04-27 17:15:00. Could you help please?
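A sketch that keeps only the most recent 15-minute bucket, assuming the search's time range ends at now:

`tutu` sourcetype="session"
| bin _time span=15m
| stats dc(s) as count by _time
| sort 0 - _time
| head 1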
I am learning Splunk (early stages) and have been playing around with this search for the past two hours with little success. I am running this query to get the IP address of the workstation this person is using:

index=fortinet* user=XXXX*
| top limit=1 sip
| table sip

I am trying to tie this search in with another index search (index=wineventlog_pc) and use that IP address as the source to find the actual name of the workstation being used. Any help or insights would be awesome. Thank you.
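A sketch using a subsearch to feed the top IP into the second search; src_ip is an assumption standing in for whatever the matching field is called in wineventlog_pc:

index=wineventlog_pc
    [ search index=fortinet* user=XXXX*
      | top limit=1 sip
      | fields sip
      | rename sip as src_ip ]

The fields/rename inside the subsearch strips top's count/percent columns and makes the returned field name line up with the outer index's field.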
Hello all, first-time post. It's been a great adventure, but boy, there is a lot to learn. I will try to be as clear as possible. I have a dashboard that pulls data from Splunk regarding support tickets (specifically ticket numbers and, supposedly, current status). I am finding that in any date range there can be multiple Splunk entries for the same ticket; it's like Splunk picks up an event every time there is an update to that ticket. So if I pull tickets for a particular queue name with a status of Assigned, there may already be a newer event with a status of Closed. How can I filter my data to pull incidents by queue and be sure I am getting the most recent status? Here's a code example (I cut out some of the eval statements to make it easier to read):

((index="wss_desktop_os") (sourcetype="db_itsm" OR sourcetype="wss_itsm_remedy")) earliest=-24h
| search (queuename AND TOTAL_TRANSFERS >= "4" NOT STATUS_TXT="Closed")
| dedup INCIDENT_#
| table ASSIGNED_GROUP, INCIDENT_#, STATUS_TXT, ASSIGNEE, Age-Days, TOTAL_TRANSFERS

It makes an output like this:

ASSIGNED_GROUP   INCIDENT_#   STATUS_TXT
Group            ticket #     status

John F
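A sketch that dedups down to the latest event per ticket before filtering on status, so a ticket whose newest event is Closed drops out (queuename kept as the original placeholder term):

((index="wss_desktop_os") (sourcetype="db_itsm" OR sourcetype="wss_itsm_remedy")) earliest=-24h
| sort 0 - _time
| dedup INCIDENT_#
| search queuename AND TOTAL_TRANSFERS>=4 NOT STATUS_TXT="Closed"
| table ASSIGNED_GROUP, INCIDENT_#, STATUS_TXT, ASSIGNEE, Age-Days, TOTAL_TRANSFERS

The order matters: filtering before the dedup can keep a stale Assigned event even when a newer Closed one exists.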
Hi, I transpose with header field time like this:

| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime _events
| fillnull value=0
| transpose header_field=time 0 column_name=KPI include_empty=true
| sort KPI

Now I need to display only the fields for which _time is earlier than the current time. So I am doing this, and it works:

| where _time < now()

But I also need to display only the fields from the hour before the current time. I need something like this, but I don't succeed:

| where _time < now() AND _time > now()-1

Could you help please?
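A sketch of the window filter applied before the transpose: now() is epoch seconds, so one hour back is now()-3600, not now()-1:

| where _time < now() AND _time > now()-3600
| eval time=strftime(_time,"%H:%M")
| sort time
| fields - _time _span _origtime _events
| fillnull value=0
| transpose 0 header_field=time column_name=KPI include_empty=true
| sort KPI

Filtering on _time has to happen before the transpose, since afterwards the times survive only as column headers.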
Hi Splunkers, for our environments I needed a custom parser for some WAF logs, so I created an add-on to provide this. The add-on was built on a local Splunk instance on my desktop; once completed and tested, it was loaded onto our Splunk Cloud instance, where it has Global permissions. The situation is this: once installed on Cloud, the add-on correctly parses the logs and performs field extraction as desired, consistent with the results on the local instance; the events are also correctly tagged with "attack" and "ids", since we want to see them in the Intrusion Detection data model. Unfortunately, when I search the Intrusion Detection DM, the events are not present; a simple search like

|tstats summariesonly=true fillnull_value="N/D" count from datamodel=Intrusion_Detection by sourcetype

does not show the sourcetype created for the add-on in its output. I followed the usual way I handle add-on data model matching, which is:

1. create an eventtype in eventtypes.conf with the syntax:

[<eventtype name>]
search = <sourcetype> <parameters list>

2. use the above eventtype in tags.conf for tagging, with the syntax:

[eventtype=<eventtype name>]
attack=enabled
ids=enabled

If permissions are OK, what could be the root cause?
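Two hedged checks that often narrow this down: re-run the tstats without summariesonly (if the events then appear, the tagging works and the gap is in acceleration or its index whitelist), and search the dataset directly to see whether the model's constraints match at all (IDS_Attacks is the root event dataset of the CIM Intrusion Detection model):

|tstats summariesonly=false fillnull_value="N/D" count from datamodel=Intrusion_Detection by sourcetype

| datamodel Intrusion_Detection IDS_Attacks search | head 10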
Lookup table fields that contain < or > symbols are getting escaped to &gt; and &lt;. How can I prevent this from occurring? It only happens when manipulating the field value using the lookup editor. For example:

servicecode>="200" AND servicecode<400,1,0), failed=if(servicecode>400,1,0)

gets rewritten as:

&gt;="200" AND servicecode&lt;400,1,0), failed=if(servicecode&gt;400,1,0)
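A sketch for repairing already-escaped values in place with SPL; the lookup file and field name here are hypothetical:

| inputlookup my_lookup.csv
| eval condition=replace(replace(condition, "&gt;", ">"), "&lt;", "<")
| outputlookup my_lookup.csv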
Hi all, I have 2 panels; call them panel1 and panel2. Panel2 is the detail view for a value in panel1, and I didn't use post-processing; these are 2 individual panels. The problem is that panel2 searches different events because the time range isn't carried into it. How do I fix that?

Panel1, with drilldown token ipDownload:

<search>
  <query>index=... | fields SrcIP, DownSize | chart sum(DownSize) as Download by SrcIP | sort 10 -Download</query>
  <earliest>$Time.earliest$</earliest>
  <latest>$Time.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>

Panel2:

<search>
  <query>index=... | search SrcIP="$ipDownload$" | stats sum(DownSize) as Download by DstIP Client AppProtocol | sort 10 -Size | table DstIP, Client, AppProtocol, Download</query>
  <earliest>$Time.earliest$</earliest>
  <latest>$Time.latest$</latest>
  <sampleRatio>1</sampleRatio>
</search>
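A Simple XML sketch of a drilldown on panel1 that sets both the IP token and explicit time tokens for panel2 to consume; the detail.* token names are assumptions, not existing tokens:

<drilldown>
  <set token="ipDownload">$click.value$</set>
  <set token="detail.earliest">$Time.earliest$</set>
  <set token="detail.latest">$Time.latest$</set>
</drilldown>

Panel2's search element would then reference <earliest>$detail.earliest$</earliest> and <latest>$detail.latest$</latest>, so it always runs over the same window panel1 was showing when clicked.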
We have to filter out the data that has Result=pass, status=200 and send the other logs to Splunk. We were receiving logs in Splunk before adding props.conf and transforms.conf. We have the following configuration:

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = result\=200
DEST_KEY = queue
FORMAT = indexQueue

[cloudnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[cloudparsing]
REGEX = result\=pass
DEST_KEY = queue
FORMAT = indexQueue

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/default/props.conf

[alibaba:cloudfirewall]
TRANSFORMS-set= cloudnull,cloudparsing

[alibaba:waf]
TRANSFORMS-set= setnull,setparsing

But we are not receiving any logs in Splunk, although there are logs in Alibaba Cloud. Below is the inputs.conf file:

/opt/splunk/etc/apps/TA-AlibabaCloudSLS/local/inputs.conf

[sls_datainput://Alibaba_Cloud_Firewall]
event_retry_times = 0
event_source = alibaba:cloudfirewall
event_sourcetype = alibaba:cloudfirewall
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = *****
sls_cg = ******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = *******
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = *******
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }

[sls_datainput://Alibaba_waf]
event_retry_times = 0
event_source = alibaba:waf
event_sourcetype = alibaba:waf
hec_timeout = 120
index = *****
interval = 300
protocol = private
sls_accesskey = ******
sls_cg = *******
sls_cursor_start_time = end
sls_data_fetch_interval = 1
sls_endpoint = ****
sls_heartbeat_interval = 60
sls_logstore = *****
sls_max_fetch_log_group_size = 1000
sls_project = ****
unfolded_fields = {"actiontrail_audit_event": ["event"], "actiontrail_event": ["event"] }
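Worth noting for the discussion: the configured pairs keep only matching events (everything is sent to nullQueue, then result=200 / result=pass events are pulled back to indexQueue), which is the inverse of dropping them. A sketch of the drop-matching-keep-everything-else direction, assuming the raw events literally contain result=pass or result=200:

transforms.conf:

[dropPassed]
REGEX = result=(pass|200)
DEST_KEY = queue
FORMAT = nullQueue

props.conf:

[alibaba:waf]
TRANSFORMS-set = dropPassed

[alibaba:cloudfirewall]
TRANSFORMS-set = dropPassed

With no catch-all nullQueue rule first, non-matching events stay on the default path to the index, and only the matching ones are discarded.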
Hi all, I have 2 different queries and I want to combine their results. Each query returns a single-value output, and I want both values in the same search result. Thanks for any help.

index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed")
| search STATUS=Successful TYPE="Run bot finished"
| stats count
| rename count as Success_Count

index="abc" RPAEnvironment="prd" ProcessName="*" LogType="*" TaskName="*Main*" (LogLevel=ERROR OR LogLevel=FATAL)
| eval Time = strftime(_time, "%Y-%m-%d %H:%M:%S")
| eval LogDescription = trim(replace(LogDescription, "'", ""))
| eval LogMessage = trim(replace(LogMessage, "'", ""))
| eval TaskName = trim(replace(TaskName, "'", ""))
| eval host=substr(host,12,4)
| eval Account=if(User!="", User, LoginUser)
| table Time, LogNo, host, Account, LogType, LogMessage, TaskName, ProcessName
| rename LogMessage as "Log Message", TaskName as "Task Name", host as "VDI"
| sort - Time
| stats count
| rename count as Failure_Count
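A sketch using appendcols to place the two single values side by side; the second query is trimmed to only what its count actually needs:

index="abc" (TYPE="Run bot finished" OR TYPE="Run bot Deployed") STATUS=Successful TYPE="Run bot finished"
| stats count as Success_Count
| appendcols
    [ search index="abc" RPAEnvironment="prd" TaskName="*Main*" (LogLevel=ERROR OR LogLevel=FATAL)
      | stats count as Failure_Count ]

The result is one row with both Success_Count and Failure_Count columns.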
Hi, I need to do a timechart from a single-panel result. In this single panel, I stats events like this:

| stats count as PbPerf by s
| search PbPerf>10
| stats dc(s)

The result of this search is 14. Now I need to timechart those 14 results, so I am doing this:

| bin _time span=1d
| stats count as PbPerf by s _time
| search PbPerf>10
| timechart count span=1h

The first problem is that to reproduce the same 14 results before the timechart, I have to use span=1d in the bin; but then all 14 end up grouped under the same _time, even though the timechart uses span=1h. How can I display a timechart with a real _time value for each of the 14 results? Thanks
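A sketch using eventstats instead of stats, so each event keeps its own _time while still being filtered on the per-s count:

| eventstats count as PbPerf by s
| where PbPerf>10
| timechart span=1h dc(s)

Unlike stats, eventstats annotates every event with the aggregate rather than collapsing them, which leaves the original timestamps available for the hourly timechart.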
Hi, I need to compare the results of 2 single-value panels between 2 different dates. The first single panel covers the current day over the last 15 minutes and consists of a basic count:

| stats dc(s)

In the second single panel, I need to do the same count, but for one week earlier, again over the 15 minutes preceding the (shifted) current time. Is it possible to do such a thing? Thanks
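A sketch pinning each panel's window with chained relative time modifiers (the base search terms are placeholders):

First panel:   index=... sourcetype="session" earliest=-15m latest=now | stats dc(s)
Second panel:  index=... sourcetype="session" earliest=-7d-15m latest=-7d | stats dc(s)

The -7d-15m/-7d pair shifts the same 15-minute window back exactly one week from the moment the dashboard runs.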
Hi Team, I have a dashboard with 6 panels, each showing the same 2 columns, "index" and "Current status". The data in each panel is different, but the column names are the same, so "index" and "Current status" repeat 6 times across the dashboard. I need to know how to make "index" and "Current status" appear only once. Please suggest.
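One way to get the two headers to render once is to collapse the six panel searches into a single table panel. A hedged tstats sketch, with hypothetical index names and a hypothetical 15-minute staleness rule standing in for whatever each panel actually computes:

| tstats latest(_time) as last_seen where (index=idx_a OR index=idx_b OR index=idx_c) by index
| eval "Current status"=if(now()-last_seen<900, "OK", "Stale")
| table index, "Current status"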
Hello, I would like to copy my app dashboards several times a day, say from the app A/local/data/ui/views folder to the corresponding backup app, say A_backup/../views, adding a timestamp to each dashboard name. The goal is to give developers the possibility to go back to their code from, say, 3 hours earlier. What do I need to take into consideration for that? I would like to avoid restarting Splunk in between to make the changes visible, of course. The developers should be able to access A_backup and see their versioned dashboards under the corresponding names. I know there are perhaps better ways (a Git app, for instance), but I would like to keep it as simple as that. I made a test copying one .xml file within the same app, but it is not visible in the UI, so I guess I'm missing some parts here. Can anyone help with the above? Kind Regards, Kamil
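A minimal shell sketch of the copy step, assuming $SPLUNK_HOME is set, the apps are literally named A and A_backup, and A_backup already exists with suitable permissions; newly copied view files typically only appear after Splunk reloads its UI configuration, which the debug/refresh page in Splunk Web can trigger without a restart:

#!/bin/sh
# timestamp suffix for versioned copies, e.g. 20220427_1715
ts=$(date +%Y%m%d_%H%M)
src="$SPLUNK_HOME/etc/apps/A/local/data/ui/views"
dst="$SPLUNK_HOME/etc/apps/A_backup/local/data/ui/views"
mkdir -p "$dst"
for f in "$src"/*.xml; do
  name=$(basename "$f" .xml)
  cp "$f" "$dst/${name}_${ts}.xml"
done
# then have a logged-in user reload the UI config without a restart, e.g. via
# http://<splunk-web-host>:8000/en-US/debug/refresh (host/port are assumptions)

One caveat worth checking: the copied file name becomes the new view's ID, so each timestamped copy shows up as a separate dashboard in A_backup's view list once the reload happens.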