All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

One of the SHC members is going up and down every 5 minutes. The KV Store is stuck first at "starting" and then at "initial sync". The member is running fine from the backend but keeps flapping between up and down on the search head clustering page. I tried a splunk restart on that member server; the KV Store is now rebuilding and getting stuck at each phase, first at starting and now at the initial sync. Any help/inputs appreciated @rbal_splunk  Thank you
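A minimal recovery sketch that often comes up for a member stuck at initial sync (an assumption about the situation, not a confirmed fix; back up $SPLUNK_HOME/var/lib/splunk/kvstore first and take the member out of service briefly):

splunk stop
# wipes the member's local KV Store data so it performs a fresh resync from the cluster
splunk clean kvstore --local
splunk start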
I have the following JSON:

{
  "kind": "report",
  "id": {
    "time": "2021-12-24T15:45:01.331Z",
  },
  "events": [
    {
      "parameters": [
        { "name": "field1", "boolValue": true },
        { "name": "field2", "boolValue": true },
        { "name": "field3", "value": "value3" },
        { "name": "field4", "value": "value4" },
        { "name": "field5", "boolValue": false },
        { "name": "field6", "value": "value6" },
        { "name": "field7", "value": "value7" },
        { "name": "field8", "boolValue": false },
        { "name": "field9", "value": "value9" },
        { "name": "field10", "boolValue": false },
        { "name": "field11", "boolValue": false }
      ]
    }
  ]
}

I'd like to marry the key/value pairing of name/value and name/boolValue with their corresponding values, i.e.

field1: true
field4: value4

I've attempted to use spath to extract values, but keep coming up short.
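A minimal SPL sketch of one way to pair each name with whichever of value/boolValue is present (assumes each event is a single JSON document and uses the spath() eval function):

| spath path=events{}.parameters{} output=params
| mvexpand params
| eval name=spath(params, "name")
| eval val=coalesce(spath(params, "value"), spath(params, "boolValue"))
| eval {name}=val

mvexpand yields one result per parameter object, and the final eval with {name} turns each name into a field holding its value.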
I've been looking around here and on Google but can't find an answer to this specific use case: I have two sourcetypes, both being fed by Salesforce, that have a common Id but the column names are different. How do I write a query to join them (or more specifically, augment the data from sourcetype A with some additional fields from sourcetype B)?

1. index=sfdc sourcetype=sfdc:LoginEvent
2. index=sfdc sourcetype=sfdc:User

The common field on both is the user's platform GUID (on LoginEvent the column is called UserId, on the User object it's just called Id). My primary source here is LoginEvent. I'm trying coalesce() but getting what looks like a cross join (I'm getting users in my table output with no LoginEvent corollary). If I were to write what I'm trying to get in SQL, this is what I'm trying to do (hopefully this makes it easier to understand):

SELECT le.EventDate, le.Application, le.LoginType, le.Platform, le.Status, u.Type, u.Location
FROM LoginEvent le
JOIN User u ON (u.Id = le.UserId)
ORDER BY le.EventDate
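A minimal SPL sketch of that SQL (field names taken from the question; join is one option, not the only one):

index=sfdc sourcetype=sfdc:LoginEvent
| join type=inner UserId
    [ search index=sfdc sourcetype=sfdc:User
      | rename Id AS UserId
      | fields UserId Type Location ]
| table EventDate Application LoginType Platform Status Type Location
| sort EventDate

Note that join subsearches are subject to result limits (50,000 rows by default), so for a large User object a stats-based merge or a lookup is usually the more robust choice.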
I have AWS data being collected through an IDM on Splunk Cloud. I need to route certain data, based on a regex pattern, to another index. Is this feasible from the GUI? If yes, please suggest how. Many thanks in advance.
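For reference, index-time routing is normally done in props.conf/transforms.conf rather than the GUI; a minimal sketch (the sourcetype, regex, and target index below are placeholders, and on Splunk Cloud this would have to be deployed to the IDM in an app):

# props.conf
[aws:cloudtrail]
TRANSFORMS-route_subset = route_to_other_index

# transforms.conf
[route_to_other_index]
REGEX = <your pattern>
DEST_KEY = _MetaData:Index
FORMAT = other_index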
Hi team,

I need to fetch the 'InterfaceName' from the payload below. I built a regular expression, but it is not working as expected. Can someone please verify it and correct me where I am wrong?

"xmlns:bw":"http://www.tibco.com"
"ns:InterfaceName":"MIAP-COVGBA",
"ns:CorrelationID":"f84nvkdwe-4ij43kj"

I created a regex expression like:

| rex field=_raw "ns:InterfaceName\W+(?<AppId>\w+)" | stats count by AppId

With the above expression I am only getting double quotes ("), but I am unable to fetch the data.

Thanks in advance.

yours, ...
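A sketch of an alternative rex that captures everything between the quotes instead (one likely cause of the problem: \w+ does not match the hyphen in MIAP-COVGBA, so the capture stops early):

| rex field=_raw "ns:InterfaceName\"\s*:\s*\"(?<AppId>[^\"]+)"
| stats count by AppId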
I am building a new Splunk environment, and due to the number of clients we have, we are building a simple distributed environment that consists of 1 heavy forwarder, universal forwarders all pointing to the heavy forwarder, and 1 indexer. I would like the heavy forwarder to only forward certain events on to the indexer. Based upon my research (Route and Filter Data), I have built the configurations below.

outputs.conf

[tcpout]
indexAndForward = 1

[tcpout:indexerGroup]
server = <ip address>:9997

props.conf

[default]
TRANSFORMS-allNull = setnull

[WinEventLog:Security]
TRANSFORMS-WinEventLog = setNull,SendToUserAccountIndex,SendToGroupAccountIndex

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[SendToUserAccountIndex]
REGEX = ^EventCode=(4720|4722|4725|4738)
DEST_KEY = _TCP_ROUTING
FORMAT = indexerGroup

[SendToGroupAccountIndex]
REGEX = ^EventCode=(4727|4730|4728)
DEST_KEY = _TCP_ROUTING
FORMAT = indexerGroup

Unfortunately, even though the specified events are showing in my Windows Event Logs, and the universal forwarder is forwarding the events properly, the events are not showing up among the locally indexed items, and they are also not showing on the destination server. If I remove the [default] stanza from props.conf, every event populates, including those events that do not meet the criteria listed above. Based upon this, I believe there is an issue with my source types. My environment is Splunk Enterprise version 8.0.3, installed on Windows Server 2016.
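For reference, the documented "discard everything, then keep a subset" pattern routes the wanted events back to the indexQueue in the same TRANSFORMS list; a sketch (note that EventCode is rarely at the very start of the raw event, hence the multiline (?m) flag, and that stanza-name matching is case-sensitive, so the setnull/setNull mismatch above matters):

# props.conf
[WinEventLog:Security]
TRANSFORMS-filter = setnull,keepAccountEvents

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keepAccountEvents]
REGEX = (?m)^EventCode=(4720|4722|4725|4738|4727|4730|4728)
DEST_KEY = queue
FORMAT = indexQueue

The stanza name keepAccountEvents is a placeholder; the point is that the later transform in the list wins for matching events.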
Hello all, One of the certificates associated with our universal forwarders is due to expire this week. While we have renewed the certificate, we have yet to push it to all universal forwarders. My question is: if we are unable to push the new certificate by the weekend, will the universal forwarders stop logging data and sharing it with the indexers? Thank you
Hey all, I'm learning Splunk and have tried to install a few apps to see how they work, but none of them seem to be working. I'm currently running Splunk Enterprise 8.2.4 installed on Server 2019, and I've tried to install "RWI - Executive Dashboard" as well as "Splunk Dashboard Examples". When I open the "Examples" page of the latter, it is a completely blank white page, but the other three pages seem to work (Overview, Dashboards, Search). As for the RWI Dashboard, if I click "Continue to app setup page", it also takes me to a blank white screen where I can't do anything. I have tried Internet Explorer as well as Chrome for my browser and updated the server. Is there something simple I am missing to make apps work properly? Thanks!
I have an alert that runs every 10 minutes from 6am-3pm PST. It checks to see if a file has arrived within the last few minutes (file arrives at 6:10, check for the file at 6:12 with a 4-minute timeframe). The problem is that every day, I get 1-3 false positive alerts. I get an email saying that a file didn't arrive on time, but when I go to perform the exact search with the hardcoded timeframe in question, the file did arrive on time. Anyone ever run into this problem? Is this an issue with the alert scheduler? I tried moving the alert back to give the file time to process in the system (file arrives at 6:10, check for the file at 6:15 with an 8-minute timeframe) but to no avail.
I would like to install TrackMe on Splunk Enterprise 7.3.4, but the only versions available are for Splunk 8.x. Is TrackMe available for Splunk Enterprise 7.3.4?
hi

As you can see in my XML, I use an eval command to define a health status, and this eval command is linked to a time token. Now I would like to correlate the rule of my eval command with the time token. For example, if I choose "last 7 days" in my time token, the hang count has to be > 5 and the crash count > 2; but if I choose "last 30 days", the hang count has to be > 1 and the crash count > 4. How can I do this, please?

<form theme="dark">
  <search id="bib">
    <query>index=toto ((sourcetype="hang") OR (sourcetype="titi") OR (sourcetype="tutu" web_app_duration_avg_ms &gt; 7000))</query>
    <earliest>$date.earliest$</earliest>
    <latest>$date.latest$</latest>
  </search>
  <input type="time" token="date" searchWhenChanged="true">
    <label>Période</label>
    <default>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </default>
  </input>
</fieldset>
<row>
  <panel>
    <single>
      <search base="bib">
        <query>| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| eval sante=if((hang&gt;5) AND (crash&gt;2), "Etat de santé dégradé","Etat de santé acceptable")</query>
      </search>
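A minimal sketch of one way to make the thresholds follow the selected time range, using addinfo to read the search's own time boundaries (threshold values taken from the question; the 7-day cutoff is an assumption):

| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| addinfo
| eval span_days=(info_max_time - info_min_time)/86400
| eval hang_thr=if(span_days > 7, 1, 5), crash_thr=if(span_days > 7, 4, 2)
| eval sante=if(hang > hang_thr AND crash > crash_thr, "Etat de santé dégradé", "Etat de santé acceptable")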
Hi, I have a bunch of alerts in my savedsearches.conf. I would like to configure the alert action "Add to triggered alerts" (as is offered when you add the alert using the UI). I am doing this programmatically. After restarting Splunk, the alerts do not show up as alerts, but rather as reports (in the Reports tab). Is this intended behaviour by Splunk, or am I missing something? An example alert can be found below:

[generic-alert-name]
alert.expires = 120d
alert.severity = 2
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = * * * * *
description =
dispatch.earliest_time = rt-30d
dispatch.latest_time = rt-0d
display.general.type = statistics
display.page.search.tab = statistics
enablesched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = my_app
request.ui_dispatch_view = my_app
search = eventtype = "some-eventtype" | stats count by id | search count >= 4711
I want to create one Splunk alert that runs on all weekdays, pausing at Friday 11:59 PM CST and resuming at Sunday 11:59 PM CST. Can you help me with a cron expression that satisfies the above requirement? I need the alert to be set up in the CST time zone only.
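For reference, a single cron expression cannot express both the Friday-night pause and the exact Sunday 11:59 PM resume; a common approximation (assuming a 10-minute run cadence, which is a placeholder, and that the alert owner's time zone is set to CST so the schedule is interpreted in it):

# every 10 minutes, Monday (1) through Friday (5)
*/10 * * * 1-5

The last run then falls at Friday 23:50 and the first at Monday 00:00; if the Sunday 23:59 resume matters, a second copy of the alert scheduled with 59 23 * * 0 could cover it.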
I am trying to learn how to use the Splunk REST API on Splunk Cloud (SaaS). Can anyone point me to the documentation on how to configure and access it there?
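A minimal sketch of a first REST call once access is set up (on Splunk Cloud the management port 8089 is not open by default and typically has to be requested through Splunk Support; the hostname and credentials below are placeholders):

curl -k -u admin:changeme \
     "https://your-stack.splunkcloud.com:8089/services/search/jobs" \
     -d search="search index=_internal | head 5"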
I have CSV data (source: a .csv file with sourcetype=csv) which I need to update every week. The problem is that I might get duplicate data, and I don't want that. What is the best way to avoid this? I am thinking I might have to create a staging index every time, use it to compare against the actual index, insert only the new records from there, and then delete the staging index. But for this, I am not sure how I can copy partial data from one index to another. Thank you.
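One alternative worth considering: keep loading the file as-is and suppress duplicates at search time instead of maintaining a staging index; a minimal sketch (record_id is a placeholder for whatever uniquely keys a row):

index=my_csv_index sourcetype=csv
| dedup record_id sortby -_time

This keeps only the most recent event per key, which sidesteps copying partial data between indexes entirely.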
hello

In my dashboard I use 2 different time tokens. Both of these time tokens are linked to the same post-process search. Instead of creating the same post-process search twice, I would like to use a single post-process search linked to the 2 different time tokens. Is that possible, please?

<input type="time" token="timesource" searchWhenChanged="true">
  <label>Période source</label>
  <default>
    <earliest>1638313200</earliest>
    <latest>1640991600</latest>
  </default>
</input>
<input type="time" token="timetarget" searchWhenChanged="true">
  <label>Période cible</label>
  <default>
    <earliest>@mon</earliest>
    <latest>now</latest>
  </default>
</input>
<search id="hang">
  <query>index=toto sourcetype=tutu</query>
  <earliest>$timesource.earliest$</earliest>
  <latest>$timesource.latest$</latest>
</search>
<search id="hang2">
  <query>index=toto sourcetype=tutu</query>
  <earliest>$timetarget.earliest$</earliest>
  <latest>$timetarget.latest$</latest>
</search>
Hi, I am trying to calculate the duration of a call from the search below; however, it is appearing blank. The format is 01/01/2015 13:26:29.64321574:

index=test sourcetype=test
| eval created=strptime(starttime,"%Y-%m-%dT%H:%M:%S.%3N")
| eval last_time=strptime(endtime,"%Y-%m-%dT%H:%M:%S.%3N")
| eval diff=(last_time-created)
| eval diff = round(diff/60/60/24)
| table diff

Any help would be greatly appreciated.

Thanks
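A sketch of the likely fix: strptime returns null (hence the blank result) when the format string does not match the data, and 01/01/2015 13:26:29.64321574 uses slashes and a space rather than the ISO T form. Assuming month/day/year order and that %N absorbs the fractional digits (both assumptions; if %N does not match, trimming the subseconds with replace() before strptime is an alternative):

index=test sourcetype=test
| eval created=strptime(starttime, "%m/%d/%Y %H:%M:%S.%N")
| eval last_time=strptime(endtime, "%m/%d/%Y %H:%M:%S.%N")
| eval diff_secs=last_time-created
| eval duration=tostring(diff_secs, "duration")
| table starttime endtime duration

Also note that the original round(diff/60/60/24) converts to whole days, so any call shorter than half a day would come out as 0 even once the parsing works.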
Now the path, filename, and page number are shown. I only want the page number and, if possible, in the middle of the line.
Suppose I have huge data on employees, like name, department, and status (login/logout). One person can log in and log out many times in one day. I need to find the last logout time for each employee.
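A minimal SPL sketch (the index, status value, and field names are placeholders):

index=employee_logs status=logout
| stats latest(_time) AS last_logout BY name, department
| eval last_logout=strftime(last_logout, "%Y-%m-%d %H:%M:%S")

stats latest() takes the most recent matching event per group, so repeated logouts in a day collapse to the last one.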
hi, I am using the below query to list the boot-up time and downtime based on event code, but if the boot-up time shows 3 different days in a month, the calculated downtime is the same for all three days.

index="wineventlog" host IN (abc) (EventCode=6005) Type=Information
| eval BootUptime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| table host, BootUptime
| fields _time, host, BootUptime
| join type=left host
    [| search index="wineventlog" host IN (abc) EventCode=6006 OR EventCode="6005" Type=Information
     | transaction host startswith=6006 endswith=6005 maxevents=2
     | eval duration=tostring(duration,"duration")
     | eval time_taken = replace(duration,"(\d+)\:(\d+)\:(\d+)","\1h \2min \3sec")
     | rename time_taken AS Downtime]
| table _time host BootUptime Downtime

e.g.:

host    BootUptime               Downtime
abc     2022-15-01 08:15:40      00h 02min 51sec
abc     2022-20-01 03:58:22      00h 02min 51sec
abc     2022-15-01 04:34:53      00h 02min 51sec

The correct answers for downtime are 2.85 min, 2.8 min, and 3.1666666666666665 min. How do I correct it? Thanks in advance.
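A sketch of one possible fix: drop the join (which attaches a single downtime value to every row for the host) and let each shutdown/boot transaction carry its own duration. The transaction's _time is its earliest event (the 6006 shutdown), so the boot time is _time + duration (host filter kept from the question):

index="wineventlog" host IN (abc) (EventCode=6006 OR EventCode=6005) Type=Information
| transaction host startswith="EventCode=6006" endswith="EventCode=6005" maxevents=2
| eval BootUptime=strftime(_time + duration, "%Y-%m-%d %H:%M:%S")
| eval Downtime=tostring(duration, "duration")
| table host BootUptime Downtime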