All Topics


Hi, as you can see in my XML, I use an eval command to define a health status. This eval command is tied to a time token. Now I would like to make the rule in my eval command depend on the time token. For example, if I choose "last 7 days" in my time token, hang has to be > 5 and crash > 2; but if I choose "last 30 days", hang has to be > 1 and crash > 4. How can I do this, please?

<form theme="dark">
  <search id="bib">
    <query>index=toto ((sourcetype="hang") OR (sourcetype="titi") OR (sourcetype="tutu" web_app_duration_avg_ms &gt; 7000))</query>
    <earliest>$date.earliest$</earliest>
    <latest>$date.latest$</latest>
  </search>
  <fieldset>
    <input type="time" token="date" searchWhenChanged="true">
      <label>Période</label>
      <default>
        <earliest>-7d@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <search base="bib">
          <query>| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| eval sante=if((hang&gt;5) AND (crash&gt;2), "Etat de santé dégradé", "Etat de santé acceptable")</query>
        </search>
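One way to make the thresholds follow the selected time range, sketched below on the assumption that the field and index names from the XML above are correct: use addinfo to read the search's own time boundaries, derive the range length in days, and pick the thresholds with if(). This keeps a single panel regardless of the time range chosen.

| stats count(hang_process_name) as hang, count(crash_process_name) as crash by site
| addinfo
| eval range_days = (info_max_time - info_min_time) / 86400
| eval hang_thresh = if(range_days <= 7, 5, 1)
| eval crash_thresh = if(range_days <= 7, 2, 4)
| eval sante = if(hang > hang_thresh AND crash > crash_thresh, "Etat de santé dégradé", "Etat de santé acceptable")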
Hi, I have a bunch of alerts in my savedsearches.conf. I would like to configure the alert action "Add to triggered alerts" (as offered when you add the alert via the UI). I am doing this programmatically. After restarting Splunk, the alerts do not show up as alerts, but rather as reports (in the Reports tab). Is this intended behaviour by Splunk, or am I missing something? An example alert can be found below.

[generic-alert-name]
alert.expires = 120d
alert.severity = 2
alert.suppress = 0
alert.track = 1
counttype = number of events
cron_schedule = * * * * *
description =
dispatch.earliest_time = rt-30d
dispatch.latest_time = rt-0d
display.general.type = statistics
display.page.search.tab = statistics
enablesched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = my_app
request.ui_dispatch_view = my_app
search = eventtype = "some-eventtype" | stats count by id | search count >= 4711
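For comparison, a minimal sketch of the scheduling and alert keys as documented in savedsearches.conf.spec; note the documented spelling is enableSched (camel case), and a saved search is treated as an alert only when it is both scheduled and carries alert conditions and tracking. The stanza name and search below are assumptions:

[example-alert]
enableSched = 1
cron_schedule = */5 * * * *
counttype = number of events
relation = greater than
quantity = 0
alert.track = 1
search = index=_internal log_level=ERROR | stats count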
I want to create a Splunk alert that runs on all weekdays, pauses at Friday 11:59 PM CST, and resumes at Sunday 11:59 PM CST. Can you help me with a cron expression that satisfies this requirement? I need the alert to be set up in the CST time zone only.
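A sketch, assuming the alert should fire every 15 minutes: cron's day-of-week field can restrict the schedule to Monday through Friday, which closely approximates "pause Friday night, resume Sunday night", since the first run after the pause would be shortly after midnight on Monday. Splunk evaluates cron schedules in the time zone of the user who owns the search, so that user's time zone would need to be set to CST.

*/15 * * * 1-5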
I am trying to learn how to use the Splunk REST API on Splunk Cloud (SaaS). Can anyone point me to the documentation on how to configure and access it in Splunk Cloud?
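For what it's worth, a sketch of a basic REST call against the management port, assuming REST access (port 8089) has been enabled for the stack, which on Splunk Cloud typically requires a support request; the stack name and credentials below are placeholders:

curl -k -u myuser:mypassword https://mystack.splunkcloud.com:8089/services/search/jobs -d search="search index=_internal | head 5"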
I have CSV data (source: a .csv file with sourcetype=csv) which I need to update every week. The problem is that I might get duplicate data, and I don't want that. What is the best way to avoid this? I am thinking I might have to create a staging index every time, use it to compare with the actual index, insert only new records from there, and then delete the staging index. But again, I am not sure how I can copy partial data from one index to another. Thank you.
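A simpler search-time option, sketched under the assumption that each CSV row carries a stable key field (called id here): let the duplicates be indexed but collapse them at search time, so no staging index is needed.

index=yourindex sourcetype=csv
| stats latest(_raw) as _raw, latest(_time) as _time by id

Alternatively, a plain | dedup id after the base search achieves the same for most uses.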
Hello,

In my dashboard I use two different time tokens. Both of these time tokens are linked to the same post-process search. Instead of creating the same post-process search twice, I would like to use only one post-process search linked to the two different time tokens. Is it possible, please?

<input type="time" token="timesource" searchWhenChanged="true">
  <label>Période source</label>
  <default>
    <earliest>1638313200</earliest>
    <latest>1640991600</latest>
  </default>
</input>
<input type="time" token="timetarget" searchWhenChanged="true">
  <label>Période cible</label>
  <default>
    <earliest>@mon</earliest>
    <latest>now</latest>
  </default>
</input>
<search id="hang">
  <query>index=toto sourcetype=tutu</query>
  <earliest>$timesource.earliest$</earliest>
  <latest>$timesource.latest$</latest>
</search>
<search id="hang2">
  <query>index=toto sourcetype=tutu</query>
  <earliest>$timetarget.earliest$</earliest>
  <latest>$timetarget.latest$</latest>
</search>
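A post-process search inherits the time range of its base search, so two base searches are still needed for two time ranges. One way to avoid writing the post-process query twice, sketched below (the stats clause is an assumption), is to define it once as a token in an <init> block and reference it from both post-process searches:

<init>
  <set token="pp_query">| stats count by host</set>
</init>
<search base="hang">
  <query>$pp_query$</query>
</search>
<search base="hang2">
  <query>$pp_query$</query>
</search>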
Hi, I am trying to calculate the duration of a call from the search below, however it is coming out blank. The timestamp format is 01/01/2015 13:26:29.64321574:

index=test sourcetype=test
| eval created=strptime(starttime,"%Y-%m-%dT%H:%M:%S.%3N")
| eval last_time=strptime(endtime,"%Y-%m-%dT%H:%M:%S.%3N")
| eval diff=(last_time-created)
| eval diff = round(diff/60/60/24)
| table diff

Any help would be greatly appreciated.

Thanks
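The blank result is consistent with the strptime format not matching the data: the sample value uses slashes, a space, and eight sub-second digits, while the format string expects dashes, a literal T, and three digits. A sketch of a matching format, assuming the dates are day/month/year and that %8N is accepted for the sub-second digits (some versions only document %3N/%6N/%9N):

index=test sourcetype=test
| eval created = strptime(starttime, "%d/%m/%Y %H:%M:%S.%8N")
| eval last_time = strptime(endtime, "%d/%m/%Y %H:%M:%S.%8N")
| eval diff_seconds = last_time - created
| eval diff_readable = tostring(diff_seconds, "duration")
| table diff_seconds diff_readable

Note also that round(diff/60/60/24) converts the difference to whole days, which will be 0 for most call durations.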
Now the path, filename, and page number are shown. I only want the page number and, if possible, in the middle of the line.
Suppose I have a huge dataset of employees, with name, department, and status (login/logout). One person can log in and log out many times in one day. I need to find the last logout time for each employee.
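A sketch, assuming the fields are literally named name and status and that status takes the value logout:

index=yourindex status=logout
| stats latest(_time) as last_logout by name
| eval last_logout = strftime(last_logout, "%Y-%m-%d %H:%M:%S")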
Hi, I am using the query below to list the boot-up time and downtime based on event code, but if the boot-up time shows 3 different days in a month, the calculated downtime is the same for all three days.

index="wineventlog" host IN (abc) (EventCode=6005) Type=Information
| eval BootUptime = strftime(_time, "%Y-%d-%m %H:%M:%S")
| table host, BootUptime
| fields _time, host, BootUptime
| join type=left host [| search index="wineventlog" host IN (abc) EventCode=6006 OR EventCode="6005" Type=Information
    | transaction host startswith=6006 endswith=6005 maxevents=2
    | eval duration=tostring(duration,"duration")
    | eval time_taken = replace(duration,"(\d+)\:(\d+)\:(\d+)","\1h \2min \3sec")
    | rename time_taken AS Downtime]
| table _time host BootUptime Downtime

e.g.:
host    BootUptime                Downtime
abc     2022-15-01 08:15:40       00h 02min 51sec
abc     2022-20-01 03:58:22       00h 02min 51sec
abc     2022-15-01 04:34:53       00h 02min 51sec

The correct downtimes are 2.85 min, 2.8 min, and 3.1666666666666665 min. How do I correct this? Thanks in advance.
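One likely cause: since join type=left host matches on host alone, every boot row receives the first matching downtime for that host. A sketch that drops the join entirely and derives both values from each shutdown/boot-up pair (assuming 6006 = shutdown and 6005 = startup):

index="wineventlog" host IN (abc) (EventCode=6005 OR EventCode=6006) Type=Information
| transaction host startswith="EventCode=6006" endswith="EventCode=6005" maxevents=2
| eval BootUptime = strftime(_time + duration, "%Y-%m-%d %H:%M:%S")
| eval Downtime = tostring(duration, "duration")
| table host BootUptime Downtime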
I have events like this coming from a heavy forwarder:

"geo": {"continent": "NA", "country": "UK", "city": "LONDON"}, "hostname": "xxxx xxx xxxx"

I have to override the host metadata with the hostname field from the event.

My transforms.conf:

[hostoverride]
SOURCE_KEY = hostname
REGEX = (.*)
DEST_KEY = MetaData:Host
FORMAT = host::$1

props.conf:

[sourcetypename]
...
TRANSFORMS-hostoverride = hostoverride

In some of the events I am still getting the heavy forwarder name. Thanks for the help in advance.
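One thing worth checking: index-time transforms read from _raw by default, and SOURCE_KEY = hostname would only work if hostname existed as an index-time key, which a search-time JSON field does not. A sketch that extracts the value straight from the raw text instead (the regex assumes the JSON shape shown above):

[hostoverride]
REGEX = "hostname":\s*"([^"]+)"
DEST_KEY = MetaData:Host
FORMAT = host::$1

Also note that index-time transforms only apply on the first Splunk instance that parses the event; events that arrive already parsed will keep their original host.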
Using Splunk Enterprise 8.2.4 on Windows with a deployment server. Does the deployment server remove all locally configured apps when it deploys one or more apps to a forwarder? If not, can this be configured? I want to use the deployment server and prevent local admins on the servers from creating their own configs.
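For reference, a sketch of the per-app, client-side behaviour settings in serverclass.conf on the deployment server (class, app, and whitelist names here are assumptions); the deployment server manages only the apps listed in its server classes, via settings like these:

[serverClass:windows_servers]
whitelist.0 = *

[serverClass:windows_servers:app:my_outputs_app]
restartSplunkd = true
stateOnClient = enabled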
1. How can each value be made clickable in a multivalue cell output?
2. Upon clicking, it should show the logs of success/warning/failure.
3. How can we color the cells according to the percentage of a single value in the multivalue cell, as sketched below? (Is this achievable without using JavaScript or CSS, as I don't have admin access in Splunk?)

SuccessCode=389,876 WarningCode=234, FailureCode=809 for the events. Cell color red for FailurePercent > 80%, yellow for FailurePercent 20% to 80%, green for FailurePercent 0%-20%. Below is the output table format.
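For the coloring part, Simple XML table formatting can do this without JavaScript or CSS; a sketch assuming the table has a numeric FailurePercent column:

<format type="color" field="FailurePercent">
  <colorPalette type="expression">if(value > 80, "#DC4E41", if(value > 20, "#F8BE34", "#53A051"))</colorPalette>
</format>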
I have noticed that my Splunk Enterprise 8.2.4 (all Windows) indexers are listening on TCP 9997 and forwarders are forwarding payloads in plaintext across the network, which security are naturally not happy with. So I'd like to use my PKI to issue some certificates for the indexers to start with (I'll worry about client certificates and mutual authentication down the line). I run a master, one search head, and an indexer cluster with two nodes. The guides seem clear enough on how to create the additional listener etc., but one thing is confusing me: the guide says to create the SSL listener and config under $SPLUNK_HOME/etc/system/local/inputs.conf, but on my indexers the existing listener is under etc\apps\search\local\inputs.conf.
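For what it's worth, a sketch of the SSL listener stanza as the docs describe it (the certificate path and password are placeholders). Either file location works under Splunk's configuration precedence rules; what matters is keeping a single listener definition for the port so two inputs don't compete:

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME\etc\auth\mycerts\indexer.pem
sslPassword = changeme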
Issue: if the API response returns only one message tracking event, the start date and end date for the next API call would be the same as in the previous call, and it becomes a loop; hence the script is unable to advance to the next start time and end time. Also, if the script has an issue and couldn't collect data for, let's say, 48 hours, then to catch up on new logs the script has to make 48 calls (each call collects 1 hour of messages). If the script's interval is 5 minutes, then to catch up on the latest events the script will take at least 48 calls * 5 minutes = 240 minutes (around 4 hours).
Hi, some questions... Last weekend we got an error on the indexers. It is a multisite cluster with 6<>6 indexers (6 indexers per site). Some indexers went down and the data storage went sky high; still not sure why. But when we started the indexers that were down, the data storage mostly went back to normal, except on one indexer. I noticed a lot of excess buckets... very, very many. I started removing these buckets, but it stopped at one point and never continued cleaning. Could this be because the data partition of this one indexer is full (it is at this moment in automatic detention state)? I don't see the activity on the cluster master, so it seems it is finished... but I can't start a new action to clean; it says "Previously scheduled Remove Excess Buckets is running". I tried a rolling restart (in maintenance mode), but it isn't allowed because of the "remove excess buckets is running". How can I stop this "Previously scheduled Remove Excess Buckets is running"? Thanks in advance.
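For reference, the same cleanup job can be kicked off from the CLI on the cluster manager once the stuck one clears; a sketch, with the index name as a placeholder:

splunk remove excess-buckets myindex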
I'm a very basic Splunk admin using Splunk Enterprise 8.2.4, with a deployment server pushing out our apps/configs to the forwarders. I need to install the agent onto 100 existing Windows 2016/2019 servers. I can easily script up the MSI using MECM or the like, but I'm wondering if the Splunk deployment server can push the agent itself, or if it provides a PowerShell script I could hand to my server admins to do the same from the target servers.
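If scripting it yourself, a sketch of a silent install that enrolls the forwarder with the deployment server in one step (the host name, MSI file name, and version are placeholders):

msiexec.exe /i splunkforwarder-8.2.4-x64-release.msi DEPLOYMENT_SERVER="ds.example.com:8089" AGREETOLICENSE=Yes /quiet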
Hey all, we are currently transitioning our users from local to SAML, and with this, the saved searches/KOs owned by the local users need to be reassigned, as the local accounts will soon be deleted from our environment.

What would be the best practice for this: should we just reassign all these knowledge objects to nobody, or should we assign them to their respective SAML user account equivalents?

The KOs are general use cases, so we're thinking that assigning them to nobody would be fine, but it may cause some quota hits, or some searches might not be executed if all are assigned to nobody.
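Should bulk reassignment be needed, a sketch of the per-object REST call that changes ownership (host, object, and user names below are placeholders; note the ACL endpoint requires the sharing parameter alongside owner):

curl -k -u admin:changeme https://sh.example.com:8089/servicesNS/nobody/search/saved/searches/My%20Search/acl -d owner=new_saml_user -d sharing=app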
Splunk trims leading and trailing spaces during field extraction. Normally this is desired behavior; however, for one of our fields, the leading spaces must be preserved. It concerns a file input with KV_MODE = none. The solution should also not involve any manipulation of the field value, such as inserting quotes at the beginning and end of the value, which would preserve the spaces. Any ideas are welcome - is it possible at all?
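One avenue worth testing, sketched under the assumption that the trimming comes from automatic key-value extraction rather than from the regex engine itself: with KV_MODE = none, define an explicit search-time extraction whose capture group deliberately includes the spaces (stanza, field, and delimiter below are made up):

props.conf:
[your_sourcetype]
KV_MODE = none
REPORT-padded = extract_padded_field

transforms.conf:
[extract_padded_field]
REGEX = ^([^,]*),
FORMAT = padded_field::$1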
Hello, we have a few URLs being monitored by a Splunk alert (query pasted below for reference) using the "Website Monitoring" add-on.

index=myindex sourcetype="web_ping" [| inputlookup URL.csv]
| streamstats count by response_code url
| where count>=2 and response_code>=300
| eval Timestamp=strftime(_time, "%d/%m/%Y %H:%M:%S"), Status="Failure"
| rename response_code as "HTTP Response Code" url as URL
| dedup URL
| table Timestamp "HTTP Response Code" URL Status

The problem is that we are receiving the response_code and response_time fields empty, like below:

proxy_server="" title=abc.com timed_out=False proxy_port="" url=https://abc.com total_time="" request_time="" timeout=120 response_code="" proxy_type=http

Can anyone suggest troubleshooting steps to resolve this issue?
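As a first troubleshooting step, a sketch that checks the add-on's own logging for connection errors (the source filter is an assumption; adjust to wherever the add-on logs on your instance):

index=_internal source=*web_ping* (ERROR OR WARN)

Empty response_code with timed_out=False could also point at a proxy problem, since the sample event shows proxy_type=http with an empty proxy_server.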