All Topics

Hi all, the requirement is to get all usernames, the date each username was created, and the email associated with it, like this:

username      username_created_date      email_associated
testnoob      03/22/2022                 testnoob@xxyy.com

How can I achieve this? Can you please help me?
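A possible starting point, assuming you have permission to call Splunk's REST endpoints: the `/services/authentication/users` endpoint returns each user's name and email. Note that Splunk does not store a user-creation date by default, so that column would have to come from somewhere else (e.g. an audit trail), which is why it is left out of this sketch:

```
| rest /services/authentication/users
| table title email roles
| rename title AS username, email AS email_associated
```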
We have locally created users and have just enabled Azure AD SAML auth. Is there a way to map SAML authenticated accounts (Azure AD) to existing local accounts? Or enable SSO for existing local accounts?
Hello, I have two Python scripts (hfscritps-1.py and hfscritps-2.py) that need to run inside a Splunk HF every day at 5am ET. The scripts need to import modules (import os and from datetime import date). How would I configure my Splunk HF (or the Python scripts) to perform these tasks? Any help/recommendation would be highly appreciated. Thank you.
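One possible approach, sketched under the assumption that the scripts live in an app's bin directory on the HF (the app name below is illustrative): configure them as scripted inputs in inputs.conf. The interval setting accepts a cron expression; 0 5 * * * fires at 05:00 in the server's local time zone, so the HF's clock/timezone must line up with ET:

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/inputs.conf  (app name is a placeholder)
[script://$SPLUNK_HOME/etc/apps/<your_app>/bin/hfscritps-1.py]
interval = 0 5 * * *
disabled = 0

[script://$SPLUNK_HOME/etc/apps/<your_app>/bin/hfscritps-2.py]
interval = 0 5 * * *
disabled = 0
```

Scripted inputs run with Splunk's bundled Python, where os and datetime are available; anything the scripts print to stdout is indexed unless you discard it.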
Hi folks, I'm new to Splunk and I was working on creating a dashboard for one of my applications. The dashboard is built, but when I populate it with data for the last 30 days, it returns results for only a few days (7 to 8) and the other days show 0. When I look at one of those particular days on its own, I can see that events are there. Can someone please help? My query format is as follows:

Main Query [search <subquery> ]
| timechart span=1d count as total
| sort by "_time" desc

My output is:

2022-03-22 647
2022-03-21 988
2022-03-20 279
2022-03-19 100
2022-03-18 879
2022-03-17 1169
2022-03-16 15
2022-03-15 0
2022-03-14 0
2022-03-13 0
2022-03-12 0
2022-03-11 0
2022-03-10 0
2022-03-09 0
2022-03-08 0
2022-03-07 0
2022-03-06 0
2022-03-05 0
2022-03-04 0
2022-03-03 0
2022-03-02 0
2022-03-01 0
2022-02-28 0

Before 15th March the data shows as 0, but when the same query is run for 15th March alone, events are returned. For example, with the time range 14th March 00:00 to 15th March 24:00, the same query gives:

2022-03-15 587
2022-03-14 654

These values are not populated when the last-30-days time period is selected. Kindly help with this. Thanks in advance.
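A common cause of this symptom (hedged, since the subquery itself isn't shown): subsearches are capped by default at 10,000 results and 60 seconds of runtime, so over a 30-day range the subsearch silently truncates and the outer search only matches the most recent days. One sketch for confirming and, if needed, raising the cap:

```
First, check whether the subsearch nears the cap over 30 days:

    <subquery> earliest=-30d | stats count

If the count is at or near 10000, one option is raising the limits
in limits.conf on the search head (values here are illustrative):

    [subsearch]
    maxout = 50000
    maxtime = 120
```

Restructuring to avoid the subsearch entirely (e.g. a single search with a stats-based filter) is usually the more scalable fix when the inner result set is large.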
What do I need to add to this search to keep only rows where Need >= 60 minutes (i.e. | where Need >= 60min)?

| tstats max(_indextime) AS Late where earliest=-24h latest=now (index=bluff) by sourcetype
| eval CurrentTime=now()
| eval Need = CurrentTime - Late, LastIngestionTime=strftime(Late,"%Y/%m/%d %H:%M:%S %Z"), CurrentTime =strftime(CurrentTime,"%Y/%m/%d %H:%M:%S %Z")
| table sourcetype, LastIngestionTime, CurrentTime, Need
| rename LastIngestionTime as "Last", CurrentTime AS "Search time", Need AS "Latency in Minutes"
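One way to sketch this: Need is a difference of epoch times in seconds, and SPL has no 60min literal, so convert Need to minutes first and then filter with a plain numeric where clause before the table/rename:

```
| tstats max(_indextime) AS Late where earliest=-24h latest=now index=bluff by sourcetype
| eval CurrentTime=now()
| eval Need=round((CurrentTime-Late)/60)
| where Need >= 60
| eval LastIngestionTime=strftime(Late,"%Y/%m/%d %H:%M:%S %Z"), CurrentTime=strftime(CurrentTime,"%Y/%m/%d %H:%M:%S %Z")
| table sourcetype, LastIngestionTime, CurrentTime, Need
| rename LastIngestionTime AS "Last", CurrentTime AS "Search time", Need AS "Latency in Minutes"
```

The where clause has to come after the eval that computes Need and before the rename, since renaming the field would make the reference to Need invalid.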
Hello, I am working on an old box that failed to upgrade to 8.2.x. We need to roll back to 8.0.3. I was trying to find the download for Linux on https://www.splunk.com/en_us/download/previous-releases.html?locale=en_us, but I cannot find it anymore. I know we should be upgrading to a higher version by now, but we need to downgrade after a VM image didn't revert properly. Any help? Thanks!
Hello all, I have installed a universal forwarder on our database servers and now want to create a weekly report covering database operations, for example table deletions, database modifications, etc. Do I need to install any app? Currently the forwarders are configured only to collect Windows events. Regards
Following the override documentation, I am confused... When creating an override and the pop-up box appears, do you select the name of the person taking over your on-call, or do you create the override in your own name and then have it assigned to another person, given that I am not a Global Admin? Thanks, BME1
Query 1: (index=iks) "Procces started" | timechart count span=1d
Query 2: (index=iks) "Procces finished" | timechart count span=1d

I want to display the result of Query 1 minus Query 2 for each day.
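One way to sketch this in a single search (keeping the original "Procces" spelling, since that appears to be the literal string in the events): pull both event types, classify each one, pivot with timechart, then subtract the two columns:

```
(index=iks) ("Procces started" OR "Procces finished")
| eval status=if(match(_raw,"Procces started"),"started","finished")
| timechart span=1d count by status
| eval diff='started'-'finished'
```

The single quotes around 'started' and 'finished' tell eval to treat them as field names rather than string literals.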
Hi, I currently have Windows event logs ingesting, and they are all being rendered as XML. Logs are being parsed at the indexer; no HF is involved. I have Windows TA 8.4.0 installed and pushed to all indexers, and I know this comes with default SEDCMD commands in the default props.conf file.

What I am trying to achieve is to entirely overwrite the 'Message' field of XmlWinEventLog:Security logs with a blank field. This is to reduce license consumption, as the majority of the content within the Message field already appears earlier in the same log and is essentially just duplicated content.

Anyway, I have transferred the relevant SEDCMD lines to a local props.conf file, however the filters did not work, even after pushing. I believe this is because the logs are in XML format and not the native format, but I am happy to be corrected if I am wrong. The current config I am running in local/props.conf is as follows:

[source::WinEventLog:Security]
SEDCMD-windows_security_event_formater = s/(?m)(^\s+[^:]+\:)\s+-?$/\1/g
SEDCMD-windows_security_event_formater_null_sid_id = s/(?m)(:)(\s+NULL SID)$/\1/g s/(?m)(ID:)(\s+0x0)$/\1/g
SEDCMD-cleansrcip = s/(Source Network Address: (\:\:1|127\.0\.0\.1))/Source Network Address:/
SEDCMD-cleansrcport = s/(Source Port:\s*0)/Source Port:/
SEDCMD-remove_ffff = s/::ffff://g
SEDCMD-clean_info_text_from_winsecurity_events_certificate_information = s/Certificate information is only[\S\s\r\n]+$//g
SEDCMD-clean_info_text_from_winsecurity_events_token_elevation_type = s/Token Elevation Type indicates[\S\s\r\n]+$//g
SEDCMD-clean_info_text_from_winsecurity_events_this_event = s/This event is generated[\S\s\r\n]+$//g
#For XmlWinEventLog:Security
SEDCMD-cleanxmlsrcport = s/<Data Name='IpPort'>0<\/Data>/<Data Name='IpPort'><\/Data>/
SEDCMD-cleanxmlsrcip = s/<Data Name='IpAddress'>(\:\:1|127\.0\.0\.1)<\/Data>/<Data Name='IpAddress'><\/Data>/
SEDCMD-cleanxmlseclogs = s/<Message>[\S\s\r\n]+<\/Message>/<Message></Message>

I have left some of the default lines in for WinEventLog:Security for no reason other than to test. I have added the cleanxmlseclogs line at the end. It is there that I am trying to detect the whole Message field and then overwrite it with just the headers, so that the content of the field gets dropped. Can anyone assist with where I am going wrong here?
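A hedged observation, worth verifying against how the data is actually tagged in your environment: XML-rendered Security events typically arrive with source XmlWinEventLog:Security (not WinEventLog:Security), so a [source::WinEventLog:Security] stanza would never match them. A second issue is that the replacement side of the cleanxmlseclogs rule contains unescaped forward slashes, which breaks the sed expression. One sketch that addresses both, with the closing tag escaped and the quantifier made non-greedy so it cannot swallow text past the first closing tag:

```
# local/props.conf -- stanza name must match how your XML events are tagged
[source::XmlWinEventLog:Security]
SEDCMD-cleanxmlsrcport = s/<Data Name='IpPort'>0<\/Data>/<Data Name='IpPort'><\/Data>/
SEDCMD-cleanxmlsrcip = s/<Data Name='IpAddress'>(\:\:1|127\.0\.0\.1)<\/Data>/<Data Name='IpAddress'><\/Data>/
SEDCMD-cleanxmlseclogs = s/<Message>[\S\s]+?<\/Message>/<Message><\/Message>/
```

Since SEDCMD is applied at index time, it only affects newly indexed events after the config is pushed and splunkd restarted; already-indexed events are unchanged.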
Hi all, I would like some help with a wrong time value in the Threat Intelligence KV Store lookup "ip_intel". Each entry has a value of "1970/01/20 02:45:00" or similar; the date is the same for all of them. I assume this is an issue with parsing epoch time, but I am having a hard time identifying how it could be fixed. I would be happy with even the approximate time of upload to "ip_intel". If anyone has suggestions I would appreciate it. Thanks
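A hedged diagnosis: a date around 1970/01/20 is what you get when an epoch-seconds value has been divided by 1000 somewhere in the pipeline (about 1,650,000 seconds after the epoch lands on 20 January 1970). Assuming the lookup's time field holds such a truncated value, one sketch for rendering it readably at search time, without modifying the stored data:

```
| inputlookup ip_intel
| eval readable_time=strftime(time*1000, "%Y/%m/%d %H:%M:%S")
```

The field name time is an assumption here; check the actual field names in the lookup first, and verify the multiplier against a known-good entry before relying on it.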
Hi everyone, I'm pretty new to Splunk and would really appreciate your insight on my current project. I am creating a dashboard where I want to use a timepicker to change the values in my charts depending on the time period the user selects via Date Range - Between. I am currently having problems formatting my _time value to include DATE and eventHour together. Below is my search query for reference. Thank you in advance.

index=mainframe-platform sourcetype="mainframe:cecmaverage" EXPRSSN = D7X0
| dedup DATE EXPRSSN MIPS
| eval DATE=strftime(strptime(DATE,"%d%b%Y"),"%Y-%m-%d")
| eval HOUR=if (isnull(HOUR),"0",HOUR)
| eval eventHour=substr("0".HOUR,-2,2).":00:00"
| eval _time=strptime(DATE." ".eventHour,"%Y-%m-%d %H:%M:%S")
| table DATE eventHour _time EXPRSSN MIPS
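One hedged observation: _time is stored as an epoch number, and a table command renders it as that raw number rather than a formatted timestamp. If the goal is simply to display date and hour together while keeping _time numeric for charting and the timepicker, a fieldformat at the end of the search is one sketch (it changes only the rendering, not the value):

```
| fieldformat _time=strftime(_time,"%Y-%m-%d %H:%M:%S")
```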
Hi, I have events with 3 fields: timestamp, servername, cpu_usage:

22-Mar-2022 00:00:00, server1, 18
23-Mar-2022 00:01:00, server1, 82
22-Mar-2022 00:00:00, server2, 78
23-Mar-2022 00:01:00, server2, 14

I want to calculate the difference between the 2nd and 1st event for each server. Can you please suggest how this can be done?
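One sketch, assuming the fields are already extracted as servername and cpu_usage: sort each server's events into time order, carry the previous value forward with streamstats (current=f makes last() refer to the preceding event, not the current one), and subtract:

```
| sort 0 servername _time
| streamstats current=f last(cpu_usage) AS prev_cpu by servername
| eval diff=cpu_usage-prev_cpu
| where isnotnull(diff)
| table servername _time cpu_usage prev_cpu diff
```

With the sample data above, this would give server1 a diff of 64 and server2 a diff of -64 on the second event.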
I have a scheduled report that runs once every 12 hours, but each time it runs it generates the same email alert multiple times. Is there any way to compress/throttle this to just one report/email?

| tstats min(_time) as first_time max(_time) as last_time values(sourcetype) where TERM(121.121.1.165) OR TERM(876.234.11.214) OR TERM(192.176.30.196) by index
| convert ctime(first_time) ctime(last_time)
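If this is configured as an alert, the built-in throttle is one option: enable Throttle in the alert's UI with a 12-hour suppression window, or equivalently in savedsearches.conf (a sketch; the stanza name is a placeholder for your alert's actual name):

```
# savedsearches.conf
[My ingestion alert]
alert.suppress = 1
alert.suppress.period = 12h
```

Also worth checking: if the trigger mode is set to "For each result" rather than "Once", the alert sends one email per result row, which by itself produces the multiple-email behavior described.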
Hi, I have 2 indexes and I am performing a join between them to get the top 10 categories per region. Categories come from one index and region comes from the other. I am able to perform the join, but I am unable to incorporate the top function to get the top 10 categories per region. Here is my query:

Can you please help? Many thanks, Patrick
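Since the actual query didn't come through, here is a generic sketch only (index names and the fields id, category, and region are all assumptions): top itself can't do "top N per group", but counting per region/category pair, sorting, and ranking with streamstats gets the same result:

```
index=index_a
| join type=inner id
    [ search index=index_b | fields id region ]
| stats count by region category
| sort 0 region -count
| streamstats count AS rank by region
| where rank <= 10
```

The streamstats counter resets on each new region because of the by clause, so rank <= 10 keeps exactly the ten most frequent categories within each region.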
I am able to perform a search for disk space and can see the results. However, I am not getting an alert when I set it up in the alert option. Below are the settings I have used:

Search:

index=perfmon host=XXXXXX OR host=YYYYYYY sourcetype="Perfmon:LogicalDisk" counter="% Free Space" instance="C:" OR instance="D:" OR instance="E:" Value earliest=-1m latest=now
| dedup instance host
| sort host
| eval Value=round(Value,0)
| where Value<50
| stats list(host), list(instance), list(Value)
| rename list(host) as Servers, list(instance) as Drives, list(Value) as FreeSpaceLeft%

Cron expression:

*/5 * * * *

Trigger alert condition:

search Value <= 50

Can you please help me with where it went wrong? I am not getting an alert for this condition.
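A hedged guess at the failure: the custom trigger condition (search Value <= 50) runs against the final result table, but by that point stats/rename has replaced Value with the column FreeSpaceLeft%, so the condition never matches anything. Since the search already filters with where Value<50, one sketch is to keep the search as-is and trigger on any results at all (trigger condition "Number of Results is greater than 0"):

```
index=perfmon (host=XXXXXX OR host=YYYYYYY) sourcetype="Perfmon:LogicalDisk"
    counter="% Free Space" (instance="C:" OR instance="D:" OR instance="E:")
    earliest=-1m latest=now
| dedup instance host
| eval Value=round(Value,0)
| where Value<50
| stats list(host) AS Servers, list(instance) AS Drives, list(Value) AS "FreeSpaceLeft%"
```

The added parentheses around the OR terms are also worth checking: without them, the bare OR in the original search may group with the neighboring terms differently than intended.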
Hi, I have 2 problems with my eval clause below.

1) When I look at the events collected, they don't correspond to the specified domain and URL, so the sum on the field tpscap is wrong:

| eval tpscap =if(domain="stm" AND url="*%g6_%*" OR url="*WS_STOMV2_H55*" AND web_dura > 50, 1, 0)
| chart sum(tpscap) as tps

So what is wrong, please?

2)

Thanks
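Two hedged observations on the eval: first, AND binds tighter than OR, so without parentheses the condition groups as (domain AND url1) OR (url2 AND web_dura), which matches far more events than intended; second, = in eval is an exact string comparison, so the * wildcards are taken literally (wildcards only work that way in the base search). One sketch with explicit grouping and regex matching, assuming %g6_% and WS_STOMV2_H55 are meant as literal substrings of url:

```
| eval tpscap=if(domain="stm" AND (match(url,"%g6_%") OR match(url,"WS_STOMV2_H55")) AND web_dura>50, 1, 0)
| chart sum(tpscap) AS tps
```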
I want Splunk to show me the geolocation of incoming traffic. As everyone knows, syslog lines can vary a lot, so nothing is parsed besides the time and date. After downloading a day's worth of syslog traffic and using "extract fields" to highlight the IP address needed to find the location, it was possible to see on the world map where the traffic came from. What I need is this exact feature, but for real-time data: I want to see this information from the syslog feed in real time. So far it hasn't worked, and I don't know how to fix it. I use the same search on both the real-time syslog and the downloaded syslog file, but it only works with the downloaded file:

index=_* OR index=* sourcetype=syslog | iplocation clientip | geostats count by Country
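A hedged guess at the cause: the "extract fields" workflow creates the clientip extraction only in the app/sourcetype context of the uploaded file, so the live syslog events may not have a clientip field at all. One sketch that sidesteps this by extracting the first IPv4 address inline with rex (the regex is deliberately loose and will also match invalid octets; tighten it if that matters):

```
index=* sourcetype=syslog
| rex "(?<clientip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| iplocation clientip
| geostats count by Country
```

If this works on the live feed, the cleaner long-term fix is to promote the field extraction to the right sourcetype with global sharing so the same clientip field exists in both contexts.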
Hi, I have a dropdown with 3 options. When I select one of the options, the value should be stored in the token and passed to a base search. However, on the panel that uses this base search, the input never appears to be understood. Here is the XML code for the dropdown, base search, and panel:

Can you please help? Many thanks, Patrick
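The XML didn't come through, so here is only a generic Simple XML sketch (the token name, choices, and query are all assumptions). The usual shape is: the dropdown sets a token, a standalone search with an id consumes the token with $...$ substitution, and the panel references that search via base= (a panel search with base= cannot use its own tokens in the inherited portion, only post-process commands):

```xml
<fieldset>
  <input type="dropdown" token="env">
    <label>Environment</label>
    <choice value="prod">Production</choice>
    <choice value="test">Test</choice>
    <choice value="dev">Dev</choice>
  </input>
</fieldset>

<search id="base_search">
  <query>index=main environment=$env$ | stats count by host</query>
</search>

<row>
  <panel>
    <table>
      <search base="base_search">
        <query>| sort - count</query>
      </search>
    </table>
  </panel>
</row>
```

A common pitfall to check against your XML: if the $env$ reference sits in the panel's post-process query instead of the base search, the substitution happens in a context where the token may not yet be set, so setting a default on the input (or putting the token in the base query as above) is worth trying.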
Hello. Given these logs:

2022-03-16 16:08:43.991 traceId="7890" svc="Service1" duration=132
2022-03-16 16:10:43.279 traceId="1234" svc="Service1" duration=132
2022-03-16 16:38:43.281 traceId="5678" svc="Service3" duration=219
2022-03-16 16:43:43.284 traceId="1234" svc="Service2" duration=320
2022-03-16 17:03:44.010 traceId="1234" svc="Service2" duration=1023
2022-03-16 17:04:44.299 traceId="5678" svc="Service3" duration=822
2022-03-16 17:19:44.579 traceId="5678" svc="Service2" duration=340
2022-03-16 17:32:44.928 traceId="1234" svc="Service1" duration=543

I would like, in a single search, to:
1. extract all traceIds that occurred between 17:00 and 17:05
2. search for the captured traceIds in a larger range (say between 16:00 and 18:00)

Is that possible? Thank you!
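Yes, a subsearch is one way to sketch it (the index name below is an assumption): the inner search collects the distinct traceIds seen in the 17:00-17:05 window, and Splunk expands them into an OR filter for the outer search over the wider window:

```
index=app earliest="03/16/2022:16:00:00" latest="03/16/2022:18:00:00"
    [ search index=app earliest="03/16/2022:17:00:00" latest="03/16/2022:17:05:00"
      | stats count by traceId
      | fields traceId ]
```

With the sample logs above, the inner window contains traceIds 1234 and 5678, so the outer search would return every event for those two traceIds across the full 16:00-18:00 range. The subsearch time qualifiers override the outer ones only inside the brackets.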