All Topics

My audit logs are not being sent to Splunk. The inputs.conf file is configured to monitor everything under /var/log; please see below. Any assistance would be helpful, thanks.

[monitor:///var/log]
disabled = false
index = linux
sourcetype = linux_messages_syslog

I also tried specifying the log file directly in inputs.conf, as seen below, but still no luck.

[monitor:///var/log/audit/audit.log]
disabled = false
index = linux
sourcetype = linux_audit
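A common cause (an assumption here, since the post doesn't say which user the forwarder runs as) is file permissions: /var/log/audit/audit.log is typically readable only by root, so a forwarder running as a non-root user skips it and logs a warning in its own internal logs. A sketch of a search over those logs (the index and component names are standard, but verify against your deployment):

```spl
index=_internal sourcetype=splunkd component=TailReader log_level=WARN "/var/log/audit"
```

If this returns "insufficient permissions" style warnings, either run the forwarder as root or grant the forwarder's user read access to the audit log (for example via an ACL).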
I have gone through the forums looking for an answer to this, but nothing has worked. I am trying to convert a string to a date. I have data in an index that is extracted, with a field named Expiration_Date that contains a string that is actually a date/time, such as 5/22/2022 10:10:25 PM. I found that this query works properly:

| makeresults
| eval x="08/04/16 9:40:41 PM"
| eval y=strptime(x, "%m/%d/%y %H:%M:%S")
| eval z=strftime(y, "%m/%d/%Y")
| table x y z

This query outputs the converted time properly in the z field. However, when I try to use this with my data, as such:

index=ssl_certs
| eval x=Expiration_Date
| eval y=strptime(x, "%m/%d/%y %H:%M:%S")
| eval z=strftime(y, "%m/%d/%Y")
| table Expiration_Date, x, y, z

the x field is equal to the Expiration_Date field, but the y and z fields are empty. Is there something special I am missing here with loading the values of the Expiration_Date field into the eval statements?
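The likely cause is a format-string mismatch rather than anything about eval: the test value "08/04/16" has a two-digit year, while Expiration_Date values like "5/22/2022 10:10:25 PM" carry a four-digit year and a 12-hour clock with an AM/PM marker, so strptime with "%m/%d/%y %H:%M:%S" returns null. A sketch with a format matching the real data (index and field names taken from the question):

```spl
index=ssl_certs
| eval y=strptime(Expiration_Date, "%m/%d/%Y %I:%M:%S %p")
| eval z=strftime(y, "%m/%d/%Y")
| table Expiration_Date, y, z
```

Here %Y is the four-digit year, %I the 12-hour hour, and %p the AM/PM marker.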
When trying to change the tenant contact information on Phantom, in the multi-tenancy section of Product Settings, I get the error posted in the screenshot below when I select Save. Has anybody experienced this, and how do I resolve it?
Has anyone had good results showing a Dashboard Studio dashboard in the Splunk Mobile app? I'm getting the same result as with a Simple XML dashboard. I would like to know if it's possible to show the whole dashboard with the background image in it. Thanks.
I have a scripted input created to monitor certificate expiration. An example event:

Tue Jul 27 12:07:55 CDT 2021,/opt/splunk/etc/auth/server.pem,notAfter=Nov 29 16:58:08 2023 GMT

Splunk ingests the data using the first timestamp (Tue Jul 27 12:07:55 CDT 2021), which is not a problem. I want Splunk to recognize the portion after 'notAfter=' as a second date, so that I can sort by month, day, and year in order to report when a certificate is nearing expiration. I have a regular field extraction to include the expiration date in a table, but sorting it only sorts by the first letter of the month. Is it possible for Splunk to recognize a secondary date/timestamp, possibly through a regex?
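The usual approach is to parse the second date into an epoch value at search time and sort on that; strings sort lexically, while epoch times sort chronologically. A sketch (the index and sourcetype are placeholders, and the rex pattern assumes the notAfter format shown in the example event):

```spl
index=<your_index> sourcetype=<cert_check>
| rex ",(?<cert_path>[^,]+),notAfter="
| rex "notAfter=(?<not_after>\w{3} \d{1,2} [\d:]+ \d{4}) GMT"
| eval expires_epoch=strptime(not_after, "%b %d %H:%M:%S %Y")
| eval days_left=round((expires_epoch - now()) / 86400)
| sort expires_epoch
| table cert_path not_after days_left
```

Sorting on expires_epoch (or alerting on days_left below a threshold) avoids the month-name sorting problem entirely.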
Hi, using splunkjs we can display a search or a saved search (report). Is there a way I can display an existing dashboard in my own web app? Is there anything other than splunkjs that can be used for this? Thanks.
Hi All, I am trying to write a simple, single query that alerts when a process is down and alerts again when the same process comes back up. However, it seems there is no straightforward way. I used the query below to alert when the process is down, and it works perfectly:

| mstats latest(_value) as RSS_Memory WHERE index=telegraf metric_name=procstat.memory_rss host=<hostname> process_name=<processname> by process_name pid

However, I am seeking help writing a single query that alerts when a process is down and alerts again when the same process is up. Please help; I have been struggling with this for many days.

-- Thanks, Sarves
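One way to get both conditions from a single query is to report "minutes since the last metric datapoint" per process and derive an UP/DOWN state from it; a sketch, where the 5-minute cutoff is an arbitrary assumption to tune to your collection interval:

```spl
| mstats latest(_value) as rss WHERE index=telegraf metric_name=procstat.memory_rss host=<hostname> process_name=<processname> by process_name span=1m
| stats latest(_time) as last_seen by process_name
| eval minutes_silent=round((now() - last_seen) / 60)
| eval state=if(minutes_silent > 5, "DOWN", "UP")
| table process_name state minutes_silent
```

Note that Splunk alerts fire on a condition, not on a state change, so to alert only on DOWN-to-UP and UP-to-DOWN transitions the usual trick is to persist the previous state in a lookup (outputlookup at the end of each run, inputlookup at the start) and compare against it.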
Hello All, we currently have a single standalone deployment (indexer and search head on a single system). In addition, we have deployed a new indexer cluster (3 nodes) and a single search head. We will be migrating all of our forwarders to point to the newly deployed cluster. However, we still have data on the standalone server that has not aged out yet. Does anyone know if it is possible to configure our search head to also search the old standalone Splunk environment? Thanks.
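This is normally handled by adding the old standalone instance as a distributed search peer of the new search head (Settings > Distributed search > Search peers, or via the CLI). A conf sketch with a placeholder host name; note that the trust/key exchange between the instances is usually done through the UI or "splunk add search-server", not by editing this file alone:

```ini
# distsearch.conf on the new search head
[distributedSearch]
servers = https://old-standalone.example.com:8089
```

Once the peer is added, searches from the new search head span both the cluster and the old data until it ages out.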
Hi, I am trying to build an alert action where I have a drop-down with fixed values. But when I pass the data to the internal value, I get an error like "Internal Value can only contain alphanumeric characters and underscores." How do I resolve this issue? Can anyone help?

Internal Value: 4-Minor/Localized
Below is an excerpt from my HTTP request. I'm trying to get the User-Agent value from it and so far have not been successful; I will appreciate any help. This Splunk editor is removing the carriage return and line feed characters, so here is the regex101 link: https://regex101.com/r/rdu8yE/1. Also attached is a screenshot of the HTTP request.
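For pulling a header value out at search time, a rex that captures everything after "User-Agent:" up to the next CR/LF usually suffices; a sketch, assuming the header name appears literally in the raw event:

```spl
... | rex field=_raw "(?i)User-Agent:\s*(?<http_user_agent>[^\r\n]+)"
| table http_user_agent
```

The (?i) makes the header name match case-insensitively, and the [^\r\n]+ class stops the capture at the end of the header line.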
Hi, I have configured a couple of new hosts to forward Windows logs directly to Splunk Cloud rather than going via on-prem Splunk. I implemented this configuration on a Splunk deployment server and defined the hosts via a server class. I can see the hosts' logs appearing in Splunk but am unsure how to verify they are being ingested via Splunk Cloud rather than on-prem. Could someone advise how I can validate this? Thanks.
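One quick check is the splunk_server field, which records the indexer that stored each event; cloud indexers carry the cloud stack's naming convention while on-prem indexers carry your own host names. A sketch (the host name is a placeholder for one of the newly configured hosts):

```spl
index=* host=<new_host> earliest=-1h
| stats count by splunk_server
```

If all the counts land on indexers with cloud-style names, the hosts are sending to Splunk Cloud rather than through the on-prem tier.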
Hi experts, I am quite new to Splunk. From the example log line below:

03:23:05.056 [publish-1] INFO LoggingAuditor - [testout] TracingOutgoing: 8=FIX.4.29=90635=8115=ONMI=SOMEVENUE34=37249=BRX60256=testout 52=20210727-07:23:05.05

is it possible to somehow pull these out as column headers?

LogType=LoggingAuditor
Destination=[testout]
Direction=TracingOutgoing
SendingTime=52=20210727-07:23:05.05 (just the time)

Thanks so much!
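A rex sketch, under two assumptions: the prefix always follows the "INFO <logger> - [<dest>] <direction>:" shape shown above, and FIX tag 52 always carries a YYYYMMDD-HH:MM:SS.sss value:

```spl
... | rex "INFO\s+(?<LogType>\w+)\s+-\s+\[(?<Destination>[^\]]+)\]\s+(?<Direction>\w+):"
| rex "\b52=(?<SendingTime52>\d{8}-[\d:.]+)"
| eval SendingTime=replace(SendingTime52, "^\d{8}-", "")
| table LogType Destination Direction SendingTime
```

The final eval strips the date portion so SendingTime is just the time, as requested.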
We are running Splunk on Windows and are moving from a multisite cluster with two sites to a single site. Are there any detrimental effects from leaving the cluster master in multisite mode even though only one site is configured? We would like to avoid the down time required to switch to single site mode if possible.
Hello, I have an auditd event like:

type=EXECVE msg=audit(16): a0="sendmail" a1="-t"

I would like one field combining all the a* fields (a0, a1, a2, a3, etc.). I tried:

"type=EXECVE msg=audit(16): argc=2 a0="sendmail""
| foreach a* [ eval test = test + '<<FIELD>>' ]

but get no result. I need your help, please.
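The foreach itself is fine; the problem is that the accumulator field does not exist on the first iteration, and null plus anything stays null, so the field never materializes. Wrapping the accumulator in coalesce fixes it; a sketch, where makeresults stands in for the audit search:

```spl
| makeresults
| eval a0="sendmail", a1="-t"
| foreach a* [ eval cmdline = coalesce(cmdline,"") . " " . '<<FIELD>>' ]
| eval cmdline = trim(cmdline)
```

One caveat: foreach processes fields in field order, which is usually but not strictly guaranteed to be a0, a1, a2, ... for auditd extractions.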
It seems that the authenticationDetail resource type is no longer part of: "Sign-ins - Azure AD sign-ins including conditional access policies and MFA." After researching the issue, it seems only the beta API, NOT the v1.0 API, has the data we want. However, toggling the add-on to beta has no effect on the log structure; we still don't see the authenticationDetail resource type in the logs.

Microsoft Azure Add-on for Splunk version: 3.1.1
Splunk Enterprise 8.1

Is this a problem with the TA not having the correct Python to pull the data, or with the MS API changing? It worked in April this year.
I'm trying to find out how to get the saved searches' creation date, but I didn't see it in any documentation. Is it possible to use the rest command, or any other command, to see this info? I got only the updated field, but that's not what I need.
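As far as I know, the saved/searches REST endpoint only exposes an updated timestamp and no creation time, so there may be nothing to query directly; for reference, a sketch of the rest call showing the metadata that is available:

```spl
| rest /servicesNS/-/-/saved/searches
| table title eai:acl.app eai:acl.owner updated
```

If creation time matters going forward, one workaround is to snapshot this output to a lookup on a schedule and treat a search's first appearance there as its creation date.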
Dear Splunk community, I received an alert mail from Splunk about the need to update my Splunkbase app with the latest Add-on Builder by 31 August. If I understood the message correctly, upgrading is mandatory only if the app was built with the Add-on Builder. Actually, I've never used the Add-on Builder for packaging; I've used the Packaging Toolkit through the command line instead. Does that mean that no action is required from me? Thanks in advance for your time and support.
Hi, we have recently migrated from LEA to the Check Point Log Exporter facility to collect Check Point firewall logs in CEF format. Even after trying multiple props configurations, we still observe events breaking at irregular intervals: some events parse correctly at the start and end, and some break in between or abruptly. We even tried the Splunk add-on: https://splunkbase.splunk.com/app/4180/. Does anyone have working props?
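A props.conf sketch for CEF over syslog, under two assumptions that need checking against your raw data: each event contains a "CEF:<version>" marker near the start of the line, and the timestamp is carried in the epoch-milliseconds rt= extension. Adjust the sourcetype name to yours:

```ini
[checkpoint:cef]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=[^\r\n]*CEF:\d)
TIME_PREFIX = rt=
TIME_FORMAT = %s%3N
MAX_TIMESTAMP_LOOKAHEAD = 13
TRUNCATE = 10000
```

Disabling line merging and breaking only where the next line contains a CEF header tends to stop the irregular mid-event breaks; this must be applied on the first full (heavy forwarder/indexer) instance that parses the data.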
I'm monitoring AD and DNS server logs on Windows 2019 servers, and the Universal Forwarder has been the top resource utilization offender. Is it possible to limit the server's memory or CPU usage by the UF? I'm running UF version 8.2.1 on Windows Server 2019, with Splunk Enterprise 8.2.1 on Linux x64. The amount of DNS events is huge.

[monitor://C:\Windows\System32\Dns\dns*.log]

Thanks in advance. James \0/
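The UF has no direct CPU or memory cap of its own, but you can throttle its output throughput, which indirectly bounds how hard it works on a busy input; a limits.conf sketch, where 512 is an arbitrary example value (the UF default is 256 KB/s):

```ini
# limits.conf on the Universal Forwarder
[thruput]
maxKBps = 512
```

For a hard CPU ceiling you would have to fall back on OS-level controls (e.g. Windows processor affinity or Job Objects); note that throttling throughput on a high-volume DNS input trades CPU for ingestion lag.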
We are monitoring a directory through a 'batch' input on a HF. The directory contains hundreds of zip files, and each zip contains thousands of log files. We have a requirement to record each log file separately after it has been forwarded.

1. We tried to use splunkd.log on the HF. The "ArchiveProcessor" component does not log each file within the zip. The "Metrics" component holds statistics for the top 10 every 30 seconds, so it does not log every file. Can we change the Metrics parameters to log the top X rather than the top 10? Is there any other component we can use, even in DEBUG mode, to get that information?

2. We tried to approach this through the index itself on the indexer. The log files' timestamps are not close to 'now' but are spread across many years, so searching them through the index is very slow and inefficient. For example: (| tstats count where earliest=0 latest=now index=myindex by source)
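On the first question, the number of series metrics.log reports per interval is configurable; a limits.conf sketch for the HF, where 50 is an arbitrary example value to raise the default top 10:

```ini
# limits.conf on the HF
[metrics]
maxseries = 50
```

Even raised, metrics.log samples per interval rather than guaranteeing a line per file, so with thousands of files per zip it remains an approximation; for the index-side approach, a tstats query restricted with "where index=myindex by source" over all time is still usually far cheaper than the raw-event search, since tstats reads only index-time metadata.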