All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

I need to create a dashboard whose dropdown has two values, "Yesterday" and "Last week" (it compares today's data with yesterday's or last week's -- that part is completed). Now I need the panels to display only today's data before the user selects any input from the dropdown menu. How can I achieve that?
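A common way to handle this (a sketch in Simple XML; the token name, index, and time values are illustrative) is to add a "Today" choice to the dropdown and make it the default, so the panels have a value to render with before the user makes any selection:

```xml
<form>
  <fieldset submitButton="false">
    <input type="dropdown" token="compare_period" searchWhenChanged="true">
      <label>Compare with</label>
      <!-- "Today" is the default, so panels render before any user input -->
      <choice value="@d">Today</choice>
      <choice value="-1d@d">Yesterday</choice>
      <choice value="-7d@d">Last week</choice>
      <default>@d</default>
      <initialValue>@d</initialValue>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=my_index earliest=$compare_period$ latest=now | timechart count</query>
        </search>
      </chart>
    </panel>
  </row>
</form>
```

With both `<default>` and `<initialValue>` set, the token is populated on page load and the panel searches run immediately with today's data.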
We have email alert notifications running in Splunk, and the configuration is the same across all of the alerts, but only some of them actually send an email. We have a separate page where we can see all of the alerts, but we don't see all of them come through to our email. All of the alerts are configured the same way, as seen below. I don't understand why the email notifications only work for certain alerts when we can see all of the alerts on our dashboard and they're all configured identically.
I'm trying to uninstall an old version of the Splunk forwarder, but the MSI isn't on the machine. When attempting to uninstall, Windows asks to be pointed to the MSI and then fails because it isn't present. I looked at the older versions on the website and it only goes back to 7. Any ideas as to what I can do?
My cluster has one issue with data durability; everything else seems fine. All indexers are online and running, and even the health checks return a reasonably good result. What I noticed is that one peer has 920 buckets and the other has 919. Is that the issue? What should I do?
Hi Team, we are seeing a discrepancy between the total Splunk license usage and the per-index usage. Could you please help us with this? Our actual Splunk stack license is 50GB.
1. Index-wise license usage: one individual index shows 65.46GB, while for the same day the total usage we get is 55.42GB, as shown in the screenshots below.
2. Total license usage: this is the overall license usage for Feb 15.
Kindly assist us with a license usage query broken down by index that matches the total license usage, and indicate any changes that need to be made at the server or configuration level. @gcusello @isoutamo @PickleRick Regards, Siva.
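As a starting point for comparing the two numbers (a sketch; run it on the license master, and adjust the time window to the day in question), per-index usage can be summed from the same license_usage.log that the licensing dashboards read, with a total row appended so both figures come from one search over one time range:

```
index=_internal source=*license_usage.log* type=Usage earliest=-1d@d latest=@d
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 2)
| addcoltotals labelfield=idx label=TOTAL
```

If the TOTAL row here still disagrees with the licensing dashboard, a common cause is that the two views use different day boundaries (license-master timezone vs. search-time timezone), so check that both cover the same midnight-to-midnight window.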
I have been building KV Store lookups with the Lookup Editor, and I have noticed that when I add a line in the UI, then leave it and come back, the line is duplicated multiple times and I have to go back and delete the duplicates. This seems to happen whether I am copying and pasting or simply adding a line by hand. Has anyone else seen this issue, or am I doing something wrong? To add a line, I right-click on the row and select "add a new line above". Once I finish the data input, I leave the line to commit it. When I go to the dashboard that displays the store and refresh, I see multiple copies of the line I just added. This does not happen with CSV file lookups, just the KV Stores. Thoughts? More info?
Hi guys, I am trying to set up JavaScript code that will refresh the page after the JavaScript runs. Right now my dashboard loads, but the JavaScript runs first, the visualizations depend on it, and so things like coloring don't change. When I set the query refresh to 5 seconds, the page reloaded and all visualizations rendered correctly, but I would like to do it in a better way, and I am sure it is possible with JavaScript. I'm very much a beginner with JavaScript, though, and nothing I found here worked — the examples were all set up so that the page reloads after a button click, whereas I would like it to happen automatically. Thank you for any ideas. v.
Looking to create a report showing the uptime of all hosts in a specific index which ingest data via a UF. I would like to see, over the past 30 days, the percentage of uptime per host in index=abc. I am trying to create a metrics report showing how frequently each host logs to Splunk.
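One hedged sketch in SPL (the 1-hour granularity and the index name are assumptions): count, per host, how many hourly buckets in the last 30 days contained at least one event, and treat that as "up" time:

```
| tstats count WHERE index=abc earliest=-30d@d latest=@d BY host _time span=1h
| stats dc(_time) AS hours_seen BY host
| eval uptime_pct = round(hours_seen / (30*24) * 100, 2)
| sort - uptime_pct
```

Note that this measures "host was logging", not true OS uptime: if the forwarder stops while the host stays up, the host still counts as down for that hour.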
I have a number of log-rotated files for mail.log in the /var/log folder on a Unix system. The /var/log/mail.log file gets ingested just fine, so I know permissions aren't an issue. However, I'd also like to ingest the older data that was log-rotated; for the purpose of ingesting, those files were untarred again, so I have mail.log.1 to mail.log.4. I have tried numerous stanzas and regexes in the whitelist, but none lead to the older data getting ingested. The one I currently have in place is:

[monitor:///var/log/]
index = postfix
sourcetype = postfix_syslog
whitelist = (mail\.log$|mail\.log\.\d+)

Thanks for any suggestions in advance.
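Two things may be at play here. First, `whitelist` matches against the full path, so the stanza above should already match the rotated files; a simpler equivalent (a sketch) is a wildcard monitor:

```
[monitor:///var/log/mail.log*]
index = postfix
sourcetype = postfix_syslog
```

Second, and possibly the actual cause: Splunk identifies a file by a CRC of its first 256 bytes, so rotated copies whose beginnings were already indexed from mail.log are treated as already-seen and skipped. If the rotated files genuinely contain data that was never indexed, adding `crcSalt = <SOURCE>` to the stanza makes Splunk treat each path as a distinct file — with the caveat that it can re-index duplicate data if any of the content overlaps.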
Thanks in advance. In my scenario I want to group the results by correlationId, so I used the transaction command. The query below checks multiple conditions against the same field, called message, and I want to exclude some of the search strings. After the transaction I tried to exclude a search string, but I am not getting the result.

index="mulesoft" applicationName="concur" environment=DEV ("Concur Ondemand Started*") OR (message="Expense Extract Process started for jobName :*") OR ("Before Calling flow archive-Concur*") OR (message="Concur AP/GL File/s Process Status*") OR (message="Records Count Validation Passed*") OR (message="API: START: /v1/expense/extract/ondemand*" OR message="API: START: /v1/fin*") OR (message="Post - Expense Extract processing to Oracle*")
| transaction correlationId
| search NOT ("*Failed Processing Concur*")
| rename content.SourceFileName as SourceFileName content.JobName as JobName content.loggerPayload.archiveFileName AS ArchivedFileName content.payload{} as Response content.Region as Region content.ConcurRunId as ConcurRunId content.HeaderCount as HeaderCount content.SourceFileDTLCount as SourceFileDTLCount content.APRecordsCountStaged as APRecordsCountStaged content.GLRecordsCountStaged as GLRecordsCountStaged
| eval "FileName/JobName" = coalesce(SourceFileName, JobName)
| eval JobType = case(like('message', "%Concur Ondemand Started%"), "OnDemand", like('message', "Expense Extract Process started%"), "Scheduled", true(), "Unknown")
| eval Status = case(like('message', "%Concur AP/GL File/s Process Status%"), "SUCCESS", like('message', "%EXCEPTION%"), "ERROR")
| table correlationId "FileName/JobName" Status ArchivedFileName JobType Response Region ConcurRunId HeaderCount SourceFileDTLCount APRecordsCountStaged GLRecordsCountStaged
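If the post-transaction `search NOT (...)` isn't dropping the intended results, one hedged alternative is to exclude the failure messages in the base search, before events are merged into transactions, so no transaction ever contains them:

```
index="mulesoft" applicationName="concur" environment=DEV
    NOT message="*Failed Processing Concur*"
    ( ...existing OR list of message conditions... )
| transaction correlationId
```

The trade-off: filtering before `transaction` removes individual events, while filtering after removes whole merged transactions whose combined _raw matches. If the goal is to drop every transaction that contains a failure event, keep the filter after `transaction` but match the merged text explicitly, e.g. `| regex _raw!="Failed Processing Concur"`.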
Hi All, I have logs like the below in Splunk:

Log1: Tue Feb 25 04:00:20 2024 EST 10G 59M 1% /apps
Log2: Tue Feb 25 04:00:20 2024 EST 10G 6.4G 64% /logs
Log3: Tue Feb 25 04:00:20 2024 EST 10G 2G 20% /opt
Log4: Tue Feb 25 04:00:20 2024 EST 30G 282M 1% /var

I have used the below query to extract the required fields:

... | rex field=_raw "EST\s(?P<Total_Space>[^\s]+)\s(?P<Used_Space>[^\s]+)\s(?P<Disk_Usage>[^%]+)\%\s(?P<File_System>[^\s]+)"

Here, the extracted "Used_Space" field contains both GB and MB values, and I need to convert only the MB values to GB. Please help with a query that converts just the MB values. Your kind inputs are highly appreciated! Thank you!
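A hedged sketch of the conversion (it assumes the values always end in a single unit letter, M or G, as in the sample logs):

```
... | rex field=_raw "EST\s(?P<Total_Space>\S+)\s(?P<Used_Space>\S+)\s(?P<Disk_Usage>[^%]+)%\s(?P<File_System>\S+)"
| eval Used_Space = case(
    like(Used_Space, "%M"), round(tonumber(replace(Used_Space, "M$", "")) / 1024, 3) . "G",
    true(), Used_Space)
```

`replace` takes a regex, so `"M$"` strips the trailing unit; MB values are divided by 1024 and re-labelled with a G suffix, while existing G values pass through unchanged.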
I am trying to send logs to Splunk via the API, but within 15 minutes of adding the tenant it shows "Re-enter client secret". Also, I installed the add-on on 2 servers and the feature items differ between them, as shown in the pictures below. [Screenshots: feature items unequal on the 2 servers; "re-enter client secret" error]
Hi everyone, I would like to restart and apply the rtsearch role to my sc_admin on my free trial, but I cannot submit a ticket with the forms. Do you have any solution for me, please?
Hi, we are facing the below error when running searches on the search head. It occurs frequently and we have been unable to solve it. We have checked the bundle size and the network connectivity between the indexers and search heads; all looks good, but we are still getting the error. Please check and suggest a solution.

Unable to distribute to peer named uswaa-dopsidt01.cgdop.com at uri https://XXXX:8089 because replication was unsuccessful. ReplicationStatus: Failed - Failure info: failed_because_BUNDLE_DATA_TRANSMIT_FAILURE. Verify connectivity to the search peer, that the search peer is up, and that an adequate level of system resources are available. See the Troubleshooting Manual for more information.
Hello Splunkers, I want to monitor the below files, which are under a network folder. I have configured indexes.conf, props.conf, inputs.conf and transforms.conf, but nothing is working to get data into Splunk. Please check my config and suggest any changes that are required.

inputs.conf:

[monitor://\\WALVAU-SCADA-1\d$\CM\alarmreports\outgoing*]
disabled = false
index = scada
host = WALVAU-SCADA-1
sourcetype = cm_scada_xml

indexes.conf:

[scada]
coldPath = $SPLUNK_DB/scada/colddb
enableDataIntegrityControl = 0
enableTsidxReduction = 0
homePath = $SPLUNK_DB/scada/db
maxTotalDataSizeMB = 512000
thawedPath = $SPLUNK_DB/scada/thaweddb

props.conf:

[cm_scada_xml]
KEEP_EMPTY_VALS = false
KV_MODE = xml
LINE_BREAKER = <\/eqtext:EquipmentEvent>()
MAX_TIMESTAMP_LOOKAHEAD = 24
NO_BINARY_CHECK = true
SEDCMD-first = s/^.*<eqtext:EquipmentEvent/<eqtext:EquipmentEvent/g
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3f%Z
TIME_PREFIX = ((?<!ReceiverFmInstanceName>))<eqtext:EventTime>
TRUNCATE = 100000000
category = Custom
disabled = false
pulldown_type = true
TRANSFORMS-remove-xml-footer = remove-xml-footer
TRANSFORMS-keep-came-in-and-went-out-states = keep-came-in-and-went-out-states
FIELDALIAS-fields_scada_xml = "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.AreaID" AS area "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ElementID" AS element "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.EquipmentID" AS equipment "eqtext:EquipmentEvent.eqtext:ID.eqtext:Location.eqtext:PhysicalLocation.ZoneID" AS zone "eqtext:EquipmentEvent.eqtext:ID.eqtext:Description" AS description "eqtext:EquipmentEvent.eqtext:ID.eqtext:MIS_Address" AS mis_address "eqtext:EquipmentEvent.eqtext:Detail.State" AS state "eqtext:EquipmentEvent.eqtext:Detail.eqtext:EventTime" AS event_time "eqtext:EquipmentEvent.eqtext:Detail.eqtext:MsgNr" AS msg_nr "eqtext:EquipmentEvent.eqtext:Detail.eqtext:OperatorID" AS operator_id "eqtext:EquipmentEvent.eqtext:Detail.ErrorType" AS error_type "eqtext:EquipmentEvent.eqtext:Detail.Severity" AS severity

transforms.conf:

[remove-xml-footer]
REGEX = <\/eqtexo:EquipmentEventReport>
DEST_KEY = queue
FORMAT = nullQueue

[keep-came-in-and-went-out-states]
REGEX = <State>(?!CAME_IN|WENT_OUT).*?<\/State>
DEST_KEY = queue
FORMAT = nullQueue
I need to post some custom metrics to AppDynamics for analytics purposes, so I am trying to create a new Transaction from the application source code using the code below:

Transaction transaction = AppdynamicsAgent.getTransaction(getProcessorName().name(), null, EntryTypes.POJO, false)

However, I'm getting the following error while loading the AppdynamicsAgent class:

Caused by: java.lang.ClassNotFoundException: com.appdynamics.apm.appagent.api.NoOpInvocationHandler
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
    ... 92 common frames omitted

Does anyone know how to fix this?
Hi SMEs, logs are coming from one of our applications as one single event. How can I split them into separate log events? For example, here are 3 different logs which are tagged to one event:

type=USER_ACCT msg=audit(Thu Sep 22 09:09:09 2023.333.12221): pid=12345 uid=0 auid=424242424 ses=6535872 subj=system_u:system_r:crond_t:s0-s0:c0.c1111 msg='op=PAM:accounting grantors=pam_access,pam_faillock,pam_unix,pam_localuser acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_ACCT msg=audit(Thu Sep 22 09:09:09 2023.333.12223): pid=12345 uid=0 auid=424242424 ses=6535872 subj=system_u:system_r:crond_t:s0-s0:c0.c1111 msg='op=PAM:accounting grantors=pam_access,pam_faillock,pam_unix,pam_localuser acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
type=USER_ACCT msg=audit(Thu Sep 22 09:09:09 2023.333.12229): pid=12345 uid=0 auid=424242424 ses=6535872 subj=system_u:system_r:crond_t:s0-s0:c0.c1111 msg='op=PAM:accounting grantors=pam_access,pam_faillock,pam_unix,pam_localuser acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=success'
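Assuming the records arrive as a single block and you control the indexer or heavy forwarder, event breaking is configured at index time in props.conf for that sourcetype (the sourcetype name here is an assumption; this only affects data indexed after the change):

```
# props.conf on the indexer / heavy forwarder
[linux_audit_custom]
SHOULD_LINEMERGE = false
# Break before each "type=... msg=audit(" that starts a new audit record;
# the capture group is the whitespace consumed between events
LINE_BREAKER = ([\r\n\s]+)(?=type=\w+\s+msg=audit\()
TIME_PREFIX = msg=audit\(
MAX_TIMESTAMP_LOOKAHEAD = 40
```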
Hi SMEs, good morning. I have a situation where logs from a recently onboarded application are coming in the format below. They appear to contain JSON and should be parsed with a key:value mechanism. Any suggestions on how to fix this? Many thanks in advance.

<11>1 2024-02-27T03:22:53.376823921Z hostname-1 ipsec ipsecd[85] log - {"time":"2024-02-27T03:22:53.376823921Z","type":"log","level":"error","log":{"msg":"et_backend: connection failed while getting et keys"},"process":"ipsecd[85]","service":"ipsec","system":"hostname-1","neid":"414399","container":"784722400000","host":"hostname-1","timezone":"UAT"}
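If the syslog header (`<11>1 ... log - `) can be discarded, one hedged index-time approach (the sourcetype name is assumed) is to strip everything before the first `{` and let Splunk's JSON extraction handle the rest:

```
# props.conf
[ipsec_json]
SHOULD_LINEMERGE = false
# Drop the RFC 5424 syslog header, keeping only the JSON payload
SEDCMD-strip_header = s/^[^{]+//
KV_MODE = json
TIME_PREFIX = "time":"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%9N
```

If the header must be kept, a search-time alternative is `| rex field=_raw "(?<json>\{.*\})$" | spath input=json`.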
I am using Splunk Enterprise version 9.2.0.1 (upgraded from 9.0.5). Before the upgrade, the Splunk deployment server was working well. After Splunk DS was upgraded to version 9.2.0.1, we saw issues with the client's server class. Client name: EC2AMAZ-XXXXX
1. Client in the DS server before the upgrade (9.0.5) — server classes: UF_input_WIN, UF_output
2. Client in the DS server after the upgrade (9.2.0.1) — server classes: UF_input_Linux, UF_output
The server class "UF_input_Linux" filters only by machine type Linux (see section 3 below). I do not know why this server class is applied to this Windows client.
3. "UF_input_Linux" server class configuration
4. "UF_input_WIN" server class configuration — the client is listed in the match list on the UF_input_WIN server class.
Is this a bug? The machine-type filter does not appear to work correctly; I did not change anything in the server classes or apps when upgrading Splunk DS. Has anyone seen or resolved this issue before?
Do we have any content to detect "Moniker Link" (CVE-2024-21413)?