All Topics

Currently I am feeding Splunk Zeek logs (formerly known as Bro) via a monitor input. Some of the logs in the Zeek index are being parsed correctly; other logs, however, still appear as raw text. I remember there used to be a page in the settings where I could specify how to extract each field in an event, what to call the field, and what data belonged to it. I also remember being able to test those settings against a log of the same index/sourcetype. Any help interpreting what I am trying to communicate, or guidance toward the specific page I am looking for, is very much appreciated.
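The page described sounds like Settings > Fields > Field extractions (or the interactive Field Extractor reached from an event's "Extract New Fields" link), which also lets you test extractions against sample events. The same result can be reached in configuration files; a minimal search-time sketch, assuming tab-delimited Zeek conn logs and a placeholder sourcetype name:

# props.conf -- "zeek:conn" is a placeholder sourcetype name
[zeek:conn]
REPORT-zeek-conn-fields = zeek_conn_fields

# transforms.conf -- the field list follows the Zeek conn.log header;
# trim or extend it to match the actual logs
[zeek_conn_fields]
DELIMS = "\t"
FIELDS = "ts","uid","id.orig_h","id.orig_p","id.resp_h","id.resp_p","proto"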
Hi Splunk experts, we have some Apache Tomcat web servers that are installed on Windows, and we want to monitor those servers via the OTel Collector. However, the documentation says the configuration is only supported on Kubernetes and Linux. So, is there a way to monitor Windows Apache Tomcat servers? Please suggest! Thanks in advance. Regards, Eshwar
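One possible route, offered as an assumption rather than something the referenced docs confirm: the Collector's generic jmx receiver runs wherever the Collector and a JVM run, Windows included. A minimal sketch, assuming JMX remote is enabled on the Tomcat JVM; the jar path and port are placeholders:

receivers:
  jmx:
    jar_path: C:\otel\opentelemetry-jmx-metrics.jar  # placeholder; requires the JMX metric gatherer jar
    endpoint: localhost:9012                         # assumed Tomcat JMX port
    target_system: jvm,tomcat
    collection_interval: 10s

The receiver would still need to be wired into a metrics pipeline alongside the existing exporters.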
Created a support ticket: sendemail does not work when selected and configured in the alert config, even though the sendemail function itself works OK!?

Business Impact: Cannot respond to any "System_down/System_offline" situation. Happens rarely but is very critical to respond to.
Product Version: 9.2.0.1 / I assume it has not worked since Splunk Enterprise v9.1.2 either (not sure)
Area: Search/Index - Splunk Enterprise
Deployment Type: On-prem / small instance with only an indexer and a KV/search head active
OS: Windows Server 2019
When did you first notice the issue? Around 1/26/2024 (noticed a system_down situation on a dashboard but was not notified by email)
Did you make any changes recently? I upgraded our test server to v9.2.0.1 last week. Later I found that our production server (v9.1.2) has the same issue.

Steps to reproduce: Create an Alert_Trigger_Test (see code below):

| makeresults
| eval ATT=4
| stats max(ATT) as mincount

Then test it every 5 minutes (cron schedule) with: search mincount < 7

====================== Alert code: in savedsearches.conf (.../search/local) ======================

[Alert_trigger_1v1]
action.email = 1
action.email.cc = <your@email_address_2>
action.email.include.search = 1
action.email.inline = 1
action.email.priority = 2
action.email.sendresults = 1
action.email.to = <your@email_address_1>
action.email.useNSSubject = 1
action.lookup = 0
action.lookup.append = 1
action.lookup.filename = alerttrigger.csv
alert.digest_mode = 0
alert.expires = 1h
alert.suppress = 0
alert.track = 1
alert_condition = search mincount >7
allow_skew = 5m
counttype = custom
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
display.general.type = statistics
display.page.search.mode = verbose
display.page.search.tab = statistics
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = search
request.ui_dispatch_view = search
search = | makeresults \
| eval ATT=3\
| stats max(ATT) as mincount

====================== END code ======================

Is anyone else suffering from the same issue?

Regards, AshleyP
Hello, I have a lookup table called account_audit.csv with a timestamp field, e.g. UPDATE_DATE=01/05/24 04:49:26. How can I find all rows within that lookup with UPDATE_DATE >= 01/25/24? Any recommendations will be highly appreciated. Thank you!
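A common approach is to parse the string into epoch time with strptime and filter with where; a minimal sketch, assuming the timestamp format shown above:

| inputlookup account_audit.csv
| eval update_epoch=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S")
| where update_epoch >= strptime("01/25/24", "%m/%d/%y")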
We have data similar to the below and are trying to chart it with a line or bar graph similar to a chart we created in Excel. We have been able to calculate a duration from midnight of the date to the end time, to give each day a consistent starting point, but Splunk does not seem to want to chart the duration or a timestamp while they are strings. We can chart the value as a Unix-format date, but that isn't really human readable.

Date      System   End Time
20240209  SYSTEM1  2/9/24 10:39 PM
20240209  SYSTEM2  2/9/24 10:34 PM
20240209  SYSTEM3  2/9/24 11:08 PM
20240212  SYSTEM1  2/12/24 10:37 PM
20240212  SYSTEM2  2/12/24 10:19 PM
20240212  SYSTEM3  2/12/24 11:10 PM
20240213  SYSTEM1  2/13/24 11:19 PM
20240213  SYSTEM2  2/13/24 10:17 PM
20240213  SYSTEM3  2/13/24 11:00 PM
20240214  SYSTEM1  2/14/24 10:35 PM
20240214  SYSTEM2  2/14/24 10:23 PM
20240214  SYSTEM3  2/14/24 11:08 PM
20240215  SYSTEM1  2/15/24 10:36 PM
20240215  SYSTEM2  2/15/24 10:17 PM
20240215  SYSTEM3  2/15/24 11:03 PM
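Charting needs numeric values, so one approach is to convert End Time to epoch with strptime, subtract midnight of the same day, and chart the resulting seconds-after-midnight. A sketch, assuming the field names shown above:

| eval end_epoch=strptime('End Time', "%m/%d/%y %I:%M %p")
| eval secs_after_midnight=end_epoch - relative_time(end_epoch, "@d")
| chart values(secs_after_midnight) over Date by System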
Our Splunk implementation has SERVERNAME as a preset field, and there are servers in different locations, but there is no location field. How can I count errors by location? I envision something like this (see the sketch below) but cannot find a way to implement it:

index=some_index "some search criteria"
| eval PODNAME="ONTARIO" if SERVERNAME IN ({list of servernames})
| eval PODNAME="GEORGIA" if SERVERNAME IN ({list of servernames})
| timechart span=30min count by PODNAME

Any ideas?
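SPL's eval can express this with case() and the in() function; a sketch, where the server names are placeholders for the real lists:

index=some_index "some search criteria"
| eval PODNAME=case(
    in(SERVERNAME, "ont-srv1", "ont-srv2"), "ONTARIO",
    in(SERVERNAME, "ga-srv1", "ga-srv2"), "GEORGIA",
    true(), "OTHER")
| timechart span=30min count by PODNAME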
Hello Splunk Community, I have a requirement to exclude events for certain field values between 2AM and 3AM every day. For example, field USA has 4 values: USA = Texas, California, Washington, New York. I want to exclude the events from Washington between 2AM and 3AM; however, I want them during the remaining 23-hour period. Is there a search to achieve this?
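One way is to derive each event's hour with strftime and drop only Washington events that fall in hour 2; a sketch, with a placeholder index:

index=your_index USA=*
| eval event_hour=tonumber(strftime(_time, "%H"))
| where NOT (USA="Washington" AND event_hour=2)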
As the title suggests, I'm looking into whether it's possible to load balance Universal Forwarder hosts that are also hosting rsyslog. To ask pointedly: is anyone here doing something like this? The rsyslog config on each host is quite complex; I'm using 9 different custom ports for up to 20 different source devices. If you are curious, it's set up like this: port xxxx is used for PDUs, port cccc for switches, port vvvv for routers, etc. The Universal Forwarders then send the data directly to Splunk Cloud. It's likely not the best, and it's certainly not pretty, but it gets the job done. Currently there are two dedicated UF hosts for two physical sites. These sites are being combined into a single colo, hence the LB question. Thanks!
I have a search that gives me the total number of hits to my website and the average number of hits over a 5-day period. I need to know how to set up a Splunk alert that notifies me when the average number of hits over a 5-day period increases or decreases by 10%. I can't seem to figure this out; any help would be appreciated.
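A sketch of one interpretation: compare each day's hit count with the trailing 5-day average and alert when the deviation exceeds 10% (the index and filters are placeholders):

index=web sourcetype=access_combined
| timechart span=1d count as hits
| streamstats window=5 current=f avg(hits) as avg_5d
| eval pct_change=round(abs(hits - avg_5d) / avg_5d * 100, 1)
| where pct_change > 10

Saved as an alert, this would fire on "number of results > 0".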
Unable to fetch any data from an Ubuntu UF that should be reporting to Splunk Cloud.

1) Installed Splunk UF 9.2.0 and installed the credentials package from Splunk Cloud too.
2) Ports are open and traffic is allowed.
3) No errors in splunkd.log.
4) Currently no inputs are configured; checking data connectivity via internal logs: index=_internal source=*metrics.log*

splunkd.log shows only the warnings below:

02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.5:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21

Please assist.
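To check from the Splunk Cloud side whether the forwarder is connecting at all, one hedged sketch (the hostname value is a placeholder for the UF's actual hostname, and this assumes the indexer-side _internal data is searchable in your Cloud stack):

index=_internal source=*metrics.log* group=tcpin_connections hostname=your_uf_hostname
| stats latest(_time) as last_connected by hostname, sourceIp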
In Microsoft IIS logs, when a field is empty, a dash ( - ) is used instead of leaving the value blank. Presumably this is because IIS logs are space delimited, so otherwise there would just be consecutive spaces which might be ignored. However, even though there is something in the field, I can't search for something like cs_username="-" and get any results. Is this something Splunk is doing, where it treats the dash as a NULL? I have a dashboard where I track HTTP errors by cs_username, but when the username is not present, I can't drill down on the dash; I can only drill down on actual username values. Is there a way to make the dash an active, drillable value? I tried this, but it didn't work:

| fillnull value="-" cs_username

How can I search the cs_username field when the value is a dash?
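If the sourcetype's field extraction maps the dash to null, one workaround is to rebuild the value at search time before the stats/drilldown; a sketch, with a placeholder index and sourcetype:

index=iis sourcetype=your_iis_sourcetype
| eval cs_username=if(isnull(cs_username) OR cs_username="", "-", cs_username)
| stats count by cs_username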
Hey Splunk Gurus, one quick question: is there any way to ship all the Splunk data out from its indexers to AWS S3 buckets? The environment is Splunk Cloud. Appreciate your response. Thanks, Abhi
I have a logfile like this:

2024-02-15 09:07:47,770 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-202] The Upload Service /app1/service/site/upload failed in 0.124000 seconds, {comments=xxx-123, senderCompany=Company1, source=Web, title=Submitted via Site website, submitterType=Others, senderName=ROMAN , confirmationNumber=ND_50249-02152024, clmNumber=99900468430, name=ROAMN Claim # 99900468430 Invoice.pdf, contentType=Email}
2024-02-15 09:07:47,772 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-202] Exception from executeScript: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
---
2024-02-15 09:41:16,762 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-200] The Upload Service /app1/service/site/upload failed in 0.138000 seconds, {comments=yyy-789, senderCompany=Company2, source=Web, title=Submitted via Site website, submitterType=Public Adjuster, senderName=Tristian, confirmationNumber=ND_52233-02152024, clmNumber=99900470018, name=Tristian CLAIM #99900470018 PACKAGE.pdf, contentType=Email}
2024-02-15 09:41:16,764 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-200] Exception from executeScript: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

We need to search index=<myindex> "/app1/service/site/upload failed" and get a table with the following information:

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

The Exception is in another event line in the logfile, immediately after the line that carries the first four pieces of metadata. Both events share a session ID (the [http-nio-8080-exec-NNN] tag) and can also share the document name, but a session ID can carry multiple transactions, so it can have different names.
I created the following search for this purpose, but it returns a different DocName:

(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR (index="myindex" "Exception from executeScript")
| rex "clmNumber=(?<ClaimNumber>[^,]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^},]+)"
| rex "contentType=(?<ContentType>[^},]+)"
| rex "name=(?<DocName>[^,]+)"
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| eval EventType=if(match(_raw, "Exception from executeScript"), "Exception", "Upload Failure")
| eventstats first(EventType) as first_EventType by SessionID
| where EventType="Upload Failure"
| join type=outer SessionID
    [ search index="myindex" "Exception from executeScript"
    | rex "Exception from executeScript: (?<Exception>[^:]+)"
    | rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
    | rex "(?<ExceptionDocName>.+\.pdf)"
    | eval EventType="Exception"
    | eventstats first(EventType) as first_EventType by SessionID ]
| where EventType="Exception" OR isnull(Exception)
| table _time, ClaimNumber, SubmissionNumber, ContentType, DocName, Exception
| sort _time desc ClaimNumber

Here is the result I got:

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115105149 Duplicate Child Exception - Rakesh lease 4 already exists in the location.
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115105128 Duplicate Child Exception - Combined 4 Point signed Ramesh 399 Coral Island. disk 3 already exists in the location.

So although I am able to get the first four pieces of metadata correctly, the Exception is coming from another event in the log with the same session ID, I believe. How can we fix the search to produce the expected result? Thanks in advance.
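Because a session ID is reused across transactions, a join keyed on SessionID alone can pair a failure with an exception from a different transaction. A sketch of one alternative that pairs each failure with the exception immediately following it in the same session; the maxspan value is an assumption about how close together the two lines land:

(index="myindex" "/app1/service/site/upload failed") OR (index="myindex" "Exception from executeScript")
| rex "(?<SessionID>http-nio-8080-exec-\d+)"
| transaction SessionID startswith="upload failed" endswith="Exception from executeScript" maxspan=5s maxevents=2
| rex "clmNumber=(?<ClaimNumber>[^,}]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^,}]+)"
| rex "name=(?<DocName>[^,}]+)"
| rex "Exception from executeScript: (?<Exception>.+)"
| table _time, ClaimNumber, SubmissionNumber, DocName, Exception
| sort - _time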
Hi, I am looking to add a custom time picker to a dashboard. It is going to be a simple dropdown with options for the last 12 months (one option for each month of the past year). I have created the dropdown as per the requirement; now I am wondering how to use it in the rest of the dashboard so the dashboard updates according to the selection. Query:

| makeresults
| addinfo
| eval date=mvrange(info_min_time,info_max_time,"1mon")
| mvexpand date
| sort - date
| eval Month=strftime(date,"%b-%y")
| table Month date
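Assuming a Simple XML dashboard, one approach is to have the dropdown's change block derive earliest/latest tokens from the selected month's epoch value, then reference those tokens in each panel's search. A sketch; the token names are placeholders:

<input type="dropdown" token="sel_month">
  <label>Month</label>
  <search>
    <query>| makeresults | addinfo | eval date=mvrange(info_min_time,info_max_time,"1mon") | mvexpand date | sort - date | eval Month=strftime(date,"%b-%y") | table Month date</query>
    <earliest>-12mon@mon</earliest>
    <latest>now</latest>
  </search>
  <fieldForLabel>Month</fieldForLabel>
  <fieldForValue>date</fieldForValue>
  <change>
    <!-- assumed token names; $value$ is the selected month's epoch -->
    <eval token="month_earliest">$value$</eval>
    <eval token="month_latest">relative_time($value$, "+1mon")</eval>
  </change>
</input>

Each panel's search can then use <earliest>$month_earliest$</earliest> and <latest>$month_latest$</latest>.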
I have events like the one below that say when a particular pool member was out of rotation for a particular period of time. The ideal search would match all events containing "was down for" and the length of time, then simply average that duration and take its 95th percentile. It's probably more difficult than it seems, and I'm not sure how to approach it.

<133>Feb 13 13:01:33 slot2/US66666-CORE-LTM1.company.COM notice mcpd[8701]: 01070727:5: Pool /Common/pool-generic member /Common/servernamew006:8080 monitor status up. [ /Common/mon-xxx-prod-xxx-liveness: up ] [ was down for 0hr:0min:15sec ]

host = US66666-core-ltm1.company.com
source = /var/log/HOSTS/US66666-core-ltm1.company.com/xxx.xxx.com-syslog.log
sourcetype = syslog_alb
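A sketch: pull the three time components out with rex, convert to seconds, and aggregate (the index name is a placeholder):

index=your_index sourcetype=syslog_alb "was down for"
| rex "was down for (?<down_h>\d+)hr:(?<down_m>\d+)min:(?<down_s>\d+)sec"
| eval down_secs = down_h*3600 + down_m*60 + down_s
| stats avg(down_secs) as avg_down_secs perc95(down_secs) as p95_down_secs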
Hi, I have an index that doesn't show events anymore. Could you help me please?
In November I had a problem with MongoDB and I tried these solutions:
- https://community.splunk.com/t5/Knowledge-Management/Why-are-we-getting-these-errors-KV-Store-Process-Terminated/m-p/449940 --> doing this I noticed that the permissions of the files inside this folder had changed. Could this be the cause of the problem? This solution didn't work.
- I solved the problem doing this
Could you help me please? Thank you
Hi, I created a column chart that displays avg(totalTime) over 5-minute increments by organization. I am looking to add, in the bottom corner of the chart, the latest count for each organization. I just want to display the count at the bottom of the chart, where the legend is. How do I accomplish this?

Column chart query to graph avg(totalTime) by organization:

index | timechart span=5m avg(totalTime) as avg

Volume (where I want to display the value of the latest count on the chart above, near the legend):

index | timechart span=5m count by organization

Kindly help.
Hi, I am trying to create a column chart where, if the value is greater than 3, the column turns red, while if the value is less than or equal to 3, the column is green. Below is the search I started with:

index | timechart span=5m avg(totalTime) as avg_value limit=20
| eval threshold=3

I tried:

index | timechart span=5m avg(totalTime) as avg_value limit=20
| eval threshold=3
| eval "red"=if(avg_value > threshold, avg_value, 0)
| eval "green"=if(avg_value < threshold, avg_value, 0)
| fields - avg_value

Then I went into the source code and defined the colors, but the column chart did not change colors:

<option name="charting.fieldColors">{"red":0xFF0000,"green":0x73A550}</option>

I do not want the columns stacked. Kindly help.
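A variant that may behave better, offered as a sketch: use null() instead of 0 so below-threshold buckets don't draw zero-height columns in the other series, cover the equal-to-3 case, and keep the series names unquoted so they match fieldColors exactly:

index | timechart span=5m avg(totalTime) as avg_value limit=20
| eval red=if(avg_value > 3, avg_value, null())
| eval green=if(avg_value <= 3, avg_value, null())
| fields - avg_value

with, in the dashboard source:

<option name="charting.fieldColors">{"red": 0xFF0000, "green": 0x73A550}</option>
<option name="charting.chart.stackMode">default</option>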
Hi All, we are starting to look at application monitoring, and our first target will definitely be SAP. I can see there are a number of SAP apps on Splunkbase. Does anyone have any info comparing these, or any Splunk guides or best practices for starting to look at this? I've not worked with monitoring at this application level before, so I'm really starting from first principles and gathering as much info as possible. Thank you for reading. All the best.
Hi. I'm looking for a query/solution that will alert me when a log source is no longer sending logs. We have 4 indexes to monitor, with a lot of log sources, so keeping the log sources in an input lookup would not be a good idea, as it would have to be maintained every time a new log source is added. Thus, I am looking for a query that alerts me if any of the log sources currently configured in any of the 4 indexes goes silent for 24 hours. I would prefer not to have a lookup command in the query, since a file would have to be maintained in that scenario. The query needs to run against all currently configured log sources. Thank you.
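One lookup-free sketch using tstats, which derives the source list from what has already been indexed (the index names are placeholders, and the 30-day window is an assumed definition of "currently configured"):

| tstats latest(_time) as last_seen where earliest=-30d index IN (idx1, idx2, idx3, idx4) by index, host, source
| eval hours_silent=round((now() - last_seen) / 3600, 1)
| where hours_silent >= 24

Saved as a scheduled alert, this fires whenever any index/host/source combination seen in the last 30 days has been silent for a day or more.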