All Topics


Hi, I want to know if there are any resources available to get a notification, or some other way to know, when a new Splunk Enterprise version is released. This could be through mail, an RSS feed, or something similar. I already know that this one exists: https://www.splunk.com/page/release_rss But it is not up to date. Thanks, Zarge
query:
|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
|eval SuccessCount=if(Total_Status="0", count, Success), FailedCount=if(Total_Status!="0", count, Failed)

OUTPUT:
Total_Status  _time             count  FailedCount  SuccessCount
0             2022-01-12 13:30  100                 100
0             2022-01-12 13:00  200                 200
0             2022-01-13 11:30  110                 110
500           2022-01-13 11:00  2      2
500           2022-01-11 10:30  4      4
500           2022-01-11 10:00  8      8

But I want the output as shown in the table below:
_time       SuccessCount  FailedCount
2022-01-13  110           2
2022-01-12  300           0
2022-01-11  0             12
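One possible approach, sketched under the assumption that Total_Status="0" marks a success and any other status a failure: bin events by day, turn the per-status count into success/failure columns, then sum per day. The index, host, and source values are carried over from the question.

|tstats count where index=new_index host=new-host source=https://itcsr.welcome.com/logs* by PREFIX(status:) _time
|rename status: as Total_Status
|where isnotnull(Total_Status)
| bin _time span=1d ``` roll everything up to the day ```
| eval SuccessCount=if(Total_Status="0", count, 0), FailedCount=if(Total_Status!="0", count, 0)
| stats sum(SuccessCount) as SuccessCount, sum(FailedCount) as FailedCount by _time
| sort - _time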
I am trying to understand why I get a different total event count for the following searches:
1. index=some_specific_index (returns a total of 7,601,134 events)
2. | tstats count where index=some_specific_index (returns 7,593,248)
I have the same date and time range set when I run each query. I understand why tstats and stats have different values.
Hi, I want a search query to fetch PCF application instances and their event messages, such as start, stop, and crash, along with the reason. Can anyone help me with a query to fetch this? Thanks, Abhigyan.
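A very rough sketch of a starting point, with the caveat that every name in it is an assumption: the index, sourcetype, and field names depend on which PCF/Cloud Foundry add-on is feeding Splunk, so adjust them to match your data.

index=pcf sourcetype=cf:logmessage ``` index and sourcetype are assumptions; match your nozzle/add-on config ```
    ("Starting app" OR "Stopping app" OR "CRASHED")
| rex "exit_description[\"=:\s]+(?<reason>[^,\"}]+)" ``` hypothetical pattern for the crash reason ```
| table _time, cf_app_name, msg, reason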
Register now at hitslot >>>> https://a6q6.short.gy/SesuapNasi hitslot offers a variety of the best online games as well as the newest online game events of 2024. If you want to try the best online games of 2024, you can try them now, only on the hitslot site, with the best design and events.
CHECK_METHOD = modtime is not working as expected due to a regression in 9.x: a wrong calculation leads to unexpected re-reading of a file. Until the next patch, use the following workaround for inputs with CHECK_METHOD = modtime. In inputs.conf, set the following for the impacted stanza:

time_before_close = 0
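For illustration, here is what an impacted stanza might look like with the workaround applied; the monitored path is a hypothetical example.

# inputs.conf -- the path below is a placeholder, keep your existing stanza name
[monitor:///var/log/example/app.log]
CHECK_METHOD = modtime
time_before_close = 0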
Created 2 dropdowns in a dashboard:
1. Country
2. Applications (getting data from a .csv file)

In the Applications dropdown I am seeing the individual applications, but I also need an "All" option in the dropdown. How can I do it?

<input type="radio" token="country">
  <label>Country</label>
  <choice value="india">India</choice>
  <choice value="australian">Australian</choice>
  <default>india</default>
  <initialValue>india</initialValue>
  <change>
    <condition label="India">
      <set token="sorc">callsource</set>
    </condition>
    <condition label="Australian">
      <set token="sorc">callsource2</set>
    </condition>
  </change>
</input>
<input type="dropdown" token="application" searchWhenChanged="false">
  <label>Application</label>
  <fieldForLabel>application_Succ</fieldForLabel>
  <fieldForValue>application_Fail</fieldForValue>
  <search>
    <query>|inputlookup application_lists.csv |search country=$country$ |sort country application_Succ |fields application_Succ application_Fail</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>
</fieldset>
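One common approach, sketched under the assumption that the $application$ token is used in a wildcard-friendly match in the panel searches: add a static "All" choice with value * ahead of the dynamic, lookup-driven choices.

<input type="dropdown" token="application" searchWhenChanged="false">
  <label>Application</label>
  <!-- static choice rendered before the lookup-driven choices -->
  <choice value="*">All</choice>
  <default>*</default>
  <fieldForLabel>application_Succ</fieldForLabel>
  <fieldForValue>application_Fail</fieldForValue>
  <search>
    <query>|inputlookup application_lists.csv |search country=$country$ |sort country application_Succ |fields application_Succ application_Fail</query>
    <earliest>-15m</earliest>
    <latest>now</latest>
  </search>
</input>

For this to behave as "All", the consuming search needs to treat * as a wildcard, e.g. application_Fail=$application$.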
Currently I am feeding Zeek logs (formerly known as Bro) into Splunk via the monitor input. Some of the logs in the Zeek index are being parsed correctly. Other logs, however, are still appearing as raw text. I remember that in the past there was a certain link in the settings where I could specify how to extract each field in an event, what to call the field, and what data belonged to it. I also remember being able to test the specific settings I was applying against a log of the same index/sourcetype. Any help interpreting what I am trying to communicate, or guidance toward finding that specific page, is very much appreciated.
Hi Splunk experts, We have some Apache Tomcat web servers which are installed on Windows, and we want to monitor those servers via the OTel collector, but the documentation says the configuration is only supported on Kubernetes and Linux. So, is there a way we can monitor Windows Apache Tomcat servers? Please suggest! Thanks in advance. Regards, Eshwar
Created a support ticket: Sendemail does not work if selected and set in the alert config, but the sendemail function itself is working OK!?

Business Impact: Cannot respond to any "System_down/System_offline" situation. Happens not very often, but very critical to respond to.
Product Version: 9.2.0.1 / I assume it has not worked since Splunk Enterprise v9.1.2 either (not sure)
Area: Search/Index - Splunk Enterprise
Deployment Type: On-prem / small instance with only indexer, KV store, and search head active
OS: Windows Server 2019
When did you first notice the issue? Somewhere around 1/26/2024 (noticed a system_down situation on a dashboard but was not notified by email)
Did you make any changes recently? I upgraded our test server to v9.2.0.1 last week. Later I found that our production server (v9.1.2) has the same issue.

Steps to reproduce: You can create an Alert_Trigger_Test (see code below):

| makeresults
| eval ATT=4
| stats max(ATT) as mincount

Then test it every 5 minutes (cron schedule) by: search mincount < 7

====================== Alert code: in savedsearches.conf (.../search/local) =====================
[Alert_trigger_1v1]
action.email = 1
action.email.cc = <your@email_address_2>
action.email.include.search = 1
action.email.inline = 1
action.email.priority = 2
action.email.sendresults = 1
action.email.to = <your@email_address_1>
action.email.useNSSubject = 1
action.lookup = 0
action.lookup.append = 1
action.lookup.filename = alerttrigger.csv
alert.digest_mode = 0
alert.expires = 1h
alert.suppress = 0
alert.track = 1
alert_condition = search mincount >7
allow_skew = 5m
counttype = custom
cron_schedule = */5 * * * *
dispatch.earliest_time = -5m
dispatch.latest_time = now
display.general.type = statistics
display.page.search.mode = verbose
display.page.search.tab = statistics
enableSched = 1
quantity = 0
relation = greater than
request.ui_dispatch_app = search
request.ui_dispatch_view = search
search = | makeresults \
| eval ATT=3\
| stats max(ATT) as mincount
======================= END code ===================

Is anyone else suffering from the same issue?
regards
AshleyP
Hello, I have a lookup table called account_audit.csv with a timestamp field like UPDATE_DATE=01/05/24 04:49:26. How can I find all rows in that lookup with UPDATE_DATE >= 01/25/24? Any recommendations will be highly appreciated. Thank you!
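A minimal sketch, assuming UPDATE_DATE always uses the month/day/year format shown in the question: parse the string to epoch time with strptime, then filter numerically.

| inputlookup account_audit.csv
| eval update_epoch=strptime(UPDATE_DATE, "%m/%d/%y %H:%M:%S") ``` string -> epoch seconds ```
| where update_epoch >= strptime("01/25/24 00:00:00", "%m/%d/%y %H:%M:%S")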
We have data similar to the table below and are trying to chart it with a line or bar graph, similar to a chart we created in Excel. We have been able to calculate a duration from midnight on the date to the end time, to give a consistent starting point for each row, but Splunk does not seem to want to chart the duration or a timestamp because they are strings. We can chart it as a numeric value like a Unix-format date, but that isn't really human readable.

Date      System   End Time
20240209  SYSTEM1  2/9/24 10:39 PM
20240209  SYSTEM2  2/9/24 10:34 PM
20240209  SYSTEM3  2/9/24 11:08 PM
20240212  SYSTEM1  2/12/24 10:37 PM
20240212  SYSTEM2  2/12/24 10:19 PM
20240212  SYSTEM3  2/12/24 11:10 PM
20240213  SYSTEM1  2/13/24 11:19 PM
20240213  SYSTEM2  2/13/24 10:17 PM
20240213  SYSTEM3  2/13/24 11:00 PM
20240214  SYSTEM1  2/14/24 10:35 PM
20240214  SYSTEM2  2/14/24 10:23 PM
20240214  SYSTEM3  2/14/24 11:08 PM
20240215  SYSTEM1  2/15/24 10:36 PM
20240215  SYSTEM2  2/15/24 10:17 PM
20240215  SYSTEM3  2/15/24 11:03 PM
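A sketch of one approach, assuming the field names Date, System, and End Time from the table above: convert End Time to seconds since midnight so the y-axis value is a number rather than a string, which makes it chartable.

| eval end_epoch=strptime('End Time', "%m/%d/%y %I:%M %p")
| eval secs_since_midnight=end_epoch - relative_time(end_epoch, "@d") ``` numeric y-axis value ```
| chart latest(secs_since_midnight) over Date by System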
Our Splunk implementation has SERVERNAME as a preset field, and there are servers in different locations, but there is no location field. How can I count errors by location? I envision something like the following but cannot find a way to implement it:

index=some_index "some search criteria"
| eval PODNAME="ONTARIO" if SERVERNAME IN ({list of servernames})
| eval PODNAME="GEORGIA" if SERVERNAME IN ({list of servernames})
| timechart span=30min count by PODNAME

Any ideas?
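A minimal sketch of one way to express this with eval case(); the server names below are hypothetical placeholders for the real lists.

index=some_index "some search criteria"
``` server names below are hypothetical placeholders ```
| eval PODNAME=case(SERVERNAME IN ("ont-srv1", "ont-srv2"), "ONTARIO", SERVERNAME IN ("ga-srv1", "ga-srv2"), "GEORGIA", true(), "OTHER")
| timechart span=30min count by PODNAME

If the lists are long, a lookup mapping SERVERNAME to PODNAME would scale better than an inline case().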
Hello Splunk Community, I have a requirement to exclude events for certain field values between 2AM-3AM every day. For example, the field USA has 4 values: USA = Texas, California, Washington, New York. I want to exclude the events from Washington between 2AM-3AM; however, I want them during the remaining 23-hour period. Is there a search to achieve this?
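A minimal sketch, assuming USA is the extracted field name and the 2AM-3AM window is in the events' own timezone: derive the hour from _time and drop only the Washington events falling in hour 2.

index=your_index ``` hypothetical index ```
| eval hour=tonumber(strftime(_time, "%H"))
| where NOT (USA="Washington" AND hour=2) ``` hour 2 covers 02:00-02:59 ```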
As the title suggests, I'm looking into whether or not it's possible to load balance Universal Forwarder hosts that are also hosting rsyslog. I want to pointedly ask: is there anyone here doing something like this? The rsyslog config on each host is quite complex. I'm using 9 different custom ports for up to 20 different source devices. If you are curious, it's set up like this: port xxxx used for PDUs, port cccc used for switches, port vvvv for routers, etc. The Universal Forwarders then send the data directly to Splunk Cloud. It's likely not the best, and is certainly not pretty, but it gets the job done. Currently there are 2 dedicated UF hosts for two physical sites. These sites are being combined into a single colo, hence the LB question. Thanks!
I have a search that gives me the total number of hits to my website and the average number of hits over a 5-day period. I need to know how to set up a Splunk alert that notifies me when the average number of hits over a 5-day period increases or decreases by 10%. I can't seem to figure this out; any help would be appreciated.
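A rough sketch of one way to frame this, comparing today's hit count against the trailing 5-day daily average; the index and sourcetype are placeholder assumptions, and the saved alert would trigger on "number of results > 0".

index=web sourcetype=access_combined earliest=-5d@d latest=@d
| bin _time span=1d
| stats count by _time
| stats avg(count) as avg_5d ``` trailing 5-day daily average ```
| appendcols
    [ search index=web sourcetype=access_combined earliest=@d latest=now
    | stats count as today ]
| eval pct_change=round((today - avg_5d) / avg_5d * 100, 2)
| where abs(pct_change) >= 10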
Unable to fetch any data from an Ubuntu UF which should be reporting to Splunk Cloud.

1) Installed Splunk UF 9.2.0 and installed the credentials package from Splunk Cloud too.
2) Ports are open and traffic is allowed.
3) No errors in splunkd.log.
4) Currently no inputs are configured; checking data connectivity via internal logs: index=_internal source=*metric.log*

splunkd.log shows the below warnings only:

02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.5:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21

Please assist.
In Microsoft IIS logs, when a field is empty, a dash ( - ) is used instead of leaving the value blank. Presumably this is because IIS logs are space-delimited, so otherwise there would just be three consecutive spaces which might be ignored. However, even though there is something in the field, I can't search for something like cs_username="-" and get any results. Is this something Splunk is doing, where it is treating the dash as a NULL? I have a dashboard where I track HTTP errors by cs_username, but when the username is not present, I can't drill down on the dashboard; I can only drill down on actual username values. Is there a way to make the dash an active, drillable value? I tried this, but it didn't work:

| fillnull value="-" cs_username

How can I search the cs_username field when the value is a dash?
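A hedged sketch of one workaround, assuming the IIS sourcetype dropped the dash during field extraction so the field is null or empty at search time: re-materialize the dash with eval, after which it becomes a normal, drillable value.

index=iis ``` hypothetical index ```
| eval cs_username=if(isnull(cs_username) OR cs_username="", "-", cs_username)
| stats count by cs_username

If this still shows nothing for "-", the dash may be removed at index time by the sourcetype's configuration, which would be worth checking.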
Hey Splunk Gurus, One quick question: is there any way to ship out all the Splunk data from its indexers to AWS S3 buckets? The environment is Splunk Cloud. Appreciate your response. Thanks Abhi
I have a logfile like this:

2024-02-15 09:07:47,770 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-202] The Upload Service /app1/service/site/upload failed in 0.124000 seconds, {comments=xxx-123, senderCompany=Company1, source=Web, title=Submitted via Site website, submitterType=Others, senderName=ROMAN , confirmationNumber=ND_50249-02152024, clmNumber=99900468430, name=ROAMN Claim # 99900468430 Invoice.pdf, contentType=Email}
2024-02-15 09:07:47,772 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-202] Exception from executeScript: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
---
---
---
2024-02-15 09:41:16,762 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-200] The Upload Service /app1/service/site/upload failed in 0.138000 seconds, {comments=yyy-789, senderCompany=Company2, source=Web, title=Submitted via Site website, submitterType=Public Adjuster, senderName=Tristian, confirmationNumber=ND_52233-02152024, clmNumber=99900470018, name=Tristian CLAIM #99900470018 PACKAGE.pdf, contentType=Email}
2024-02-15 09:41:16,764 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-200] Exception from executeScript: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

We need to look at index=<myindex> "/alfresco/service/site/upload failed" and get a table with the following information:

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

The Exception is on another event line in the logfile, just after the line that carries the first four metadata fields. Both events share a sessionID, and can also share DOCNAME, but a sessionID can carry multiple transactions and so can appear with different names.
I created the following search for this purpose, but it returns a different DocName:

(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR (index="myindex" "Exception from executeScript")
| rex "clmNumber=(?<ClaimNumber>[^,]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^},]+)"
| rex "contentType=(?<ContentType>[^},]+)"
| rex "name=(?<DocName>[^,]+)"
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| eval EventType=if(match(_raw, "Exception from executeScript"), "Exception", "Upload Failure")
| eventstats first(EventType) as first_EventType by SessionID
| where EventType="Upload Failure"
| join type=outer SessionID
    [ search index="myindex" "Exception from executeScript"
    | rex "Exception from executeScript: (?<Exception>[^:]+)"
    | rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
    | rex "(?<ExceptionDocName>.+\.pdf)"
    | eval EventType="Exception"
    | eventstats first(EventType) as first_EventType by SessionID ]
| where EventType="Exception" OR isnull(Exception)
| table _time, ClaimNumber, SubmissionNumber, ContentType, DocName, Exception
| sort _time desc ClaimNumber

Here is the result that I got:

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115105149 Duplicate Child Exception - Rakesh lease 4 already exists in the location.
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115105128 Duplicate Child Exception - Combined 4 Point signed Ramesh 399 Coral Island. disk 3 already exists in the location.

So although I am able to get the first four metadata fields in the table correctly, the Exception is coming from another event in the log with the same sessionID, I believe. How can we fix the search to produce the expected result? Thanks in advance.
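A hedged sketch of an alternative: instead of joining on SessionID alone (which mixes up sessions that are reused across many transactions), pair each failure with the exception line that immediately follows it in the same session using transaction. The maxspan and maxevents values are assumptions to tune against your data.

index="myindex" ("/app1/service/site/upload failed" OR "Exception from executeScript")
| rex "\[(?<SessionID>http-nio-8080-exec-\d+)\]"
| rex "clmNumber=(?<ClaimNumber>[^,}]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^,}]+)"
| rex "name=(?<DocName>[^,}]+)"
| rex "Exception from executeScript:\s+(?<Exception>.+)"
``` group the failure line and its immediate exception line per session ```
| transaction SessionID startswith="upload failed" endswith="Exception from executeScript" maxevents=2 maxspan=10s
| table _time, ClaimNumber, SubmissionNumber, DocName, Exception
| sort - _time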