All Posts

Thank you for your suggestions. As per your suggestions, we have changed the SQL query. After the changes, the results still show the "Winows_Support - Operations" group. Can you please help me here?
Hi @gcusello, thank you for answering. The index stopped working when I had problems with MongoDB, so I thought the two were related. What should I expect to find in inputs.conf? Sorry, I'm a beginner. Ciao, Mattia
Hi @jyates76, you have to extract the down duration and then run a simple search:
index=your_index "was down for"
| rex "was\s+down\s+for\s+(?<hours>\d+)hr:(?<minutes>\d+)min:(?<seconds>\d+)sec"
| eval duration=hours*3600+minutes*60+seconds
| timechart perc90(duration) BY host
You can test the regex at https://regex101.com/r/75pRcf/1. Then you can use other functions or aggregations. Ciao. Giuseppe
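For reference, a quick way to sanity-check the extraction on a synthetic event before running it against the index (a minimal sketch; the sample message text and host name are made up to match the format above):
| makeresults
| eval _raw="host01 was down for 1hr:05min:30sec" ```synthetic sample event for testing only```
| rex "was\s+down\s+for\s+(?<hours>\d+)hr:(?<minutes>\d+)min:(?<seconds>\d+)sec"
| eval duration=hours*3600+minutes*60+seconds
| table hours minutes seconds duration ```this sample should return duration=3930```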
There are several ready-made apps aiming at that. You can also try to look at your index data "backwards" and compare it with, let's say, the last day, if you don't want to maintain a predefined set of sources/sourcetypes/hosts/whatever to look for. But that approach is bounded by how far back you search: if you search 3 days backwards, it won't alert you about sources that used to send data 4 days ago but stopped after that. So you can only do so much. There are no miracles, and Splunk doesn't know what "is supposed" to be sent to it unless you tell it.
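One way to do that kind of "backwards" comparison is to ask when each host/sourcetype last sent anything and flag the stale ones. A minimal sketch (not from the post above; your_index and the 24-hour threshold are placeholders to adjust):
| tstats latest(_time) as last_seen where index=your_index by host, sourcetype ```your_index is a placeholder```
| eval hours_since_last_event=round((now()-last_seen)/3600, 1)
| where hours_since_last_event > 24 ```24 hours is an arbitrary example threshold```
| sort - hours_since_last_event
As noted above, anything that already stopped sending before the search window began will simply not appear here.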
Still trying to figure this one out....  
It is deliberately set this way by the IIS TA:
EVAL-user = if(cs_username == "-", null(), cs_username)
So it's not that Splunk doesn't find it; it's just that the field is set to an empty value when there is nothing there. You can search for those events with cs_username!=* or NOT cs_username=* (these are not equivalent in general, but in this case both can be used).
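If you need the missing value to show up as a concrete, clickable value in a dashboard panel, one option is to put the dash back at search time. A minimal sketch (your_iis_index and the display field name are placeholders; sc_status assumes the usual IIS field extraction):
index=your_iis_index ```placeholder index```
| eval cs_username_display=coalesce(cs_username, "-") ```turn the null value back into a literal dash```
| stats count by cs_username_display, sc_status
The "-" then behaves like any other value for drilldowns.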
Unable to fetch any data from an Ubuntu UF which should be reporting to Splunk Cloud.
1) Installed Splunk UF 9.2.0 and installed the credentials package from Splunk Cloud too.
2) Ports are open and traffic is allowed.
3) No errors in splunkd.log.
4) Currently no inputs are configured; checking data connectivity via internal logs with index=_internal source=*metric.log*
splunkd.log shows only the warnings below:
02-16-2024 15:53:30.843 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/search_messages.log'.
02-16-2024 15:53:30.852 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/splunkd_ui_access.log'.
02-16-2024 15:53:30.859 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/btool.log'.
02-16-2024 15:53:30.876 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/mergebuckets.log'.
02-16-2024 15:53:30.885 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/wlm_monitor.log'.
02-16-2024 15:53:30.891 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/license_usage_summary.log'.
02-16-2024 15:53:30.898 +0000 INFO WatchedFile [156345 tailreader0] - File too small to check seekcrc, probably truncated. Will re-read entire file='/opt/splunkforwarder/var/log/splunk/searchhistory.log'.
02-16-2024 15:53:30.907 +0000 INFO WatchedFile [156345 tailreader0] - Will begin reading at offset=2859 for file='/opt/splunkforwarder/var/log/watchdog/watchdog.log'.
02-16-2024 15:53:31.112 +0000 INFO AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Connected to idx=1.2.3.5:9997:2, pset=0, reuse=0. autoBatch=1
02-16-2024 15:53:31.112 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=1
02-16-2024 15:54:00.446 +0000 INFO ScheduledViewsReaper [156309 DispatchReaper] - Scheduled views reaper run complete. Reaped count=0 scheduled views
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_threads=2.
02-16-2024 15:54:00.446 +0000 INFO CascadingReplicationManager [156309 DispatchReaper] - Using value for property max_replication_jobs=5.
02-16-2024 15:54:00.447 +0000 WARN AutoLoadBalancedConnectionStrategy [156338 TcpOutEloop] - Current dest host connection 1.2.3.5:9997, oneTimeClient=0, _events.size()=0, _refCount=1, _waitingAckQ.size()=0, _supportsACK=0, _lastHBRecvTime=Fri Feb 16 15:53:31 2024 is using 18446604251980134224 bytes. Total tcpout queue size is 512000. Warningcount=21
Please assist.
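For reference, one way to check from the Splunk Cloud search head whether a forwarder is actually reaching the indexing tier is to look at the incoming-connection metrics. A sketch only (it assumes the indexers' internal logs are searchable in your stack):
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(_time) as last_connect by hostname, sourceIp, fwdType, version
| convert ctime(last_connect)
If the forwarder's hostname never appears here, the connection itself is failing; if it does appear, its own internal events should be searchable under index=_internal host=<uf_hostname>.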
1. This is quite an old thread. You'd be better off starting a new one (possibly putting a link to this one in it for reference).
2. Generally speaking, as good practice:
- Don't mix package-manager-based management with manually dropping in files. It can end badly.
- If you have a package for your system, it's often (although not always; there are sometimes very badly built packages) the better option.
I'm not sure about Splunk specifically, but depending on how/where you installed your software before, the RPM might not fit exactly that layout. So while you can try to install the RPM package over a tarball-based /opt/splunk, I think I'd go for backup/remove/install/restore. Oh, and don't mess with your production server without testing in a dev environment first.
Anybody got opinions on the opposite situation? I've always upgraded Splunk using a tarball extracted over the top of the prior /opt/splunk installation: ~6 upgrades so far. Now I'd like to switch to RPM, for all the stated advantages. Any issues I need to worry about when installing the RPM over a prior tarball install?
How to do it is described here: https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/Admin/DataSelfStorage But remember that this removes all data from that index; only hot buckets remain available!
Hi
Could you also tell us what you want to do with those values? Are you looking at events for only one month at a time, or what are you looking for? Basically, do something like this:
<form version="1.1" theme="light">
  <label>Create month base time picker</label>
  <fieldset submitButton="false">
    <input type="dropdown" token="timePicker" searchWhenChanged="true">
      <label>Month Year</label>
      <fieldForLabel>Month</fieldForLabel>
      <fieldForValue>rTime</fieldForValue>
      <search>
        <query>| makeresults count=12 | streamstats count | eval rTime = relative_time(now(), "-". count . "mon") | eval Month = strftime(rTime,"%b-%y") | table rTime Month</query>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </search>
    </input>
  </fieldset>
</form>
Then add this token "$timePicker$" to your other searches, like:
<panel>
  <title>Simple timechart</title>
  <chart>
    <title>SOMETHING</title>
    <search>
      <query>YOUR QUERY HERE</query>
      <earliest>$timePicker.earliest$</earliest>
      <latest>$timePicker.latest$</latest>
      <sampleRatio>1</sampleRatio>
    </search>
  </chart>
</panel>
But when you want to show events within, e.g., one month at a time, you must use additional tokens on the dashboard. You must set those, e.g. tokEarliest and tokLatest, to the first and last second of the selected month, and then use them instead of $timePicker.earliest$ and $timePicker.latest$.
r. Ismo
Thanks for your response. Let us suppose I want to export all the contents of a particular index to S3 buckets; can we do that?
Hi, basically yes, or at least all warm and cold buckets, but this means that you will freeze them and they will no longer be searchable. Just decrease your retention time for all indexes to as small as needed, and ensure that you have defined and configured your own S3 buckets for storing frozen data. Otherwise you will lose your events!!! BUT I'm not sure if this is what you are looking for. Can you describe your real issue, not your solution for it? r. Ismo
As has been said earlier, you can't get a 100% certain answer to this. You should look at those old answers to see what you could try.
Thanks @ITWhisperer, but this doesn't seem to work. I've simulated the average request time being over 1 second in the logs, and this search returns alert=1 straight away, when I'd want it to return that only when searching the second time window, to say that we'd actually recovered from the high request time. Can you explain what is happening from streamstats onwards, as I can't get my head round it? I don't get how this separates the two time windows. I've been running the search manually looking back 30 minutes and it just returns alert=1 every time. FYI, I initially got my time window wrong: it is actually checking every 15-minute window, so I'd want to compare the two 15-minute windows over the last 30 minutes to see if it has recovered. I don't think this makes a difference to the query though.
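For context, one straightforward way to compare two 15-minute windows is to bucket the events by window and line the averages up side by side. This is only a sketch, not the search from the thread; your_index, the request_time field name, and the 1-second threshold are placeholder assumptions:
index=your_index earliest=-30m@m latest=@m ```placeholder index```
| eval window=if(_time >= relative_time(now(), "-15m@m"), "current", "previous")
| stats avg(request_time) as avg_request_time by window ```request_time is an assumed field name```
| eval breached=if(avg_request_time > 1, 1, 0)
A recovery would then show breached=1 for the previous window and breached=0 for the current one.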
In Microsoft IIS logs, when a field is empty, a dash ( - ) is used instead of leaving the value blank. Presumably this is because IIS logs are space-delimited, so otherwise there would just be consecutive spaces which might be ignored. However, even though there is something in the field, I can't search for something like cs_username="-" and get any results. Is this something Splunk is doing, where it treats the dash as a NULL? I have a dashboard where I track HTTP errors by cs_username, but when the username is not present, I can't drill down on the dash value; I can only drill down on actual username values. Is there a way to make the dash an active, drillable value? I tried this, but it didn't work:
| fillnull value="-" cs_username
How can I search the cs_username field when the value is a dash?
Hey Splunk gurus, one quick question: is there any way to ship all the Splunk data out from its indexers to AWS S3 buckets? The environment is Splunk Cloud. Appreciate your response. Thanks, Abhi
I have a logfile like this:
2024-02-15 09:07:47,770 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-202] The Upload Service /app1/service/site/upload failed in 0.124000 seconds, {comments=xxx-123, senderCompany=Company1, source=Web, title=Submitted via Site website, submitterType=Others, senderName=ROMAN , confirmationNumber=ND_50249-02152024, clmNumber=99900468430, name=ROAMN Claim # 99900468430 Invoice.pdf, contentType=Email}
2024-02-15 09:07:47,772 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-202] Exception from executeScript: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
--- --- ---
2024-02-15 09:41:16,762 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-200] The Upload Service /app1/service/site/upload failed in 0.138000 seconds, {comments=yyy-789, senderCompany=Company2, source=Web, title=Submitted via Site website, submitterType=Public Adjuster, senderName=Tristian, confirmationNumber=ND_52233-02152024, clmNumber=99900470018, name=Tristian CLAIM #99900470018 PACKAGE.pdf, contentType=Email}
2024-02-15 09:41:16,764 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-200] Exception from executeScript: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf
We need to look at index=<myindex> "/alfresco/service/site/upload failed" and get a table with the following information:
_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf
The Exception is in another event line in the logfile, just after the line the first four metadata fields come from. Both events in the logs have the SessionID in common, and they can also have the DOCNAME in common, but a SessionID can carry multiple transactions and so can have different names.
I created the following search for this purpose, but it's returning a different DocName:
(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR (index="myindex" "Exception from executeScript")
| rex "clmNumber=(?<ClaimNumber>[^,]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^},]+)"
| rex "contentType=(?<ContentType>[^},]+)"
| rex "name=(?<DocName>[^,]+)"
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| eval EventType=if(match(_raw, "Exception from executeScript"), "Exception", "Upload Failure")
| eventstats first(EventType) as first_EventType by SessionID
| where EventType="Upload Failure"
| join type=outer SessionID
    [ search index="myindex" "Exception from executeScript"
    | rex "Exception from executeScript: (?<Exception>[^:]+)"
    | rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
    | rex "(?<ExceptionDocName>.+\.pdf)"
    | eval EventType="Exception"
    | eventstats first(EventType) as first_EventType by SessionID ]
| where EventType="Exception" OR isnull(Exception)
| table _time, ClaimNumber, SubmissionNumber, ContentType, DocName, Exception
| sort _time desc ClaimNumber
Here is the result that I got:
_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115105149 Duplicate Child Exception - Rakesh lease 4 already exists in the location.
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115105128 Duplicate Child Exception - Combined 4 Point signed Ramesh 399 Coral Island. disk 3 already exists in the location.
So, although I am able to get the first four metadata fields in the table correctly, the Exception is coming from a different event in the log with the same SessionID, I believe. How can we fix the search to produce the expected result? Thanks in advance.
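One possible direction, since the worker thread (SessionID) gets reused across transactions, is to group only consecutive events on the same thread instead of joining everything that ever used that SessionID. A rough sketch, not the asker's search; the maxspan value and the assumption that the ERROR line directly follows its matching INFO line on the same thread are mine:
index="myindex" ("/app1/service/site/upload failed" OR "Exception from executeScript")
| rex "\[(?<SessionID>http-nio-8080-exec-\d+)\]"
| transaction SessionID startswith="upload failed" endswith="Exception from executeScript" maxspan=5s ```maxspan is a guess at how close the two lines are```
| rex "clmNumber=(?<ClaimNumber>[^,}]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^,}]+)"
| rex "name=(?<DocName>[^,}]+)"
| rex "Exception from executeScript:\s+(?<Exception>[^\r\n]+)"
| table _time ClaimNumber SubmissionNumber DocName Exception
Because transaction only stitches together the paired upload/exception lines, the exception text should come from the same transaction as the other four fields.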
Thank you @bowesmana and @yuanliu for helping with this! This worked, but I just had to add a ) at the end to balance the parentheses. The values, when tabled out, all include "event" in addition to the targeted values, which I'm guessing is somehow coming from the top element in the array. Not a huge problem for me, but I figured I'd mention it.
Results:
event
name-resource-121sg6fe
event
name-resource-387762fg
Sample JSON array:
event: {
  AccountId: xxxxxxxxxx
  CloudPlatform: CloudProvider
  CloudService: Service
  ResourceAttributes: {"key1": "value1", "key2": "value2", "key3": value3, "key4": [{"key": "value", "key": "value"}], "Resource Name": "name-resource-121sg6fe", etc}
}
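If the stray "event" value ever becomes a problem, extracting the attribute by its explicit path rather than with a wildcard usually avoids picking up the parent element. A minimal sketch only (your_index is a placeholder, the path names come from the sample above, and it assumes ResourceAttributes arrives as a JSON string):
index=your_index ```placeholder index```
| spath path=event.ResourceAttributes output=resource_attrs
| eval ResourceName=json_extract(resource_attrs, "Resource Name") ```assumes the key name, including the space, matches exactly```
| table ResourceName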
Thanks for the hint. I was checking via an index=metric_indexname query. Once I used mstats instead, it started fetching data.
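For anyone landing here later: metric indexes aren't searched with a plain index= event search; you query them with the metrics commands. A couple of minimal examples (metric_indexname is the placeholder from above, and the metric name is an assumption):
| mcatalog values(metric_name) WHERE index=metric_indexname ```lists which metric names exist in the index```
| mstats avg(_value) WHERE index=metric_indexname AND metric_name="cpu.percent" span=5m BY host ```cpu.percent is just an example metric name```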