
All Posts

In Microsoft IIS logs, when a field is empty, a dash ( - ) is used instead of leaving the value blank. Presumably this is because IIS logs are space delimited, so otherwise it would just have three consecutive spaces which might be ignored. However, even though there is something in the field, I can't search for something like cs_username="-" and get any results. Is this something Splunk is doing, where it is treating the dash as a NULL? I have a dashboard where I track HTTP errors by cs_username, but when the username is not present, I can't drill down on the dash, I can only drill down on actual username values. Is there a way to make the dash an active, drillable value? I tried this but it didn't work:

| fillnull value="-" cs_username

How can I search the cs_username field when the value is a dash?
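A minimal sketch of one way to force an explicit "-" value that a dashboard can drill into. The index name, the sourcetype wildcard, and the sc_status error test below are assumptions for illustration, not taken from the post:

index=iis sourcetype=*iis*
| eval cs_username=if(isnull(cs_username) OR cs_username="", "-", cs_username)
| stats count(eval(sc_status>=400)) AS http_errors BY cs_username

Because the eval runs before the split by cs_username, the "-" rows survive as a real, clickable series value in the drilldown. Whether the raw cs_username="-" search itself returns results depends on how the add-on extracts (or nulls out) the dash at search time.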
Hey Splunk Gurus, one quick question: is there any way to ship all the Splunk data from the indexers out to AWS S3 buckets? The environment is Splunk Cloud. Appreciate your response. Thanks Abhi
I have a logfile like this -

2024-02-15 09:07:47,770 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-202] The Upload Service /app1/service/site/upload failed in 0.124000 seconds, {comments=xxx-123, senderCompany=Company1, source=Web, title=Submitted via Site website, submitterType=Others, senderName=ROMAN , confirmationNumber=ND_50249-02152024, clmNumber=99900468430, name=ROAMN Claim # 99900468430 Invoice.pdf, contentType=Email}
2024-02-15 09:07:47,772 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-202] Exception from executeScript: 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location.
---
---
---
2024-02-15 09:41:16,762 INFO [com.mysite.core.app1.upload.FileUploadWebScript] [http-nio-8080-exec-200] The Upload Service /app1/service/site/upload failed in 0.138000 seconds, {comments=yyy-789, senderCompany=Company2, source=Web, title=Submitted via Site website, submitterType=Public Adjuster, senderName=Tristian, confirmationNumber=ND_52233-02152024, clmNumber=99900470018, name=Tristian CLAIM #99900470018 PACKAGE.pdf, contentType=Email}
2024-02-15 09:41:16,764 ERROR [org.springframework.extensions.webscripts.AbstractRuntime] [http-nio-8080-exec-200] Exception from executeScript: 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

We need to search index=<myindex> "/alfresco/service/site/upload failed" and get a table with the following information:

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115100898 Duplicate Child Exception - ROAMN Claim # 99900468430 Invoice.pdf already exists in the location
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115100953 Document not found - Tristian CLAIM #99900470018 PACKAGE.pdf

The Exception is in another event line in the logfile, immediately after the line from which the first four metadata fields are taken. Both events share the same SessionID and can also share the DOCNAME, but one SessionID can carry multiple transactions and therefore different names.
I created the following search for this purpose, but it returns a different DocName -

(index="myindex" "/app1/service/site/upload failed" AND "source=Web" AND "confirmationNumber=ND_*") OR (index="myindex" "Exception from executeScript")
| rex "clmNumber=(?<ClaimNumber>[^,]+)"
| rex "confirmationNumber=(?<SubmissionNumber>[^},]+)"
| rex "contentType=(?<ContentType>[^},]+)"
| rex "name=(?<DocName>[^,]+)"
| rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
| eval EventType=if(match(_raw, "Exception from executeScript"), "Exception", "Upload Failure")
| eventstats first(EventType) as first_EventType by SessionID
| where EventType="Upload Failure"
| join type=outer SessionID
    [ search index="myindex" "Exception from executeScript"
    | rex "Exception from executeScript: (?<Exception>[^:]+)"
    | rex "(?<SessionID>\[http-nio-8080-exec-\d+\])"
    | rex "(?<ExceptionDocName>.+\.pdf)"
    | eval EventType="Exception"
    | eventstats first(EventType) as first_EventType by SessionID ]
| where EventType="Exception" OR isnull(Exception)
| table _time, ClaimNumber, SubmissionNumber, ContentType, DocName, Exception
| sort _time desc ClaimNumber

Here is the result that I got -

_time | clmNumber | confirmationNumber | name | Exception
2024-02-15 09:07:47 | 99900468430 | ND_50249-02152024 | ROMAN Claim # 99900468430 Invoice.pdf | 0115105149 Duplicate Child Exception - Rakesh lease 4 already exists in the location.
2024-02-15 09:41:16 | 99900470018 | ND_52233-02152024 | Tristian CLAIM #99900470018 PACKAGE.pdf | 0115105128 Duplicate Child Exception - Combined 4 Point signed Ramesh 399 Coral Island. disk 3 already exists in the location.

So although I am able to get the first four metadata fields in the table correctly, the Exception comes from a different event in the log that happens to share the same SessionID. How can we fix the search to produce the expected result? Thanks in advance.
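One possible approach, sketched rather than tested: because the thread name (SessionID) is reused across transactions, a join by SessionID can pull in an exception from a later transaction on the same thread. Since the ERROR event follows its matching INFO event within milliseconds on the same thread, the transaction command can pair them directly. The index name and the 5-second window below are assumptions based on the sample above:

(index="myindex" "/app1/service/site/upload failed") OR (index="myindex" "Exception from executeScript")
| rex "\[(?<SessionID>http-nio-8080-exec-\d+)\]"
| transaction SessionID maxevents=2 maxspan=5s startswith="upload failed" endswith="Exception from executeScript"
| rex "clmNumber=(?<clmNumber>[^,}]+)"
| rex "confirmationNumber=(?<confirmationNumber>[^,}]+)"
| rex "name=(?<name>[^,}]+)"
| rex "Exception from executeScript: (?<Exception>.+)"
| table _time clmNumber confirmationNumber name Exception

Because transaction groups consecutive events per SessionID and closes the group as soon as the exception line arrives, each upload failure should only be paired with its own exception rather than a later one on the same thread.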
Thank you @bowesmana and @yuanliu for helping with this! This worked, but I just had to add a ) at the end to balance the parentheses. The values, when tabled out, all include "event" in addition to the targeted values, which I'm guessing is somehow coming from the top element in the array. Not a huge problem for me, but figured I'd mention it.

Results:

event name-resource-121sg6fe
event name-resource-387762fg

Sample JSON array:

event: {
  AccountId: xxxxxxxxxx
  CloudPlatform: CloudProvider
  CloudService: Service
  ResourceAttributes: {"key1": "value1", "key2": "value2", "key3": value3, "key4": [{"key": "value", "key": "value"}], "Resource Name": "name-resource-121sg6fe", etc}
}
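If the stray "event" entry really does come from the top-level key, one hedged option is to filter it out of the multivalue result before tabling; the field name resource_name below is hypothetical and stands in for whatever field the extraction produces:

| eval resource_name=mvfilter(resource_name!="event")
| table resource_name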
Thanks for the hint. I was checking via an index=metric_indexname query; once I used mstats, it started fetching data.
This worked perfectly. Thank you!
Hi, I am looking to add a custom time picker to a dashboard. It is going to be a simple dropdown with one option for each of the last 12 months. I have created the dropdown as per the requirement; now I am wondering how to use it in the rest of the dashboard so the dashboard updates according to the selection.

Query:

| makeresults
| addinfo
| eval date=mvrange(info_min_time,info_max_time,"1mon")
| mvexpand date
| sort - date
| eval Month=strftime(date,"%b-%y")
| table Month date
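A hedged Simple XML sketch of one way to wire such a dropdown into the panels: the populating search above feeds the input (fieldForValue=date, fieldForLabel=Month), the selected epoch becomes the earliest token, and a <change> block derives the latest token. The token names, the -12mon@mon window, and the example panel query are illustrative assumptions:

<input type="dropdown" token="month_start">
  <label>Month</label>
  <fieldForLabel>Month</fieldForLabel>
  <fieldForValue>date</fieldForValue>
  <search>
    <query>| makeresults | addinfo | eval date=mvrange(info_min_time,info_max_time,"1mon") | mvexpand date | sort - date | eval Month=strftime(date,"%b-%y") | table Month date</query>
    <earliest>-12mon@mon</earliest>
    <latest>now</latest>
  </search>
  <change>
    <eval token="month_latest">relative_time($value$, "+1mon")</eval>
  </change>
</input>

Each panel's search then just references the tokens:

<search>
  <query>index=_internal | timechart count</query>
  <earliest>$month_start$</earliest>
  <latest>$month_latest$</latest>
</search>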
Yes, this looks correct
I have events like the one below that say when a particular pool member was out of rotation and for how long. The ideal search would match all events containing "was down for", extract that length of time, and then give me both the average and the 95th percentile of that duration. Probably more difficult than it seems and I'm not sure how to approach it.

<133>Feb 13 13:01:33 slot2/US66666-CORE-LTM1.company.COM notice mcpd[8701]: 01070727:5: Pool /Common/pool-generic member /Common/servernamew006:8080 monitor status up. [ /Common/mon-xxx-prod-xxx-liveness: up ] [ was down for 0hr:0min:15sec ]

host = US66666-core-ltm1.company.com
source = /var/log/HOSTS/US66666-core-ltm1.company.com/xxx.xxx.com-syslog.log
sourcetype = syslog_alb
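A minimal sketch of one way to do this: extract the hour/minute/second parts with rex, convert to seconds, and aggregate. The index name is an assumption; the sourcetype comes from the post:

index=netlb sourcetype=syslog_alb "was down for"
| rex "was down for (?<down_h>\d+)hr:(?<down_m>\d+)min:(?<down_s>\d+)sec"
| eval downtime_sec = down_h*3600 + down_m*60 + down_s
| stats avg(downtime_sec) AS avg_downtime_sec perc95(downtime_sec) AS p95_downtime_sec

Adding a BY clause (for example BY host, or by a pool member field extracted with another rex) would break the figures out per member instead of overall.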
@ITWhisperer I just want to be sure that we are extracting the timestamp correctly. My doubt is that for an event with a timestamp of 2024-02-16T11:46:02.9895330Z, we have set the time format to %Y-%m-%dT%H:%M:%S.%9N%Z, but the _time field shows values like 2/16/24 11:46:02.989 AM. Is this correct? Thank you so much for the help. Regards.
Hello, you can collect the logs with the following configuration in inputs.conf:

[WinEventLog://Microsoft-Windows-Windows Defender/Operational]
disabled = 0
index = windefender
evt_resolve_ad_obj = 1
Hi @MattiaP, sorry, what's the relation between an index and MongoDB? If you no longer have events in that one index, you should check the inputs.conf that ingests the data stored in that index. The only exception is if you have an index override in place; do you have one? Ciao. Giuseppe
Yes, I need to check if a particular index is used in any TA.  
You can see the index of the source by using the query below:

| tstats latest(_indextime) as latest where index IN (index1,index2,index3,index4) earliest=-48h by source index
| eval delay = now() - latest
| where delay > 86400
| eval delay=tostring(delay, "duration")
| fields - latest

The query above checks events ingested over the latest 48 hours and keeps only the sources that have not sent data for at least 24 hours. Looking 48 hours back makes sure that sources which are only updated daily are still taken into account.
Have you tried series colors rather than fieldColors?
Hi, I have an index that doesn't show events anymore. Could you help me please? In November I had a problem with MongoDB and I tried these solutions:

- https://community.splunk.com/t5/Knowledge-Management/Why-are-we-getting-these-errors-KV-Store-Process-Terminated/m-p/449940 --> doing this I noticed that the permissions of the files inside this folder had changed. May this be the cause of the problem? This solution didn't work.
- I solved the problem doing this

Could you help me please? Thank you
Essentially, you can't do this - you have the chart and the legend, nothing else. Having said that, you could rename the x-axis field so that it includes the data you want to display. However, since your x-axis is _time, if you rename it to something else, it will not be displayed in the same way - x-axes using _time are treated in a special way by charts.
Hi, I created a column chart that displays avg(totalTime) over 5-minute increments by organization. I am looking to add the latest count for the organization in the bottom corner of the chart - I just want to display the count at the bottom of the chart where the legend is. How do I accomplish this?

Column chart query that graphs avg(totalTime) by organization:

index | timechart span=5m avg(totalTime) as avg

Volume (where I want to display the value of the latest count on the chart above, near the legend):

index | timechart span=5m count by organization

Kindly help.
Hi, I am trying to create a column chart where, if the value is greater than 3, the column turns red, while if the value is less than or equal to 3, the column is green.

Below is the search that I started off with:

index | timechart span=5m avg(totalTime) as avg_value limit=20
| eval threshold=3

I tried:

index | timechart span=5m avg(totalTime) as avg_value limit=20
| eval threshold=3
| eval "red"=if(avg_value > threshold, avg_value,0)
| eval "green"=if(avg_value<threshold, avg_value,0)
| fields - avg_value

Then I went into the source code and defined the colors, but the column chart did not change colors.

<option name="charting.fieldColors">{"red":0xFF0000,"green":0x73A550}</option>

I do not want the columns stacked. Kindly help.
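For comparison, a hedged sketch of the same split-series idea; it is not guaranteed to be the fix, but it differs from the attempt above in two ways: the green test uses <= so a value exactly at the threshold does not fall through both tests, and null() replaces 0 so the unused series draws no column at all (which also avoids the stacked look). The index name is a placeholder carried over from the post:

index=your_index
| timechart span=5m avg(totalTime) as avg_value limit=20
| eval red=if(avg_value>3, avg_value, null()), green=if(avg_value<=3, avg_value, null())
| fields - avg_value

<option name="charting.chart">column</option>
<option name="charting.fieldColors">{"red": 0xFF0000, "green": 0x73A550}</option>

fieldColors only takes effect when the JSON keys match the series names exactly, so check the legend labels against "red" and "green" if the colors still do not change.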
Hi All, we are starting to look at application monitoring and our first target will definitely be SAP. I can see there are a number of SAP apps on Splunkbase. Does anyone have any info comparing these, or any Splunk guides or best practices for starting to look at this? I've not worked with any monitoring at this application level before, so I'm really starting from first principles and gathering as much info as possible. Thank you for reading. All the best.