All Posts


@ITWhisperer @richgalloway Thank you for those suggestions. Actually, the requirement has just changed: we now want to see if the policy_name status changed from X(A) to X. So I tried the following query, pulling results from today and from the last 48 hours, to compare and see how many of them changed from X(A) to X.

index=xyz sourcetype=abc earliest=-48h@h latest=@d-1m
| search (policy_name="*-X" OR policy_name="*-X(A)")
| rex field=policy_name "(?<namePattern>([^-]+\-){5})(?<State>[^-]+)"
| rename namePattern as oldPattern
| stats count by _time, policy_name, oldPattern, State
| append [search index=xyz sourcetype=abc earliest=@d latest=now
    | search (policy_name="*-X" OR policy_name="*-X(A)")
    | rex field=policy_name "(?<namePattern>([^-]+\-){5})(?<State>[^-]+)"
    | rename namePattern as newPattern
    | stats count by _time, policy_name, newPattern, State ]
| table _time, policy_name, newPattern, oldPattern, State

The results from this query are in the format below. I'm trying to compare the State field values, matching newPattern against oldPattern of the policy_name values, to see which of them changed from X(A) to X and get the list of changed policies ending in state X. But as the old and new values sit on different rows, I'm not sure how to compare them.

Example:

_time                    policy_name    newPattern  oldPattern  State
2024-08-27 13:00:06.827  policy_1_X(A)              policy_1_   X(A)
2024-08-27 13:00:06.827  policy_2_X(A)              policy_2_   X(A)
2024-08-28 06:31:24.775  policy_1_X     policy_1_               X
2024-08-28 06:31:24.775  policy_2_X     policy_2_               X
2024-08-29 10:57:25.000  policy_3_X(A)              policy_3_   X(A)
2024-08-29 10:57:25.000  policy_4_X(A)              policy_4_   X(A)
2024-08-29 11:57:25.000  policy_3_X     policy_3_               X
2024-08-29 11:57:25.000  policy_4_X     policy_4_               X

Desired output:

changed_policies
policy_1_X
policy_2_X
policy_3_X
policy_4_X
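One way to avoid the row-matching problem described above is to drop the append entirely and aggregate both time windows in a single stats, grouping by the shared name pattern. This is only a sketch reusing the index, sourcetype, and rex from the question; the earliest/latest handling is an assumption, so adjust it to your actual policy_name format:

```spl
index=xyz sourcetype=abc earliest=-48h@h latest=now
    (policy_name="*-X" OR policy_name="*-X(A)")
| rex field=policy_name "(?<namePattern>([^-]+\-){5})(?<State>[^-]+)"
| stats earliest(State) as oldState latest(State) as newState by namePattern
| where oldState="X(A)" AND newState="X"
| eval changed_policies = namePattern . newState
| table changed_policies
```

Because each namePattern now occupies a single row holding both its earliest and latest State, the comparison reduces to a simple where clause.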
Example: the 1st report's date range is 1st June ~ 16th June, the 2nd report's is 17th June ~ 30th June, and both reports are sent at the beginning of the next month, July 1st. Then the next month rolls in: the 1st report covers 1st July ~ 16th July, the 2nd report covers 17th July ~ 31st July, and both are sent at the beginning of the following month, August 1st, etc., and so on.
Are you saying that you want a Health field that has "Bad" in it for all the events if any of the events have status="Issue"?
Use the eval command to create a field:

| eval Health = if(status="Issue", "Bad", "Ok")
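If the intent is instead to mark every event "Bad" whenever any event in the result set has status="Issue", one sketch is to roll the check up with eventstats (field names are taken from the question; the "any event" reading is an assumption):

```spl
| eventstats count(eval(status="Issue")) as issueCount
| eval Health = if(issueCount > 0, "Bad", "Ok")
```

eventstats writes the aggregate back onto every event, so the subsequent eval sees the same issueCount on each row.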
Looks like there may not be a space after the colon, so use * instead of + for the whitespace:

| rex field=_raw "Crestron Package Firmware version :\s*(?<CCSFirmware>\S+)"

(Note the capture group should be greedy; a lazy \S*? at the end of the pattern would match zero characters.) It would help if you share your event data in a code block so that formatting, e.g. spaces, is preserved.
Check those events - my hunch is there is something wrong with formatting in those rows - some inconsistent quoting or something like that.
Oh. This is something we in Poland call "shooting the sparrow with a cannon". If you really want to modify the user's input, you should do so on the client's side using the <change> functionality of the dashboard. But I'm still asking what the point of doing so is. If you want to have predefined choices, use different inputs. If you let the user type in something freely, honor their choice (and/or educate the users to add the wildcard themselves).
Hello All,

I need to search for SPLs having a time range of All time. I used the below SPL:

index=_audit action=search provenance=* info=completed host IN (...)
| table user, apiStartTime, apiEndTime, search_et, search_lt, search
| search apiStartTime='ZERO_TIME' OR apiEndTime='ZERO_TIME'
| convert ctime(search_*)

I get results with:

apiStartTime as empty
apiEndTime as 'ZERO_TIME'
search_et 07/31/2024 00:00:00
search_lt 08/29/2024 13:10:58

So, how do I interpret the above results, and how do I modify the SPL to fetch the correct results?

Thank you
Taruchit
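One reading of that result: search_et/search_lt are the resolved epoch bounds of the search (07/31 to 08/29, i.e. a bounded range), so a search matching only one ZERO_TIME side is not actually All time. An All time search should have neither bound set, which suggests tightening the OR to an AND. This is a sketch based on the question's own SPL; verify the ZERO_TIME convention against your own _audit events:

```spl
index=_audit action=search provenance=* info=completed host IN (...)
| search apiStartTime='ZERO_TIME' AND apiEndTime='ZERO_TIME'
| table user, apiStartTime, apiEndTime, search_et, search_lt, search
```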
Ok. Aren't you perchance searching in Fast mode? Oh, and I of course assume you have your TA_windows installed in all required places, right?
The general form for that regex is "<<delimiter>>(?<field>[^<<delimiter>>]+)".  In this case, the delimiter is a regex special character so escaping is needed.  Try this command:

| rex "\?(?<field>[^\?]+)"
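A quick way to try this against the sample value from the question without touching an index; makeresults is only used here to fabricate a one-row test event:

```spl
| makeresults
| eval _raw = "Athena.siteone.com?suvathp001?443"
| rex "\?(?<field>[^\?]+)"
| table field
```

The regex skips to the first "?" and captures everything up to the next one, so field comes out as suvathp001.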
It would help if you would explain "it's not working" and show the output of the sample query.  However, I think I know what the problem is.  I left out a command in the query.

...
| eval FinalStatus = if(Status="Yes", 1, 0)
| eventstats min(FinalStatus) as FinalStatus by ServerName
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| table ServerName, FinalStatus
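The pipeline can be smoke-tested on fabricated rows; min over the 0/1 flag yields "No" for any server that has at least one non-Yes row (makeresults format=csv requires Splunk 9.0+, and the sample values are made up):

```spl
| makeresults format=csv data="ServerName,Status
srv1,Yes
srv1,No
srv2,Yes
srv2,Yes"
| eval FinalStatus = if(Status="Yes", 1, 0)
| stats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| table ServerName, FinalStatus
```

Here srv1 comes out "No" and srv2 comes out "Yes".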
Hi,

Can you please help me with the code I can add to have more options for the Dynatrace collection interval for v2 metrics collections? For example: collecting 4 minutes of data every 4 minutes.

Thanks! #Dyntraceaddon
Hello, thank you in advance for your help. I just need to create a field in Splunk Search that contains the value between 2 delimiters. The delimiter is "?". For example, given Athena.siteone.com?suvathp001?443, what would be the regex to extract only suvathp001? Thanks again for your help, Tom
Background

I have a very legacy application with bad/inconsistent log formatting, and I want to be able to somehow collect its logs in Splunk via a Universal Forwarder. The issue is with multi-line events, which dump XML documents containing separate timestamps into log messages.

Issue

Because these multiline messages contain a timestamp within the body of the XML, and this becomes part of the body of the log message, Splunk is indexing events with "impossible" timestamps. For example, an event will get indexed as happening in 2019 when it is actually a log event from 2024 that output an XML body containing an <example></example> element holding a 2019 timestamp, and part of this body is stored as a Splunk event from 5 years ago.

Constraints

- I cannot modify the configuration of the Splunk indexer/search head/anything other than the Universal Forwarder that I control.
- I do not have access to licensing to run any Heavy Forwarders; I can only go from a Universal Forwarder on hosts I control directly to an HTTP Event Collector endpoint that I do not control.
- I cannot (easily) change the log format to stop dumping these bodies. There is a long-term ask on the team to fix up logging to be a) consistent and b) more ingest-friendly, but I'm looking for any interim solution I can apply on the component I control directly, which is basically the Universal Forwarder only.

Ideas?

My only idea so far is a custom sourcetype which specifies the log timestamp format exactly, including a regex anchor to the start of the line, and also reduces/removes the MAX_TIMESTAMP_LOOKAHEAD value to stop Splunk from looking past the first match. I believe this would mean all the lines in an event would be handled correctly, because the XML document would start with either whitespace or a < character. However, my understanding is that this would require a change either on the indexer or on a Heavy Forwarder, which I can't do.
I'm looking for any alternatives this community can offer as a potential workaround until the log sanitization effort gets off the ground.
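For reference, the custom-sourcetype idea would look roughly like the props.conf stanza below. The sourcetype name and timestamp format are invented for illustration, and, as noted in the post, these parse-time settings normally take effect on the first full Splunk instance in the path (indexer or Heavy Forwarder), not on a Universal Forwarder:

```ini
# props.conf - applied wherever event parsing happens for this sourcetype
[legacy_app]
# Only look for the timestamp at the start of an event
TIME_PREFIX = ^
# Exact format of the application's own timestamps (hypothetical example)
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%3N
# Stop scanning after the timestamp's own width, so dates inside the
# dumped XML body are never considered
MAX_TIMESTAMP_LOOKAHEAD = 23
```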
It's not working and not giving the required output. Below is my sample query:

index=abc laas_appId=XYZ source="/opt/var/directory/sample.csv"
| dedup _raw
| table ServerName, Status
| eval FinalStatus = if(Status="Yes", 1, 0)
| eventstats min(FinalStatus) as FinalStatus by ServerName
| eval FinalStatus = if(FinalStatus=1, "Yes", "No")
| table ServerName, FinalStatus
Hi, could you please add a troubleshooting description for the app? We just installed it and unfortunately can't configure it; the page immediately gets an HTTP 500, e.g.

"GET /en-US/splunkd/__raw/servicesNS/nobody/ta-mdi-health-splunk/TA_microsoft_graph_security_add_on_for_splunk_microsoft_graph_security?output_mode=json&count=-1 HTTP/1.1" 500 303 "-" "

Thanks
Hi @Steve.Williams, Thanks for asking your question on the Community. I found this doc that mentions this is supported. https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/install-app-server-agents/java-agent/java-supported-environments It was suggested you contact AppDynamics Support for this issue. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.  If you do contact Support and find a resolution, can you share your learnings as a reply to this post, please?
I'm curious about this value: "reason": "unrecognized character follows \\". Since the \\ is a literal escape, is it reading the remainder of the message as text until the next naturally occurring " on its own? Can you try changing the "\\" in the text portion of the message to "escape character set"?
As I am learning, with Write-Once-Read-Many (WORM) storage there are situations where the buckets/tsidx files are re-uploaded when the indexers have hiccups during an upload. https://community.splunk.com/t5/Deployment-Architecture/smartstore-splunk-smartstore-and-Data-integrity/m-p/506769
Try this inside the quotes:

Crestron Package Firmware version :(?<CCSFirmware>[^\s]+)
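This can be sanity-checked against a fabricated one-row event using makeresults (the version string below is invented for illustration, not a real firmware value):

```spl
| makeresults
| eval _raw = "Crestron Package Firmware version :2.8001.0063"
| rex "Crestron Package Firmware version :(?<CCSFirmware>[^\s]+)"
| table CCSFirmware
```

If the extraction works against your real events, CCSFirmware will hold everything after the colon up to the first whitespace.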