All Posts

Are you saying that you want a Health field that has "Bad" in it for all the events if any of the events have status="Issue"?
Use the eval command to create a field. | eval Health = if(status="Issue", "Bad", "Ok")  
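If the intent is the group-wide version (mark every event "Bad" when any event has status="Issue"), the per-event if() alone won't do it; in SPL you would typically follow the eval with an eventstats over the result set. The intended logic, sketched in Python with made-up events:

```python
# Hypothetical sample events standing in for Splunk search results.
events = [{"status": "Ok"}, {"status": "Issue"}, {"status": "Ok"}]

# Per-event flag, as in: | eval Health = if(status="Issue", "Bad", "Ok")
per_event = ["Bad" if e["status"] == "Issue" else "Ok" for e in events]

# Group-wide flag ("Bad" for ALL events if ANY event has status="Issue"),
# roughly what an eventstats across the whole result set would give you:
group_wide = ["Bad" if any(e["status"] == "Issue" for e in events) else "Ok"
              for e in events]

print(per_event)   # ['Ok', 'Bad', 'Ok']
print(group_wide)  # ['Bad', 'Bad', 'Bad']
```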
Looks like there may not be a space after the colon, so use * instead of + | rex field=_raw "Crestron Package Firmware version :\s*(?<CCSFirmware>\S+)" It would help if you shared your event data in a code block so that formatting, e.g. spaces, is preserved.
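The difference can be checked outside Splunk with Python's re module (the sample line and version string below are made up). One caveat: a lazy quantifier like \S*? at the very end of a pattern matches as few characters as possible, i.e. nothing, so a greedy \S+ is the safer capture:

```python
import re

# Hypothetical raw event line with no space after the colon.
line = "Crestron Package Firmware version :2.8001.0064"

# \s* tolerates zero or more spaces after the colon; \s+ would fail here.
m = re.search(r"Crestron Package Firmware version :\s*(?P<CCSFirmware>\S+)", line)
print(m.group("CCSFirmware"))  # 2.8001.0064

# A trailing lazy \S*? matches as little as possible -- an empty string:
lazy = re.search(r":\s*(?P<CCSFirmware>\S*?)", line)
print(repr(lazy.group("CCSFirmware")))  # ''
```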
Check those events - my hunch is there is something wrong with formatting in those rows - some inconsistent quoting or something like that.
Oh. This is something we in Poland call "shooting the sparrow with a cannon". If you really want to modify the user's input, you should do so on the client's side using the <change> functionality of the dashboard. But I'm still asking what the point of doing so is. If you want to have predefined choices, use different inputs. If you let the user type in something freely, honor their choice (and/or educate the users to add the wildcard themselves).
Hello All, I need to search for SPLs having a time range of All time. I used the SPL below: index=_audit action=search provenance=* info=completed host IN (...) | table user, apiStartTime, apiEndTime, search_et, search_lt, search | search apiStartTime='ZERO_TIME' OR apiEndTime='ZERO_TIME' | convert ctime(search_*) I get results with: apiStartTime as Empty apiEndTime as 'ZERO_TIME' search_et 07/31/2024 00:00:00 search_lt 08/29/2024 13:10:58 So how do I interpret the above results, and how do I modify the SPL to fetch correct results? Thank you, Taruchit
Ok. Aren't you perchance searching in fast mode? Oh, and I of course assume you have your TA_windows installed in all required places, right?
The general form for that regex is "<<delimiter>>(?<field>[^<<delimiter>>]+)".  In this case, the delimiter is a regex special character so escaping is needed.  Try this command: | rex "\?(?<field>[^\?]+)"
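The same pattern can be tried in Python's re module, using the example string from the question; "?" is a regex metacharacter, so it is escaped as the leading delimiter, and the character class [^?]+ stops at the next "?":

```python
import re

host = "Athena.siteone.com?suvathp001?443"

# Match the first "?" and capture everything up to the next "?".
m = re.search(r"\?(?P<field>[^?]+)", host)
print(m.group("field"))  # suvathp001
```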
It would help if you would explain "its not working" and show the output of the sample query.  However, I think I know what the problem is.  I left out a command in the query. ... | eval FinalStatus = if(Status="Yes", 1, 0) | eventstats min(FinalStatus) as FinalStatus by ServerName | stats min(FinalStatus) as FinalStatus by ServerName | eval FinalStatus = if(FinalStatus=1, "Yes", "No") | table ServerName, FinalStatus
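The min() trick makes a server "Yes" only when every one of its rows is "Yes" (any "No" row drives the minimum to 0). That logic, sketched in Python with made-up rows mirroring the ServerName/Status columns:

```python
# Hypothetical (ServerName, Status) rows standing in for the search results.
rows = [("srv1", "Yes"), ("srv1", "No"), ("srv2", "Yes")]

# | eval FinalStatus = if(Status="Yes", 1, 0)
# | stats min(FinalStatus) as FinalStatus by ServerName
mins = {}
for server, status in rows:
    flag = 1 if status == "Yes" else 0
    mins[server] = min(mins.get(server, 1), flag)

# | eval FinalStatus = if(FinalStatus=1, "Yes", "No")
final = {s: ("Yes" if f == 1 else "No") for s, f in mins.items()}
print(final)  # {'srv1': 'No', 'srv2': 'Yes'}
```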
Hi, can you please help me with the code I can add to have more options for the Dynatrace collection interval for v2 metrics collections? E.g. collecting 4 mins of data every 4 mins. Thanks! #Dyntraceaddon
Hello, thank you in advance for your help. I just need to create a field in Splunk Search that contains the value between 2 delimiters. The delimiter is "?". For example: Athena.siteone.com?suvathp001?443. What would be the regex to extract only suvathp001? Thanks again for your help, Tom
Background
I have a very legacy application with bad/inconsistent log formatting, and I want to be able to somehow collect this in Splunk via Universal Forwarder. The issue is with multi-line events, which dump XML documents containing separate timestamps into log messages.

Issue
Because these multiline messages contain a timestamp within the body of the XML, and this becomes part of the body of the log message, Splunk is indexing events with "impossible" timestamps. For example, an event will get indexed as happening in 2019 when it is actually a log event from 2024 whose XML body contains an <example></example> element holding a 2019 timestamp, so part of that body is stored as a Splunk event from 5 years ago.

Constraints
- I cannot modify the configuration of the Splunk indexer/search head/anything other than the Universal Forwarder that I control.
- I do not have access to licensing to run any Heavy Forwarders; I can only go from a Universal Forwarder on hosts I control directly to an HTTP Event Collector endpoint that I do not control.
- I cannot (easily) change the log format to stop dumping these bodies. There is a long-term ask on the team to make logging a) consistent and b) more ingest-friendly, but I'm looking for an interim solution I can apply on the component I control directly, which is basically the Universal Forwarder only.

Ideas?
My only idea so far is a custom sourcetype that specifies the log timestamp format exactly, including a regex anchor to the start of the line, and also reduces/removes the MAX_TIMESTAMP_LOOKAHEAD value to stop Splunk from looking past the first match. I believe this would mean all the lines in an event are handled correctly, because the XML document would start with either whitespace or a < character. However, my understanding is that this would require a change either to the indexer or to a Heavy Forwarder, which I can't do.
I'm looking for any alternatives this community can offer as a potential workaround until the log sanitization effort gets off the ground.
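For reference, the sourcetype idea described above would look roughly like this in props.conf (the stanza name and TIME_FORMAT are made up and would need to match the real log layout; as the poster notes, these are parse-time settings, so they take effect on the indexing tier rather than on a Universal Forwarder):

```ini
# props.conf -- hypothetical sourcetype; adjust TIME_FORMAT to the actual
# timestamp layout at the start of each event.
[legacy_app:multiline]
# Anchor timestamp recognition to the start of the event.
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
# 19 chars = length of the format above; stops Splunk scanning into the XML body.
MAX_TIMESTAMP_LOOKAHEAD = 19
# Only break events where a line starts with a date.
BREAK_ONLY_BEFORE_DATE = true
```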
It's not working and not giving the required output. Below is my sample query: index=abc laas_appId=XYZ source="/opt/var/directory/sample.csv" | dedup _raw | table ServerName,Status | eval FinalStatus = if(Status="Yes", 1, 0) | eventstats min(FinalStatus) as FinalStatus by ServerName | eval FinalStatus = if(FinalStatus=1, "Yes", "No") | table ServerName, FinalStatus
Hi, could you please add a troubleshooting description for the app? We just installed it and unfortunately can't configure it; the page immediately returns an HTTP 500, e.g.   "GET /en-US/splunkd/__raw/servicesNS/nobody/ta-mdi-health-splunk/TA_microsoft_graph_security_add_on_for_splunk_microsoft_graph_security?output_mode=json&count=-1 HTTP/1.1" 500 303 "-" "   Thanks
Hi @Steve.Williams, Thanks for asking your question on the Community. I found this doc that mentions this is supported. https://docs.appdynamics.com/appd/24.x/24.8/en/application-monitoring/install-app-server-agents/java-agent/java-supported-environments It was suggested you contact AppDynamics Support for this issue. AppDynamics is migrating our Support case handling system to Cisco Support Case Manager (SCM). Read on to learn how to manage your cases.  If you do contact Support and find a resolution, can you share your learnings as a reply to this post, please?
I'm curious about this value. "reason": "unrecognized character follows \\", Since the \\ is a literal escape, is it reading the remainder of the message as text until the next naturally occurring " on its own? Can you try changing the "\\" in the text portion of the message to "escape character set"?
As I am learning with Write-Once-Read-Many (WORM), there are situations where the buckets/tsidx files are re-uploaded when the indexers have hiccups during an upload. https://community.splunk.com/t5/Deployment-Architecture/smartstore-splunk-smartstore-and-Data-integrity/m-p/506769
Try this inside the quotes   Crestron Package Firmware version :(?<CCSFirmware>[^\s]+)
Once the base search runs with the filtered status, the results are all that is left over. You need to isolate your inputs source from your results query. In this case, two or more base searches are needed. Things I have done/learned while doing this:
- tstats search commands are much faster, especially when pulling single fields; use this if you can
- inputs have limits on displaying unique values; enable search and wildcard options for long lists, and never go over 1,000 unique values, if I recall correctly
Hello, I am currently working on a project that involves integrating Splunk with Azure Virtual Desktop (AVD). Could you please provide me with any available documentation or resources that detail the process or best practices for this integration? Any guidance or links to relevant materials would be greatly appreciated. Thank you in advance for your assistance. Best regards,