All Posts

Hi Team, how can I replace 'no results found' with a 0 value in a chosen color on a Splunk dashboard? I know that appending the below replaces 'no results found' with a 0 value:

| appendpipe [stats count | where count=0]

But the 0 is displayed in red, and I want it in green, even though I have changed Format Visualization --> Color range to 0-5 as Green and 5-max as Red. Could you please let me know how I can get a green 0 when there is 'no results found'?
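One approach to try (a sketch, not verified against your exact panel; the base search and the 0-4/5+ boundaries are placeholders): have rangemap assign the severity explicitly, so the appended 0 is classed as low (green) instead of inheriting the default severe color.

<your base search>
| stats count
| appendpipe [stats count | where count=0]
| rangemap field=count low=0-4 severe=5-1000000 default=severe

Depending on your Splunk version, the classic single value visualization colors by the range field that rangemap produces; if your panel ignores it, double-check that Format Visualization has "Color by" set to Value so the 0-5 green range you configured applies.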
This is a different question - you could modify your search to use something like Component IN ($componentselection$), but it depends on how your dashboard is set up.
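For example, something like this (a sketch; it assumes $componentselection$ is a multiselect input whose values are quoted and comma-separated via the input's value prefix/suffix and delimiter settings):

index=myIndex Component IN ($componentselection$) (Input OR Error)
| stats latest(eval(if(searchmatch("Error"),_time,null()))) as LastErroredTimestamp latest(eval(if(searchmatch("Input"),_time,null()))) as LastInputTimestamp by Component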
Hi Experts, I have a question regarding our Splunk dashboard. I want to show the logic of the calculation used in a single value panel. Specifically, I would like to display this information when a user hovers over the panel or clicks a question mark (?) or information (i) symbol. Is it possible to add this feature to a particular single value panel? Any guidance or examples would be greatly appreciated. Thank you
I know that REST calls don't cover the deployment server apps, as they are not memory resident. But is there any way to monitor the Deployment Server that saves the output somewhere, so we can monitor that output in Splunk?
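One option, sketched from memory (the /services/deployment/server/clients REST endpoint does exist on the deployment server itself, but the field names below may differ by version): schedule a search on the DS that snapshots the client list, then monitor the saved output.

| rest /services/deployment/server/clients splunk_server=local
| table hostname ip lastPhoneHomeTime
| outputlookup ds_clients.csv

Saving this as a scheduled search gives you a lookup (or you could write to a summary index instead) that other searches and alerts can then monitor.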
Splunk is all about time series data, so you can search data/events across various time ranges. This means you need well-formatted logs with a timestamp, which is what Splunk loves; it tries to break events based on the timestamp. Splunk can auto-detect most common log formats and timestamps, but relying on that is not best practice for custom logs; it's better to ensure you parse and format the timestamp explicitly. As you appear to have a custom log file, you will need to create a new sourcetype for it and apply props and transforms configuration, which will parse the events and ensure the timestamp is correct. First try to understand the props concepts and apply them to your log file; it will require some trial and error with the props settings until it works as expected. Start here: https://lantern.splunk.com/Splunk_Platform/Product_Tips/Data_Management/Improving_data_onboarding_with_props.conf_configurations
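To make that concrete, here is a minimal props.conf sketch for a custom sourcetype; the sourcetype name, time format, and lookahead are placeholders you would adapt to your actual log lines:

[my_custom_log]
# one event per line; don't merge lines back together
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
# timestamp is assumed to be at the start of each line
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19

This assumes one event per line with the timestamp first; multiline events need different LINE_BREAKER/linemerge settings.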
Hello there! To monitor Microsoft Hyper-V in a customer environment, I know and use the Hyper-V add-on for Splunk. However, the add-on does not include PowerShell scripts for monitoring Microsoft Hyper-V MS cluster and CSV (Cluster Shared Volumes) metrics and counters. Is anyone using any sort of monitoring or custom scripts/apps for MS cluster and CSV monitoring? Thanks
The forwarder is forwarding. The information is broken up in Splunk every time it comes across a line with a timestamp. A new event then starts after the timestamp line and continues until it hits another timestamp in the txt file.
get-brokersession is run via PowerShell and its output is sent to a txt file. The information is getting into Splunk; however, on every line that has a date and time in it, the event is terminated and a new event begins with the next line. Is there a way to have the txt file ingested into Splunk without it chopping up the file every time it comes to a timestamp in the log?
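One way to stop the timestamp-based breaking, as a props.conf sketch (the sourcetype name and the BREAK_ONLY_BEFORE pattern are assumptions; use whatever actually begins each get-brokersession record):

[broker_session]
# keep multiline records together instead of breaking per line
SHOULD_LINEMERGE = true
# hypothetical: assumes each record starts with a "Uid :" property line
BREAK_ONLY_BEFORE = ^Uid\s*:
# stamp events with index time; don't hunt for timestamps inside the text
DATETIME_CONFIG = CURRENT
MAX_EVENTS = 10000

DATETIME_CONFIG = CURRENT tells Splunk to use the ingest time rather than the dates inside the file, which is what is currently splitting your events.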
@ITWhisperer Thank you very much - you made my day by achieving the desired output. I would also like to pass Component as a dropdown, which could be 1, 2, or 3 comma-separated values such as AAAA, BBBB, CCCC, and for each component the output should display the Last Input Timestamp and Last Errored Timestamp:

Component | Last Input Timestamp | Last Errored Timestamp
AAAA      | 24-03-2024 12:23:23  | 24-03-2024 08:23:12
BBBB      | 23-03-2024 10:12:44  | 24-02-2024 05:45:22
CCCC      | 12-05-2024 11:01:00  | 04-05-2024 01:23:12

Any help to achieve this would be really appreciated!
Hello @gcusello, can you please share the same? I have a similar use case.
| stats latest(eval(if(searchmatch("Error"),_time,null()))) as LastErroredTimestamp latest(eval(if(searchmatch("Input"),_time,null()))) as LastInputTimestamp by Component
| fieldformat LastErroredTimestamp=strftime(LastErroredTimestamp,"%F %T")
| fieldformat LastInputTimestamp=strftime(LastInputTimestamp,"%F %T")
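For context, a sketch of how this pipes off the base search from the question below, assuming a Component field is already extracted from the events:

index=myIndex ABCD (Input OR Error)
| stats latest(eval(if(searchmatch("Error"),_time,null()))) as LastErroredTimestamp latest(eval(if(searchmatch("Input"),_time,null()))) as LastInputTimestamp by Component
| fieldformat LastErroredTimestamp=strftime(LastErroredTimestamp,"%F %T")
| fieldformat LastInputTimestamp=strftime(LastInputTimestamp,"%F %T")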
Thanks a lot!! It worked. Great help.
Hi, I would like to get the latest record for a search, or for multiple search combinations. For example, if my search is as below:

index=myIndex ABCD AND (Input OR Error)

I am expecting output in the following table format:

Component | Last Input Timestamp | Last Errored Timestamp
ABCD      | 24-03-2024 12:23:23  | 24-03-2024 08:23:12

The search should fetch the timestamps of the latest log events for (ABCD and Input) and for (ABCD and Error).
The specs listed are the *minimums* specified by Splunk. What are the *actual* specs of the DS? Which version of Splunk is the DS running? How many apps are in the deployment-apps directory? Splunk does not use /root to store anything, and best practice is to put $SPLUNK_HOME and $SPLUNK_DB in separate mount points not shared with the OS. Have you run du to see which files/directories are using the most storage?
The datamodel is looking for specific values in the instanceId field; however, the screenshot does not show that an instanceId field exists in the data. Therefore, the DM will return no results and the dashboard will show nothing.
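A quick sanity check, as a sketch (substitute your actual index and sourcetype):

index=your_index sourcetype=your_sourcetype
| stats count by instanceId

If that returns no rows, the field is not being extracted at search time, so the datamodel constraint can never match.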
I guess the question can be broad, but I am coming from the following scenario: I am using the Splunk app, which has been configured and connection-tested successfully in SOAR. Recently, something happened that I did not expect: the credentials to Splunk were rejected, and the "run query" action returned the expected message "Unauthorized Access (401)". But the action terminated there and did not continue with the rest of the playbook. I have another app action for Ansible Tower that runs an (Ansible) playbook (the action name is "run job"); if the Ansible playbook fails, the action in Splunk SOAR is marked as FAILED, but the SOAR playbook continues otherwise. I can't tell what the difference is between these two actions that allows one to continue but the other to halt the SOAR playbook's progression. Any advice is appreciated.
They are coming into the HF through a syslog UDP port.
Thank you so much! This worked beautifully. I have been trying to wrap my head around this for such a long time, it's so nice to see an outcome.  Really appreciate your help 
I can't add any background images to a dashboard created in Dashboard Studio, and I presume it is because my role is missing the correct capability. I am trying to find information on which capability I need, but I could not find anything. ChatGPT answered that there is a capability called "edit_visualizations", but I could not find any info about that. Can someone help me identify the correct capability linked to adding a background image to a Dashboard Studio dashboard? Thanks in advance, Paul
Well, all of our servers are running 9.2.2 and all of our Universal Forwarders are running 9.2.1 or 9.2.2 and we are still seeing this log message. EDIT 2024-07-23:  Never mind.  Closer inspection of the logs shows that they are working correctly in 9.2.1 and 9.2.2.  The messages with the crazy high numbers are from older systems.  The newer ones still report a number, but none larger than 1,000,000 (most around 512 kB).