
All Posts

Yes. But are those results of some searches that you want to "merge", or do you simply have two different sourcetypes from which different sets of fields are extracted? If it's the latter, your solution should be relatively simple:

<some restriction on index(es)> sourcetype IN (sourcetype1, sourcetype2)
| stats values(colA) as colA values(colB) as colB values(col1) as col1 values(col2) as col2 [...] by common_column

If you want all columns, you might simply go with values(*) as *
Hi @Sankar , do you want to display the urgency of each search, or to filter results by urgency? In the first case:

index=notable
| stats values(urgency) AS urgency count BY search_name

In the second case (to have only notables with urgency=high):

index=notable urgency=high
| stats count BY search_name

Let me know if I can help you more, or, please, accept one answer for the other people of the Community. Ciao and happy splunking. Giuseppe. P.S.: Karma Points are appreciated.
We have a SH cluster of 3 search heads, where the Enterprise Security notables are not the same on all 3 members. Furthermore, when we check the internal data for the last 15 minutes, one member varies by a significant number of events (5K to 10K) compared to the other 2 SH members.
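To quantify the skew, one simple check (a minimal sketch; it assumes internal logs are searchable from every member) is to run the same search on each search head and compare the totals:

index=_internal earliest=-15m
| stats count by host

If the same search returns materially different counts depending on which member runs it, that points at a search-layer problem (for example, a member not seeing all search peers) rather than at the data itself.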
Check mongod.log under $SPLUNK_HOME/var/log/splunk/
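Those KV store logs are usually also indexed internally, so a search like this can work too (a sketch; the sourcetype name is what I'd expect for that file, so verify it in your environment):

index=_internal sourcetype=mongod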
Hello, as best practice you should create and deploy an app from the deployment server with your inputs.conf and script. Also make sure you include a valid timestamp at the beginning of the output in US format. Follow these instructions: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/scriptedinputsexample/
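A minimal sketch of what such an app could look like on the deployment server (the app and script names here are hypothetical):

etc/deployment-apps/svc_monitor/bin/check_service.sh
etc/deployment-apps/svc_monitor/local/inputs.conf

with an inputs.conf along these lines:

[script://./bin/check_service.sh]
interval = 60
index = services
sourcetype = service_status
disabled = false

Using a path relative to the app keeps the input valid wherever the deployment server installs it.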
Ah ok. I changed the definition to the below. It's still not working; the time picker is ignoring the time. Anything else I should do?
Solution from support : "Yes, it is still recommended to use the Deployment Server for centralized management and consistency across Heavy Forwarders. However, if local customizations are required, ensure those changes are synced back to the DS (etc/deployment-apps/<app_name>/local) to prevent overwrites. Alternatively, use 'excludeFromUpdate' in serverclass.conf to protect specific files or directories. For better scalability, avoid making direct changes on HFs and manage all configurations via the DS whenever possible."
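For reference, a minimal sketch of the excludeFromUpdate approach mentioned above (the server class and app names are hypothetical):

# serverclass.conf on the deployment server
[serverClass:hf_serverclass:app:my_hf_app]
excludeFromUpdate = $app_root$/local

This should leave the app's local directory alone when the deployment server pushes a new version of the app.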
Please don't tag uninvolved users.  If someone has an answer, they'll respond.
Hi @chrisboy68 , the time picker works with events, not with lookups. If you need to use time with a lookup, use a lookup with "Configure time-based lookup" in the Lookup Definition or, better, save the values in an index. Ciao. Giuseppe
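For completeness, a minimal sketch of what a time-based lookup definition maps to in transforms.conf (the stanza, collection, and field names are taken from the question below and may need adjusting):

[kvstore]
external_type = kvstore
collection = kvstore
fields_list = _key, start_date
time_field = start_date
time_format = %Y-%m-%d

With time_field set, Splunk applies time bounds when the lookup is matched against events; whether | inputlookup itself honors the picker is a separate question, so it is worth testing both paths.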
Hi, I'm struggling trying to figure out what I'm doing wrong. I have the following SPL:

| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")

start_date is YYYY-MM-DD. When I modify _time, I can see it has changed via readable_time, but the time picker still ignores the change. I can search the last 30 days and I get events with _time before the range in the time picker. Any ideas? Thanks!
@kiran_panchavat , @PickleRick , @ITWhisperer , @isoutamo , @bowesmana 
I have a base query which yields the field result; result can be either "Pass" or "Fail". A sample query result is attached. How can I create a column chart with the counts of passes and fails as different color columns? Here is my current search, which yields a column chart with two columns of the same color:

index="sampleindex" source="samplesource"
| search test_name="IR Test"
| search serial_number="TC-7"
| spath result
| stats count by result
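One way to get separate colors (a minimal sketch, reusing the index, source, and field names from the search above) is to pivot result into series, since the chart assigns one color per series and stats count by result produces only the single series "count":

index="sampleindex" source="samplesource" test_name="IR Test" serial_number="TC-7"
| spath result
| chart count over test_name by result

Here each value of result ("Pass", "Fail") becomes its own series and gets its own color in the column chart.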
Hello, I'm using Splunk's ingest actions to aggregate logs and have created a destination and ruleset to forward copies to my S3 bucket, while sending filtered data to Splunk indexers. This setup is running on a Splunk Heavy Forwarder (HF), which receives logs on port 9997 from a syslog collector that gathers data from various sources.

With the ingest actions feature, I'm limited to setting up a single sourcetype (possibly "syslog") and writing rules to filter and direct data to different indexes based on the device type. However, I also want to separate the data based on sourcetypes. I'm currently stuck on how to achieve this. Has anyone tried a similar solution or have any advice?
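One approach that may be worth testing (a sketch only; the match pattern and sourcetype below are hypothetical, and you would need one transform per device family) is to override the sourcetype with classic props/transforms on the HF, so that events carry a meaningful sourcetype before your rulesets route them:

# props.conf
[syslog]
TRANSFORMS-set_sourcetype = set_st_cisco_asa

# transforms.conf
[set_st_cisco_asa]
REGEX = %ASA-\d-\d+
FORMAT = sourcetype::cisco:asa
DEST_KEY = MetaData:Sourcetype

The ingest actions rulesets could then key off the rewritten sourcetypes instead of a single catch-all "syslog".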
@dural_yyz, @amahoski , @gcusello 
Thanks @gcusello, it's working. I want to filter each alert based on Urgency, like (High, Medium, Low, Informational). I tried the below query but it's not working:

| fields Title Urgency
| table Title Urgency
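Note that fields and table only choose which columns to display; they never filter rows. A minimal sketch of filtering on urgency instead (field names follow the earlier notable searches in this thread and may need adjusting to your data):

index=notable urgency IN ("high", "medium", "low", "informational")
| table search_name urgency

Replace the IN list with just the urgencies you want to keep.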
I created .sh scripts that do the following:

#!/bin/bash

# Name of the service to monitor
SERVICE_NAME="tomcat9"

# Check if the service is running
SERVICE_STATUS=$(systemctl is-active "$SERVICE_NAME.service")

# Output status for Splunk
if [ "$SERVICE_STATUS" == "active" ]; then
    echo "$(date): Service $SERVICE_NAME is running."
else
    echo "$(date): Service $SERVICE_NAME is NOT running."
fi

The above is obviously what I'm using for Tomcat, but I have others all doing the same thing, just with different service names. These scripts reside in /opt/splunkforwarder/bin/scripts. Additionally, I have configured these scripts to be run in /opt/splunkforwarder/etc/system/local/inputs.conf; an example of what that looks like is below:

[script:///opt/splunkforwarder/bin/scripts/monitor_service_<service_name>.sh]
disabled = false
interval = 60
index = services
sourcetype = service_status

As you can see, I have also configured the following:

index = services
sourcetype = service_status

These are also configured in Splunk Enterprise respectively, and the index is configured for Search; in Linux, splunk is the owner and the group is also splunk. Additionally, all of the scripts are executable and run successfully when I test them. However, none of this data seems to be passed from the forwarder, as none of the expected data is returned, including the recognition of the index and sourcetype in Search. I have attached a screen capture of splunkd.log showing the scripts as being recognized.
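A quick way to verify execution from Splunk's side (a sketch; it assumes the forwarder ships its internal logs, and <forwarder_host> is a placeholder) is to look at the ExecProcessor events for these scripts:

index=_internal host=<forwarder_host> sourcetype=splunkd ExecProcessor monitor_service

ExecProcessor logs each script invocation and anything the script writes to stderr. If those lines appear but nothing lands in index=services, check that the services index actually exists on the indexers, since by default data sent to a nonexistent index is dropped.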
Hi, We recently upgraded the Splunk environment to 9.2.4, and we have some apps that are using Python 2.7; we are in the process of upgrading those apps in the next 3 months. I noticed that there are errors related to "splunk/bin/jp.py present_but_shouldnt_be, /splunk/bin/python2.7 present_but_shouldnt_be". Since we also use Python 2.7, we do not want to delete these files in the bin. I want to understand whether there is any way that we can suppress these messages during the integrity check.
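One possibility (a sketch; verify the setting and its implications for your version, since it relaxes the whole check rather than whitelisting individual files) is the installed-files integrity setting in server.conf:

[install]
installed_files_integrity = log_only

As I understand it, log_only keeps recording the findings in the logs without raising bulletin messages, and disabled turns the check off entirely; I'm not aware of a supported way to suppress messages for specific files only.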
Hi @isoutamo, I am currently using the Splunk ingest actions feature to route the logs to an S3 bucket, and it doesn't have the capability to include <host>:<original sourcetype> for the events. Thank you for taking the time to reply to my query.
Hi @VatsalJagani, yes, I raised a case with Splunk support and they confirmed they do not have such a capability in place; I advised them to add it to their future enhancements list. I hope this will be considered. Appreciate your response.
Hi @Sankar, Correlation Searches in ES write triggered alerts to the notable index. You can search this index and create a statistic by search_name:

index=notable
| stats count BY search_name

Ciao. Giuseppe