All Posts

Hello, as best practice you should create and deploy an app from the deployment server with your inputs.conf and script. Also make sure you include a valid timestamp at the beginning of the output, in US format. Follow these instructions: https://dev.splunk.com/enterprise/docs/developapps/manageknowledge/custominputs/scriptedinputsexample/
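A minimal sketch of such a deployment app, assuming an app named monitor_app and a script named check.sh (both names are hypothetical):

etc/deployment-apps/monitor_app/default/inputs.conf:

[script://./bin/check.sh]
disabled = false
interval = 60
index = main
sourcetype = my:script:output

etc/deployment-apps/monitor_app/bin/check.sh:

#!/bin/bash
# Emit a US-format timestamp (MM/DD/YYYY HH:MM:SS) at the start of the line
# so Splunk can extract _time without extra props.conf settings.
echo "$(date '+%m/%d/%Y %H:%M:%S') service check output goes here"

The relative [script://./bin/...] path resolves inside the app, so the same stanza works on every forwarder the app is deployed to.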
Ah ok. I changed the definition to the one below. It's still not working; the time picker is ignoring the time. Anything else I should do?
Solution from support: "Yes, it is still recommended to use the Deployment Server for centralized management and consistency across Heavy Forwarders. However, if local customizations are required, ensure those changes are synced back to the DS (etc/deployment-apps/<app_name>/local) to prevent overwrites. Alternatively, use 'excludeFromUpdate' in serverclass.conf to protect specific files or directories. For better scalability, avoid making direct changes on HFs and manage all configurations via the DS whenever possible."
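For reference, a minimal serverclass.conf sketch of the excludeFromUpdate approach, assuming purely illustrative server class and app names:

[serverClass:hf_class:app:my_hf_app]
# Keep locally customized files from being overwritten on the next deployment
excludeFromUpdate = $app_root$/local

excludeFromUpdate takes a comma-separated list of paths and accepts the $app_root$ variable, so the same line can also protect lookups or other directories.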
Please don't tag uninvolved users.  If someone has an answer, they'll respond.
Hi @chrisboy68, the time picker works with events, not with lookups. If you need to use time with a lookup, use a lookup with "Configure time-based lookup" enabled in the Lookup Definition, or better, save the values in an index. Ciao. Giuseppe
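For completeness, a time-based lookup is configured in transforms.conf; a minimal sketch, assuming a CSV lookup keyed on a start_date column (all names are hypothetical):

[my_time_lookup]
filename = my_lookup.csv
time_field = start_date
time_format = %Y-%m-%d
max_offset_secs = 86400

time_field tells Splunk which lookup column holds the time, and time_format is the strptime pattern used to parse it.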
Hi, I'm struggling to figure out what I'm doing wrong. I have the following SPL:

| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| eval readable_time = strftime(_time, "%Y-%m-%d %H:%M:%S")

start_date is YYYY-MM-DD. When I modify _time, I can see it has changed via readable_time, but the time picker still ignores the change. I can search the last 30 days and I get events with _time before the range in the time picker. Any ideas? Thanks!
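One likely explanation: the time picker constrains only events retrieved from an index; inputlookup generates its rows outside that window, so a _time assigned with eval is never filtered automatically. A sketch that applies the picker's range manually via addinfo, assuming the same kvstore lookup:

| inputlookup append=t kvstore
| eval _time = strptime(start_date, "%Y-%m-%d")
| addinfo
| where _time >= info_min_time AND (_time <= info_max_time OR info_max_time == "+Infinity")
| fields - info_min_time info_max_time info_search_time info_sid

addinfo attaches the search's time bounds as info_min_time and info_max_time (the latter is "+Infinity" for all-time searches), so the where clause honors whatever the time picker is set to.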
@kiran_panchavat , @PickleRick , @ITWhisperer , @isoutamo , @bowesmana 
I have a base query which yields the field result; result can be either "Pass" or "Fail". A sample query result is attached. How can I create a column chart with the count of passes and fails as different color columns? Here is my current search, which yields a column chart with two columns of the same color:

index="sampleindex" source="samplesource"
| search test_name="IR Test"
| search serial_number="TC-7"
| spath result
| stats count by result
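One common trick, sketched against the same search: split the count by result as well, so each result value becomes its own series and gets its own color in the column chart:

index="sampleindex" source="samplesource" test_name="IR Test" serial_number="TC-7"
| spath result
| chart count over result by result

With stats count by result, both columns belong to the single "count" series and share one color; splitting by result gives Pass and Fail separate series. Alternatively, series colors can be hard-coded via the charting.fieldColors option in the dashboard XML.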
Hello,   I'm using Splunk's ingest actions to aggregate logs and have created a destination and ruleset to forward copies to my S3 bucket, while sending filtered data to Splunk indexers. This setup is running on a Splunk Heavy Forwarder (HF), which receives logs on port 9997 from a syslog collector that gathers data from various sources. With the ingest actions feature, I'm limited to setting up a single sourcetype (possibly "syslog") and writing rules to filter and direct data to different indexes based on the device type. However, I also want to separate the data based on sourcetypes. I'm currently stuck on how to achieve this. Has anyone tried a similar solution or have any advice?
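One approach that may help, sketched under the assumption that everything arrives on the HF with sourcetype syslog: rewrite the sourcetype at parse time with props/transforms keyed on a per-device pattern, so the corrected sourcetype is available downstream (the stanza names and regex are hypothetical):

props.conf:

[syslog]
TRANSFORMS-set_st = st_cisco_asa

transforms.conf:

[st_cisco_asa]
REGEX = %ASA-\d-\d+
FORMAT = sourcetype::cisco:asa
DEST_KEY = MetaData:Sourcetype

One transforms stanza per device family; events that match are re-sourcetyped and everything else stays syslog. Note this only applies where the events are first parsed, i.e. on the HF itself.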
@dural_yyz, @amahoski , @gcusello 
Thanks @gcusello, it's working. I want to filter each alert based on Urgency, e.g. High, Medium, Low, Informational. I tried the query below but it's not working:

| fields Title Urgency
| table Title Urgency
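fields and table only choose which columns are displayed; they don't filter rows. A sketch of filtering on the Urgency value, assuming the field is extracted as Urgency:

... | search Urgency="High"
| table Title Urgency

or, to keep several levels at once:

... | where Urgency IN ("High", "Medium")
| table Title Urgency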
I created .sh scripts that do the following:

#!/bin/bash

# Name of the service to monitor
SERVICE_NAME="tomcat9"

# Check if the service is running
SERVICE_STATUS=$(systemctl is-active "$SERVICE_NAME.service")

# Output status for Splunk
if [ "$SERVICE_STATUS" == "active" ]; then
    echo "$(date): Service $SERVICE_NAME is running."
else
    echo "$(date): Service $SERVICE_NAME is NOT running."
fi

The above is obviously what I'm using for Tomcat, but I have others all doing the same thing with different service names. These scripts reside in /opt/splunkforwarder/bin/scripts. I have also configured these scripts to be run in /opt/splunkforwarder/etc/system/local/inputs.conf; an example of what that looks like is below:

[script:///opt/splunkforwarder/bin/scripts/monitor_service_<service_name>.sh]
disabled = false
interval = 60
index = services
sourcetype = service_status

As you can see, I have also configured:

index = services
sourcetype = service_status

These are configured in Splunk Enterprise respectively, and the index is enabled for search; on Linux, splunk is the owner and the group is also splunk. All of the scripts are executable and run successfully when I test them, however none of this data seems to be passed from the forwarder: none of the expected data is returned, including recognition of the index and sourcetype in Search. I have attached a screen capture of splunkd.log showing the scripts as being recognized.
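A first step that may help narrow this down: scripted-input problems on a forwarder are logged by ExecProcessor in splunkd.log, and that log is forwarded to the _internal index. A sketch of two checks from the search head (<forwarder_host> is a placeholder):

index=_internal host=<forwarder_host> source=*splunkd.log* ExecProcessor monitor_service

index=_internal host=<forwarder_host> source=*splunkd.log* log_level=ERROR

If ExecProcessor shows the scripts executing but nothing arrives in index=services, one frequent cause is that the services index exists on the search head but was never created on the indexers that actually receive the forwarder's data.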
Hi, we recently upgraded the Splunk environment to 9.2.4, and some of our apps still use Python 2.7; we are in the process of upgrading those apps over the next 3 months. I noticed there are errors related to "splunk/bin/jp.py present_but_shouldnt_be, /splunk/bin/python2.7 present_but_shouldnt_be". Since we still use Python 2.7, we do not want to delete these files from bin. I want to understand whether there is any way to suppress these messages during the integrity check.
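I am not aware of a supported way to suppress the check for individual files, but the installed-files integrity check itself can be turned off; a sketch, to be verified against limits.conf.spec for your version before use:

limits.conf (e.g. in $SPLUNK_HOME/etc/system/local):

[installed_files_integrity]
validate_installed_files = false

Be aware that disabling it also silences legitimate warnings about genuinely modified Splunk binaries, so it trades noise for coverage.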
Hi @isoutamo, I am currently using the Splunk ingest actions feature to route the logs to an S3 bucket, and it doesn't have the capability to include <host>:<original sourcetype> for the events. Thank you for taking the time to reply to my query.
Hi @VatsalJagani, yes, I raised a case with Splunk support and they confirmed they do not have such a capability in place; I advised them to add it to their future enhancements list. I hope this will be considered. Appreciate your response.
Hi @Sankar, Correlation Searches in ES write triggered alerts to the notable index. You can search this index and build a statistic by search_name:

index=notable
| stats count BY search_name

Ciao. Giuseppe
Hello, team. I've made a script which uses the sudo command. I've deployed it on my forwarders and I get this error:

message from "/opt/splunkforwarder/etc/apps/app/bin/script.sh" sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

Please help to fix this issue.
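A quick diagnostic that may help, run on one affected forwarder: check whether the filesystem holding sudo is in fact mounted nosuid, which is exactly what the error suggests:

# Show the mount point and options for the filesystem containing sudo
findmnt -no TARGET,OPTIONS -T /usr/bin/sudo

# If OPTIONS includes 'nosuid', remounting with suid (as root) is one fix:
# mount -o remount,suid <mountpoint_from_above>

Independently of the mount options, a non-interactive scripted input will also need a sudoers entry (added via visudo) granting the splunk user the specific command with NOPASSWD.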
Hi @varsh_6_8_6, in this case, please try:

index="xyz" host="*" "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| appendpipe
    [ stats count
    | eval messagevalue="No File Found"
    | where count==0
    | fields - count ]

Ciao. Giuseppe
Hi! I recently wanted to test sending traces using the signalfx splunk-otel-collector. In general everything works as expected; however, when sending spans containing links to other spans, these links don't show up in the waterfall UI, even though they should be working according to the documentation. When downloading the trace data, span links are not mentioned at all. The (debug) logs of the splunk-otel-collector don't seem to mention any errors or abnormalities either. The following example shows my test span. It should be linking to two other spans, but it doesn't show up as such. Additionally, I tested using Jaeger All-in-One, and there the span links show up properly. I am thankful for any hints you can provide that might help me debug this problem.
Are those tables individual sourcetypes in an index or results of your SPL queries? If the latter, can you share the search so we can modify it to produce your requested result?