All Posts

Thanks @gcusello, it's working. I want to filter each alert based on Urgency (High, Medium, Low, Informational). I tried the query below, but it's not working:

| fields Title Urgency
| table Title Urgency
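One common way to filter on specific urgencies is a `search` (or `where`) clause after the table; a sketch, assuming Urgency is already an extracted field (note that `fields`/`table` only select columns, they do not filter events):

```
| table Title Urgency
| search Urgency IN ("High", "Medium", "Low", "Informational")
```

Replacing the IN list with a single value, e.g. `Urgency="High"`, restricts the results to one urgency at a time.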
I created .sh scripts that do the following:

#!/bin/bash
# Name of the service to monitor
SERVICE_NAME="tomcat9"

# Check if the service is running
SERVICE_STATUS=$(systemctl is-active "$SERVICE_NAME.service")

# Output status for Splunk
if [ "$SERVICE_STATUS" == "active" ]; then
  echo "$(date): Service $SERVICE_NAME is running."
else
  echo "$(date): Service $SERVICE_NAME is NOT running."
fi

The above is obviously what I'm using for Tomcat, but I have others all doing the same thing, just with different service names. These scripts reside in /opt/splunkforwarder/bin/scripts. Additionally, I have configured these scripts to run via /opt/splunkforwarder/etc/system/local/inputs.conf; an example of what that looks like is below:

[script:///opt/splunkforwarder/bin/scripts/monitor_service_<service_name>.sh]
disabled = false
interval = 60
index = services
sourcetype = service_status

As you can see, I have also configured the following:

index = services
sourcetype = service_status

These are configured in Splunk Enterprise as well, and the index is enabled for search. On Linux, splunk is the owner and group of the scripts. All of the scripts are executable and run successfully when I test them; however, none of this data seems to be passed from the forwarder, as none of the expected data is returned in Search, including recognition of the index and sourcetype. I have attached a screen capture of splunkd.log showing the scripts as being recognized.
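When a scripted input runs but nothing appears in Search, two checks that often help are searching the target index over a wide time range (scripted output that begins with a `date` string can be timestamped unexpectedly) and looking for ExecProcessor messages in the forwarder's internal logs. A sketch, using the index and sourcetype names from the post above:

```
index=services sourcetype=service_status earliest=-7d
| stats count BY host, source
```

```
index=_internal sourcetype=splunkd component=ExecProcessor log_level=ERROR
| table _time host message
```

If the second search returns errors mentioning the script path, the forwarder is running the script but failing to ingest its output.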
Hi, We recently upgraded our Splunk environment to 9.2.4. Some of our apps still use Python 2.7, and we are in the process of upgrading those apps over the next 3 months. I noticed integrity-check errors such as "splunk/bin/jp.py present_but_shouldnt_be, /splunk/bin/python2.7 present_but_shouldnt_be". Since we still use Python 2.7, we do not want to delete these files from bin. I want to understand whether there is any way to suppress these messages during the integrity check.
Hi @isoutamo, I am currently using the Splunk ingest actions feature to route logs to an S3 bucket, and it doesn't have the capability to include <host>:<original sourcetype> in the events. Thank you for taking the time to reply to my query.
Hi @VatsalJagani, Yes, I raised a case with Splunk support and they confirmed they do not have such a capability in place; I advised them to add it to their future enhancements list. I hope this will be considered. Appreciate your response.
Hi @Sankar, Correlation searches in ES write triggered alerts to the notable index. You can search this index and build a statistic per search_name:

index=notable
| stats count BY search_name

Ciao. Giuseppe
Hello team, I've made a script which uses the sudo command. I've deployed it on my forwarders and I get the error:

message from "/opt/splunkforwarder/etc/apps/app/bin/script.sh" sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

Please help me fix this issue.
Hi @varsh_6_8_6, in this case, please try:

index="xyz" host="*" "total payment count :"
| eval messagevalue=mvindex(split(messagevalue,":"),1)
| appendpipe
    [ stats count
    | eval messagevalue="No File Found"
    | where count==0
    | fields - count ]

Ciao. Giuseppe
Hi! I recently wanted to test sending traces using the SignalFx splunk-otel-collector. In general everything works as expected; however, when sending spans containing links to other spans, these links don't show up in the waterfall UI, even though they should be working according to the documentation. When downloading the trace data, span links are not mentioned at all. The (debug) logs of the splunk-otel-collector don't seem to mention any errors or abnormalities either. The following example shows my test span. It should be linking to two other spans, but it doesn't show up as such. Additionally, I tested using Jaeger all-in-one, and there the span links show up properly. I am thankful for any hints you can provide that might help me debug this problem.
Are those tables individual sourcetypes in an index, or results of your SPL queries? If the latter, can you share them so we can modify them to create your requested result?
They are actually results coming from different event types. Each event contains different fields.  
We have 100+ use cases onboarded into Splunk ES, and we are receiving alerts from a few of them, but I want to know the exact counts: how many use cases are onboarded into Splunk, and of those, how many have triggered alerts? Any guidance is much appreciated.
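One starting point for comparing onboarded versus triggered use cases is the saved-searches REST endpoint alongside the notable index. A sketch (`action.correlationsearch.enabled` is how ES flags correlation searches in recent versions, but verify the attribute name and your search-head scope in your environment):

```
| rest /services/saved/searches splunk_server=local
| search action.correlationsearch.enabled=1
| stats count AS onboarded_use_cases
```

```
index=notable earliest=-30d
| stats dc(search_name) AS use_cases_that_triggered
```

Comparing the two numbers shows how many onboarded searches have never fired in the chosen window.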
OK. It seems that the Field Extractor only creates inline extractions. If you want to create transform-based extractions, you need to do so from the Settings menu: Settings -> Fields -> Field transformations. There you can create a new transform, with the option to check "create multivalued fields". Then you can use the transform created there to create an extraction in Settings -> Fields -> Field extractions.
@VatsalJagani Thanks for your response. In parallel to this blog post, I had already created the ticket and had some really good conversations with Splunk support. They confirmed that this is indeed a bug. It will be fixed in the upcoming 9.3.x and 9.4.x versions. Appreciate your reply.
What do you mean by "table"? There are several different possible approaches depending on where those "tables" come from.
Gotcha. I'll admit, I hope you're mistaken and the Field Extractor can properly extract multivalue fields... I say this because I just use Splunk. Unfortunately, I don't have any access to the actual conf files on the server beyond what can be edited in the Web UI.
The field extractor is a feature which admittedly looks good and is a "selling feature": you can show a potential customer that you don't have to be a master of regexes to extract fields from data. And it might be useful if you have a Splunk Free instance at home processing negligible amounts of data, where it doesn't matter how "pretty" and efficient the resulting extractions are. But of course it doesn't cover all possible use cases, like your multivalue fields or a tokenizer. I'd have to double-check, but you might be able to reach more advanced settings either by directly editing transforms in the field extractions section of the configuration menu or via the "all configurations" section.
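For reference, a transform-based multivalue extraction in the .conf files looks roughly like this (a hypothetical sketch: the stanza, field, and sourcetype names are made up; `MV_ADD` is the setting that makes repeated regex matches produce a multivalue field):

```
# transforms.conf
[my_mv_extraction]
REGEX = item=(\w+)
FORMAT = item_id::$1
MV_ADD = true

# props.conf
[my:sourcetype]
REPORT-item_ids = my_mv_extraction
```

This is exactly what the Settings -> Fields -> Field transformations UI writes behind the scenes, so it can be built there too if there is no file access.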
Hi @gcusello, Thank you for the inputs. I have voted for the idea, which is essential. Also, I have both numbers and strings; the one you mentioned worked perfectly for the numbers. Is there any way to display "No files found" in case there are no recent events in a particular time range? Regards, Varsh
It's difficult to answer without a detailed review of the system, but based on your comment it seems to be an upgrade issue. I would recommend opening a case with Splunk support; they should be able to help you fix the issue.
Hi all, I have the following issue. I have a table A:

col1  col2
A     aa
B     bb
C     aa

And a table B:

colA  colB
aa    FYI
bb    LOL

I need to add column colB from table B to table A, based on the matching values between col2 (table A) and colA (table B), so it should look like:

col1  colB  col2
A     FYI   aa
B     LOL   bb
C     FYI   aa

So basically, match the values in col2 against colA and add colB based on the matches. Thanks for your support,
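If table B can be saved as a lookup, the `lookup` command is the usual answer; otherwise a subsearch `join` on the shared values works. A sketch, where `<table A search>` and `<table B search>` stand for whatever searches produce the two tables (the match is on col2/colA, since those columns hold the shared values):

```
<table A search>
| join type=left col2
    [ search <table B search>
    | rename colA AS col2 ]
| table col1 colB col2
```

The `rename` inside the subsearch aligns colA with col2 so the join key matches; `type=left` keeps every row of table A even if no match is found.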