All Posts
Hi @winter4, a question: do you want to forward data to an indexer or to an external system via syslog? I suppose you mean that you want to forward logs that you are receiving from UFs, syslog, or HEC through an HF, maintaining the original host, source, and sourcetype. What's your issue? If you're sending to an indexer, you have to use outputs.conf; source, host, and sourcetype are not overwritten by default and usually remain the original ones, unless you explicitly configure overwriting. If instead you have to send to a third party, it's different. Ciao. Giuseppe
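For the indexer case, a minimal outputs.conf sketch on the heavy forwarder (the group name, hostnames, and port are placeholders, not values from this thread):

```
# outputs.conf on the HF -- hypothetical group and server names
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# tcpout sends "cooked" data by default, which carries the original
# host, source, and sourcetype metadata along with each event
```

With this in place, no props/transforms changes are needed just to preserve the metadata; it travels with the cooked stream.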
Cannot communicate with task server, please check your settings.

The Task Server is currently unavailable. Please ensure it is started and listening on port 9998. See the documentation for more details.

We are getting the above errors while trying to connect with DB Connect 3.18.1. We are running Splunk 9.3.1.

I've tried uninstalling our OpenJDK and re-installing it, but am finding this:

splunk_app_db_connect# rpm -qa | grep java
tzdata-java-2024b-2.el9.noarch
javapackages-filesystem-6.0.0-4.el9.noarch
java-11-openjdk-headless-11.0.25.0.9-3.el9.x86_64
java-11-openjdk-11.0.25.0.9-3.el9.x86_64

DIR: /opt/splunk/etc/apps/splunk_app_db_connect
splunk_app_db_connect# java -version
openjdk version "11.0.25" 2024-10-15 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS, mixed mode, sharing)

One shows 9-1 and one shows 9-3.
Thanks, using the ACS CLI, I was able to deploy my app to my Splunk Cloud Platform instance.

For reference, here is a PowerShell code snippet to deploy such an app:

# Set up Splunk account for app validation with appinspect.splunk.com
$env:SPLUNK_USERNAME = "username@email.com"
$env:SPLUNK_PASSWORD = (Get-Credential -Message a -UserName a).GetNetworkCredential().Password
acs.exe config add-stack <nameofthestack> --target-sh <nameofsearchhead>
acs.exe config use-stack <nameofthestack> --target-sh <nameofsearchhead>
acs.exe login
acs.exe --verbose apps install private --acs-legal-ack Y --app-package .\path\to\my-custom-app-latest.tar.gz
Hi Team, I am looking for a way to forward data from my heavy forwarders to a different destination while maintaining the metadata (host, source, sourcetype). I have tried using the tcpout config in outputs.conf, but I do not see the metadata being transferred. The syslog config in outputs.conf does not work for me either.
You could use the advanced time picker and select earliest as "@d-3d" and latest as "@d". The @d aligns to the beginning of the current day, then the -3d goes back a further 3 days (usually 72h, but across daylight saving changes these may be slightly different). The same goes for the span, so try using 3d rather than 72h.
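As an illustration, the same window can be set directly in SPL instead of the time picker (the index name and the stats clause are placeholders):

```
index=main earliest=@d-3d latest=@d
| bin _time span=3d
| stats count by _time
```

Using the day-based 3d span rather than 72h lets the buckets follow calendar days across a DST change instead of drifting by an hour.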
rsyslog is a brand/flavour of application dedicated to the syslog message protocol and its handling. There are alternatives, the most popular likely being syslog-ng, so don't get caught up on the term rsyslog. https://www.rsyslog.com/doc/configuration/index.html

Configuring rsyslog, or any syslog daemon, for your environment can be easy, but planning to avoid gotcha moments requires some forethought. Separating technology and hosts is key to making Splunk ingestion much easier. A sample approach would be to have all inbound messages to the aggregator server written to a file structure such as:

/logs/<vendor>/<technology>/<host>/<filename.something>

For example (completely fabricated):

/logs/cisco/isa/127.0.0.1/authentication.log
/logs/cisco/isa/192.168.0.1/metrics.log

Have the logs rotate on a schedule (e.g. every 15 or 60 minutes) and remove files older than 'x' amount of time. How you do this will depend on the volume of logs written and the available storage. I've used three times the original file span as a working bias, but again your system may dictate that. I always keep some history: in case the UF goes offline for a short period of time, you can recover logs you might otherwise miss.

Once you have that in place, you need to follow the normal UF ingestion process, which I won't go through here, since your question was more about rsyslog than the UF, and this community board has far more UF answers than syslog-specific examples that are easily searched.
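A minimal rsyslog sketch of that per-host layout, assuming a plain UDP listener on port 514 (the file name, template name, and static vendor/technology path segments are illustrative, not a drop-in config):

```
# /etc/rsyslog.d/10-aggregator.conf -- hypothetical example
module(load="imudp")
input(type="imudp" port="514")

# One file per sending host; vendor/technology are static in this sketch
template(name="PerHostFile" type="string"
         string="/logs/cisco/isa/%FROMHOST-IP%/messages.log")

# Write every received message to the matching per-host file
action(type="omfile" dynaFile="PerHostFile")
```

The `dynaFile` action expands the template per message, so new hosts get their own directory automatically; pair it with logrotate for the rotation schedule described above.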
Hi @Karthikeya, you have to configure rsyslog using the documentation that you can find at https://www.rsyslog.com/doc/index.html

rsyslog writes the received syslog messages to files whose names are defined in the rsyslog configuration file. Usually part of the path is the hostname that sent the logs, so you can use it in the inputs.conf configuration.

What's your issue: how to configure rsyslog, how to configure the UF, or both? For rsyslog, I already sent the documentation; for the UF input, you can look at https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Usingforwardingagents and, in addition, there are many videos about this. Ciao. Giuseppe
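A minimal inputs.conf sketch for the UF on the rsyslog server, assuming the /logs/<vendor>/<technology>/<host>/ layout discussed above (index and sourcetype are placeholders):

```
# inputs.conf on the UF -- hypothetical values
[monitor:///logs/*/*/*/*.log]
index = network
sourcetype = syslog
# Take the 4th path segment (/logs/<vendor>/<technology>/<host>/...)
# as the event's host field
host_segment = 4
```

`host_segment` is what recovers the original sending host from the directory name, rather than stamping everything with the aggregator's hostname.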
That query finds *skipped* searches, not delayed ones. A delayed search runs late, but still runs, as opposed to a skipped search, which does not run at all (at that time).

index=_internal sourcetype=scheduler savedsearch_name=* status=deferred
| stats count BY savedsearch_name, app, user, reason
| sort -count
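For comparison, a sketch of the equivalent search for skipped (never-run) executions, assuming the same standard scheduler log fields:

```
index=_internal sourcetype=scheduler savedsearch_name=* status=skipped
| stats count BY savedsearch_name, app, user, reason
| sort -count
```

Running both side by side is a quick way to tell whether the scheduler is merely falling behind (deferred) or actually dropping runs (skipped).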
Hi @anna, as @PickleRick said, it seems to be a normal line chart. So you have to create your search, visualize it as a chart (choosing the Line Chart type), and then save it in a new dashboard. What's your issue? Ciao. Giuseppe
Please help me in configuring rsyslog to Splunk. Our rsyslog server will receive the logs from network devices, and our rsyslog server has a UF installed. I have no idea how to configure this, and what does rsyslog mean? Please help me with a step-by-step procedure for configuring this on our deployment server or indexer. Documentation will be highly appreciated.
Thanks for the response. The thing is, this alert should trigger once every day, and it should be dynamic, as the result keeps changing. Based on your comment, it looks like I need to redo this every time I have to send the reports:

'Once the done_sending_email.csv and list_all_emails.csv lookup tables are almost the same size (done_sending_email.csv will be +1 bigger if it has the filler value), then the emails have all been sent out. You can then disable the alert, or you can empty the done_sending_email.csv file if you'd like to send another wave of emails.'
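If emptying the lookup is the manual step you want to avoid, that reset can itself be an SPL search (and could be scheduled). A sketch, using the lookup name from the quoted comment; `head 0` drops all rows while keeping the field set, and `outputlookup` by default overwrites the file even when the result set is empty:

```
| inputlookup done_sending_email.csv
| head 0
| outputlookup done_sending_email.csv
```

This is a sketch, not a tested recipe: behaviour with empty result sets depends on the `override_if_empty` setting of `outputlookup` in your Splunk version, so verify on a throwaway copy of the lookup first.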
Hello Richgalloway, thank you for the feedback. I've managed to set my time window with the Uptime results. Now I have an issue with my span: I want to see _time and Uptime in seconds in one row only. I would like to achieve this by setting the time picker to last 3 days and setting my span to 72 hours, so that I have one row with all the results:

| bin span=72h _time

My oldest time should then always be 3 days back. But when I do this, my results also display times outside of the 3 days (see attachment). My oldest results should end on 18.11.24 in the morning, but instead it also shows results for 17.11.24. In this case, instead of one row I get 2 rows, which breaks my search idea, as I need to have only one row with the results. Why is that? Can you suggest anything? How exactly does span work?
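The likely cause is that `bin` aligns fixed spans to its own epoch-based boundaries, not to the time picker's earliest time, so a 72h bucket can begin before your search window and spill a second row into the results. A sketch using the `aligntime` option of `bin` to pin the buckets to the search window instead (the index name and the stats clause are placeholders):

```
index=main earliest=@d-3d latest=@d
| bin _time span=72h aligntime=earliest
| stats latest(Uptime) AS Uptime BY _time
```

With `aligntime=earliest`, the single 72h bucket starts exactly at @d-3d, so the whole window lands in one row.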
Sorry, I have access to the files.
It's under my username, with Admin privileges.
Which user does the scheduled search run as and do they have access to the lookup files?
| eventstats values(eval(if(match="ok",match,null()))) as match by Hostname
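Expanding that one-liner into a slightly fuller sketch (field names taken from the table in the question; `host_status` is a hypothetical output field, used so the per-row `match` values are preserved):

```
| eventstats values(eval(if(match="ok","ok",null()))) AS host_status BY Hostname
| eval host_status=coalesce(host_status,"missing")
```

`eventstats` copies the per-host result back onto every row, so each host ends up marked "ok" if any of its IP addresses matched, and "missing" otherwise.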
Updated the accepted solution with the actual solution 
Try something like this

| rex "Total number of records[^:]+\s*(?<records>\d+)"
| rex "(?<closing>ClosingBal=[^,]+)"
| rex "(?<opening>openingBal\s\S+)"
I have a Splunk query that does some comparisons, and the output is as follows. If any of the rows below for a given hostname has "ok", that host should be marked as "ok" (irrespective of the IP addresses it has). Can you help me with the right query, please?

Hostname   IP_Address   match
esx24      1.14.40.1    missing
esx24      1.14.20.1    ok
ctx-01     1.9.2.4      missing
ctx-01     1.2.1.5      missing
ctx-01     1.2.5.26     missing
ctx-01     1.2.1.27     missing
ctx-01     1.1.5.7      ok
ctx-01     1.2.3.1      missing
ctx-01     1.2.6.1      missing
ctx-01     1.2.1.1      missing
w122       1.2.5.15     ok
1] I tried using "until/since" to pull the number of days between the expirationDateTime and the system date, based on the token name, as we have many token names.

expirationDateTime           eventTimestamp               pickupTimestamp
2025-07-26T23:00:03+05:30    2024-11-21T17:06:33+05:30    2024-11-21T17:06:33+05:30

Token name: AppD

Can you suggest the query to be used, so that we get the number of days until the certificate expires?
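A sketch of one way to compute this in SPL (it assumes the timestamps parse with the `%:z` strptime directive for the +05:30 offset, and `tokenName` is a hypothetical field name for the token):

```
| eval expires_epoch=strptime(expirationDateTime, "%Y-%m-%dT%H:%M:%S%:z")
| eval days_to_expiry=floor((expires_epoch - now()) / 86400)
| table tokenName expirationDateTime days_to_expiry
```

The subtraction yields seconds, so dividing by 86400 gives whole days remaining; a negative `days_to_expiry` would mean the certificate has already expired.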