All Posts
Presumably, there will be some sort of time element (which you have not described). Do you collect these statistics on a regular basis? Are your events timestamped accordingly? Do you want to repeatedly search the same data to determine the last values? Have you considered running scheduled searches to collect the data in a summary index and then searching that for significant changes over time?
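A minimal sketch of the summary-index idea, assuming the index and source from the question below (the summary index name nic_summary is an assumption; collect is a standard SPL command you would put in a scheduled search):

index="myindex" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| stats latest(rxError) as rxError by host
| collect index=nic_summary

You could then search index=nic_summary over time and alert when rxError grows between consecutive summary events.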
We are trying to watch the NIC statistics for our OS interfaces. We are gathering data from a simple

ifconfig eth0 | grep -E 'dropped|packets' > /var/log/nic-errors.log

For my search, I have:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| rex "RX\spackets\s(?<rxPackets>\d+)\s"
| rex "RX\serrors\s+\d+\s+dropped\s(?<rxDrop>\d+)\s"
| chart last(rxError), last(rxPackets), last(rxDrop) by host

which displays the base data. Now I want to watch if rxError increases and flag that. Any ideas?

The input data will look something like:

RX packets 2165342  bytes 33209324712 (3.0 GiB)
RX errors 0  dropped 123  overruns 0  frame 0
TX packets 1988336  bytes 2848819271 (2.6 GiB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
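A minimal sketch of one way to flag increases, assuming the same index and extraction as above (streamstats is a standard SPL command; the "greater than zero" threshold is an assumption):

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| sort 0 host _time
| streamstats current=f window=1 last(rxError) as prevError by host
| eval errorDelta = rxError - prevError
| where errorDelta > 0

Saved as an alert, this would fire whenever rxError grows between consecutive events for the same host.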
[monitor://$SPLUNK_HOME/var/log/splunk]
blacklist = metrics\.log$

metrics\.log$ is the correct regex to assign to the blacklist setting. It is possible the one provided won't work; at least, it didn't work for me.
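As a side note, a sketch of the same stanza excluding several files at once (the file names are made up for illustration; blacklist takes a single regex, so alternation covers multiple names):

[monitor://$SPLUNK_HOME/var/log/splunk]
blacklist = (metrics|mongod)\.log$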
We have configured authentication extensions with Azure to enable token creation for SAML users, following this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens#Configure_and_activate_authentication_extensions_to_interface_with_Microsoft_Azure

I can create a token for myself, but cannot create tokens for others. I had another admin test and he could create a token for himself, but could not create one for me or other users. The only error Splunk provides is "User <user> does not exist", which is not true; the users do exist. All permissions are in place on both the Splunk admin and Azure sides. Any ideas on what is wrong?
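If it helps isolate the problem, a sketch of the equivalent REST call (to the best of my knowledge this is the standard token-creation endpoint; hostname, credentials, and parameter values are placeholders):

curl -k -u admin:<password> -X POST https://localhost:8089/services/authorization/tokens -d name=<target_user> -d audience=test -d expires_on=+30d

If the same "does not exist" error comes back here for SAML users only, that may point at username resolution in the authentication extension rather than at token permissions.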
Hi @gcusello,

I am trying to forward the logs to both Splunk and an external system via syslog. Correct, I want to forward the logs coming into my HF to the external third-party syslog and maintain the metadata associated with the logs.
We are trying to set up a Splunk Enterprise 9.3.2 cluster.

All nodes are working fine, but the Splunk Universal Forwarder isn't working - it is not listening on management port 8089 or 8088.

Running on Google Cloud Platform using RHEL 9.5 (latest); already tried RHEL 8.10 (latest) too.

Used documentation: https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Linux

Using the following commands to set it up:

cd /opt
tar xzf /opt/splunkforwarder-9.3.2-d8bb32809498-Linux-x86_64.tgz
adduser -d /opt/splunkforwarder splunkfwd
export SPLUNK_HOME=/opt/splunkforwarder
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd
systemctl start SplunkForwarder

---

$ cat /etc/systemd/system/SplunkForwarder.service
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
LimitRTPRIO=99
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunkfwd
Group=splunkfwd
NoNewPrivileges=yes
PermissionsStartOnly=true
AmbientCapabilities=CAP_DAC_READ_SEARCH
ExecStartPre=-/bin/bash -c "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"

---

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.5 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.5 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"

---

[root@splunk-custom-image log]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::20201                :::*                    LISTEN      2517/otelopscol
udp        0      0 127.0.0.1:323           0.0.0.0:*                           652/chronyd
udp6       0      0 ::1:323                 :::*                                652/chronyd

---

[root@splunk-custom-image log]# systemctl status SplunkForwarder
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
     Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-11-21 09:03:55 EST; 7min ago
    Process: 797 ExecStartPre=/bin/bash -c chown -R splunkfwd:splunkfwd /opt/splunkforwarder (code=exited, status=0/SUCCESS)
   Main PID: 1068 (splunkd)
      Tasks: 47 (limit: 100424)
     Memory: 227.4M
        CPU: 3.481s
     CGroup: /system.slice/SplunkForwarder.service
             ├─1068 splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd
             └─2535 "[splunkd pid=1068] splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd [process-runner]"

Nov 21 09:03:55 systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Nov 21 09:03:58 splunk[1068]: Warning: Attempting to revert the SPLUNK_HOME ownership
Nov 21 09:03:58 splunk[1068]: Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Nov 21 09:03:58 splunk[1068]:         Checking mgmt port [8089]: open
Nov 21 09:03:59 splunk[1068]:         Checking conf files for problems...
Nov 21 09:03:59 splunk[1068]:         Done
Nov 21 09:03:59 splunk[1068]:         Checking default conf files for edits...
Nov 21 09:03:59 splunk[1068]:         Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.3.2-d8bb32809498-linux-2.6-x86_64->
Nov 21 09:04:00 splunk[1068]: PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped>
Nov 21 09:04:00 splunk[1068]: 2024-11-21 09:04:00.038 -0500 splunkd started (build d8bb32809498) pid=1068

---

/opt/splunkforwarder/var/log/splunk/splunkd.log: attached file
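While debugging, a few checks one could run on the forwarder host (a sketch, assuming default paths; show splunkd-port and btool are standard Splunk CLI commands):

sudo ss -tlnp | grep 8089
$SPLUNK_HOME/bin/splunk show splunkd-port -auth admin:<password>
$SPLUNK_HOME/bin/splunk btool web list settings --debug | grep mgmtHostPort

If splunkd reports port 8089 as open (as in the status output above) but nothing is bound in ss/netstat, a mgmtHostPort override in web.conf, or a disabled management port (the disableDefaultPort setting under server.conf's [httpServer] stanza), would be likely suspects.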
I have tried everything to change the node sizes in the 3D Graph Network Topology Visualization. I am able to get all of the other options to work. Here is the run-anywhere search I am using to test the viz. Pretty straightforward. I have changed around the field order and tried all types and sizes of numbers, and nothing seems to change the size of the node in the output graph. Has anyone else seen this issue, or been able to get the node sizing to work with the weight_* attributes?

| makeresults
| eval src="node1", dest="node2", color_src="#008000", color_dest="#FF0000", edge_color="#008000", edge_weight=1, weight_src=1, weight_dest=8
| table src, dest, color_src, color_dest, edge_color, weight_src, weight_dest, edge_weight

and the output I am getting: (screenshot attached)
We started seeing this recently as well. Also, the various S1 Splunk integrations do not understand or permit having the IA and App on the same instance, so the Victoria experience doesn't work properly. This is also the case for the various Scalyr DataSet add-ons: we cannot create inputs because it complains about being on a search head.
Hi @winter4,

a question: do you want to forward data to an Indexer or to an external system via syslog?

I suppose you mean that you want to forward logs that you are receiving from UFs, syslog, or HEC, using a HF, maintaining the original host, source, and sourcetype.

What's your issue? If you're sending to an Indexer, you have to use outputs.conf; source, host, and sourcetype aren't overwritten by default and usually remain the original ones, unless you configure overwriting. If instead you have to send to a third party, it's different (see the sketch below).

Ciao. Giuseppe
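A minimal outputs.conf sketch of both cases (server names and ports are placeholders). The Indexer case, where metadata survives by default:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = indexer1.example.com:9997

The third-party case uses a syslog output stanza instead; note that syslog output re-frames each event with a syslog header and does not carry Splunk's host/source/sourcetype metadata fields:

[syslog:external_siem]
server = siem.example.com:514
type = udp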
Cannot communicate with task server, please check your settings.

The Task Server is currently unavailable. Please ensure it is started and listening on port 9998. See the documentation for more details.

We are getting the above errors while trying to connect with DB Connect 3.18.1. We are running Splunk 9.3.1.

I've tried uninstalling our OpenJDK and re-installing it, but am finding this:

splunk_app_db_connect# rpm -qa | grep java
tzdata-java-2024b-2.el9.noarch
javapackages-filesystem-6.0.0-4.el9.noarch
java-11-openjdk-headless-11.0.25.0.9-3.el9.x86_64
java-11-openjdk-11.0.25.0.9-3.el9.x86_64

DIR: /opt/splunk/etc/apps/splunk_app_db_connect
splunk_app_db_connect# java -version
openjdk version "11.0.25" 2024-10-15 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS, mixed mode, sharing)

One shows 9-1 and one shows 9-3.
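A couple of quick checks that may help narrow it down (plain Linux commands; 9998 is the default task server port from the error message):

ss -tlnp | grep 9998                 # is anything actually listening on the task server port?
readlink -f "$(command -v java)"     # which JRE binary resolves on PATH for the splunk user

The 9-1 vs 9-3 mismatch may just be a stale binary resolving earlier on PATH; verifying which java actually runs, and that DB Connect's configured JRE path points at the current java-11-openjdk install, would be my first step.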
Thanks, using the ACS CLI, I was able to deploy my app to my Splunk Cloud Platform instance.

For reference, here is a PowerShell code snippet to deploy such an app:

# Set up splunk account for app validation with appinspect.splunk.com
$env:SPLUNK_USERNAME = "username@email.com"
$env:SPLUNK_PASSWORD = (Get-Credential -Message a -UserName a).GetNetworkCredential().Password
acs.exe config add-stack <nameofthestack> --target-sh <nameofsearchhead>
acs.exe config use-stack <nameofthestack> --target-sh <nameofsearchhead>
acs.exe login
acs.exe --verbose apps install private --acs-legal-ack Y --app-package .\path\to\my-custom-app-latest.tar.gz
Hi Team,

I am looking for a way to forward data from my heavy forwarders to a different destination while maintaining metadata like host, source, and sourcetype.

I have tried using the tcpout config in outputs.conf, but I do not see the metadata being transferred. The syslog config in outputs.conf does not work for me either.
You could use the advanced time picker and select earliest as "@d-3d" and latest as "@d". The @d aligns to the beginning of the current day, then the -3d goes back a further 3 days (usually 72h, but across daylight saving changes these may be slightly different). The same may go for the span, so try using 3d rather than 72h.
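A sketch of the same advice written inline in a search, plus the aligntime option of bin, which pins bucket boundaries to the search window instead of the default epoch-based alignment (index=_internal is just an example):

index=_internal earliest=@d-3d latest=@d
| bin _time span=3d aligntime=earliest
| stats count by _time

With day-aligned earliest/latest and buckets aligned to earliest, the whole 3-day window should collapse into a single row.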
rsyslog is a brand/flavour of application dedicated to the syslog message protocol and its handling. There are alternatives, of which the most popular is likely syslog-ng, so don't get caught up on the term rsyslog. https://www.rsyslog.com/doc/configuration/index.html

Configuring rsyslog, or any syslog, for your environment can be easy, but planning to reduce any gotcha moments requires some forethought. Separating technology and hosts is key to making Splunk ingestion much easier. A sample thought would be to have all inbound messages to the aggregator server written to a file structure such as:

/logs/<vendor>/<technology>/<host>/<filename.something>

e.g.

/logs/cisco/isa/127.0.0.1/authentication.log
/logs/cisco/isa/192.168.0.1/metrics.log

* completely fabricated examples

Have the logs rotate on a schedule (e.g. 15 or 60 minutes) and remove files older than 'x' amount of time. How you do this will be based on the volume of logs written and the available storage. I've worked with 3x the original file span as a working bias, but again, your system may dictate that. I always keep some in case the UF goes offline for a short period of time, so you can recover logs you might otherwise miss. A config sketch of this layout follows below.

Once you have that in place, you need to follow the normal UF ingestion process, which I won't go through here since your question was more about rsyslog than the UF, and this community board has far more UF answers that are easily searched than syslog-specific examples.
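A minimal rsyslog sketch of that directory layout, assuming UDP 514 and keying on the sender's hostname and program name (the vendor/technology path elements can't be derived automatically, so this sketch flattens them into a fixed "network" segment):

module(load="imudp")
input(type="imudp" port="514")

template(name="RemoteFile" type="string"
         string="/logs/network/%HOSTNAME%/%programname%.log")

action(type="omfile" dynaFile="RemoteFile")

Rotation and cleanup would still be handled outside rsyslog, e.g. with logrotate or a cron'd find -mtime.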
Hi @Karthikeya,

you have to configure rsyslog using the documentation that you can find at https://www.rsyslog.com/doc/index.html

rsyslog writes the received syslogs to files whose names are defined in the rsyslog configuration file. Usually part of the path is the hostname that sent the logs, so you can use it in the inputs.conf configuration (a sketch follows below).

What's your issue: how to configure rsyslog, how to configure the UF, or both? For rsyslog, I already sent the documentation; for the UF input you can see https://docs.splunk.com/Documentation/Splunk/9.3.2/Data/Usingforwardingagents and in addition there are many videos about this.

Ciao. Giuseppe
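A minimal inputs.conf sketch of that idea on the UF, assuming rsyslog writes under /logs/<vendor>/<technology>/<host>/ as in the earlier reply (index and sourcetype are placeholders; host_segment is a standard monitor setting that takes the host name from a path segment):

[monitor:///logs/*/*/*/*.log]
index = network
sourcetype = syslog
host_segment = 4

With that path layout, segment 4 is the originating host's directory, so each event keeps the sender's hostname rather than the aggregator's.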
That query finds *skipped* searches, not delayed ones. A delayed search runs late, but still runs, as opposed to a skipped search, which does not run at all (at that time).

index=_internal sourcetype=scheduler savedsearch_name=* status=deferred
| stats count BY savedsearch_name, app, user, reason
| sort -count
Hi @anna,

as @PickleRick said, it seems to be a normal line chart.

So you have to create your search, visualize it as a chart (choosing the Line Chart visualization), and then save it in a new dashboard.

What's your issue?

Ciao. Giuseppe
Please help me in configuring rsyslog to Splunk. Our rsyslog server will receive the logs from network devices, and our rsyslog server has a UF installed. I have no idea how to configure this, and what does rsyslog mean? Please help me with a step-by-step procedure for configuring this on our deployment server or indexer. Documentation will be highly appreciated.
Thanks for the response. The thing is, this alert should trigger once every day, and it should be dynamic as the result keeps changing. Based on your comment, it looks like I need to redo this every time I have to send the reports: "Once the done_sending_email.csv and list_all_emails.csv lookup tables are almost the same size (done_sending_email.csv will be +1 bigger if it has the filler value), then the emails are all sent out. You can then disable the alert, or you can empty the done_sending_email.csv file if you'd like to send another wave of emails."
Hello Richgalloway, thank you for the feedback. I've managed to set my time window with Uptime results.

Now I have an issue using my span so that I see _time and Uptime in seconds in one row only. I would like to achieve this by setting the time picker to last 3 days and setting my span to 72 hours, so that I have one row with all the results:

| bin span=72h _time

My oldest time should then always be 3 days backwards. But when I do this, my results also display time which is outside of the 3 days (see attachment). My oldest results should end on 18.11.24 in the morning, but instead it also shows results for 17.11.24. In that case, instead of one row I get 2 rows, which breaks my search idea, as I need to have only one row with the results. Why is that, can you suggest? How exactly does span work?
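What is likely happening: bin span=72h aligns bucket boundaries to fixed 72-hour multiples counted from the Unix epoch, not from your search's earliest time, so a boundary can fall inside your 3-day window and split it into two rows. A sketch of a workaround that skips bin entirely and lets stats collapse the whole window into one row (your base search goes in place of index=...):

index=... earliest=-3d@d latest=@d
| stats latest(_time) as _time, latest(Uptime) as Uptime

This always returns a single row covering exactly the last three whole days.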