All Posts


There are some additional issues here (feel free to ignore my comments since they are a bit advanced and might simply be overkill if your case is relatively small and simple).

1. You're not using field extractions. You're extracting fields "manually" within your search. For a simple case it might work relatively well, but it usually helps a lot to have extractions defined in configuration - it lets you search for particular fields much faster than having to parse every single event and verify whether the value matches.

2. Your signal-to-noise ratio is relatively low - you have quite a lot of text which doesn't bring any additional value to your data. You don't have any dynamic fields, so you don't have to name them dynamically and such. You could "squeeze" your events to leave only the relevant values in a more structured but less verbose format. Again - if you just have a few hundred bytes each minute, that's probably not worth the work you'd need to put into it, but if you have several thousand hosts monitored this way, it could be worth the savings on license costs.

3. And the most advanced topic here - you could prepare your data properly and ingest it into a metrics index. That way each event consumes a constant 160 bytes of license but, most importantly, searching and doing statistical analyses over metrics indexes is much faster than over normal event indexes (at the same time it's done a bit differently, so you have to learn new commands like mstats or mpreview).
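To illustrate that last point, a minimal sketch of what a search over such a metrics index could look like - the index name nic_metrics and the metric name interface.rx_errors are assumptions for illustration only:

| mstats latest(interface.rx_errors) AS rx_errors WHERE index=nic_metrics span=5m BY host
| where rx_errors > 0

Note that mstats replaces the initial search command entirely; you select metrics with its WHERE clause instead of base-search terms.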
You can't easily do that. I'm not even sure you can do that at all. The problem is that the data being sent over the syslog output is simply the raw event, optionally(?) prepended by the syslog header. So if you wanted to include the metadata, you'd have to include it in the raw event. But even if you managed to do this on a global level (like some catch-all sourcetype definition and a transform adding the metadata to the event), the same event would be sent to your splunktcp output as well, which would most probably mean that the event is unusable in this format.
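For the record, a transform along those lines could look roughly like this - a sketch only, with the stanza names made up, and with the caveat above that the rewritten _raw goes to every output, not just syslog:

props.conf:

[my_catchall_sourcetype]
TRANSFORMS-addmeta = add_meta_to_raw

transforms.conf:

[add_meta_to_raw]
INGEST_EVAL = _raw=host."|".source."|".sourcetype."|"._raw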
Well, rsyslog configuration can be as simple as *.* /var/log/all.log but it can also span several hundred files, with complicated processing rules, data sent to multiple destinations and such. Rsyslog recently had a major overhaul of its docs page https://www.rsyslog.com/doc/v8-stable/index.html (the old docs were a bit confusing at times) and it has a relatively responsive mailing list https://lists.adiscon.net/mailman/listinfo/rsyslog
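As a rough sketch of both ends of that spectrum (the target host name is a placeholder; rsyslog accepts both syntaxes in one file):

# catch everything in one file - classic syntax
*.* /var/log/all.log

# forward everything to a remote collector over TCP - RainerScript syntax
action(type="omfwd" target="syslog.example.com" port="514" protocol="tcp")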
I'm not 100% sure about that. NFR licenses have changed a bit over time. As far as I remember, Partner NFRs used to support distributed environments and now they don't, so the terms regarding multiple uses could also have changed. The main thing, however, is that you're not supposed to use Partner NFRs for production data. They're only meant for lab/dev/testing/demo use and such.
Technically, you could make a common list of CAs and bind it to all inputs (or just make one input with all those CAs), but I suppose you might not want that. In that case you just bind one CA to one input and another CA to another input. You can then even limit access to just the allowed SANs.
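A rough sketch of what the SAN-limiting part could look like in inputs.conf - treat the exact stanza layout as an assumption to verify against the inputs.conf spec for your version, since SSL settings have moved around between releases; the paths and host names are placeholders:

[splunktcp-ssl:9997]

[SSL]
serverCert = /opt/splunk/etc/auth/server.pem
requireClientCert = true
sslAltNameToCheck = fwd1.example.com, fwd2.example.com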
I am trying to figure out how to include a lookup in my search, but only for some records. My current search is below. My company has two issues:

We do not log the app version anywhere easy to grab, so I need to have this pulled via rex.

We manually maintain a list of clients (some are on an old version and we don't populate the "client" field for them) and what host they are on. Some clients have both their application and DB on the same host, so my search below results in some weird duplicates where the displayName is listed twice for a single record in my result set (a field containing two values somehow). I want the lookup to only include records where the "host_type" is "application", not "db". Here is my search:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| lookup clientlist.csv hostname as host, OUTPUT clientcode as clientCode
| eval displayName = IF(client!="",client,clientCode)
| rex field=_raw "version: (?<AppVersion>.*)$"
| eval VMVersion = replace(AppVersion,"release/","")
| eval CaptureDate=strftime(_time,"%Y-%m-%d")
| dedup clientCode
| table displayName,AppVersion,CaptureDate

I did try including host_type right after "..hostname as host.." and using a |where clause later, but that did not work.
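One way to restrict the lookup to application hosts is to filter the CSV in a subsearch and join on host instead of using the lookup command directly, so the "db" rows never match at all - a sketch assuming the column names hostname, host_type and clientcode from the post:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| join type=left host
    [| inputlookup clientlist.csv
    | where host_type="application"
    | eval host=lower(hostname)
    | fields host clientcode]
| eval displayName = if(client!="", client, clientcode)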
Hello Splunk Community,

I monitor the audit.log on RHEL8. As soon as I generate a specific log entry locally, I can find this log entry through my defined search query in Splunk. However, if a few hours pass, I can no longer find it with the same search query. Of course, I adjust the time settings accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours.

I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions. When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevent=3.

Do I possibly have a wrong configuration of my environment here, or do I need to adjust something? Thanks in advance.

Search Query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date)
| sort by date
| table date, type, comm, uid, auid, host, name
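For what it's worth, auditd already stamps related records with a shared serial number inside msg=audit(epoch:serial), so a stats-based grouping can often replace transaction and behaves more predictably over long time ranges - a sketch built on the fields from the post, not a drop-in replacement:

index="sys_linux" sourcetype="linux_audit"
| rex field=msg "audit\((?<date>\d+)\.\d+:(?<audit_serial>\d+)\)"
| stats values(type) as type values(comm) as comm values(uid) as uid values(auid) as auid values(name) as name min(date) as date by audit_serial, host
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295
| convert ctime(date)
| sort date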
For security, Splunk UFs default to not listening on a management port.  You must explicitly enable it.
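If it helps, on 9.x forwarders the relevant setting is, as far as I recall, in server.conf on the UF - verify against the docs for your exact version, and restart the forwarder after changing it:

[httpServer]
disableDefaultPort = false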
I could not see anything in the partner general terms that prohibits the use of AWS (https://www.splunk.com/en_us/legal/splunk-partner-general-terms.html), but you should have a contact in the Splunk Build Program who can give you a more authoritative answer than this community forum, where the members are volunteers.
Yes, it is possible. You cannot, however, deploy the same license in more than one Splunk environment. IOW, you can't use the same license in AWS that you are using on-prem.
You can also set up the search that generates done_sending_email to run once a day before the main search executes. This way the done_sending_email.csv file will be cleared and the main search will send out emails to people every day.
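A minimal sketch of such a clearing search, scheduled daily ahead of the main one (the file name comes from the thread; check how override_if_empty behaves in your version of outputlookup before relying on it):

| makeresults
| where 1=2
| outputlookup override_if_empty=true done_sending_email.csv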
Hi, I am looking into the possibility of deploying a private Splunk instance for integration testing in AWS. Can anyone tell me whether it is possible to install an NFR licence on an instance deployed in AWS?

Thanks
Presumably, there will be some sort of time element (which you have not described). Do you collect these statistics on a regular basis? Are your events timestamped accordingly? Do you want to repeatedly search the same data to determine the last values? Have you considered running scheduled searches to collect the data in a summary index and then searching that for significant changes over time?
We are trying to watch the NIC statistics for our OS interfaces. We are gathering data from a simple

ifconfig eth0 | grep -E 'dropped|packets' > /var/log/nic-errors.log

For my search, I have:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| rex "RX\spackets\s(?<rxPackets>\d+)\s"
| rex "RX\serrors\s+\d+\s+dropped\s(?<rxDrop>\d+)\s"
| chart last(rxError), last(rxPackets), last(rxDrop) by host

which displays the base data. Now I want to watch if rxError increases and flag that. Any ideas? The input data will look something like:

RX packets 2165342 bytes 33209324712 (3.0 GiB)
RX errors 0 dropped 123 overruns 0 frame 0
TX packets 1988336 bytes 2848819271 (2.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
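One way to flag an increase is to compare each reading with the previous one per host - a sketch using streamstats on the fields from the post (only the rxError extraction is repeated here):

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| sort 0 host _time
| streamstats current=f window=1 last(rxError) as prevRxError by host
| eval rxErrorDelta = rxError - prevRxError
| where rxErrorDelta > 0

Anything this returns is an event where the counter grew since the previous sample, which can feed an alert.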
[monitor://$SPLUNK_HOME/var/log/splunk]
blacklist = metrics\.log$

metrics\.log$ is the correct regex to assign to the blacklist setting. It is possible the one provided won't work - or at least, it didn't work for me.
We have configured authentication extensions with Azure to enable token creation for SAML users, following this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens#Configure_and_activate_authentication_extensions_to_interface_with_Microsoft_Azure I can create a token for myself, but cannot create tokens for others. I had another admin test and he could create a token for himself, but could not create one for me or other users. The only error Splunk provides is "User <user> does not exist", which is not true - the users do exist. All permissions are in place on both the Splunk admin and Azure sides. Any ideas on what is wrong?
Hi @gcusello, I am trying to forward the logs to both Splunk and an external system via syslog. Correct - I want to forward the logs coming into my HF to the external 3rd-party syslog and maintain the metadata associated with the logs.
We are trying to set up a Splunk Enterprise 9.3.2 cluster. All nodes are working fine, but the Splunk Universal Forwarder isn't working - it is not listening on management port 8089 or 8088...

Running on Google Cloud Platform using RHEL 9.5 (latest); already tried RHEL 8.10 (latest) too.

Used documentation: https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Linux

Commands used to set it up:

cd /opt
tar xzf /opt/splunkforwarder-9.3.2-d8bb32809498-Linux-x86_64.tgz
adduser -d /opt/splunkforwarder splunkfwd
export SPLUNK_HOME=/opt/splunkforwarder
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd
systemctl start SplunkForwarder

cat /etc/systemd/system/SplunkForwarder.service
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
LimitRTPRIO=99
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunkfwd
Group=splunkfwd
NoNewPrivileges=yes
PermissionsStartOnly=true
AmbientCapabilities=CAP_DAC_READ_SEARCH
ExecStartPre=-/bin/bash -c "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
---

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.5 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.5 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://issues.redhat.com/"

REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"
---

$ netstat -tulpn
[root@splunk-custom-image log]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::20201                :::*                    LISTEN      2517/otelopscol
udp        0      0 127.0.0.1:323           0.0.0.0:*                           652/chronyd
udp6       0      0 ::1:323                 :::*                                652/chronyd
---

/var/log/messages:
[root@splunk-custom-image log]# systemctl status SplunkForwarder
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
     Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-11-21 09:03:55 EST; 7min ago
    Process: 797 ExecStartPre=/bin/bash -c chown -R splunkfwd:splunkfwd /opt/splunkforwarder (code=exited, status=0/SUCCESS)
   Main PID: 1068 (splunkd)
      Tasks: 47 (limit: 100424)
     Memory: 227.4M
        CPU: 3.481s
     CGroup: /system.slice/SplunkForwarder.service
             ├─1068 splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd
             └─2535 "[splunkd pid=1068] splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd [process-runner]"

Nov 21 09:03:55 systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Nov 21 09:03:58 splunk[1068]: Warning: Attempting to revert the SPLUNK_HOME ownership
Nov 21 09:03:58 splunk[1068]: Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Nov 21 09:03:58 splunk[1068]:         Checking mgmt port [8089]: open
Nov 21 09:03:59 splunk[1068]:         Checking conf files for problems...
Nov 21 09:03:59 splunk[1068]:         Done
Nov 21 09:03:59 splunk[1068]:         Checking default conf files for edits...
Nov 21 09:03:59 splunk[1068]:         Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.3.2-d8bb32809498-linux-2.6-x86_64->
Nov 21 09:04:00 splunk[1068]: PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped>
Nov 21 09:04:00 splunk[1068]: 2024-11-21 09:04:00.038 -0500 splunkd started (build d8bb32809498) pid=1068
---

/opt/splunkforwarder/var/log/splunk/splunkd.log attached file
I have tried everything to change the node sizes in the 3D Graph Network Topology Visualization. I am able to get all of the other options to work. Here is the run-anywhere search I am using to test the viz. Pretty straightforward. I have changed around the field order and tried all types and sizes of numbers, and nothing seems to change the size of the node in the output graph. Has anyone else seen this issue, or been able to get the node sizing to work with the weight_* attributes?

| makeresults
| eval src="node1", dest="node2", color_src="#008000", color_dest="#FF0000", edge_color="#008000", edge_weight=1, weight_src=1, weight_dest=8
| table src, dest, color_src, color_dest, edge_color, weight_src, weight_dest, edge_weight

and the output I am getting:
We started seeing this recently as well. Also, the various S1 Splunk integrations do not understand or permit having the IA and the app on the same instance, so the Victoria experience doesn't work properly. This is also the case for the various Scalyr DataSet add-ons - we cannot create inputs because it complains about being on a search head.