All Posts



Also, if I create the directory /usr/lib/splunk-otel-collector/agent-bundle/run/collectd/global/managed_configs/ and put the collectd.config in there, the directory is removed when I restart the service.
Hi @livehybrid  I don't have a drilldown. It's simply a panel of the format:

<panel id="panel1">
  <title>My Panel</title>
  <html>
    <style> ... </style>
  </html>
  <html>
    <li><a href="..." target="..."><b>Link1</b></a></li>
    <li><a href="..." target="..."><b>Link2</b></a></li>
    <li><a href="..." target="..."><b>Link3</b></a></li>
  </html>
</panel>
Hi @livehybrid. It's a classic XML dashboard. I'm "coding" it without using Dashboard Studio. My dashboard is pretty big, so I'm afraid I can't share my code, but this panel with the list is one of the first elements of the dashboard. Thanks, Pedro
Hi @pedropiin  You could create a field, returned by the SPL of your search, which has an _ (underscore) prefix; this won't be rendered in the visualisation but can be used in the drilldown. e.g.

| eval _link="https://".$domain$."/".$path$

Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
Hi @pedropiin  Is this a classic XML dashboard or in Dashboard Studio? Are you able to share what you currently have?
I have installed the Splunk O11y agent via the Linux script. I have the smartagent/rabbitmq receiver/pipeline per the instructions at https://help.splunk.com/en/splunk-observability-cloud/manage-data/available-data-sources/supported-integrations-in-splunk-observability-cloud/applications-messaging/rabbitmq

While restarting the service and viewing the logs, I see a failure:

Jul 11 14:40:06 ip-1**-**-**-**.ec2.internal otelcol[3749684]: 2025-07-11T14:40:06.105Z info collectd/logging.go:41 configfile: stat (/usr/lib/splunk-otel-collector/agent-bundle/run/collectd/global/managed_config/*.conf) failed: No such file or directory {"resource": {"service.instance.id": "418ee031-25d8-4d3c-a115-6fb7e98c4992", "service.name": "otelcol", "service.version": "v0.128.0"}, "otelcol.component.id": "smartagent/rabbitmq", "otelcol.component.kind": "receiver", "otelcol.signal": "metrics", "name": "default", "collectdInstance": "global"}

Since the instructions do not mention this directory or any .conf files, I would expect these to have been installed by default. Does anybody else have experience with this? Also, just a note: the RabbitMQ developers have deprecated the HTTP API in favor of Prometheus, and while the agent has the capacity to build Prometheus receivers, the …
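For comparison, a minimal receiver/pipeline along the lines the linked instructions describe might look like the sketch below. This is an assumption of what your agent config contains, not a copy of it; host, port, and credentials are placeholders.

```yaml
# Sketch of a collectd-based RabbitMQ monitor in the otel collector config
# (values are placeholders - adjust to your environment)
receivers:
  smartagent/rabbitmq:
    type: collectd/rabbitmq
    host: localhost
    port: 15672          # RabbitMQ management HTTP API port
    username: guest
    password: guest

service:
  pipelines:
    metrics:
      receivers: [smartagent/rabbitmq]
```

Since this monitor wraps collectd, the agent bundle generates the collectd config under .../run/collectd/ itself at startup, which may explain why hand-placed files in that tree do not survive a restart.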
Hi everyone. I have a panel that contains a list of links to other dashboards. I need to create a new list item with a link that changes dynamically according to the values of three tokens evaluated inside "eval" blocks. But, as Splunk makes clear, I can't create "eval" blocks inside a panel. So I wanted to know if there's a way to evaluate these tokens such that they can be used within the scope of the "panel" block. Thanks in advance,  Pedro
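One approach worth considering in classic Simple XML: set the tokens from a search's <done> handler inside the panel, then reference them in the <html> block. The sketch below assumes a hypothetical query and field name (link); it is an illustration of the <done>/<eval token> mechanism, not the asker's actual dashboard.

```xml
<panel id="panel1">
  <search>
    <!-- query and field names are placeholders -->
    <query>| makeresults | eval link="https://example.com/dashboard1"</query>
    <done>
      <!-- $result.link$ is the value of "link" in the first result row -->
      <eval token="link_tok">$result.link$</eval>
    </done>
  </search>
  <html>
    <li><a href="$link_tok$" target="_blank"><b>Dynamic link</b></a></li>
  </html>
</panel>
```

The token set in <done> is then available anywhere in the panel (and the rest of the dashboard), which sidesteps the restriction on standalone eval blocks inside a panel.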
There currently is an issue with NF 9 and STREAM 8.1.5. I suggest downgrading until there's a newer release.
Hi @lukasmecir  I don't see this listed as a known issue at https://help.splunk.com/en/splunk-enterprise-security-8/release-notes-and-resources/8.1/splunk-enterprise-security-release-notes/splunk-enterprise-security-8.1.0-known-issues so I would recommend raising this as a support case; hopefully they will assign it a bug ticket reference (or match it to an existing one if already reported by someone else). This way you'll be able to track if/when a fix is applied. It does seem like the sort of thing which should persist without hand-editing of conf files in the system!
Hi @josevg1981  It sounds like this is an rsyslog configuration issue rather than a Splunk problem, but I'll do my best to help.

Check your rsyslog configuration and verify the message size limits in /etc/rsyslog.conf - what is your $MaxMessageSize? Try increasing it:

$MaxMessageSize 64k

Check for any template formatting that might be stripping content - does the template output the %msg% content?

# Look for custom templates that only capture certain fields
$template CheckPointFormat,"%timestamp% %hostname% %programname%: %msg%\n"
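Putting those pieces together, a minimal rsyslog setup for this scenario might look like the sketch below. This is an assumption-laden example, not your config: the source IP filter and file path are placeholders, and %rawmsg% is used to write the message exactly as received so nothing is stripped by a template.

```
# /etc/rsyslog.conf sketch - placeholders throughout
# $MaxMessageSize must be set before any input modules are loaded
$MaxMessageSize 64k

module(load="imudp")
input(type="imudp" port="514")

# Write the untouched raw message, bypassing header-only templates
template(name="RawFmt" type="string" string="%rawmsg%\n")

# Replace 192.0.2.10 with your Check Point firewall's IP
if $fromhost-ip == '192.0.2.10' then {
    action(type="omfile" file="/var/log/CheckPoint.log" template="RawFmt")
    stop
}
```

If the full message appears in the file with %rawmsg% but not with your existing template, the template was the culprit; if it is still truncated, look at the size limit or at the sender.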
Thank you very much!
Hi @mfleitma , good for you, see you next time! Ciao and happy splunking Giuseppe P.S.: Karma Points are appreciated by all the contributors
@PrewinThomas  This is helpful. It is not the solution I preferred at the beginning, but the result is acceptable for the later processing of my scheduled searches. For the documentation: the appendpipe does not work when there is ONLY the line with the field names in the csv:

A_Row1: field1,field2

When there is a LF in it, like:

B_Row1: field1,field2
B_Row2:

it runs well. In the updated csv I now get:

C_Row1: field1,field2,comment3
C_Row2: ,,

I would feel better if B_Row2 were not necessary, but I can handle this. Many thx for your help.
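For the header-only case, one commonly used SPL idiom is to have appendpipe inject a placeholder row only when the search returns zero events. A sketch, assuming hypothetical lookup and field names (mylookup.csv, field1, field2):

```
| inputlookup mylookup.csv
| appendpipe
    [ stats count
      | where count=0
      | eval field1="", field2=""
      | fields - count ]
| outputlookup mylookup.csv
```

Inside the appendpipe subsearch, stats count always produces exactly one row; where count=0 keeps it only when the lookup was empty, so the empty placeholder row is added only in that case - which would make the manual B_Row2 unnecessary.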
Totally forgot to post this.. At WallSec someone put up a more complete writeup: WALLSEC IT SECURITY - SIEM Your SAP Security Audit Log with SPLUNK Might be easier to understand for some people than my ramblings.
Reading too fast happens to the best of us
You're right.  I took the sm21.txt file in the OP to be sample data rather than a lookup table.
Hi everyone, We have the following setup:
- Check Point Firewall is configured to send logs via syslog over UDP (port 514).
- Logs are received by a Linux server running rsyslog.
- rsyslog writes these logs to a local file (e.g., /var/log/CheckPoint.log).
- Splunk (on the same server) reads this file and indexes the logs.

Although the Check Point firewall sends complete logs (visible in tcpdump, including structured data and original timestamps), only a truncated version of each log is written to the file by rsyslog. Specifically:
- The structured message body is missing.
- Only the syslog header (timestamp, hostname, program name) appears in the file.

Can anyone help? Thank you!
Hi @bluorbank  It seems like you've got a good plan regarding the upgrade path (check out https://docs.splunk.com/Documentation/Splunk/9.4.2/Installation/HowtoupgradeSplunk if you haven't already seen it; it might contain other useful info). Regarding the 9.0.6 version, the latest 9.0.x version is 9.0.9 - the download links are:

-------- Linux --------
-- Tarball (TGZ)
wget -O splunk-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-Linux-x86_64.tgz'
wget -O splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-Linux-x86_64.tgz'
-- Debian (DEB)
wget -O splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f-linux-2.6-amd64.deb'
wget -O splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f-linux-2.6-amd64.deb'
-- RHEL (RPM)
wget -O splunk-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/splunk/releases/9.0.9/linux/splunk-9.0.9-6315942c563f.x86_64.rpm'
wget -O splunkforwarder-9.0.9-6315942c563f.x86_64.rpm 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/linux/splunkforwarder-9.0.9-6315942c563f.x86_64.rpm'

-------- Windows --------
-- Binary (MSI)
wget -O splunk-9.0.9-6315942c563f-x64-release.msi 'https://download.splunk.com/products/splunk/releases/9.0.9/windows/splunk-9.0.9-6315942c563f-x64-release.msi'
wget -O splunkforwarder-9.0.9-6315942c563f-x64-release.msi 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/windows/splunkforwarder-9.0.9-6315942c563f-x64-release.msi'
-- ZIP
wget -O splunk-9.0.9-6315942c563f-windows-64.zip 'https://download.splunk.com/products/splunk/releases/9.0.9/windows/splunk-9.0.9-6315942c563f-windows-64.zip'
wget -O splunkforwarder-9.0.9-6315942c563f-windows-64.zip 'https://download.splunk.com/products/universalforwarder/releases/9.0.9/windows/splunkforwarder-9.0.9-6315942c563f-windows-64.zip'
Hi @mcfabrero_acn  Yes, you could set autoLBFrequency to achieve active load balancing across the output of your UFs to all Heavy Forwarders:

[tcpout:HF_Group]
server = HF1:9997,HF2:9997,...HF14:9997
autoLBFrequency = 30

The other option is to use a volume-based LB configuration - it's worth checking out https://help.splunk.com/en/splunk-enterprise/forward-and-process-data/forwarding-and-receiving-data/9.4/perform-advanced-configuration/set-up-load-balancing#:~:text=separate%20network%20port.-,Choose%20a%20load%20balancing%20method,-You%20can%20choose to see which would be more appropriate for your use case. The potential downside of autoLBFrequency is TCP connection churn: new connections are created every 30 seconds, so there could be a *slight* performance overhead due to connection establishment costs; however, I wouldn't expect this to be too noticeable. Check out https://community.splunk.com/t5/Getting-Data-In/Universal-Forwarder-not-load-balancing-to-indexers/m-p/98581 which might also help.

The other thing to consider is an increased number of pipelines - but again it's worth understanding the implications of this and considering the available processing resources on your UFs/HFs. Are you currently using the default of 1? See https://docs.splunk.com/Documentation/Splunk/latest/Admin/Serverconf#Remote_applications_configuration_.28e.g._SplunkBase.29:~:text=parallelIngestionPipelines%20%3D%20%3Cinteger%3E for more info.

Finally - what is the data source into your UFs? Sometimes sources like syslog can make it tricky to LB effectively.
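For reference, the volume-based alternative is configured with autoLBVolume in outputs.conf. The sketch below uses placeholder host names and an illustrative 1 MB threshold; tune the values to your traffic profile.

```
# outputs.conf on each UF - a sketch with placeholder hosts/thresholds
[tcpout:HF_Group]
server = hf01.example.com:9997,hf02.example.com:9997
# Switch to the next HF once ~1 MB has been sent to the current one,
# or when autoLBFrequency seconds elapse, whichever comes first
autoLBVolume = 1048576
autoLBFrequency = 30
```

Volume-based switching tends to spread data more evenly when event rates vary a lot between sources, since time-based switching can leave one HF holding a long-lived, high-volume connection.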
Hello, I have an old, inherited version of Splunk (8.2.1, RHEL 7) and need to perform a software upgrade prior to migrating to a new RHEL 8 OS. It seems that for a successful configuration migration I first need to upgrade Splunk from 8.2.1 to 9.0.6 on the old RHEL 7 host; unfortunately 9.0.6 is out of support and I can't download this old version from the web. Would you please help me with it?