All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello everyone! I want to build a dashboard that can access information from the config files of an indexer cluster. I know the typical way to access config files is through the REST endpoints "/services/configs/conf-*", but as I understand it, these endpoints only show configuration stored under /system/local/*.conf. Is there a way to access config files stored under /manager-apps/local?
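
One hedged sketch: apps pushed from /manager-apps are distributed to the peers (landing under peer-apps) and become part of each peer's merged configuration, so a dashboard search can read those settings by querying the peers directly with the rest command, assuming the indexers are configured as search peers of the search head. The conf file name and attribute list below are placeholders:

| rest /services/configs/conf-indexes splunk_server=idx*
| table splunk_server title homePath frozenTimePeriodInSecs maxTotalDataSizeMB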
Hello everyone! I need some help or a hint: I tried to set up log forwarding from macOS (ARM) to Splunk, but the logs never arrived. I followed the instructions from this video, and also installed and configured the Add-on for Unix and Linux. Also, what index will the logs appear in? Thanks! Inside /Applications/SplunkForwarder/etc/system/local I have: inputs.conf, outputs.conf, server.conf.

inputs.conf

[monitor:///var/log/system.log]
disabled = 0

outputs.conf

[tcpout:default-autolb-group]
server = ip:9997
compressed = true

[tcpout-server://ip:9997]

server.conf

[general]
serverName =
pass4SymmKey =

[sslConfig]
sslPassword =

[lmpool:auto_generated_pool_forwarder]
description = auto_generated_pool_forwarder
peers = *
quota = MAX
stack_id = forwarder

[lmpool:auto_generated_pool_free]
description = auto_generated_pool_free
peers = *
quota = MAX
stack_id = free
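
For reference, a minimal sketch of two searches to check the pipeline from the Splunk side (the hostname is a placeholder). With no index set on the monitor stanza, events go to the default index, normally main:

index=_internal host="my-macbook" sourcetype=splunkd component=TcpOutputProc

index=main source="/var/log/system.log" host="my-macbook" | head 10

The first shows whether the forwarder's own logs are arriving and whether it reports connecting to the indexer; the second checks the monitored file itself.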
Hello, We have a multisite indexer cluster with Splunk Enterprise 9.1.2 running on Red Hat 7 VMs, and we need to migrate them to other VMs running Red Hat 9. The documentation requires that all members of a cluster have the same OS and version. I was thinking of simply adding one new indexer (Red Hat 9 VM) at a time and detaching an old one while enforcing the bucket counts, so for a short time the cluster would have members with different OS versions. Upgrading from Red Hat 7 to Red Hat 9 directly in the Splunk environment is not possible. I would like to know if there are critical issues to face while the migration is happening. I hope the procedure won't last more than 2 days.
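
A minimal sketch of the per-peer swap, assuming the standard join-then-decommission flow (manager hostname, secret, and ports are placeholders):

# on the new RHEL 9 peer: join the cluster, then restart
$SPLUNK_HOME/bin/splunk edit cluster-config -mode peer -manager_uri https://manager.example.com:8089 -secret <key> -replication_port 9887
$SPLUNK_HOME/bin/splunk restart

# on the old RHEL 7 peer: take it offline, waiting until its buckets are covered elsewhere
$SPLUNK_HOME/bin/splunk offline --enforce-counts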
Hi All, How can I find the difference between 2 tables?

index=abc task="task1" | dedup component1 | table component1 | append [index=abc task="task2" | dedup component2 | table component2] | table component1 component2

These are the 2 tables. I want to show the extra data that is in component2 but not in component1. How can I do this?
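
A minimal sketch of one way to do this with a NOT subsearch (the standard set-difference idiom), assuming the component values are directly comparable; index and field names are taken from the question:

index=abc task="task2"
| dedup component2
| table component2
| search NOT [ search index=abc task="task1" | dedup component1 | rename component1 as component2 | fields component2 ]

The subsearch returns the component1 values as an OR'ed filter, so only component2 values with no match in component1 survive.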
Hi, All. I ran into a problem when trying to send data (metrics) from an Apache HTTP server to Splunk Observability Cloud. My OS is CentOS 7, and I already get CPU/memory metrics. My Apache version is Apache/2.4.6, and server stats are working (http://localhost:8080/server-status?auto). I referenced the following documents and updated the config file /etc/otel/collector/agent_config.yaml, but I do not get any metrics from Apache:

https://docs.splunk.com/observability/en/gdi/opentelemetry/components/apache-receiver.html
https://docs.splunk.com/observability/en/gdi/monitors-hosts/apache-httpserver.html

Could anybody kindly do me a favor and help fix this? Thanks in advance. #observability
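
For comparison, a minimal sketch of the receiver wiring in agent_config.yaml, assuming the default pipeline layout; a common gotcha is defining the receiver but not adding it to a metrics pipeline:

receivers:
  apache:
    endpoint: "http://localhost:8080/server-status?auto"
    collection_interval: 10s

service:
  pipelines:
    metrics:
      # keep whatever receivers are already listed and append apache
      receivers: [hostmetrics, otlp, apache]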
Hello Splunkers!! During the testing phase with demo data, the timestamps match accurately. However, in real-time data ingestion there is a mismatch in the timestamps, which points to a discrepancy in the timestamp parsing or configuration when handling live data. Could you please suggest potential reasons and causes? Additionally, it would be helpful to review the relevant props.conf configuration to ensure consistency.

Sample data: {"@timestamp":"2024-11-19T12:53:16.5310804+00:00","event":{"action":"log","code":"10010","kind":"event","original":"Communication session on line {1:d}, lost.","context":{"parameter1":"12","parameter2":"2","parameter3":"6","parameter4":"0","physical_line":"12","connected_unit_type_code":"2","connect_logical_unit_number":"6","description":"A User Event message will be generated each time a communication link is lost. This message can be used to detect that an external unit no longer is connected.\nPossible Unit Type codes:\n2 Debug line\n3 ACI line\n4 CWay line","severity":"Info","vehicle_index":"0","unit_type":"NT8000","location":"0","physical_module_id":"0","event_type":"UserEvent","software_module_id":"26"}},"service":{"address":"localhost:50005","name":"Eventlog"},"agent":{"name":"ACI.SystemManager","type":"ACI SystemManager Collector","version":"3.3.0.0"},"project":{"id":"fleet_move_af_sim"},"ecs.version":"8.1.0"}

Current props:

DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
#KV_MODE = json
pulldown_type = 1
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z

Current results: [screenshot showing the mismatched timestamps]

Note: I am using an HTTP Event Collector token to get the data into Splunk. The inputs and props settings live in the Search app.
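
A minimal sketch of the props side, with the stanza name as a placeholder. The empty "DATETIME_CONFIG =" line is dropped here, since an empty value may override the default datetime.xml, and MAX_TIMESTAMP_LOOKAHEAD is widened to cover the 7-digit fraction plus UTC offset:

[your_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_PREFIX = \"@timestamp\":\"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%7N%:z
MAX_TIMESTAMP_LOOKAHEAD = 40

Also worth checking: events sent to the HEC /services/collector/event endpoint are not run through timestamp extraction at all (they use the event's time metadata or receipt time); only the /services/collector/raw endpoint applies TIME_PREFIX/TIME_FORMAT. That alone can explain demo files parsing correctly while live HEC traffic does not.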
Hi, we are using Splunk Cloud ES and we can't seem to edit the base search macro of the "Alerts" datamodel. The macro in question is "cim_Alerts_indexes", and it appears to have an extra term that generates an error when the macro is run manually. Error: "Error in 'search' command: Unable to parse the search: Comparator '=' has an invalid term on the right hand side". That is because the macro SPL is set up as follows:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

The extra "index=" at the beginning is what's breaking it and should be removed. However, when we go to Settings -> Advanced Search and click on this macro, we are taken to the CIM Setup interface (Splunk_SA_CIM), which shows the config settings of the macro, including:

Indexes whitelist = azure_security,trendmicro
Tags whitelist = cloud, pci

Notice that the editable configs do not include the definition, which is:

(index=(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro))

So can anyone assist with how we can correct this? Regards
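
For reference, the intended definition would presumably be the same expression without the stray wrapper:

(index=azure_security sourcetype="GraphSecurityAlert") OR (index=trendmicro)

Since Splunk_SA_CIM regenerates this macro from the CIM Setup fields, one hedged guess is that re-saving the indexes whitelist for the Alerts model will regenerate a clean definition; failing that, Splunk Cloud support can edit the stored macro directly.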
The current version of Splunk Enterprise on Linux supports several flavors of the 5.x kernel, but does not seem to support the 6.x kernel per the most recent system requirements. We are planning a migration of our Splunk infrastructure from Amazon Linux 2 (kernel 5.10.x) to Amazon Linux 2023 (kernel 6.1.x) due to the approaching operating system end of life. Does anyone know if there are plans for Splunk Enterprise to support the new Amazon OS?
Custom token script stopped working. Can anyone spot any obvious errors? It worked perfectly from version 6.x - 8.x. I get the error "A custom JavaScript error caused an issue loading your dashboard. See the developer console for more details." The console isn't very helpful:

common.js:1702 Error: Script error for: util/console
http://requirejs.org/docs/errors.html#scripterror
at makeError (eval at e.exports (common.js:1:1), <anonymous>:166:17)
at HTMLScriptElement.onScriptError (eval at e.exports (common.js:1:1), <anonymous>:1689:36)

// Tokenize.js
// The failing module is 'util/console', which no longer resolves in recent
// Splunk versions and is the likely cause of the requirejs scripterror above;
// it is dropped from the require list here in favor of the browser's native console.
require(['jquery', 'underscore', 'splunkjs/mvc'], function($, _, mvc) {
    // Set a token on both the default and submitted token models
    function setToken(name, value) {
        var defaultTokenModel = mvc.Components.get('default');
        if (defaultTokenModel) {
            defaultTokenModel.set(name, value);
        }
        var submittedTokenModel = mvc.Components.get('submitted');
        if (submittedTokenModel) {
            submittedTokenModel.set(name, value);
        }
    }

    // Main
    $('.dashboard-body').on('click', '[data-on-class],[data-off-class],[data-set-token],[data-unset-token],[data-token-json]', function(e) {
        e.preventDefault();
        console.log("Inside the click");
        var target = $(e.currentTarget);
        console.log("here");
        console.log("target.data('on-class')=" + target.data('on-class'));
        var cssOnClass = target.data('on-class');
        var cssOffClass = target.data('off-class');
        if (cssOnClass) {
            $("." + cssOnClass).attr('class', cssOffClass);
            target.attr('class', cssOnClass);
        }
        var setTokenName = target.data('set-token');
        if (setTokenName) {
            setToken(setTokenName, target.data('value'));
        }
        var unsetTokenName = target.data('unset-token');
        if (unsetTokenName) {
            var tokens = unsetTokenName.split(",");
            for (var i = 0; i < tokens.length; i++) {
                setToken(tokens[i], undefined);
            }
            //setToken(unsetTokenName, undefined);
        }
        var tokenJson = target.data('token-json');
        if (tokenJson) {
            try {
                if (_.isObject(tokenJson)) {
                    _(tokenJson).each(function(value, key) {
                        if (value == null) {
                            // Unset the token
                            setToken(key, undefined);
                        } else {
                            setToken(key, value);
                        }
                    });
                }
            } catch (err) { // renamed from 'e' to avoid shadowing the click event
                console.warn('Cannot parse token JSON: ', err);
            }
        }
    });
});
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Ann/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Loading program
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: James/Bond
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Start APL (pid 8484)
----------------------------------------------------------------------------------------------------
---------------------------- This is an Example (He/She) -----------------------------
Version: 21.04.812-174001
Date/time: 2024-10-18/01:00:06 (2024-10-18/05:00:06 UTC)
User/aplnid: /2370
ComputerName/-user: Martin/King
Windows NT version 6.2, build no. 9200 /10872/6241785241
-> Initialising external processes
----------------------------------------------------------------------------------------------------

I am trying to break events at "This is an Example".

[mysourcetype]
TIME_FORMAT = %Y-%m-%d/%H:%M:%S
TIME_PREFIX = Date\/time:\s+
TZ = US/Eastern
LINE_BREAKER = (.*)(This is An Example).*
SHOULD_LINEMERGE = false

This works when I test it in "Add Data", but it is not working under props.conf; all the lines are merged into one event. What is the issue here?
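
A minimal sketch of a corrected stanza, on the assumption that the first capture group in LINE_BREAKER is what gets discarded at the event boundary (so it should match only the newlines, not event text), and that the props must live on the indexer or heavy forwarder that first parses the data, not only on a search head:

[mysourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)-+\s+This is an Example
TIME_PREFIX = Date\/time:\s+
TIME_FORMAT = %Y-%m-%d/%H:%M:%S
TZ = US/Eastern

"Add Data" previews apply props locally on the instance you test from, which would explain why the test worked there while the deployed config did not.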
I am trying to figure out how to include a lookup in my search, but only some records. My current search is below. My company has two issues: we do not log the app version anywhere easy to grab, so I need to pull it via rex; and we manually maintain a list of clients (some are on an old version and we don't populate the "client" field for them) and what host they are on. Some clients have both their application and DB on the same host, so my search below results in some weird duplicates where the displayName is listed twice for a single record in my result set (a field containing two values somehow). I want the lookup to only include records where the "host_type" is "application", not "db". Here is my search:

`Environments(PRODUCTION)` sourcetype=appservice "updaterecords" AND "version"
| eval host = lower(host)
| lookup clientlist.csv hostname as host, OUTPUT clientcode as clientCode
| eval displayName = IF(client!="",client,clientCode)
| rex field=_raw "version: (?<AppVersion>.*)$"
| eval VMVersion = replace(AppVersion,"release/","")
| eval CaptureDate=strftime(_time,"%Y-%m-%d")
| dedup clientCode
| table displayName,AppVersion,CaptureDate

I did try including host_type right after "..hostname as host.." and using a | where clause later, but that did not work.
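
One hedged sketch of a workaround, assuming the CSV has columns hostname, clientcode, and host_type: filter the lookup rows before matching by using an inputlookup subsearch join instead of the lookup command, so the db rows never enter the match:

... | eval host = lower(host)
| join type=left host [
    | inputlookup clientlist.csv
    | where host_type="application"
    | eval host = lower(hostname)
    | fields host clientcode
]
| eval displayName = if(client!="", client, clientcode)
...

The lookup command matches every CSV row with the same hostname (hence the multivalue displayName); pre-filtering to host_type="application" leaves at most one row per host.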
Hello Splunk Community,

I monitor the audit.log on RHEL 8. As soon as I generate a specific log entry locally, I can find it through my defined search query in Splunk. However, after a few hours pass, I can no longer find it with the same search query. Of course, I adjust the time settings accordingly: first I search in real time (last 30 minutes), then I switch to, for example, Today or the last 4 hours. I have noticed that this happens with searches that include "transaction msg maxspan=5m". I want to see all the related transactions. When I have the command transaction msg maxspan=5m in my search, I find all the related transactions in real time. After a few hours, I no longer get any hits with the same search query. Only when I remove the transaction command from the search do I see the entries again, but then I don't see as much information as before. Nothing changes if I switch to transaction msg maxevents=3. Do I possibly have a wrong configuration of my environment here, or do I need to adjust something? Thanks in advance.

Search query:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m
| search type=SYSCALL (auid>999 OR auid=0) auid!=44444 auid!=4294967295 comm!=updatedb comm!=ls comm!=bash comm!=find comm!=crond comm!=sshd comm!="(systemd)"
| rex field=msg "audit\((?P<date>[\d]+)"
| convert ctime(date)
| sort by date
| table date, type, comm, uid, auid, host, name
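
A hedged diagnostic sketch: over longer time ranges, transaction can silently evict groups once its internal limits (maxopentxn/maxopenevents in limits.conf) are exceeded, which would match the "works in real time, empty hours later" symptom. Adding keepevicted=true makes the evicted groups visible so you can confirm or rule this out:

index="sys_linux" sourcetype="linux_audit"
| transaction msg maxspan=5m keepevicted=true
| eval evicted=if(closed_txn=0, "yes", "no")
| search type=SYSCALL (auid>999 OR auid=0)
| table _time, evicted, type, comm, uid, auid, host, name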
Hi, I am looking into the possibility of deploying a private Splunk instance for integration testing in AWS. Can anyone tell me whether it is possible to install an NFR licence on an instance deployed in AWS? Thanks
We are trying to watch the NIC statistics for our OS interfaces. We are gathering data from a simple

ifconfig eth0 | grep -E 'dropped|packets' > /var/log/nic-errors.log

For my search, I have:

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| rex "RX\spackets\s(?<rxPackets>\d+)\s"
| rex "RX\serrors\s+\d+\s+dropped\s(?<rxDrop>\d+)\s"
| chart last(rxError), last(rxPackets), last(rxDrop) by host

which displays the base data. Now I want to watch if rxError increases and flag that. Any ideas? The input data will look something like:

RX packets 2165342  bytes 33209324712 (3.0 GiB)
RX errors 0  dropped 123  overruns 0  frame 0
TX packets 1988336  bytes 2848819271 (2.6 GiB)
TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
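
A minimal sketch of flagging increases with streamstats, comparing each reading to the previous one per host (field names from the search above; alerting on any positive delta is an assumption, adjust the threshold to taste):

index="myindex" host="our-hosts*" source="/var/log/nic-errors.log"
| rex "RX\serrors\s(?<rxError>\d+)\s"
| sort 0 host _time
| streamstats current=f window=1 last(rxError) as prevRxError by host
| eval rxErrorDelta = rxError - prevRxError
| where rxErrorDelta > 0

Saved as an alert, this fires whenever the error counter grows between consecutive samples.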
We have configured authentication extensions with Azure to enable token creation for SAML users, following this link: https://docs.splunk.com/Documentation/SplunkCloud/latest/Security/ConfigureauthextensionsforSAMLtokens#Configure_and_activate_authentication_extensions_to_interface_with_Microsoft_Azure I can create a token for myself, but cannot create tokens for others. I had another admin test, and he could create a token for himself but could not create one for me or other users. The only error Splunk provides is "User <user> does not exist", which is not true; the users do exist. All permissions are in place on both the Splunk admin and Azure sides. Any ideas on what is wrong?
We are trying to set up a Splunk Enterprise 9.3.2 cluster. All nodes are working fine, but the Splunk Universal Forwarder isn't working: it is not listening on management port 8089 (or 8088). We are running on Google Cloud Platform using RHEL 9.5 (latest); we already tried RHEL 8.10 (latest) too.

Documentation used: https://docs.splunk.com/Documentation/Forwarder/9.3.2/Forwarder/Installanixuniversalforwarder#Install_the_universal_forwarder_on_Linux

Commands used to set it up:

cd /opt
tar xzf /opt/splunkforwarder-9.3.2-d8bb32809498-Linux-x86_64.tgz
adduser -d /opt/splunkforwarder splunkfwd
export SPLUNK_HOME=/opt/splunkforwarder
$SPLUNK_HOME/bin/splunk enable boot-start -systemd-managed 1 -user splunkfwd -group splunkfwd
systemctl start SplunkForwarder

cat /etc/systemd/system/SplunkForwarder.service

[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
ExecStart=/opt/splunkforwarder/bin/splunk _internal_launch_under_systemd --accept-license
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
LimitRTPRIO=99
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunkfwd
Group=splunkfwd
NoNewPrivileges=yes
PermissionsStartOnly=true
AmbientCapabilities=CAP_DAC_READ_SEARCH
ExecStartPre=-/bin/bash -c "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"

$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="9.5 (Plow)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="9.5"
PLATFORM_ID="platform:el9"
PRETTY_NAME="Red Hat Enterprise Linux 9.5 (Plow)"
ANSI_COLOR="0;31"
LOGO="fedora-logo-icon"
CPE_NAME="cpe:/o:redhat:enterprise_linux:9::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9"
BUG_REPORT_URL="https://issues.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 9"
REDHAT_BUGZILLA_PRODUCT_VERSION=9.5
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="9.5"

[root@splunk-custom-image log]# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::22                   :::*                    LISTEN      1684/sshd: /usr/sbi
tcp6       0      0 :::20201                :::*                    LISTEN      2517/otelopscol
udp        0      0 127.0.0.1:323           0.0.0.0:*                           652/chronyd
udp6       0      0 ::1:323                 :::*                                652/chronyd

[root@splunk-custom-image log]# systemctl status SplunkForwarder
● SplunkForwarder.service - Systemd service file for Splunk, generated by 'splunk enable boot-start'
     Loaded: loaded (/etc/systemd/system/SplunkForwarder.service; enabled; preset: disabled)
     Active: active (running) since Thu 2024-11-21 09:03:55 EST; 7min ago
    Process: 797 ExecStartPre=/bin/bash -c chown -R splunkfwd:splunkfwd /opt/splunkforwarder (code=exited, status=0/SUCCESS)
   Main PID: 1068 (splunkd)
      Tasks: 47 (limit: 100424)
     Memory: 227.4M
        CPU: 3.481s
     CGroup: /system.slice/SplunkForwarder.service
             ├─1068 splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd
             └─2535 "[splunkd pid=1068] splunkd --under-systemd --systemd-delegate=no -p 8089 _internal_launch_under_systemd [process-runner]"

Nov 21 09:03:55 systemd[1]: Started Systemd service file for Splunk, generated by 'splunk enable boot-start'.
Nov 21 09:03:58 splunk[1068]: Warning: Attempting to revert the SPLUNK_HOME ownership
Nov 21 09:03:58 splunk[1068]: Warning: Executing "chown -R splunkfwd:splunkfwd /opt/splunkforwarder"
Nov 21 09:03:58 splunk[1068]:         Checking mgmt port [8089]: open
Nov 21 09:03:59 splunk[1068]:         Checking conf files for problems...
Nov 21 09:03:59 splunk[1068]:         Done
Nov 21 09:03:59 splunk[1068]:         Checking default conf files for edits...
Nov 21 09:03:59 splunk[1068]:         Validating installed files against hashes from '/opt/splunkforwarder/splunkforwarder-9.3.2-d8bb32809498-linux-2.6-x86_64->
Nov 21 09:04:00 splunk[1068]: PYTHONHTTPSVERIFY is set to 0 in splunk-launch.conf disabling certificate validation for the httplib and urllib libraries shipped>
Nov 21 09:04:00 splunk[1068]: 2024-11-21 09:04:00.038 -0500 splunkd started (build d8bb32809498) pid=1068

/opt/splunkforwarder/var/log/splunk/splunkd.log is attached.
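
A minimal sketch of checks, on the assumption that something in the effective config disabled or rebound the management port (note a UF never listens on 8088, which is the HEC port, so only 8089 is expected):

# is splunkd bound anywhere at all?
sudo ss -tlnp | grep splunkd

# does the effective config disable or rebind the management port?
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk btool server list httpServer --debug
sudo -u splunkfwd /opt/splunkforwarder/bin/splunk btool web list settings --debug | grep mgmtHostPort

In server.conf, [httpServer] disableDefaultPort = true, or an mgmtHostPort in web.conf bound to a non-routable address, would produce exactly this symptom while splunkd itself keeps running.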
I have tried everything to change the node sizes in the 3D Graph Network Topology Visualization. I am able to get all of the other options to work. Here is the run-anywhere search I am using to test the viz. Pretty straightforward. I have changed around the field order and tried all types and sizes of numbers, and nothing seems to change the size of the nodes in the output graph. Has anyone else seen this issue, or been able to get the node sizing to work with the weight_* attributes?

| makeresults
| eval src="node1", dest="node2", color_src="#008000", color_dest="#FF0000", edge_color="#008000", edge_weight=1, weight_src=1, weight_dest=8
| table src, dest, color_src, color_dest, edge_color, weight_src, weight_dest, edge_weight

And the output I am getting: [screenshot of the rendered graph]
Cannot communicate with task server, please check your settings.

The Task Server is currently unavailable. Please ensure it is started and listening on port 9998. See the documentation for more details.

We are getting the above errors while trying to connect with DB Connect 3.18.1. We are running Splunk 9.3.1. I've tried uninstalling our OpenJDK and re-installing it, but am finding this:

splunk_app_db_connect# rpm -qa | grep java
tzdata-java-2024b-2.el9.noarch
javapackages-filesystem-6.0.0-4.el9.noarch
java-11-openjdk-headless-11.0.25.0.9-3.el9.x86_64
java-11-openjdk-11.0.25.0.9-3.el9.x86_64

DIR: /opt/splunk/etc/apps/splunk_app_db_connect
splunk_app_db_connect# java -version
openjdk version "11.0.25" 2024-10-15 LTS
OpenJDK Runtime Environment (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS)
OpenJDK 64-Bit Server VM (Red_Hat-11.0.25.0.9-1) (build 11.0.25+9-LTS, mixed mode, sharing)

One shows 9-1 and one shows 9-3.
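
A minimal sketch of quick checks, assuming a default DB Connect setup where the task server should be a Java process listening on 9998:

# is anything listening on the task server port?
sudo ss -tlnp | grep 9998

# is the DB Connect java process running under splunk?
ps -ef | grep [j]ava

The 9-1 vs 9-3 strings are just the JVM build banner versus the RPM release and are normally harmless; a mismatch between the JRE path configured in DB Connect's settings and the actual JDK install location is a more common reason for the task server failing to start.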
Hi Team, I am looking for a way to forward data from my heavy forwarders to a different destination while maintaining metadata like host, source, and sourcetype. I have tried using the tcpout config in outputs.conf, but I do not see the metadata being transferred. The syslog config in outputs.conf does not work for me either.
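
A minimal sketch of outputs.conf for this, assuming the destination is another Splunk instance (host and group names are placeholders). Cooked data carries host/source/sourcetype and is the default for tcpout:

[tcpout:other_splunk]
server = receiver.example.com:9997
sendCookedData = true

If the destination is not a Splunk instance, the metadata is lost with plain tcpout (sendCookedData = false) or syslog output, since those send raw event text only; a non-Splunk receiver would need the metadata embedded in the event itself or sent via a protocol that carries it (e.g. HEC JSON).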
Please help me with configuring rsyslog to Splunk. Our rsyslog server will receive the logs from network devices, and the rsyslog host has a UF installed. I have no idea how to configure this, or even what rsyslog is. Please give me a step-by-step procedure for how to configure this with our deployment server or indexer. Documentation will be highly appreciated.
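
A minimal sketch of the usual pattern, with placeholder paths, ports, and index names: rsyslog (the Linux syslog daemon) writes incoming device logs to per-host files, and the UF monitors that directory and forwards to the indexers.

/etc/rsyslog.d/50-network.conf on the rsyslog host:

$ModLoad imudp
$UDPServerRun 514
$template PerHostFile,"/var/log/network/%HOSTNAME%/syslog.log"
if $fromhost-ip != '127.0.0.1' then ?PerHostFile
& stop

inputs.conf on the UF (deployable from the deployment server as an app):

[monitor:///var/log/network]
index = network
sourcetype = syslog
host_segment = 4
disabled = 0

host_segment = 4 sets each event's host from the fourth path segment (/var/log/network/<host>/syslog.log), so events keep the originating device name rather than the rsyslog server's.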