All Topics


Dear Splunkers, I am planning to build dashboard visualizations for Citrix NetScaler. We have the Splunk Add-on for Citrix NetScaler to collect logs from the NetScaler appliance. However, I can't find any app for Citrix NetScaler that leverages the logs collected by the add-on and presents them in visualizations. Any suggestions, please? Thanks.
Hi, I'm trying to integrate Dynatrace with Splunk using the Dynatrace Add-on for Splunk. However, after the configuration I'm getting the error below: (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1143)'))). Has anyone experienced this, or does anyone know how to solve this certificate issue? FYI, I have updated the SSL certificate on both Splunk and Dynatrace, but it didn't help.
I want to block the audit.log file on a particular instance from being sent to Splunk. Is the stanza below sufficient to accomplish that? Per the inputs.conf spec for matching a file:

blacklist = <regular expression>
* If set, files from this input are NOT monitored if their path matches the specified regex.
* Takes precedence over the deprecated '_blacklist' setting, which functions the same way.
* If a file matches the regexes in both the deny list and allow list settings, the file is NOT monitored. Deny lists take precedence over allow lists.
* No default.

[monitor:///logs/incoming/file.com/all-messages.log]
sourcetype = something
index = something_platform
disabled = 0
blacklist = audit.log
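For reference, blacklist is a regular expression matched against the monitored file path, so an unescaped dot matches any character, and the setting only takes effect when the monitor stanza can actually pick up more than one file. A minimal sketch, assuming the intent is to monitor the whole directory while excluding audit.log (paths and index name are taken from the post):

[monitor:///logs/incoming/file.com]
sourcetype = something
index = something_platform
disabled = 0
# exclude any file whose path ends in audit.log (dot escaped so it matches literally)
blacklist = audit\.log$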
I have a Splunk 9.1.2 server running RHEL 8 with about 50 clients. This is an air-gapped environment. I have a bunch of Linux (RHEL and Ubuntu) UFs and have configured inputs.conf to ingest files like /var/log/messages, /var/log/secure, /var/log/audit/audit.log, /var/log/cron, etc. Recently, I noticed that only logs from /var/log/messages and /var/log/cron are being ingested; specifically, I don't see /var/log/secure and /var/log/audit/audit.log. I tried restarting the Splunk process on one of the UFs and checked splunkd.log, and I don't see any errors. Here is what I see for /var/log/secure in splunkd.log (looks normal; I have typed it, as I can't copy/paste from the air-gapped machine):

TailingProcessor [xxxxxx MainTailingThread] passing configuration stanza: monitor:///var/log/secure
TailingProcessor [xxxxxx MainTailingThread] Adding watch on path:///var/log/secure
WatchedFile [xxxxxx tailreader 0] - Will begin reading at offset=xxxx for file=`/var/log/secure`

Here is my inputs.conf:

[default]
host = <indexer>
index = linux

[monitor:///var/log/secure]
disabled = false

[monitor:///var/log/messages]
disabled = false

[monitor:///var/log/audit/audit.log]
disabled = false

[monitor:///var/log/syslog]
disabled = false

File permissions seem to be fine for all of those files. Please note, SELinux is enabled, but the file permissions still look fine. Initially, I did have to run "setfacl -R -m u:splunkfwd:rX /var/log" for Splunk to get access to send logs to the indexer. btool also showed that I am using the correct inputs.conf. Any idea what's misconfigured?
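A quick way to confirm what is actually reaching the indexer per source is a tstats check like the following (a minimal sketch; the index name matches the inputs.conf above, adjust as needed):

| tstats latest(_time) as last_seen count where index=linux by host, source
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")

If /var/log/secure and audit.log never appear here, the gap is on the forwarder side (permissions, SELinux context, or the input); if they appear only with old timestamps, the files are being read but nothing new is arriving.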
I'm using the punchcard visualization in Dashboard Studio and the values on the left are getting truncated with ellipses. Is there a way to display the full value or edit the truncation style?
Hi, I have a log that tracks user changes to a specific field in a form. The process is as follows:
1. The user accesses the form, which generates a log event with "get" eventtype along with the current value of field1. This can occur several times as the user refreshes the page, or through code behind the scenes that generates an event based on how long the user stays on the page.
2. The user fills in the form and hits submit, which logs an event with "update" eventtype.

Here's a simplified list of events:

_time    eventtype    sessionid    field1
10:06    update       session2     newvalue3
10:05    get          session2     newvalue2
09:15    update       session1     newvalue2
09:12    get          session1     newvalue1
09:10    get          session1     newvalue1
09:09    update       session1     newvalue1
09:02    get          session1     oldvalue1
09:01    get          session1     oldvalue1
08:59    get          session1     oldvalue1

I'm looking to get the last value of field1 before each "update" eventtype. Basically I'd like to track what the value was before and what it was changed to, something like:

_time    Before       After
10:06    newvalue2    newvalue3
09:15    newvalue1    newvalue2
09:09    oldvalue1    newvalue1

I've tried this with a transaction command on the session, but I run into issues with the multiple "get" events in the same session, which makes it a little convoluted to extract the running values of field1. I also tried a combination of latest(field1) and earliest(field1), but that misses any updates that take place within the session; we sometimes have users who change the value and then change it back, and I'd like to capture those events as well. Does anyone have any tips on how to get this accomplished? Thanks!
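One possible approach is streamstats, which can carry the previous value forward within each session. A minimal sketch using the field names from the post (the base search is a placeholder):

index=myindex eventtype=get OR eventtype=update
| sort 0 _time
| streamstats current=f latest(field1) as Before by sessionid
| where eventtype="update"
| rename field1 as After
| table _time Before After

The sort puts events oldest-first, streamstats with current=f takes field1 from the most recent preceding event in the same session, and the where clause keeps only the "update" events, giving one Before/After pair per update.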
Hello, I have a query:

searchquery_oneshot = "search (index=__* ... events{}.name=ResourceCreated) | dedup \"events{}.tags.A\" | spath \"events{}.tags.A\" || lookup Map.csv \"B\" OUTPUT \"D\" | table ... | collect ...

I ran this using the Python SDK in VS Code as:

oneshotsearch_results = service.jobs.oneshot(searchquery_oneshot, **kwargs_oneshot)
conn.cursor().execute(sql, val)

I ran the above using psycopg2 and got this error:

FATAL: Error in 'lookup' command: Could not construct lookup 'Map.csv, B, OUTPUT, D'. See search.log for more details.

The query works when run inside Splunk Enterprise, i.e. Map.csv is looked up and the result is fetched correctly. How do I locate my search.log? It is splunkhome/var/lib/dispatch/run, I assume. What is the error above? Thanks
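For reference, here is a minimal sketch of running a oneshot search with the Splunk Python SDK and reading its results (host, credentials, and the query are placeholders, not the values from the post):

import splunklib.client as client
import splunklib.results as results

# Connect to splunkd on the management port (placeholder credentials)
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Oneshot searches must start with a generating command such as 'search'
query = 'search index=_internal | head 5 | table _time sourcetype'
stream = service.jobs.oneshot(query, output_mode="json", count=0)

# Iterate results; warnings and errors come through as Message objects
for item in results.JSONResultsReader(stream):
    if isinstance(item, dict):
        print(item)
    else:
        print("MESSAGE:", item)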
Hi guys, when I extract a selected event, it doesn't show all the data in the event that I need to extract.
Have a nice day, everyone! I came across some unexpected behavior while trying to route some unwanted events to the nullQueue. I have a sourcetype named 'exch_file_trans-front-recv'. Events for this sourcetype are ingested by a universal forwarder with the settings below.

props.conf:

[exch_file_trans-front-recv]
ANNOTATE_PUNCT = false
FIELD_HEADER_REGEX = ^#Fields:\s+(.*)
SHOULD_LINEMERGE = false
INDEXED_EXTRACTIONS = csv
TIMESTAMP_FIELDS = date_time
BREAK_ONLY_BEFORE_DATE = true
MAX_TIMESTAMP_LOOKAHEAD = 24
initCrcLength = 256
TRANSFORMS-no_column_headers = no_column_headers

transforms.conf:

[no_column_headers]
REGEX = ^#.*
DEST_KEY = queue
FORMAT = nullQueue

In this sourcetype I have some events that I want to drop before indexing. An example:

2024-08-22T12:58:31.274Z,Sever01\Domain Infrastructure Sever01,08DCC212EB386972,6,172.25.57.26:25,172.21.255.8:29635,-,,Local

So I'm interested in dropping events matching the pattern '...172.21.255.8:....,'. To do this, I created the following settings on the indexer cluster layer.

props.conf:

[exch_file_trans-front-recv]
TRANSFORMS-remove_trash = exch_file_trans-front-recv_rt0

transforms.conf:

[exch_file_trans-front-recv_rt0]
REGEX = ^.*?,.*?,.*?,.*?,.*?,172.21.255.8:\d+,
DEST_KEY = queue
FORMAT = nullQueue

After applying this configuration across the indexer cluster, I still see new events matching that pattern. What am I doing wrong?
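One small detail worth noting: in the transform regex the dots in the IP address are unescaped, so they match any character. A slightly tighter variant, as a sketch only; it targets the sixth comma-separated field, as in the sample event above:

[exch_file_trans-front-recv_rt0]
# five comma-terminated fields, then the client IP:port as the sixth field
REGEX = ^(?:[^,]*,){5}172\.21\.255\.8:\d+,
DEST_KEY = queue
FORMAT = nullQueue

This alone may not explain the behavior, since where index-time transforms apply also depends on where the data is parsed (the sourcetype uses INDEXED_EXTRACTIONS on the forwarder), but it removes one source of ambiguity.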
Hi Team, I am trying to instrument a .NET 4.8 application that uses ASP.NET SignalR over WebSockets. When accessing this application, the AppDynamics profiler loads successfully, but I don't see any metrics in the Controller. Below is the URL (XHR request) that I am expecting to see in the Controller:

GET /signalr/connect transport=webSockets&clientProtocol=2.1&connectionToken=VRaaiTyPGpv6nQYxV59QI3x6IGjDEvSSCf1ANWpXALK0c6DjkOh9vFnl5MPGlMl4qJWFSAYWcx0HIpiIHBb0HOGSeawT%2FofowF35o5aqOAgrzeYeaAs9spjxBBg6qknK&connectionData=%5B%7B%22name%22%3A%22chathub%22%7D%5D&tid=10

Apart from custom instrumentation (since I don't know the application's class and method information), is there a way to capture this transaction?
Hi Splunkers, I'm new to React development and currently working on a React app that handles creating, updating, cloning, and deleting users for a specific Splunk app. The app is working well, but for development purposes I've hardcoded the REST API URL, username, and password. Now I want to enhance the app so it dynamically uses the current session's user authentication rather than relying on hardcoded credentials. Here's the idea I'm aiming for: when a user (e.g., "user1" with admin roles) logs into Splunk, their session credentials (like a session key or authentication token) are stored somewhere, right? I need to capture those credentials in my React app. Does this approach make sense? I'm looking for advice on how to retrieve and use the session credentials, token, or session key for the logged-in user in my "User Management" React app. Here's the current code I'm using to fetch all users (with hardcoded credentials):

import axios from 'axios';

// Fetch user data from the API
const fetchAllUsers = async () => {
  try {
    const response = await axios.get('https://localhost:8089/services/authentication/users', {
      auth: {
        username: 'admin',
        password: 'changeme'
      },
      headers: {
        'Content-Type': 'application/xml'
      }
    });
    // Handle response
  } catch (error) {
    console.error('Error fetching users:', error);
  }
};

I also tried retrieving the session key using this cURL command:

curl -k https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=changeme

However, I'm still hardcoding the username and password, which isn't ideal. My goal is for the React app to automatically use the logged-in user's session credentials (session key or authentication token) and retrieve the hostname of the deployed environment. Additionally, I'm interested in understanding how core Splunk user management operates and handles authorization. My current approach might be off, so I'm open to learning the right way to do this. Can anyone guide me on how to achieve this in the "User Management" React app? Any advice or best practices would be greatly appreciated! Thanks in advance!
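One common pattern, if the React app is served as a page inside Splunk Web, is to call splunkd through Splunk Web's /splunkd/__raw proxy. The browser's Splunk Web session cookie then authenticates the request, so no credentials need to be hardcoded. A minimal sketch (the "en-US" locale prefix and the relative path are assumptions and may differ per deployment):

// Runs inside a page served by Splunk Web, so the session cookie is sent automatically
const fetchAllUsers = async () => {
  const response = await fetch(
    '/en-US/splunkd/__raw/services/authentication/users?output_mode=json',
    { credentials: 'same-origin' }
  );
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  const data = await response.json();
  return data.entry; // splunkd returns one entry per user in JSON output mode
};

If the app is hosted outside Splunk Web, the usual alternative is to log in once against /services/auth/login (as in the cURL example above), keep the returned session key, and send it as an "Authorization: Splunk <sessionKey>" header on later requests.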
I'm new to Splunk. I have this error after finishing the installation:

[root@rhel tmp]# systemctl restart splunk-otel-collector
[root@rhel tmp]# systemctl status splunk-otel-collector
● splunk-otel-collector.service - Splunk OpenTelemetry Collector
   Loaded: loaded (/usr/lib/systemd/system/splunk-otel-collector.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/splunk-otel-collector.service.d
           └─service-owner.conf
   Active: failed (Result: exit-code) since Thu 2024-08-22 16:30:11 WIB; 273ms ago
  Process: 2760714 ExecStart=/usr/bin/otelcol $OTELCOL_OPTIONS (code=exited, status=1/FAILURE)
 Main PID: 2760714 (code=exited, status=1/FAILURE)

Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Service RestartSec=100ms expired, scheduling restart.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Scheduled restart job, restart counter is at 5.
Aug 22 16:30:11 rhel systemd[1]: Stopped Splunk OpenTelemetry Collector.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Start request repeated too quickly.
Aug 22 16:30:11 rhel systemd[1]: splunk-otel-collector.service: Failed with result 'exit-code'.
Aug 22 16:30:11 rhel systemd[1]: Failed to start Splunk OpenTelemetry Collector.
Hello, I'm trying to create a DB Connect input to log the result of a query into an index. The query returns data when I execute it from Splunk; however, when I go to Search, I can't find anything in the index that I configured for it.

1. From the "DB Connect Input Health" dashboard I see no errors, and it shows events from the input I created every x minutes (exactly as I configured it). It also shows this metric, which confirms that data is being returned by the execution: DBX - Input Performance - HEC Median Throughput - Search is completed - 0.0465 MB
2. From index=_internal pg6 source="/opt/splunk/var/log/splunk/splunk_app_db_connect_server.log" I can see: Job 'my_input_name' started; Job 'my_input_name' stopping; Job 'my_input_name' finished with status: COMPLETED
3. If I search the index I created for it, it is empty.
4. splunk_app_db_connect 3.9.0

Thanks for any light!
Hi, I am currently working on ticket reporting. Each ticket has a lastUpdateDate field which gets updated multiple times, leading to duplicates. I only need the first lastUpdateDate and the latest lastUpdateDate: the first to determine when the ticket entered the pipeline, and the latest to see whether changes were made within the specific period range of the report. I tried using | stats first(_raw) as first_entry last(_raw) as last_entry by ticket_id, but it shows me the same lastUpdateDate for both. I have read about using min and max but did not get results from that either. Thanks in advance for any hints and tips!
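A minimal sketch of the min/max approach, assuming lastUpdateDate is a string field and that the strptime format string is adjusted to match the actual data:

index=mytickets
| eval update_epoch=strptime(lastUpdateDate, "%Y-%m-%d %H:%M:%S")
| stats min(update_epoch) as first_update max(update_epoch) as last_update by ticket_id
| eval first_update=strftime(first_update, "%Y-%m-%d %H:%M:%S"),
       last_update=strftime(last_update, "%Y-%m-%d %H:%M:%S")

Converting the field to epoch time first matters because min and max on the raw string would compare lexicographically, while first() and last() depend on the order in which events reach stats rather than on the timestamp values.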
Hi all, I am integrating a Splunk form/dashboard with SOAR, where I use "sendtophantom" to create a container on which a playbook needs to run. However, what I am noticing is that when the container has multiple artifacts, the playbook takes all the artifacts' CEF fields and combines them into one, which then causes havoc with my playbooks. I have considered changing the ingest settings to send MV fields as a list instead of creating new artifacts, but this will break too many other playbooks, so it isn't an option right now. My flow is basically as follows:
1. The container gets created with information coming from Splunk; the artifact(s) contain subject and sender email information.
2. The playbook needs to run through each artifact to get the subject and sender info.
3. The playbook processes these values.
Is there a way to specify that a playbook must run against each artifact in a container individually, or another way to alter the datapaths in the VPE to run through each artifact?
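One way this is often handled in a playbook code block or custom function is to collect per-artifact datapaths with the "artifact:*" wildcard, which yields one row per artifact instead of a merged list. A rough sketch only; the CEF field names (emailSubject, fromEmail) are assumptions, and the exact collect2 usage should be checked against the SOAR playbook API docs:

import phantom.rules as phantom

def process_each_artifact(container):
    # "artifact:*" expands per artifact, so each row corresponds to one artifact
    rows = phantom.collect2(
        container=container,
        datapath=["artifact:*.cef.emailSubject", "artifact:*.cef.fromEmail", "artifact:*.id"],
    )
    for subject, sender, artifact_id in rows:
        # process one artifact's subject/sender pair at a time
        phantom.debug("artifact {}: subject={} sender={}".format(artifact_id, subject, sender))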
We are currently on Splunk ES 7.3.2 (Splunk Enterprise Security), where users who used to be part of the organisation but are now deleted/disabled (in Splunk) still populate the list when I try to assign new investigations to current members of the organisation. For instance: Incident Review -> Notable -> Create Investigation. In the investigation panel, when I try to assign the investigation to other members of the team, I can also see disabled/deleted accounts/users/members as options to assign the investigation to. Is there any way we can stop these members from populating, so that the list of investigators reflects the current members of the team?
Hello, why can't I see the export-to-CSV option on my Dashboard Studio dashboard? Our Splunk version is 9.2.1. Thanks.
Can I ask a question about Splunk? I am using the feature that allows me to embed report jobs into HTML using an iframe. However, even though I have 140 job results in Splunk, only 20 are being displayed in the embedded HTML. Does anyone know how to solve this issue?
Missing indexes: does anyone have a way to investigate what causes indexes to suddenly disappear? Running btool and listing indexes, my primary indexes with all my security logs are just not there. I also have an NFS mount for archival, and the logs are missing from there too. Going to the /opt/splunk/var/lib/splunk directory, I see the last hot bucket was collected around 9am. I am trying to parse through whatever logs I can to find out what happened and how to recover.
On Splunk Enterprise 9.2 and DB Connect 3.17.2: I'm in the process of replacing our old Splunk instance, and with the new version of DB Connect I seem to be unable to disable SSL encryption on the connection to the database. It's a Microsoft MS SQL database, and I connect using the generic MS SQL driver. I do not have "Enable SSL" checked, and I have encrypt=false in the JDBC URL:

jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=false

and yet it cannot connect, throwing the error:

"encrypt" property is set to "false" and "trustServerCertificate" property is set to "false" but the driver could not establish a secure connection to SQL Server by using Secure Sockets Layer (SSL) encryption: Error: SQL Server did not return a response.

The old system, running DB Connect 3.1.4 on Splunk Enterprise 7.3.2, can connect just fine without SSL enabled. Why is DB Connect insisting on attempting an SSL connection? The SQL Server is obviously not requiring it, or the old server would not work. Or is this a false error message diverting me from some other problem?
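For comparison, here is a connection string that keeps encryption on but skips certificate validation; a sketch only, assuming the bundled driver behaves like recent Microsoft JDBC drivers, where encrypt defaults to true and trustServerCertificate controls whether the server certificate must validate:

jdbc:sqlserver://phmcmdb01:1433;databaseName=CM_PHE;selectMethod=cursor;encrypt=true;trustServerCertificate=true

Both property names appear in the error message above, so they are honored by the driver; which combination actually connects depends on the driver version shipped with DB Connect 3.17.2.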