Hi Splunkers, I have a lookup, country_categorization, which holds keywords and their equivalent countries. We need to use this info in the main asset search: when the country field from the index is "not available" or "Unknown", we need to match a keyword from the lookup against the asset name from the index. Keywords are usually prefixes of the asset names (with multiple entries), and each should map to its equivalent country.

Index: Asset, country
braiskdidi001, Britain
breliudusfidf002, Unknown
bruliwhdcjn001, not available

Lookup: keyword, country
bru - Britain
bre - Britain

The output should be:
braiskdidi001, Britain
breliudusfidf002, Britain
bruliwhdcjn001, Britain

Thanks in advance! Manoj Kumar S
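One hedged sketch of this, assuming the keyword is always a fixed-length leading prefix of the asset name (the 3-character substr length and the index name are assumptions; Asset, country, and keyword are the field names from the post):

```
index=your_index
| eval keyword=substr(Asset, 1, 3)
| lookup country_categorization keyword OUTPUT country as lookup_country
| eval country=if(country IN ("Unknown", "not available"), lookup_country, country)
| table Asset country
```

If the prefixes vary in length, an alternative is to define the lookup with match_type = WILDCARD(keyword) in transforms.conf and store the keywords as patterns like bru*.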
Hello, I'd like to know how to locate the correlation searches that XSOAR is monitoring, rather than going through the Incident Review panel in ES. Could you please check if there's a REST API search available for this? Thanks!
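On an ES search head, the saved-searches REST endpoint can usually be queried from SPL; a sketch, assuming a reasonably recent ES version where correlation searches are flagged with the action.correlationsearch.enabled attribute:

```
| rest /servicesNS/-/-/saved/searches splunk_server=local
| where 'action.correlationsearch.enabled'=1
| table title search cron_schedule disabled eai:acl.app
```

The same endpoint is reachable externally at https://&lt;search-head&gt;:8089/servicesNS/-/-/saved/searches if XSOAR needs to poll it over HTTPS.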
I have an alert configured in Splunk, and the alert's search query is generating events, but I am not receiving any email alerts. Other alerts are working fine in my environment. I have selected "Send email" as the alert action in Splunk.
Hello Splunkers!! While accessing the Advanced Search settings, the Macros page is no longer visible. We have a macros folder present under the app's default folder, but it is not visible in the UI.
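To confirm the macros still exist (and check their sharing/permissions) even though the UI page is missing, a REST query can be tried from the search bar; a sketch using the standard splunkd config endpoint:

```
| rest /servicesNS/-/-/configs/conf-macros splunk_server=local
| table title definition eai:acl.app eai:acl.sharing eai:acl.perms.read
```

If the macros appear here but not in the UI, the usual suspect is role-based read permissions or app visibility rather than the macro definitions themselves.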
Hi, we want to automate the creation of some common Health Rules and Policies for multiple applications at a time. Could you please suggest how we can implement this without creating them manually for each application individually?
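One possible automation route is the AppDynamics Health Rule API, if it is available on your Controller version; a sketch where the host, credentials, file names, and application IDs are all placeholders:

```
# Export health rules from a template application
curl -s -u "user@account:password" \
  "https://<controller-host>/controller/alerting/rest/v1/applications/<template-app-id>/health-rules" \
  -o health-rules.json

# Re-create one exported rule on another application
curl -s -u "user@account:password" -X POST \
  -H "Content-Type: application/json" \
  -d @one-rule.json \
  "https://<controller-host>/controller/alerting/rest/v1/applications/<target-app-id>/health-rules"
```

Looping the POST over a list of application IDs would then fan one template out to many applications; please verify the exact endpoint paths against your Controller's API documentation first.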
Hi Splunkers, I'm performing some tests in my test environment and I'm curious about an observed behavior. I want to add some network inputs (TCP and UDP ones) to my env. I easily found in the docs how to achieve this: Monitornetworkports. It works fine, with no issues; inputs are correctly added to my Splunk, and I can confirm this both in the web GUI and from the CLI using btool.

My question is: if I use the command in the above link, inputs are added to the inputs.conf located in SPLUNK_HOME\etc\apps\search\local. For example, if I use:

splunk add tcp 3514 -index network -sourcetype checkpoint

and then I run:

splunk btool inputs list --debug | findstr 3514

the output is:

C:\Program Files\Splunk\etc\apps\search\local\inputs.conf [tcp://3514]

Checking the file manually, the settings related to my add command are exactly there. So I assume that search is the default app if no additional parameter is provided.

Now, I know that if I want to edit another inputs.conf file, I can simply edit it manually. But what if I want to edit another inputs.conf from the CLI? In other words: can I use the splunk add command and specify which inputs.conf file to modify? Is it possible?
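I'm not aware of a documented flag on splunk add that targets another app's inputs.conf, so the usual route is to create the stanza directly under the desired app and restart (or reload) Splunk; a minimal fragment, with the app name as a placeholder:

```
# SPLUNK_HOME\etc\apps\<your_app>\local\inputs.conf
[tcp://3514]
index = network
sourcetype = checkpoint
```

btool will then show the stanza resolving from &lt;your_app&gt; instead of search, which also makes the input portable if the app is deployed elsewhere.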
Hi! I have saved historical data for some metric, from even before the agent was installed. Is there any way to load it into the Controller, so I can see this metric even for the time when I didn't have an agent installed? I don't see any API for that. Thanks! -Dimitri
Subject: Issue with Splunk server not starting after configuring TLS

Description: I'm encountering an issue with my Splunk server after configuring TLS. Here's a summary of the steps I've taken:

Placed the certificate files (cert.pem, cacert.pem, key.pem) in the directory /opt/splunk/etc/auth/mycerts/.

Modified the /opt/splunk/etc/system/local/server.conf file with the following configuration:

[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2,tls1.3
serverCert = /opt/splunk/etc/auth/mycerts/cert.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cacert.pem
sslKeysfile = /opt/splunk/etc/auth/mycerts/key.pem

After restarting the Splunk server with ./splunk restart, the following messages were displayed:

Starting splunk server daemon (splunkd)... Done
Waiting for web server at http://127.0.0.1:8000 to be available.... WARNING: web interface does not seem to be available!

Additionally, checking the status with ./splunk status reports: splunkd is not running.

Could someone assist me in troubleshooting this issue? I'm unsure why the Splunk server is not starting properly after enabling TLS. Thank you for your help!
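For comparison, a commonly working shape of the stanza on current versions, where serverCert is expected to be a single PEM containing the server certificate followed by its private key, and the key passphrase (if any) goes in sslPassword; sslKeysfile is a legacy setting. The combined file name below is a placeholder:

```
[sslConfig]
enableSplunkdSSL = true
sslVersions = tls1.2,tls1.3
serverCert = /opt/splunk/etc/auth/mycerts/server-combined.pem
sslRootCAPath = /opt/splunk/etc/auth/mycerts/cacert.pem
sslPassword = <key passphrase, only if the key is encrypted>
```

When splunkd refuses to start after a TLS change, the first lines of /opt/splunk/var/log/splunk/splunkd.log from the failed start usually name the exact certificate or key problem.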
From Splunk, can I see the queries that have been executed in the database, such as UPDATE, DELETE, INSERT, etc.?
Hello Team, as we delve into Splunk Attack Range 3.0, we're interested in understanding the MITRE ATT&CK tactics and techniques that can be simulated within this environment. If you have information on this, kindly share it with us. Thank you!
I have this query, which works as expected. There are two different bodies, axs_event_txn_visa_req_parsedbody and axs_event_txn_visa_rsp_formatting, and the field common to both is F62_2:

(eventtype=axs_event_txn_visa_req_parsedbody "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]") OR eventtype=axs_event_txn_visa_rsp_formatting
| rex field=_raw "(?s)(.*?FLD\[Acquiring Institution.*?DATA\[(?<F19>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Authentication Program.*?DATA\[(?<FCO>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<F62_2>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
| stats values(txn_uid) as txn_uid, values(txn_timestamp) as txn_timestamp, values(F19) as F19, values(FCO) as FCO, values(VRC) as VRC by F62_2
| where F19!=036 AND FCO=01

Now let's say I want to rewrite this query using appendcols/substring: the TID from axs_event_txn_visa_req_parsedbody should be passed into another query so I can get the corresponding log. For example:

Table 1: Name, Emp-id -> Jayesh, 12345
Table 2: Designation, Emp-id -> Engineer, 12345

Use Emp-id from Table 1 to get the designation from Table 2. Similarly, TID is the common field between the two indexes; I want to fetch VRC using the TID from Table 1:

index=au_axs_common_log source=*Visa* "++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]"
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
| appendcols [ search index=au_axs_common_log source=*Visa* "FORMATTING:"
  | rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
  | rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
  | stats values(VRC) as VRC by TID ]
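Worth noting: appendcols aligns rows by position, not by a key, so results can pair the wrong TID with a VRC. A join-like stats over both event types, grouped on the shared TID, may be closer to what is wanted; a sketch reusing the search terms and field names from the post (the filter on the request marker is an assumption about which rows to keep):

```
index=au_axs_common_log source=*Visa* ("++EXT-ID[C0] FLD[Authentication Program..] FRMT[TLV] LL[1] LEN[2] DATA[01]" OR "FORMATTING:")
| rex field=_raw "(?s)(.*?FLD\[62-2 Transaction Ident.*?DATA\[(?<TID>[^\]]*).*)"
| rex field=_raw "(?s)(.*?FLD\[Response Code.*?DATA\[(?<VRC>[^\]]*).*)"
| stats values(VRC) as VRC count by TID
| where isnotnull(VRC)
```

Because both event types land in the same stats, each TID row carries the VRC extracted from whichever event contained it, without positional pairing.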
Hello, I have a dashboard where the dropdown list works for me, as I have Splunk admin access, whereas the same dropdown list is not populating for a user with Splunk user-level access. How do I troubleshoot this issue? Thanks
Hi, I am creating a dashboard using the Dashboard Studio template, and I previously developed a custom Splunk visualization. How can I use my custom visualization in Dashboard Studio? By default, I can only choose from the built-in visualizations that Splunk provides.
Hi, does anyone out there use any archiving software to monitor, report on, and manage frozen bucket storage in an on-prem archive storage location? I have found https://www.conducivesi.com/splunk-archiver-vsl to fit our requirements, but we are interested in looking at other options. Thanks
The Microsoft Teams Add-on for Splunk shows a Microsoft Teams tab for the Microsoft 365 App for Splunk; however, I do not see that tab in the app. Has it been removed, or am I missing something? Possibly something is available on-prem but not in Cloud?
I am looking to represent stats for the 5 minutes before and after each hour for an entire day/time period. The search below works, but it still breaks the times into 5-minute chunks as it crosses the top of the hour. Is there a better way to search?

index=main (earliest=01/08/2024:08:55:00 latest=01/08/2024:09:05:00) OR (earliest=01/08/2024:09:55:00 latest=01/08/2024:10:05:00) OR (earliest=01/08/2024:10:55:00 latest=01/08/2024:11:05:00)
| bin _time span=10m
| stats count by _time
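One way to get a single 10-minute bucket straddling each hour is to skip bin entirely, filter by the minute of each event, and group on the nearest hour boundary; a sketch (the one-day time range is an assumption):

```
index=main earliest=01/08/2024:00:00:00 latest=01/09/2024:00:00:00
| eval m=tonumber(strftime(_time, "%M"))
| where m>=55 OR m<5
| eval hour_mark=if(m>=55, relative_time(_time, "+1h@h"), relative_time(_time, "@h"))
| eval hour_mark=strftime(hour_mark, "%Y-%m-%d %H:%M")
| stats count by hour_mark
```

Events from :55-:59 roll forward to the next hour boundary and events from :00-:04 snap back to the current one, so both sides of each hour land in the same row.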
This started out as a question, but is now just an FYI. Similar to this post, this week I received an old vulnerability notice from Tenable about my Splunk instance. We'd previously remediated this issue, so it was odd that it suddenly showed up again.

Vulnerability details:
https://packetstormsecurity.com/files/144879/Splunk-6.6.x-Local-Privilege-Escalation.html
https://advisory.splunk.com/advisories/SP-CAAAP3M?301=/view/SP-CAAAP3M
https://www.tenable.com/plugins/nessus/104498

The details in the articles are light, except for pointing to the directions here for running Splunk as non-root: https://docs.splunk.com/Documentation/Splunk/9.1.2/Installation/RunSplunkasadifferentornon-rootuser

Tenable also doesn't give details about exactly what it saw; it just says, "The current configuration of the host running Splunk was found to be vulnerable to a local privilege escalation vulnerability."

My OS is RHEL 7.x. I'm launching Splunk using systemd with a non-root user, and I have no init.d-related files for Splunk. My understanding is that launching with systemd eliminates the issue, since this way Splunk never starts with root credentials anyway.

Per Splunk's own advisory, a Splunk system is vulnerable if it satisfies one of the following conditions:
a. A Splunk init script was created via $SPLUNK_HOME/bin/splunk enable boot-start -user on Splunk 6.1.x or later.
b. A line with SPLUNK_OS_USER= exists in $SPLUNK_HOME/etc/splunk-launch.conf

In my case, this is an old server, and at one point we did run the boot-start command, which changed the $SPLUNK_HOME/etc/splunk-launch.conf line that sets SPLUNK_OS_USER. Although we had commented out that line, the Tenable regex is apparently broken and doesn't realize the line was disabled with a hash. Removing the line entirely made Tenable stop reporting the vulnerability. I assume their regex was only looking for "SPLUNK_OS_USER=<something>", so it missed the hash.

Anyway, hope this helps someone.
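The distinction Tenable apparently missed can be reproduced with a grep that ignores commented-out lines; a small sketch using a temporary stand-in for splunk-launch.conf (the real path would be $SPLUNK_HOME/etc/splunk-launch.conf):

```shell
# Build a stand-in splunk-launch.conf with only a commented-out SPLUNK_OS_USER line.
conf=/tmp/splunk-launch.conf
printf '%s\n' '# SPLUNK_OS_USER=splunk' 'SPLUNK_SERVER_NAME=Splunkd' > "$conf"

# Count only ACTIVE lines: anchor to start-of-line, allowing leading whitespace
# but no leading '#'. grep -c prints the match count; '|| true' tolerates zero matches.
matches=$(grep -c '^[[:space:]]*SPLUNK_OS_USER=' "$conf" || true)
echo "active SPLUNK_OS_USER lines: $matches"
```

A scanner regex that merely searches for SPLUNK_OS_USER= anywhere in the line would count the commented copy too, which matches the behavior described above.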
I need to migrate our cluster master to a new machine. It currently has these roles:

Cluster Master
Deployment Server
Indexer
License Master
Search Head
SHC Deployer

I already migrated the License Master role to the new server and it's working fine. I've been trying to follow the documentation here: https://docs.splunk.com/Documentation/Splunk/8.2.2/Indexer/Handlemanagernodefailure

From what I gather, I need to copy all the files in /opt/splunk/etc/deployment-apps, /opt/splunk/etc/shcluster, and /opt/splunk/etc/master-apps, plus anything in /opt/splunk/etc/system/local. Then add the passwords in plain text to server.conf in the local folder, restart Splunk on the new host, and point all peers and search heads to the new master in their respective local server.conf files.

Is there anything else that needs to be done, or would this take care of switching the cluster master entirely? And is there a specific order in which to do things?
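For reference, the re-pointing step described above typically looks like this in each peer's and search head's etc/system/local/server.conf (the hostname is a placeholder, and on newer versions the attribute is manager_uri rather than master_uri):

```
[clustering]
master_uri = https://new-cm.example.com:8089
```

A rolling restart of the peers after the change, with the new manager already up and holding the same pass4SymmKey, is usually the least disruptive order.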
Hello, we set up HEC HTTP inputs for several flows of data and related tokens, and we added the ACK feature to this configuration (following https://docs.splunk.com/Documentation/Splunk/9.1.2/Data/AboutHECIDXAck). We work with a distributed infrastructure: 1 search head, two indexers (no cluster).

All was OK with HEC, but after some time we got our first error event:

ERROR HttpInputDataHandler [2576842 HttpDedicatedIoThread-0] - Failed processing http input, token name=XXXX [...] reply=9, events_processed=0
INFO HttpInputDataHandler [2576844 HttpDedicatedIoThread-2] - HttpInputAckService not in healthy state. The maximum number of ACKed requests pending query has been reached.

The server busy error (reply=9) leads to unavailability of HEC, but only for the token(s) where the maximum number of ACKed requests pending query has been reached. Restarting the indexer is enough to get rid of the problem, but by then many logs have been lost. We did some searching and tried to customize some settings, but we only succeeded in delaying the 'server busy' problem (from 1 week to 1 month).

Has anyone experienced the same problem? How can we avoid those pending-query counters increasing? Thanks a lot for any help.

etc/system/local/limits.conf:

[http_input]
# The max number of ACK channels.
max_number_of_ack_channel = 1000000
# The max number of acked requests pending query.
max_number_of_acked_requests_pending_query = 10000000
# The max number of acked requests pending query per ACK channel.
max_number_of_acked_requests_pending_query_per_ack_channel = 4000000

etc/system/local/server.conf:

[queue=parsingQueue]
maxSize = 10MB
maxEventSize = 20MB
maxIdleTime = 400

channel_cookie = AppGwAffinity (this one because we are using a load balancer, so the cookie is also set on the LB)
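For what it's worth, the pending-ACK counters grow when clients post events on a channel but never poll the ACK status endpoint to confirm and release them. The poll request looks roughly like this (the host, channel GUID, token, and ackIds are placeholders, and the port assumes the default HEC 8088):

```
curl -k "https://<indexer>:8088/services/collector/ack?channel=<channel-guid>" \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"acks": [0, 1, 2]}'
```

If the sending application cannot be made to poll, another option to consider is disabling useACK for those tokens, since an ACK channel that is written to but never queried will always accumulate pending entries until the limit is hit.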
Hi Team, we are running Splunk v9.1.1 and need to upgrade the PCI app from v4.0.0 to v5.3.0. I am trying to find out the upgrade path, i.e., which version it has to be on before it can be upgraded to 5.3.0.