All Topics


Can you please help me build an eval query?
Condition 1: ABC=Match and XYZ=Match, then the output of "ABC compare to XYZ" is Match.
Condition 2: ABC=Match and XYZ=NO_Match, then the output of "ABC compare to XYZ" is No_Match.
Condition 3: ABC=NO_Match and XYZ=Match, then the output of "ABC compare to XYZ" is No_Match.
Condition 4: ABC=NO_Match and XYZ=NO_Match, then the output of "ABC compare to XYZ" is No_Match.
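A minimal sketch of such an eval, assuming the two fields are literally named ABC and XYZ and that the result should land in a field called compare_result (the result field name is illustrative):

| eval compare_result = case(ABC=="Match" AND XYZ=="Match", "Match", true(), "No_Match")

Since only the first condition produces Match, the remaining three conditions collapse into the default; an equivalent form is if(ABC=="Match" AND XYZ=="Match", "Match", "No_Match").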
Hello, I have a DB Connect query that gets data from a database and then sends it to a Splunk index. Below is the query and how it looks in Splunk. The data is being indexed as key=value pairs with double quotes around the "value". I have plenty of other data that does not come from DB Connect, and it doesn't have double quotes around the values. Maybe the quotes are there because I'm using DB Connect? Is it possible to index data from DB Connect without adding the quotes? When I try to search the data in Splunk, I just don't get any data back. I think it may have to do with the double quotes, but I'm not sure. Here is the search string. The air_temp field is defined in the Climate data model, and the TA (air temperature) field in the data is defined in props.conf for the correct sourcetype, TU_CLM_Time.

| tstats avg(Climate.air_temp) as air_temp from datamodel="Climate" where sourcetype="TU_CLM_Time" host=TU_CLM_1 by host _time span=60m ```Fetching relevant fields from CLM sourcetype in CLM datamodel.```
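A quick way to check whether the quotes themselves are breaking the match is to pull the value straight out of _raw; a sketch, assuming the raw events contain TA="<value>" and using a placeholder index name:

index=<your_index> sourcetype="TU_CLM_Time" host=TU_CLM_1
| rex field=_raw "TA=\"(?<air_temp_raw>[^\"]+)\""
| eval air_temp_raw = tonumber(air_temp_raw)
| timechart span=60m avg(air_temp_raw) as air_temp by host

If this returns data while the tstats search does not, the data model field mapping (or its acceleration) is a more likely culprit than the quotes themselves.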
Hey Splunk team, I'm facing an issue where Splunk fails to search for certain key-value pairs in some events unless I use wildcards (*) in the value. Here's an example to illustrate the problem:

{ "xxxx_ID": "78901", "ERROR": "Apples mangos lemons. Banana blackberry blackcurrant blueberry.", "yyyy_NUM": "123456", "PROCESS": "orange", "timestamp": "yyyy-mm-ddThh:mm:ss" }

Query examples. This works (using wildcards):
index="idx_xxxx" *apples mangos lemons*

These don't work:
index="idx_xxxx" ERROR="Apples mangos lemons. Banana blackberry blackcurrant blueberry."
index="idx_xxxx" ERROR=*apples mangos lemons*

The query below, using regex, also does not include all ERROR values when trying to find any value after the ERROR key:
index="idx_xxxx" | rex field=_raw "ERROR:\s*(?<error_detail>.+?)(?=;|$)" | table error_detail

Observations: Non-Latin characters are not the issue; in other events, for example, Greek text in the ERROR field is searchable without wildcards. This behavior is inconsistent: some events allow exact matches, but others don't.

Questions: Could this issue stem from inconsistencies in the field extraction process? Are there common pitfalls or misconfigurations during indexing or sourcetype assignment that might cause such behavior? How can I debug and verify that the keys and values are properly extracted/indexed?

Any help would be greatly appreciated! Thank you!
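One way to check whether ERROR is actually being extracted at search time (your third question) is to summarise the fields Splunk sees over a sample of events; a sketch, assuming default search-time extraction on the index:

index="idx_xxxx" earliest=-1h
| head 1000
| fieldsummary
| search field=ERROR
| table field count distinct_count

If ERROR is missing, or its count is far below the number of events, the problem is field extraction rather than the search terms; in that case running | spath input=_raw over a few of the failing events is a quick way to confirm whether they are valid JSON that Splunk can auto-extract.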
I have an index receiving data from DB Connect, but it is showing NO EVENTS and the error "Invalid database connection", even though everything is fine on the database side.
Hi Splunk Community, I've set up Azure Firewall logging, selecting all firewall logs and archiving them to a storage account (Event Hub was avoided due to cost concerns). The configuration steps taken are as follows:

Log archival: All Azure Firewall logs are set to archive to a storage account.

Microsoft Cloud Add-On: I added the storage account to the Microsoft Cloud Add-On using the secret key, with the following input configuration:
Input/Action: Azure Storage Table, Azure Storage Blob
API Permissions: N/A
Role (IAM): N/A
Storage account credential: Access key OR Shared Access Signature (Allowed services: Blob, Table; Allowed resource types: Service, Container, Object; Allowed permissions: Read, List)
Default Sourcetype(s) / Sources: mscs:storage:blob (this is the one we receive), mscs:storage:blob:json, mscs:storage:blob:xml, mscs:storage:table

We are receiving events from the source files in JSON format, but there are two issues:
1. Field extraction: Critical fields such as protocol, action, source, destination, etc., are not being identified.
2. Incomplete logs: Logs appear truncated, starting with partial data (e.g., "urceID:..." with the leading "Reso" missing), which implies dropped or incomplete events (as far as I understand). Only a few logs were received compared to the traffic on the Azure Firewall.

Attached is a snippet of the logs showing the errors mentioned above.

Environment details:
Log collector: Heavy Forwarder (HF) hosted in Azure.
Data flow: Logs are forwarded to Splunk Cloud.

Questions:
Could this be an issue with using storage accounts instead of Event Hub?
Could the incomplete logs be due to a configuration issue with the Microsoft Cloud Add-On, or possibly related to the data transfer between the storage account and Splunk?
Has anyone encountered similar issues with field extraction from Azure Firewall JSON logs?

Ultimate goal: Receive Azure Firewall logs with fields extracted, just like any other firewall logs received via syslog (Fortinet, for example).

Any guidance or troubleshooting suggestions would be much appreciated!
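On the field-extraction and truncation side, one thing worth checking is the props.conf handling for the blob sourcetype; the stanza below is only an illustrative sketch, assuming the JSON events arrive as mscs:storage:blob:json (if the add-on already ships settings for this sourcetype, compare against them rather than overriding blindly):

[mscs:storage:blob:json]
SHOULD_LINEMERGE = false
TRUNCATE = 0
KV_MODE = json

SHOULD_LINEMERGE and TRUNCATE take effect where parsing happens (the heavy forwarder in this setup) and are a common cause of events that start mid-string like "urceID:..."; KV_MODE = json is a search-time setting, so it needs to be visible to the Splunk Cloud search heads for fields such as protocol, action, source, and destination to be extracted.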
I would like to seek advice from experienced professionals. I want to add another heavy forwarder to my environment as a backup in case the primary one fails (on a different network, and not necessarily active-active). I have Splunk Cloud, plus 1 heavy forwarder and 1 deployment server on premises.

1. If I copy a heavy forwarder (VM) from one vCenter to another, change the IP, and generate new credentials from Splunk Cloud, will it work immediately? (I want to preserve my existing configurations.)
2. I have a deployment server. Can I use it to configure two heavy forwarders (see the sketch below)? If so, what would be the implications? (Would there be data duplication, or is there a way to prioritize data?) Or is there a better way to do this? Please advise.
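On question 2, a deployment server can manage any number of heavy forwarders; a serverclass.conf sketch (hostnames and the app name are placeholders) that targets both HFs with the same configuration app might look like this:

[serverClass:heavy_forwarders]
whitelist.0 = hf-primary.example.com
whitelist.1 = hf-backup.example.com

[serverClass:heavy_forwarders:app:hf_base_config]
restartSplunkd = true

The deployment server only distributes configuration; it does not by itself cause data duplication. Duplication would only occur if both heavy forwarders were configured to collect the same inputs at the same time.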
Hello Splunk Community, I’m working on a project to implement a Security Information and Event Management (SIEM) solution for a small-to-medium-sized enterprise that provides IT support and managed services. We're exploring options within the Splunk product line for effective log collection and analysis from endpoint devices, as well as vulnerability detection. Could you recommend the most suitable Splunk product(s) for this scope, along with pricing information or guidance on how to estimate the costs? Any advice on best practices or additional tools to enhance incident response would also be greatly appreciated. Thank you!
We are experiencing issues configuring RADIUS authentication within Splunk. Despite following all required steps and configurations, authentication via RADIUS is not working as expected, and users are unable to authenticate through the RADIUS server.

- Installed a RADIUS client on the Splunk machine and configured the radiusclient.conf file with the RADIUS server details.
- Updated the authentication.conf file located in $SPLUNK_HOME/etc/system/local/, as well as web.conf, to support RADIUS authentication requests in Splunk Web.
- Used the radtest tool to validate the connection between the Splunk machine's RADIUS client and the RADIUS server.
- Monitored the Splunk authentication logs in $SPLUNK_HOME/var/log/splunk/splunkd.log to identify any errors, and consistently encountered the following error: Could not find [externalTwoFactorAuthSettings] in authentication stanza.
- Integrated radiusScripted.py to assist with RADIUS authentication, configuring it to work with the authentication settings.

It appears that Splunk is unable to authenticate against the RADIUS server, with repeated errors indicating missing configuration stanzas or settings that are not recognized.

Environment details:
Splunk version: 9.1.5
Authentication configuration files: authentication.conf, web.conf
Additional scripts: radiusScripted.py

Please advise on troubleshooting steps or configuration adjustments needed to resolve this issue. Any insights or documentation on RADIUS integration best practices with Splunk would be highly appreciated. Thanks.
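For reference, scripted authentication in authentication.conf generally follows the shape below; this is a hedged sketch, and the stanza name radius_auth and the script location are placeholders rather than values from your setup:

[authentication]
authType = Scripted
authSettings = radius_auth

[radius_auth]
scriptPath = "$SPLUNK_HOME/bin/python3" "$SPLUNK_HOME/bin/radiusScripted.py"

The "Could not find [externalTwoFactorAuthSettings]" message suggests splunkd is also trying to resolve a multifactor configuration referenced from the [authentication] stanza; if you are not using Splunk's built-in MFA integrations, check that no externalTwoFactorAuthVendor or externalTwoFactorAuthSettings keys are set there.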
Upgraded the Machine Agent from version 22 to v24.8.0.4467; after the change, I'm seeing the errors below:

[DBAgent-7] 06 Nov 2024 17:24:28,994 ERROR ADBMonitorConfigResolver - [XXXXXXXXXXXX] Failed to resolve DB topological structure. java.sql.SQLException: ORA-01005: null password given; logon denied

I have provided the user ID and password in Configuration -> Controller Settings. Has anyone faced this kind of issue?
Hi All, I would like to add a reset button to my dashboard; however, I am not able to see an option to add one in Dashboard Studio. Thanks.
How can I get a list of all libraries included in this app? I will need that to get this through our security review.
Hello, Splunk doesn't display the extra spaces in values that I assign. Please see the example below. I used Google Chrome and Microsoft Edge, and both gave me the same results. If I export to CSV, the data has the correct number of spaces. Please suggest. Thank you.

| makeresults
| fields - _time
| eval 'One Space' = "One space Test"
| eval 'Two Spaces' = "Two  spaces Test"
| eval 'Three Spaces' = "Three   spaces Test"
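The extra spaces are usually still present in the search results and are only being collapsed by the browser when the results table is rendered; a sketch to confirm this, using replace() to substitute a visible marker for each space (the marker character is just illustrative):

| makeresults
| eval 'Two Spaces' = "Two  spaces Test"
| eval visible = replace('Two Spaces', " ", "·")
| fields - _time

If visible shows two markers between the words, the data is intact and only the HTML rendering collapses the whitespace, which matches the correct spacing seen in the CSV export.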
I have an index with events containing a src_ip but no username for the event. I have another index of VPN auth logs that has the assigned IP and username, but the VPN IPs are randomly assigned. I need to get the username from the VPN logs where vpn.client_ip matches event.src_ip, and I need to make sure the returned username is the one that was assigned at the time of the event. In short, I need the last VPN client_ip assignment matching event.src_ip BEFORE the event, so that vpn.username is the correct one for event.src_ip. Here's a generic representation of my current query, but I get nothing back:

index=event ... | join left=event right=vpn where event.src_ip=vpn.client_ip max=1 usetime=true earlier=true [search index=vpn]
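In case it helps frame the problem, one join-free pattern for "most recent assignment before the event" is to combine both indexes and carry the last seen username forward in time; a sketch, assuming the VPN events carry client_ip and username, the other events carry src_ip, and the index names are placeholders:

(index=event) OR (index=vpn)
| eval ip = coalesce(src_ip, client_ip)
| sort 0 ip _time
| streamstats last(username) as vpn_user by ip
| where index=="event"
| table _time ip vpn_user

Because stats functions ignore null values, last(username) at each event row picks up the username from the most recent VPN assignment at or before that event for the same IP.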
Please advise whether a specific license is needed to support indexing on a heavy forwarder, such as an indexing license?
Splunk Lantern is a Splunk customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk.

This month, we're excited to share some big updates to the Financial Services section of our Use Case Explorer for the Splunk Platform. We're also sharing the rest of the new articles we've published this month, featuring some new updates to our Definitive Guide to Best Practices for IT Service Intelligence (ITSI) and many more new articles that you can find towards the end of this article. Read on to find out more.

Finessing Splunk for Financial Services

The Lantern team has been busy working with Splunk's industry experts to update our Use Case Explorer for the Splunk Platform with brand-new use cases. The Use Case Explorer is a great tool to help you implement new use cases using either Splunk Enterprise or Splunk Cloud Platform, containing use cases that have been developed for seven key industries: Financial Services, Healthcare, Retail, Technology Communications and Media, Public Sector, Manufacturing, and Energy.

This month, we've launched a new Deployment Guide for Detecting and preventing fraud with the Splunk App for Fraud Analytics. This new guide introduces you to ways you can use the Splunk App for Fraud Analytics to enable detections for account takeovers, wire transfer fraud, credit card fraud, and new account fraud.

We've also published a number of new use cases that give you even more options for ways you can use the Splunk platform and Splunk apps to detect fraud within financial services settings. The following articles show you how you can set up basic detections in the platform to detect account abuse, account takeovers, or money laundering. Alternatively, you can choose to use the Splunk App for Behavioral Analytics to create advanced techniques leveraging user behavioral analytics, helping you to stay ahead of these emerging threats.

Monitoring for account abuse with the Splunk platform
Monitoring for account takeover with the Splunk platform
Monitoring money laundering activities with the Splunk platform
Monitoring for account abuse with the Splunk App for Behavioral Analytics
Monitoring for account takeover with the Splunk App for Behavioral Analytics
Monitoring money laundering activities with the Splunk App for Behavioral Analytics

ITSI Best Practices

We're constantly adding to and updating the Definitive Guide to Best Practices for IT Service Intelligence, and this month we've added even more new articles for ITSI users to explore.

Using the Content Pack for ITSI Monitoring and Alerting for policy management shows you how to use correlation searches and notable event aggregation policies that will save you time and administrative effort.
Understanding the less exposed elements of ITSI provides helpful information on the macros and lookups that ship with ITSI, which can give you quick access to valuable information about your environment.
Understanding anomaly detection in ITSI teaches you how best to use detection algorithms in ITSI so you can deploy them effectively for the right use cases.
These new articles are just some of many articles in the Definitive Guide to Best Practices for IT Service Intelligence, so if you're looking to improve how you work with ITSI then don't miss this helpful resource!

Everything Else That's New

Here's everything else we've published over the month:

Using the MITRE map in Mission Control
Installing and upgrading to Splunk Enterprise Security 8x
Using federated search for Amazon S3 (FS-S3) to filter, enrich, and retrieve data from Amazon S3
Finding, deploying, and managing security detections
Demonstrating ROI from SOAR
Ingesting VPC flow logs into Edge Processor via Amazon Data Firehose

We hope you've found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
After the Splunk forwarder version was upgraded from 9.0.5.0 to 9.3.1.0, our Windows servers are having issues forwarding data to Splunk. Splunkd keeps stopping on different servers; after restarting splunkd it starts forwarding data again, but the issue comes back after 2-3 days. What actions should be taken to keep the logs flowing to Splunk reliably?
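A starting point for narrowing this down is the forwarder's own internal logs; a sketch (the host filter is a placeholder for one of the affected Windows servers):

index=_internal host=<affected_host> sourcetype=splunkd log_level=ERROR
| stats count by component
| sort - count

The components with the highest error counts, together with the matching entries in splunkd.log on the server itself, usually indicate whether the stoppage is a crash, a resource limit, or an output/queue problem.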
Hi All, our current setup involves Splunk search heads hosted in Splunk Cloud and managed by Support. The existing Deployment Master server is hosted on Azure, where it has been operating smoothly, supporting around 900+ clients that send logs to Splunk through it. Now we're planning to migrate the Deployment Master from Azure to an on-premises Nutanix environment.

We've built a new server on premises with the necessary hardware specifications and are preparing to install the latest Splunk Enterprise package (version 9.3.1) downloaded from the Splunk website. We'll place this package in the `/tmp` directory on the new server, extract it into the `/opt` directory, accept the license agreement, and start Splunk services. Once it's up, we'll access the GUI to import the Enterprise licenses.

Next, I'll download the Splunk Universal Forwarder Credentials package (Splunkclouduf app) from the Splunk Cloud search head. Could you confirm whether this downloaded app should be placed in the `/opt/splunk/etc/apps`, `/opt/splunk/etc/deployment-apps`, or `/tmp` directory on the new server? From there, we can proceed with the installation. Please confirm.

Once installed, the Splunkclouduf app will create a `100_splunkcloud` folder in the `/opt/splunk/etc/apps` directory. Should I then copy the `100_splunkcloud` folder to the `/opt/splunk/etc/deployment-apps` directory? Also, can we rename the folder from "100_splunkcloud" to some custom name?

Additionally, the next step will involve transferring all deployment apps from the `deployment-apps` directory on the old server (`/opt/splunk/etc/deployment-apps`) to the same location on the new server. Please confirm if this is correct.

Finally:
- Update the `deploymentclient` app on both the old and new Deployment Master servers with the new server name.
- Reload the server class on the old Deployment Master server.
- Verify that all clients are reporting to the new Deployment Master server.

I want to confirm whether these steps are correct; if I've missed anything, kindly let me know, so that my new DM server runs fine post-migration.
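For the final step of repointing the 900+ clients, the relevant setting lives in deploymentclient.conf inside the app you push; a sketch, where the hostname is a placeholder for the new on-premises server:

[deployment-client]

[target-broker:deploymentServer]
targetUri = new-dm-server.example.com:8089

Pushing this updated app from the old deployment server, then reloading the server class there (splunk reload deploy-server), lets the clients pick up the new target before the old server is retired.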
Can someone suggest whether we can configure the Cluster Master to also work as a License Master? I tried to configure it, but it's throwing this error:

reason='Unable to connect to license manager=https://xx.xx.xx.xx:8089 Read Timeout'
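For context, a license peer is pointed at the license manager through the [license] stanza in server.conf; a sketch with a placeholder URI (newer releases use manager_uri, while older ones use master_uri):

[license]
manager_uri = https://<cluster-manager-hostname>:8089

A cluster manager can also act as the license manager on the same instance, in which case it does not need this setting for itself; the Read Timeout in your error usually points to the management port 8089 being unreachable or blocked from the node doing the configuring rather than to a licensing restriction.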
We plan to migrate an old physical server to a new physical server; the server is a Search Head component in our Splunk environment. The new physical server will receive a new IP address. My question is how to configure the new IP in the existing Splunk environment.

Our Splunk environment has:
1 Cluster Master
4 Indexers
1 Deployment Server
1 Search Head
1 Monitoring Console
1 License Master

DR servers:
1 Search Head
1 Indexer
I have a custom command that I call that populates a lookup but when I run the command, it only runs the script 5-20 times (it changes every time) while getting 20,000+ results. I'm wanting to run a query that sends the information into a custom script, to then populate a lookup, almost as if it's recursive. I'm thinking this is a performance issue of the script (it is a Python script so it's not the fastest). This is an example command of what it looks like:  index="*" host="example.org" | map search="| customcommand \"$src$\""
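One thing worth ruling out before blaming script performance: map only runs a limited number of subsearches per invocation (the default for maxsearches is 10), which lines up with seeing only 5-20 executions out of 20,000+ results. A sketch raising the limit (the value is illustrative, and $src$ is assumed to be a field present in the outer search results):

index="*" host="example.org"
| map maxsearches=25000 search="| customcommand \"$src$\""

Keep in mind that map runs the subsearch serially, once per result row, so at this scale an approach that feeds the results to the lookup directly (for example via outputlookup from a scheduled search) may scale better than map.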