All Topics

In a Red Hat OpenShift on-premises cluster, I need to collect logs, metrics, and traces of the cluster. How can I do this when there is no internet connection in the on-premises environment?
I am currently using a customized app to connect to a case/monitoring system and retrieve data. I found out that Splunk has the ability to detect whether data has already been indexed. But what about the following scenario: will it be considered duplicate or new data, given that the update carries a new case-closed time? One of the previously closed cases has been reopened and closed again with a new case-closed time. Will Splunk Enterprise consider this new data to index?
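Splunk itself does not deduplicate records at index time; whether the re-closed case is ingested again depends on how the custom app checkpoints what it has already pulled. If the event is re-indexed and only the latest update per case is wanted at search time, a minimal sketch (assuming hypothetical field names case_id and case_closed_time, and a hypothetical index and sourcetype) is:

index=cases sourcetype=case_updates
| eval closed_epoch=strptime(case_closed_time, "%Y-%m-%d %H:%M:%S") ```field names and time format are assumptions```
| dedup case_id sortby -closed_epoch

This keeps only the most recent closure event for each case, so a reopened-and-reclosed case shows up once with its latest close time.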
indexes.conf:

[volume:hot]
path = /mnt/splunk/hot
maxVolumeDataSizeMB = 40

[volume:cold]
path = /mnt/splunk/cold
maxVolumeDataSizeMB = 40

[A]
homePath = volume:hot/A/db
coldPath = volume:cold/A/colddb
maxDataSize = 1
maxTotalDataSizeMB = 90
thawedPath = $SPLUNK_DB/A/thaweddb

[_internal]
homePath = volume:cold/_internaldb/db
coldPath = volume:cold/_internaldb/colddb
thawedPath = $SPLUNK_DB/_internaldb/thaweddb
maxDataSize = 1
maxTotalDataSizeMB = 90

I collected data into each index, and the amounts stored in the cold volume were A = 30 MB and _internaldb/db = 10 MB. I understood A to account for the larger share because the data volume and collection speed of the A index were larger and faster than those of _internal. If I stop collecting data into the A index and keep collecting only into the _internal index, the old buckets in _internaldb/db are moved to _internaldb/colddb in the order they were loaded, but they are not retained in colddb; they are deleted immediately. Additionally, data that existed in A/colddb is deleted, oldest first. I understood that because the cold volume is limited to 40 MB and is already full, buckets are not retained in _internaldb/colddb and are deleted immediately. However, why is the data in A/colddb deleted? Afterwards, once A/colddb reaches 20 MB, it is no longer deleted. The behavior I expected was that A/colddb would be deleted until it reached 0 MB, and that the old buckets in _internaldb/db would then be moved to _internaldb/colddb and retained. I'm curious why the results differ from what I expected, and whether, when maxTotalDataSizeMB is the same, the volume maintains the same ratio between the indexes.
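One way to watch this behavior as it happens is dbinspect, which reports each bucket's state and size; a minimal sketch using the index names already shown above:

| dbinspect index=A index=_internal
| stats count as buckets sum(sizeOnDiskMB) as sizeOnDiskMB by index, state ```state is hot, warm, cold, or thawed```
| sort index, state

Running this before and after stopping ingestion into A makes it easier to see which buckets the volume-size enforcement is actually removing.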
Hello all, we have a requirement to have a common dashboard for all applications. For an application we have at most 2 indexes (one for non-prod env FQDNs and one for prod env FQDNs), and users are restricted based on index. My questions are:
1. Can we create a common dashboard for all applications (there are 200+ indexes) by giving index=* in the base search? We have indexes A to Z, but user A has access to only index A. If user A searches index=*, will Splunk look across indexes A to Z or only index A, which they have access to? (I am afraid of wasting Splunk resources.)
2. We have a separate role, test engineer, which has access to all indexes (A to Z). Is it a good idea to have a common dashboard for everyone, given that when an engineer loads the dashboard all indexes will be searched, which in turn could cause performance issues for other users?
3. We have app_name in place. Can I drop index=* from the base search and give app_name="*app_name*", with app_name as a dropdown, so that * is not used by default? Once a user selects an app_name, the dashboard would be populated (see the sketch after this post).
4. Or would separate dashboards for separate applications work? The ask is to have a common dashboard, but I am not sure whether this is good practice.
Please enlighten me with your thoughts and the best approach.
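On point 1, a role's allowed-indexes setting limits what index=* can actually search for that user, so user A's index=* resolves only to the indexes their role permits. For point 3, a minimal base-search sketch (assuming a hypothetical dashboard input token named app_name_tok and an app_name field in the events; neither is confirmed in this post) is:

index=* app_name="$app_name_tok$"
| stats count by index, app_name ```app_name_tok is an assumed dropdown token name```

With the dropdown's default left unset, the base search would not run until a user picks an application, which avoids searching every index when the dashboard opens.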
I have created a custom extension that captures the status of a scheduled job (e.g., Ready, Running, Other) and sends the data to AppDynamics as 1, 2, etc. respectively. Is it feasible to configure Health Rules to accommodate the following conditions?

TaskName - Triggers
Failure_PushData_ToWellnessCore_CRON - starts at 12 AM and executes every 3 hrs
PushToWellness fromDbtoConsoleApp - starts at 5 PM and executes every 3 hrs
Wellnessdatacron - starts at 12:01 PM and executes every 1 hr
WellnessFailureCRON - 9 AM, 12 AM, 3 PM, 6 PM, 10 PM, 1 AM, 5 AM
NoiseDataSyncCron - starts at 11 AM and executes every 1 hr
NoiseWebhookProcessor - starts at 11 AM and executes every 2 hrs

I tried configuring a cron schedule with start time 0 17/3 * * * and end time 59 23 30 4 * to accommodate the "Start at 5 PM and execute every 3 hrs" condition as a Health Rule schedule, but I am getting the error: Error creating schedule: Test. Cause: Unexpected end of expression. Can anyone help me with this?
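The "Unexpected end of expression" message is typical of Quartz-style cron parsers, which expect six or seven fields (starting with seconds) rather than the five-field expressions shown above; whether the Health Rule schedule field accepts Quartz syntax is an assumption here, not something confirmed in this thread. Under that assumption, the "start at 5 PM, every 3 hours" trigger would look more like:

0 0 17/3 * * ?

Note that an hours field of 17/3 only covers 17:00, 20:00, and 23:00 within a single day, so a schedule meant to keep stepping every 3 hours past midnight would still need to be expressed differently (for example by listing the hours explicitly).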
How can I migrate SmartStore's local storage to a new storage device with no interruption to search and indexing functionality? Could it be as simple as updating homePath one index at a time, restarting the indexers, and allowing the cache manager to do the rest? 
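Purely as an illustration of the kind of edit being asked about, a hypothetical indexes.conf fragment that repoints an index's local SmartStore cache at a new device might look like the following (the volume name, paths, and sizing are made up, and the index's existing remotePath is assumed to stay as it is):

# hypothetical example - adjust names, paths, and sizing to the environment
[volume:cache_new]
path = /mnt/newdevice/splunk/cache
maxVolumeDataSizeMB = 500000

[myindex]
homePath = volume:cache_new/myindex/db
coldPath = volume:cache_new/myindex/colddb
thawedPath = $SPLUNK_DB/myindex/thaweddb

Whether a rolling change like this is sufficient on its own (versus also pre-staging or clearing the existing cache) is exactly the open question in this post, so treat the fragment as a sketch rather than a validated procedure.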
I'm trying to install the Qualys Technology Add-on (TA) (https://splunkbase.splunk.com/app/2964) into Splunk Cloud. I tried downloading it from Splunkbase and uploading it to Splunk Cloud, but received an error stating "This app is available for installation directly from Splunkbase. To install this app, use the App Browser page in Splunk Web." When I try the "Browse More Apps" method, I cannot locate the Qualys TA. I DO see other Qualys apps such as Qualys FIM, Qualys VM, and Qualys CSAM, but I don't see the TA. What am I missing?
Hi, I'm trying to use an OR in the query below to combine two indexes and then use the stats command as an alternative to the join command:

(index=serverdata sourcetype="server:stats" | rex "app_code=\"(?<application_code>[|w.\"]*)" ) OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*")

I have tried to use escape characters, but it's still not working. Thanks.
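One common reason this form fails is that piped commands such as rex cannot sit inside the parenthesised OR of the base search; the OR should contain only event filters, with rex applied afterwards. A minimal sketch reusing the index, sourcetype, and field names above (the regex is simplified to capture up to the closing quote, which may not match the original intent exactly):

(index=serverdata sourcetype="server:stats") OR (index="hostapp" source=hostDB_Table dataasset="*host_Data*")
| rex "app_code=\"(?<application_code>[^\"]*)\"" ```only matches events from the serverdata leg```
| stats count by index, application_code

From there, the stats clause can be adjusted to whatever correlation the join was meant to produce.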
Hello, I have 2 questions about Splunk AI Assistant (Cloud version).
1. If customers sign the EULA and receive notification that the app can be installed on their stack, can the app be installed on all of the customer's stacks?
2. If a stack has premium products like ITSI and ES, can the app be used from the premium search heads, or does it need to be installed only on the ad hoc SH and used only from there?
Thanks! Regards, Darina Stoyanova-Mateva
Hello, I have a search like:

index=index1
| rename Number__c as EventId
| append [search index=index2 sourcetype="api" ]
| stats count by EventId
| search count < 2

What it does is search 2 indexes for IDs and count them, expecting 2 (1 from each index). What I would like to ensure is that when the count is less than the expected 2, its only source is the first search; that is, if there is only 1 record, it came from the first portion of the search and was not found in the second. In the table, however, I only want to show the EventId. Thanks for the assistance!
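One approach is to tag each leg of the search before the stats and then filter on that tag; a minimal sketch reusing the field names above:

index=index1
| rename Number__c as EventId
| eval origin="first" ```tag events from the first leg```
| append [search index=index2 sourcetype="api" | eval origin="second"]
| stats count values(origin) as origin by EventId
| where count < 2 AND origin="first"
| table EventId

Any EventId that survives the where clause was seen fewer than twice and only in the first portion of the search, and the final table shows just the EventId.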
Splunk, is there a way to dump out the ServiceNow add-on setup for each/all alerts? I'm trying to grab all alerts that have this action and put them in a table with all the settings each one has: state, CI, contact type, assignment group, etc.
Hello everyone! We have some exciting news - the Splunk AppDynamics Mobile Real User Monitoring (MRUM) Session Replay preview is now available (for MRUM users)! It's a feature within AppDynamics MRUM that allows you to visually replay recordings of actual user interactions within your mobile application.

Key Details:
• Requires controller version 25.1 and an MRUM agent upgrade
• Free preview starts Feb 2025 (20,000 session replays/month)
• Helps teams reduce MTTR, optimize UX, and understand user behavior

Check out these FAQs for additional details.

FAQs - General Overview Questions

What is AppDynamics MRUM Session Replay?
AppDynamics MRUM Session Replay is a feature within AppDynamics Mobile Real User Monitoring that allows you to visually replay recordings of actual user interactions within your mobile application. It captures a user's journey through the app, showing their taps, swipes, and other actions, providing a video-like representation of their experience. This feature is currently in public preview, with general availability expected soon.

What problems does MRUM Session Replay solve?
AppDynamics MRUM Session Replay solves several key problems related to mobile app development, troubleshooting, and user experience optimization:
• Faster troubleshooting (reduced MTTR): Session replay helps developers and DevOps teams quickly identify the root cause of crashes, Application Not Responding (ANR) errors, performance issues, and other errors. By visualizing the user's actions leading up to the problem, they can pinpoint the exact moment the issue occurred and understand the context, significantly reducing mean time to resolution (MTTR).
• Improved user experience (UI/UX optimization): Product managers, developers, and designers can use session replay to understand how users actually interact with the app. By observing real user behavior, they can identify friction points, confusing navigation, or areas where the UI/UX could be improved. This data-driven approach helps optimize the user experience, leading to increased engagement and satisfaction.
• Understanding user behavior: Session replay provides valuable insights into how users navigate and use the app. This understanding can inform design decisions, feature prioritization, and overall app strategy. Seeing the app through the user's eyes helps teams understand what's working and what's not.
• Reproducing issues: Replicating user-reported bugs can be challenging. Session replay eliminates this difficulty by providing a clear, visual record of the user's actions, making it easier to reproduce and fix the issue.

What are the benefits of MRUM Session Replay, and why should you care about it?
MRUM Session Replay offers two key benefits that directly impact customer satisfaction and business outcomes:
• Enable faster troubleshooting: Session replay drastically reduces the time it takes to diagnose and fix issues in your mobile app. By providing a visual recording of the user's actions leading up to a crash, error, or performance bottleneck, developers can quickly pinpoint the root cause. This eliminates the guesswork and back-and-forth communication often associated with traditional debugging methods. Faster troubleshooting translates to quicker resolution times for bugs and issues. This means less disruption for users, fewer negative app store reviews, and ultimately, a more stable and reliable app experience. A happy user is more likely to continue using your app and recommend it to others.
• Optimize the end-user experience in the mobile application: Session replay offers invaluable insights into how users actually interact with your app. By watching real user sessions, you can identify friction points, confusing navigation, and areas where the UI/UX could be improved. This data-driven approach to optimization allows you to make informed decisions about design changes and feature prioritization. A seamless and intuitive user experience is crucial for app success. By optimizing the user experience, you can increase user engagement, reduce churn, and improve customer satisfaction. A positive user experience is a key differentiator in the competitive mobile app market. Ultimately, a better user experience can lead to increased app usage, higher conversion rates, and improved business outcomes.

Product Specific Questions

What controller version is required?
You need controller version 25.1 to use MRUM Session Replay.

Do mobile agents need to be upgraded to use this feature?
Yes, MRUM agents must be upgraded to 25.1 to use Session Replay.

Are admin rights needed to enable Session Replay?
Yes, users with admin permission to configure MRUM can enable Session Replay.

How is the preview enabled?
Prerequisites: Mobile Session Replay (early preview) will be available for customers with controller version 25.1 or above.
◦ Upgrade the agent SDK
◦ Provide the blob service endpoint
◦ Provide the session replay module dependency (Android only)
Configuration: Enable Session Replay in Mobile App Group Configuration -> Session Replay. (Admin permission for mobile configuration is required.)

How long is the preview available?
The MRUM Session Replay free preview will be available for all active MRUM customers starting in February 2025. During the free trial, each account will get 20,000 session replays per month.

What happens after the preview is over?
After the preview ends, the feature will be available only for those with a Session Replay license.

How much will the Session Replay feature cost?
Pricing is not finalized yet for this feature.

Will I lose my data after the free preview?
Yes, you may. Your Session Replay data will be available for 8 days. After 8 days, that data will be lost. When the GA version is available, you can purchase and extend storage to lengthen the duration of data availability.
I see multiple Tenable apps and TAs on Splunkbase. Which one should I use to get Tenable data in?
Hello, I have been trying to migrate ELK data to Splunk. We have ELK data dating back 2 years, and I have attempted to use the Elastic integrator app from Splunkbase (https://splunkbase.splunk.com/app/4175). I was able to set it up with SSL, and it is bringing in logs from the past 30 days. The issue I have is that if I try to change the timeframe in inputs.conf it will not work, and if I try to use a wildcard for the index it will not work either. Has anyone found a way around this? I am also open to hearing any other suggestions for getting old ELK data into Splunk. Thank you.
If we have to allow or whitelist the Splunk Cloud IPs somewhere, how do we get the Splunk Cloud IPs for whitelisting? Are these IPs static? Is there any fixed range of IPs that Splunk uses for Splunk Cloud, so we can use those for whitelisting?
Hi, I have a complex base search where I am comparing data from two indexes using a left join and getting the results in a table. The query works fine, but it is very slow, so I have now decided to split it into two base searches and then combine them in the panel.

First base search:

index=serverdata
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group
| chart dc(host_name) over appcode by host_environment
| eval TOTAL_servers=DEV+PAT+PROD
| table appcode DEV PAT PROD TOTAL_servers

Second base search:

index=abc
| rex field=data "\|(?<server_name>[^\.|]+)?\|(?<appcode>[^\|]+)?\|"
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group

I want to use this in a third panel, combining both searches with a left join to get the list of server details present in both indexes. Question: how can I use two base searches in a single search?
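A Simple XML panel search can reference only one base search, so rather than joining two base searches in a third panel, one workaround is a single combined search that unions both indexes and groups by the shared key instead of using join. A minimal sketch reusing the fields above (treat it as an outline rather than a drop-in replacement, since the exact output columns of the original join are not shown):

(index=serverdata) OR (index=abc)
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| rex field=data "\|(?<server_name>[^\.|]+)?\|(?<appcode>[^\|]+)?\|"
| eval host=coalesce(server_host_name, server_name) ```one host field regardless of which index the event came from```
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group
| stats dc(host) as distinct_servers values(host) as servers by appcode

This combined search can itself be the single base search, with the existing chart/table logic attached as post-process searches in each panel.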
Hi - I need to be able to send copies of logs to both Splunk Cloud and an AWS CloudWatch Log Group. Is it possible to configure the Universal Forwarder to send logs from the same source to both locations? If not, has anybody used the UF and the CloudWatch agent to monitor the same log file? I'm worried about two products watching the same file.
Hi everyone, I have a scheduled search that runs every day, but sometimes it goes into a failed state. Is there any way or setting to re-run that scheduled search as soon as it goes into a failed state?
I have an installation where I am trying to leverage an intermediate forwarder (IF) to send logs to my indexers. I have approximately 3000 Universal Forwarders (UFs) that I want to send through the IF, but something is limiting the IF to around 1000 connections. The IF is a Windows Server 2019. I am monitoring the connections with this PowerShell command: netstat -an | findstr 9997 | measure | select count. I never see more than ~1000 connections, even though I have several thousand UFs configured to connect to this IF.

I have already tried increasing the max user ports, but there was no change:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay

I have validated the network by creating a simple client and server to test the maximum connections. It reached the expected maximum of 16,000 connections from the client network to the IF. I can also configure a server to listen on port 9997 and see several thousand clients trying to connect to the port.

I believe there must be something wrong with the Splunk IF configuration, but I am at a loss as to what it could be. There are no limits.conf configurations, and the setup is generally very basic. My official Splunk support is advising me to build more IFs and limit the clients to less than 1000, which I consider a suboptimal solution. Everything I’ve read indicates that an IF should be capable of handling several thousand UFs. Any help would be greatly appreciated.
Hi Team, is it possible to use an inputlookup of a CSV file with 7 columns and fill in the details of those columns using a search command that fetches the data from Splunk?

Example - my CSV looks like this:

Column1 , Column2
Value A1 , Value B1
Value A2 , Value B2
Value A3 , Value B3
Value A4 , Value B4

I need output like below:

Column1 , Column2 , Column3 , Column4
Value A1 , Value B1 , Value C1 , Value D1
Value A2 , Value B2 , Value C2 , Value D2
Value A3 , Value B3 , Value C3 , Value D3
Value A4 , Value B4 , Value C4 , Value D4

The values of Column3 and Column4 are fetched from Splunk using a search command, using the key value of Column1. I've tried the search below, but it is not working:

| inputlookup File.csv
| join Column1 type=left
    [ | tstats latest(Column3) as START_TIME,
               latest(Column4) as END_TIME
      where index=main source=xyz ]
| table Column1 , Column2 , START_TIME , END_TIME
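One likely reason the attempt above returns nothing is that the tstats subsearch never produces a Column1 field to join on (there is no by clause), and tstats can only aggregate over indexed or data-model fields. Under the assumption that Column1, Column3, and Column4 are ordinary search-time fields, a minimal sketch using a stats-based subsearch instead would be:

| inputlookup File.csv
| join type=left Column1
    [ search index=main source=xyz
      | stats latest(Column3) as START_TIME latest(Column4) as END_TIME by Column1 ]
| table Column1, Column2, START_TIME, END_TIME

If the events do not literally contain a field named Column1, the stats by clause and the join key would need to be renamed to whatever the actual key field is.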