All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Splunk - is there a way to dump out all of the ServiceNow add-on setup for each/all alerts? I'm trying to grab all alerts that have this action and put them in a table with all the setup each one has: state, CI, contact type, assignment group, etc.
Hello everyone! We have some exciting news - the Splunk AppDynamics Mobile Real User Monitoring (MRUM) Session Replay preview is now available (for MRUM users)! It's a feature within AppDynamics MRUM that allows you to visually replay recordings of actual user interactions within your mobile application.

Key Details:
• Requires controller version 25.1 and an MRUM agent upgrade
• Free preview starts Feb 2025 (20,000 session replays/month)
• Helps teams reduce MTTR, optimize UX, and understand user behavior

Check out these FAQs for additional details:

FAQs - General Overview Questions

What is AppDynamics MRUM Session Replay?
AppDynamics MRUM Session Replay is a feature within AppDynamics Mobile Real User Monitoring that allows you to visually replay recordings of actual user interactions within your mobile application. It captures a user's journey through the app, showing their taps, swipes, and other actions, providing a video-like representation of their experience. This feature is currently in public preview, with general availability expected soon.

What problems does MRUM Session Replay solve?
AppDynamics MRUM Session Replay solves several key problems related to mobile app development, troubleshooting, and user experience optimization:
• Faster Troubleshooting (Reduced MTTR): Session replay helps developers and DevOps teams quickly identify the root cause of crashes, Application Not Responding (ANR) errors, performance issues, and other errors. By visualizing the user's actions leading up to the problem, they can pinpoint the exact moment the issue occurred and understand the context, significantly reducing mean time to resolution (MTTR).
• Improved User Experience (UI/UX Optimization): Product managers, developers, and designers can use session replay to understand how users actually interact with the app. By observing real user behavior, they can identify friction points, confusing navigation, or areas where the UI/UX could be improved. This data-driven approach helps optimize the user experience, leading to increased engagement and satisfaction.
• Understanding User Behavior: Session replay provides valuable insights into how users navigate and use the app. This understanding can inform design decisions, feature prioritization, and overall app strategy. Seeing the app through the user's eyes helps teams understand what's working and what's not.
• Reproducing Issues: Replicating user-reported bugs can be challenging. Session replay eliminates this difficulty by providing a clear, visual record of the user's actions, making it easier to reproduce and fix the issue.

What are the benefits of MRUM Session Replay, and why should you care about it?
MRUM Session Replay offers two key benefits that directly impact customer satisfaction and business outcomes:
• Enable Faster Troubleshooting: Session replay drastically reduces the time it takes to diagnose and fix issues in your mobile app. By providing a visual recording of the user's actions leading up to a crash, error, or performance bottleneck, developers can quickly pinpoint the root cause. This eliminates the guesswork and back-and-forth communication often associated with traditional debugging methods. Faster troubleshooting translates to quicker resolution times for bugs and issues. This means less disruption for users, fewer negative app store reviews, and ultimately, a more stable and reliable app experience. A happy user is more likely to continue using your app and recommend it to others.
• Optimize the End-User Experience on Your Mobile Application: Session replay offers invaluable insights into how users actually interact with your app. By watching real user sessions, you can identify friction points, confusing navigation, and areas where the UI/UX could be improved. This data-driven approach to optimization allows you to make informed decisions about design changes and feature prioritization. A seamless and intuitive user experience is crucial for app success. By optimizing the user experience, you can increase user engagement, reduce churn, and improve customer satisfaction. A positive user experience is a key differentiator in the competitive mobile app market. Ultimately, a better user experience can lead to increased app usage, higher conversion rates, and improved business outcomes.

Product Specific Questions

What controller version is required?
You need controller version 25.1 to use MRUM Session Replay.

Do mobile agents need to be upgraded to use this feature?
Yes, MRUM agents must be upgraded to 25.1 to use Session Replay.

Are admin rights needed to enable Session Replay?
Yes, users with admin permission to configure MRUM can enable Session Replay.

How is the preview enabled?
Prerequisite: Mobile Session Replay (early preview) will be available for customers with controller version 25.1 or above.
◦ Upgrade the agent SDK
◦ Provide the blob service endpoint
◦ Provide the session replay module dependency (Android only)
Configuration: Enable Session Replay in Mobile App Group Configuration -> Session Replay. (Admin permission for mobile configuration is required.)

How long is the preview available?
The MRUM Session Replay free preview will be available for all active MRUM customers starting in February 2025. During the free trial, each account will get 20,000 session replays per month.

What happens after the preview is over?
After the preview ends, the feature will be available only for those with a Session Replay license.

How much will the Session Replay feature cost?
Pricing is not finalized yet for this feature.

Will I lose my data after the free preview?
Yes, you may. Your Session Replay data will be available for 8 days. After 8 days, that data will be lost. When the GA version is available, you can purchase and extend storage to lengthen the duration of data availability.
I see multiple Tenable apps and TAs on Splunkbase. Which one should I use to get Tenable data in?
Hi @Jeewan, if you download the Universal Forwarder app from your Splunk Cloud instance, it contains an outputs.conf file in which you should find the Splunk Cloud IPs of your instance. Ciao. Giuseppe
Hello, I have been trying to migrate ELK data to Splunk. We have ELK data dating back 2 years, and I have attempted to use the Elastic Integrator app from Splunkbase. I was able to set it up with SSL, and it's bringing in logs from the past 30 days. The issue I have is that if I try to change the timeframe in inputs.conf it will not work, and if I try to use a wildcard for the index it will not work either. Has anyone found a way around this? I am also open to hearing any other suggestions for getting old ELK data into Splunk. Thank you. #https://splunkbase.splunk.com/app/4175
Hi @Sec-Bolognese, I don't know how AWS CloudWatch works, but it's possible to send logs from a forwarder to Splunk Cloud and to a third party by following the instructions at https://docs.splunk.com/Documentation/Splunk/9.4.0/Forwarding/Routeandfilterdatad#Replicate_a_subset_of_data_to_a_third-party_system and https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Forwarddatatothird-partysystemsd Ciao. Giuseppe
Hi @secure, in a dashboard it's possible to define multiple base searches, but each panel can use only one base search, not more. Ciao. Giuseppe
If we have to allow or whitelist the Splunk Cloud IPs somewhere, how do we get the Splunk Cloud IPs for whitelisting? Are these IPs static? Is there a fixed range of IPs Splunk uses for Splunk Cloud that we can use for whitelisting?
Hi, I have a complex base search where I am comparing data from two indexes using a left join and getting the results in a table. The query works fine but it's very slow, so I have now decided to split it into two base searches and then combine them in the panel.

First base search:

index=serverdata
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group
| chart dc(host_name) over appcode by host_environment
| eval TOTAL_servers=DEV+PAT+PROD
| table appcode DEV PAT PROD TOTAL_servers

Second base search:

index=abc
| rex field=data "\|(?<server_name>[^\.|]+)?\|(?<appcode>[^\|]+)?\|"
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group

I want to use these in a third panel: combine both searches using a left join and get the list of server details in both indexes. Question: how can I use two base searches in a single search?
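One join-free pattern (a sketch only - field names like appcode and the rex patterns are assumed from the post above, so adjust to your data) is to search both indexes in a single search and then aggregate by the shared key with stats, which is usually much faster than join:

```spl
index=serverdata OR index=abc
| rex "host_name=\"(?<server_host_name>[^\"]*)"
| rex field=data "\|(?<server_name>[^\.|]+)?\|(?<appcode>[^\|]+)?\|"
| lookup servers_businessgroup_appcode.csv appcode output Business_Group as New_Business_Group
| stats values(server_host_name) as serverdata_hosts
        values(server_name)      as abc_servers
        by appcode
```

Because stats combines events from both indexes by appcode, this emulates the join in one pass; rows that exist only in one index will simply have an empty column for the other, similar to a left join.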
Hi - I need to be able to send copies of logs to both Splunk Cloud and an AWS CloudWatch log group. Is it possible to configure the Universal Forwarder to send logs from the same source to both locations? If not, has anybody used the UF and the CloudWatch agent to monitor the same log file? I'm worried about two products watching the same file.
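A UF can clone its output to multiple destinations via tcpout groups in outputs.conf. A minimal sketch (the group names and hosts are placeholders, and note that CloudWatch itself does not accept raw TCP - you would need a relay or agent on the third-party side that forwards into CloudWatch):

```spl
# outputs.conf on the Universal Forwarder (sketch; hosts are placeholders)
[tcpout]
defaultGroup = splunkcloud, thirdparty

[tcpout:splunkcloud]
server = inputs.example.splunkcloud.com:9997

[tcpout:thirdparty]
server = relay.example.com:5140
# send raw (uncooked) data so a non-Splunk receiver can parse it
sendCookedData = false
```

With two groups in defaultGroup, the UF clones every event to both destinations, which avoids having two products tailing the same file.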
I'm searching for a method for general use that I can apply as needed. Currently for testing I'm using a simple tstats search counting events by ip, in spans of 4 hours. I need a way to adjust the st... See more...
I'm searching for a method for general use that I can apply as needed. Currently for testing I'm using a simple tstats search counting events by ip, in spans of 4 hours. I need a way to adjust the starting point of the spans but as shown above, it's not actually shifting where it's searching the data. It's just changing the time labels in the table. 
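One way to actually shift the bucket boundaries (rather than just the labels) is to run tstats at a finer span and then re-bucket with bin, which supports an aligntime option. A sketch, assuming an index and an ip field that match your data:

```spl
| tstats count where index=main by _time span=1h, ip
| bin _time span=4h aligntime=@d+2h
| stats sum(count) as count by _time, ip
```

Here aligntime=@d+2h anchors the 4-hour buckets at 02:00, 06:00, 10:00, and so on, instead of the default midnight alignment; changing the snap expression moves the starting point of the spans.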
Hi everyone, I have a scheduled search that runs every day, but sometimes it goes into a failed state. Is there any way or setting to re-run that scheduled search as soon as it goes into a failed state?
I have an installation where I am trying to leverage an intermediate forwarder (IF) to send logs to my indexers. I have approximately 3000 Universal Forwarders (UFs) that I want to send through the IF, but something is limiting the IF to around 1000 connections. The IF is a Windows Server 2019. I am monitoring the connections with this PowerShell command: netstat -an | findstr 9997 | measure | select count. I never see more than ~1000 connections, even though I have several thousand UFs configured to connect to this IF. I have already tried increasing the max user ports, but there was no change: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\TcpTimedWaitDelay I have validated the network by creating a simple client and server to test the maximum connections. It reached the expected maximum of 16,000 connections from the client network to the IF. I can also configure a server to listen on port 9997 and see several thousand clients trying to connect to the port. I believe there must be something wrong with the Splunk IF configuration, but I am at a loss as to what it could be. There are no limits.conf configurations, and the setup is generally very basic. My official Splunk support is advising me to build more IFs and limit the clients to less than 1000, which I consider a suboptimal solution. Everything I’ve read indicates that an IF should be capable of handling several thousand UFs. Any help would be greatly appreciated.
Hi @PickleRick, I tried but I am unable to create the SPL query. Can you please help me with the accurate query?
Hi Team, is it possible to use inputlookup on a CSV file with 7 columns and fill in the details of those columns using a search command that fetches the data from Splunk?

Example - my CSV looks like this:

Column1, Column2
Value A1, Value B1
Value A2, Value B2
Value A3, Value B3
Value A4, Value B4

I need output like this:

Column1, Column2, Column3, Column4
Value A1, Value B1, Value C1, Value D1
Value A2, Value B2, Value C2, Value D2
Value A3, Value B3, Value C3, Value D3
Value A4, Value B4, Value C4, Value D4

The values of Column3 and Column4 are fetched from Splunk using a search command, keyed on the value of Column1.

I've tried the search below, but it is not working:

| inputlookup File.csv
| join Column1 type=left
    [ | tstats latest(Column3) as START_TIME latest(Column4) as END_TIME
        where index=main source=xyz ]
| table Column1, Column2, START_TIME, END_TIME
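One likely issue with a search shaped like the one above is that the subsearch returns a single row with no join key, so the left join has nothing to match on. A sketch of a corrected version (assuming Column1, Column3, and Column4 are indexed fields available to tstats - if they are search-time extractions, replace the tstats with a plain search plus stats):

```spl
| inputlookup File.csv
| join type=left Column1
    [ | tstats latest(Column3) as START_TIME latest(Column4) as END_TIME
        where index=main source=xyz by Column1 ]
| table Column1 Column2 START_TIME END_TIME
```

The key change is the "by Column1" clause, which makes the subsearch emit one row per key so the join can align each CSV row with its matching Splunk data.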
Hi livehybrid, thank you for answering my question. The PSC is already installed. Path: /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/4_2_2/lib/python3.9/site-packages/pandas

What I'm curious about is: why does the pandas error occur when I run ARIMA.py in the /opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos path, as below?

[root@master algos]# python3 ARIMA.py
Traceback (most recent call last):
  File "ARIMA.py", line 5, in <module>
    import pandas as pd
ModuleNotFoundError: No module named 'pandas'
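A likely explanation for an error like the one above: running "python3" at the shell uses the operating system's Python, which knows nothing about the pandas that PSC bundles inside the Splunk app tree. Splunk ships its own Python interpreter with its own module paths, so the script has to be run through it. A sketch, assuming a default /opt/splunk install:

```spl
# Use Splunk's bundled Python, not the OS python3
/opt/splunk/bin/splunk cmd python3 /opt/splunk/etc/apps/Splunk_ML_Toolkit/bin/algos/ARIMA.py
```

Note that MLTK algorithm scripts are normally invoked by the MLTK framework (e.g. via the "fit" command in a search), so even under the bundled interpreter a standalone run may not behave like a real MLTK invocation.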
index=cim_modactions source=/opt/splunk/var/log/splunk/incident_ticket_creation_modalert.log host=sh* search_name=* source=* sourcetype=modular_alerts:incident_ticket_creation user=* action_mode=* action_status=* search_name=kafka*
    [| rest /servicesNS/-/-/saved/searches
     | search title=kafka*
     | stats count by actions
     | table actions]
| table user search_name action_status date_month date_year _time
It is still not clear what data you are dealing with. For example, does each job run at most once for each app in each country each day? Which day do you want to use: the day from _time, the day from RUNSTARTTIMESTAMP, or the day from RUNENDTIMESTAMP? Your original table doesn't show App - is this not required? Please provide a mock-up of your expected results using events like the one you have shared. Also, please explain how the data in the events is related to the expected results.
Fresh installation of 9.4.0: errors showing for the KV store provider (HTTP connection error).
Thank you so much for the response and yes it worked.