Hi, I want to connect live data from various applications in AppDynamics to Splunk ITSI in CSV format. How can I achieve this? I would be grateful for any guidance from this community. Thanks and regards, Abhigyan.
What do I need to know about creating an AppDynamics Trial account, finding help during the trial period, and navigating the transition once the trial concludes?

In this article...
- How do I get started with an AppDynamics trial?
- Where can I get technical help during my trial?
- How do I move from a trial license to a paid one?
- What happens when a 30-day trial account expires?
- Useful Resources

How do I get started with an AppDynamics Trial license?
The AppDynamics Trial license is available free of charge for a 30-day period. To sign up for a trial account, click the Start Free Trial button on the top right of this page.
NOTE | The AppDynamics Trial license is not available as an On-Premises deployment.

Where can I get technical help during my trial?
During the trial period, we encourage you to explore this Community to find the answers to your questions. You'll find pertinent technical articles and discussions. Your trial account includes the ability to log into this Community, which allows you to comment and start your own discussion threads.

Community content includes:
- Forums – Participate in discussions and initiate your own threads
- Knowledge Base – Read technical content authored by Cisco AppDynamics experts, and don't miss the Support article filter
- Welcome Center – Read short articles about how this Community and the platform work

Please don't hesitate to post your questions and insights. The Community is most helpful as an interactive space where users help each other by sharing their knowledge and experiences. You'll also find that our team regularly monitors the forum and provides guidance where necessary.

How do I move from a Trial license to a paid one?
To upgrade to a paid license, contact your sales representative. See the Cisco AppDynamics pricing page for more purchase information.

What happens when a 30-day AppDynamics Trial expires?
If the license isn't upgraded by the end of the trial, your AppDynamics Trial account will expire. When a trial period ends and AppDynamics detects an expired AppDynamics Trial license, the Controller resets all agents.

Useful Resources
- Request a Free Trial here
- Contact your sales representative here
- See AppDynamics pricing
- Read: Getting Started in the Documentation
- Read: How do I register and sign into the Community? in the Welcome Center
I have a table and a couple of panels on my dashboard. I would like to click a table row and display/hide certain panels depending on the value of a specific column.

name  | gender | age
Alice | female | 18
Bob   | male   | 22

For instance, I have the above table. I would like to display panel A and hide panel B when I click a row with gender=female, and display panel B and hide panel A when I click a row with gender=male. Let's say panel A depends on token panelA and panel B depends on token panelB. How should I do that? I am thinking about doing it in the drilldown settings, but I do not know how to set or unset a token conditionally.
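One common pattern for this (a sketch to adapt, not tested against your dashboard) is to put `<condition match="...">` elements inside the table's `<drilldown>` in Simple XML, comparing the clicked row's value via the predefined `row.<fieldname>` token:

```
<drilldown>
  <condition match="'row.gender' == &quot;female&quot;">
    <set token="panelA">true</set>
    <unset token="panelB"></unset>
  </condition>
  <condition>
    <!-- fallback: any other row, e.g. gender=male -->
    <set token="panelB">true</set>
    <unset token="panelA"></unset>
  </condition>
</drilldown>
```

The panels themselves would then carry `depends="$panelA$"` and `depends="$panelB$"` respectively, so each one renders only while its token is set.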
Splunk sirs, I am trying to add a boolean column to my data called 'new_IP_detected' which tells me whether an answer IP is new compared to answer IPs from a previous time range. Both searches are from the same index and sourcetype, and I only want to check whether an answer IP from -24h to now is in the list of answer IPs from -30d to -24h. My search so far:

index=[sample index] sourcetype=[sample sourcetype] earliest=-24h latest=now NOT [ search index=[sample index] sourcetype=[sample sourcetype] earliest=-30d latest=-24h | stats count by answer | table answer ] | stats count by answer | table answer

As of right now I am getting no results, which I believe is expected (meaning there are no new IPs in the last 24 hours). How would I add a 'new_IP_detected' column over the last 30 days?
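One subsearch-free way to get that boolean column is to search the whole 30 days once, label each event as baseline or recent, and then flag the answer values that only appear in the recent window. A sketch, keeping the placeholder index/sourcetype from the question:

```
index=[sample index] sourcetype=[sample sourcetype] earliest=-30d latest=now
| eval period=if(_time >= relative_time(now(), "-24h"), "recent", "baseline")
| stats values(period) as periods by answer
| eval new_IP_detected=if(mvcount(periods)=1 AND periods="recent", "true", "false")
```

This also avoids the subsearch result limits that can silently skew NOT [ search ... ] comparisons over long time ranges.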
Hi, so I’m working on creating an alert in Splunk, but I’m having some issues with setting up the query. The goal of the alert is to trigger when a shared drive or folder in Google Drive has been shared externally for longer than a set period of time. I’ve seen some mentions of using the pollPeriod setting and the fschange monitor, but those seem better suited for system directories than for Google Drive. Any advice on how to start setting up this query?
Hello, I am trying to count how many days out of the last 12 months our users logged into two of our servers, and in the end I want it to display the number of days in those 12 months on which each user logged in. So if a user logged in 4 times in one day, it should count as 1 day. I tried "timechart span=1d count by Account_Name", which looked promising, but timechart groups Account_Name values into an OTHER field, which is misleading because there are other accounts in that field.

index=windows source="WinEventLog:Security" EventCode=4624 host IN (Server1, Server2) Logon_Type IN (10, 7)
| eval Account_Name = mvindex(Account_Name,1)
| timechart span=1d count by Account_Name
| untable _time Account_Name count
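A sketch of a stats-based alternative for the "distinct days per user" requirement above: bucket events to the day, then count distinct day buckets per account. Unlike timechart, stats has no series limit, so there is no OTHER bucket:

```
index=windows source="WinEventLog:Security" EventCode=4624 host IN (Server1, Server2) Logon_Type IN (10, 7) earliest=-12mon
| eval Account_Name = mvindex(Account_Name,1)
| bin _time span=1d
| stats dc(_time) as days_logged_in by Account_Name
```

Because `bin` snaps every event's `_time` to midnight of its day, `dc(_time)` counts each day once no matter how many logons it contains.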
Hi team, I am working with the Splunk Cloud "Classic" experience, and I installed an official app with various configurations. In my case, my focus is on props.conf, specifically the sourcetype stanza where the app's props.conf creates a field called "action". I need to change this "action" field without modifying the official app, so I created a new custom app with a new props.conf containing the same sourcetype stanza, defining "action" with other characteristics. After that, Splunk Cloud kept applying the old configuration from the official app and didn't pick up the new attributes. Note: I am using the default folder in each app to locate the props file, because Splunk Cloud doesn't allow using the local folder. Does anyone know how I can do this? priority precedence fields by sourcetype
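Two things are worth checking here (a sketch under assumptions, since app precedence for search-time settings also depends on which app you search from): the overriding attribute must use the exact same name as the official app's (e.g. EVAL-action overriding EVAL-action), the custom app's settings must be exported system-wide, and per Splunk's configuration file precedence rules, among app default directories an app whose name sorts earlier in ASCII order generally wins. The app and sourcetype names below are hypothetical:

```
# apps/0_custom_props/default/props.conf   (app name chosen to sort early)
[your_sourcetype]
# must be the same attribute name the official app defines, e.g.:
EVAL-action = case(isnotnull(status) AND status>=400, "failure", true(), "success")

# apps/0_custom_props/metadata/default.meta
[props]
export = system
```

If the export stanza is missing, the custom props only apply inside the custom app's own search context, which matches the "old configuration still wins" symptom.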
Register here. This thread is for the Community Office Hours session on Observability: Application Performance Monitoring (APM) on Wed, March 20, 2024 at 1pm PT / 4pm ET.    This is your opportunity to ask questions about your current Observability APM challenge or use case, including: Sending traces to APM Tracking service performance with dashboards Setting up deployment environments AutoDetect detectors Enabling Database Query Performance Setting up business workflows Implementing high-value features (Tag Spotlight, Trace View, Service Map) Anything else you'd like to learn!   Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).    Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.   Look forward to connecting!
We just installed the forwarder on one of our VIOS systems to make sure we could get this working; however, each time we try to start it up we see the below in our splunkd.log:

02-09-2024 13:28:54.797 -0600 WARN ulimit [80544161 MainThread] - A system resource limit on this machine is below the minimum recommended value: system_resource = Data segment size (ulimit -d); current_limit = 134217728; recommended_minimum_value = 536870912. Change the operating system resource limits to meet the minimum recommended values for Splunk Enterprise.
02-09-2024 13:28:54.797 -0600 INFO ulimit [80544161 MainThread] - Limit: data file size: unlimited
02-09-2024 13:28:55.258 -0600 WARN Thread [86376799 HTTPDispatch] - HTTPDispatch: about to throw a ThreadException: pthread_create: Not enough space; 43 threads active. Trying to create batchreader0

We issued the ulimit -d command to update this to unlimited; however, we are still seeing the issue.
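One likely cause: `ulimit -d` set interactively only affects that shell session, not the login limits of the account that starts Splunk. On AIX/VIOS, persistent per-user limits live in /etc/security/limits and are changed with chuser. A sketch, run as root and assuming the forwarder runs as a service account named "splunk":

```
# -1 means unlimited; values in /etc/security/limits are in 512-byte blocks
chuser data=-1 splunk     # data segment size (the limit in the WARN message)
chuser stack=-1 splunk    # stack size, often implicated in pthread_create failures

# start Splunk from a fresh login so the new limits apply, then verify:
su - splunk -c "ulimit -d"
```

The `pthread_create: Not enough space` error is typically the same underlying memory-limit problem, so it should clear once the data segment limit actually takes effect for the Splunk process.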
Hi, I am planning to migrate from Splunk Cloud to an on-premises platform. I am looking for a roadmap and potential challenges. Anyone?
Hi community, I'm using rex to extract some strings. The log looks like:

\"submission_id\":337901

The regex I'm using is \"submission_id\\\":(?<subID>\d+). It works well on regex101: https://regex101.com/r/Usr7Ki/1 However, in Splunk it doesn't find anything. The command is (I just added double quotes to wrap the regex):

rex "\"submission_id\\\":(?<subID>\d+)"

Any ideas and suggestions are appreciated!
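The usual culprit is double unescaping: Splunk unescapes the quoted search string once before the regex engine sees it, so a literal backslash in the raw event generally needs four backslashes in the SPL string. Two sketches worth testing against your data:

```
| rex "submission_id\\\\\":(?<subID>\d+)"
```

Here `\\\\` survives as regex `\\` (a literal backslash) and `\"` survives as a literal quote. Alternatively, sidestep the escaping entirely by matching the punctuation between the field name and the digits with a character class:

```
| rex "submission_id\W+(?<subID>\d+)"
```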
Hello! I am trying to send syslog data to Splunk from network devices using UDP. I have one heavy forwarder and two indexers; does it matter which indexer I set up to listen for the data?
What is the most elegant way of searching for events where a field is not in a list of values? For example:

index=foo | iplocation foo_src_ip | search Country IN ("France", "United States")

works great. But what if I want all events where the IP was not from those countries (the inverse answer), like "Canada" or "Mexico"? Thanks for any assistance. Bob
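The IN operator negates directly with NOT, so a sketch of the inverse of the example search above is:

```
index=foo
| iplocation foo_src_ip
| search NOT Country IN ("France", "United States")
```

One subtlety worth knowing: `NOT Country IN (...)` also returns events where Country is null (e.g. private IPs that iplocation could not resolve), whereas `Country!="France" Country!="United States"` requires the field to exist. Pick whichever behavior you want for unresolved IPs.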
How can we integrate Atlassian tools like Jira with Splunk? What technical details do we need in order to connect Jira with Splunk?
Hi, I am very new to this environment and I am having trouble logging in, as I have forgotten the password and the admin details. Is there any way I can reset it? Thanks
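If this is a self-managed Splunk Enterprise instance where you have operating-system access to the host (this sketch does not apply to Splunk Cloud), the documented approach is to move the local passwd file aside and seed a new admin credential:

```
# run as the user that owns the Splunk installation
$SPLUNK_HOME/bin/splunk stop
mv $SPLUNK_HOME/etc/passwd $SPLUNK_HOME/etc/passwd.bak

# create $SPLUNK_HOME/etc/system/local/user-seed.conf containing:
# [user_info]
# USERNAME = admin
# PASSWORD = <your new password>

$SPLUNK_HOME/bin/splunk start
```

On startup, Splunk consumes user-seed.conf to recreate the admin account; other local accounts can be restored from the backed-up passwd file if needed.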
I am sending email as an action for an alert, and it includes the results as a table. The _time field is one of the columns of this table and shows in the format "DDD MMM 24hh:mm:ss YYYY". Opening the alert in Search shows a different format, "YYYY-MM-DD 24hh:mm:ss.sss". Is there a way to format the _time field in the email's inline table?
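The format seen in Search is applied at render time and does not carry into the emailed table, so one sketch of a workaround is to convert _time to a fixed display string at the end of the alert's search (note this replaces the numeric timestamp with a string, so do it after any time-based calculations):

```
| eval _time=strftime(_time, "%Y-%m-%d %H:%M:%S.%3N")
```

If you prefer to keep _time intact, create a separate field instead, e.g. `| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")`, and list Time in the table.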
In a SmartStore configuration, there are a significant number of deletes/writes as buckets are evicted and copied to the indexer's volume.  To improve performance, SSD disks are being used.  In this case, how often should one run the TRIM command to help with SSD garbage collection?
I have the following SPL search:

index="cloudflare"
| top ClientRequestPath by ClientRequestHost
| eval percent = round(percent,2)
| rename count as "Events", ClientRequestPath as "Path", percent as "%"

which gives me this result. I also need to group it by 10-minute time ranges and, for every line, calculate the difference in percent between the two previous time ranges. Help me figure out how to do that, thanks.
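Since top cannot be split by time, a sketch of one way to rebuild the same percentages per 10-minute bucket and then diff each series against its previous bucket (assuming "percent of requests per host within each bucket" is the intended denominator):

```
index="cloudflare"
| bin _time span=10m
| stats count as Events by _time ClientRequestHost ClientRequestPath
| eventstats sum(Events) as total by _time ClientRequestHost
| eval percent=round(Events/total*100, 2)
| streamstats current=f window=1 last(percent) as prev_percent by ClientRequestHost ClientRequestPath
| eval percent_diff=round(percent-prev_percent, 2)
```

streamstats with `current=f window=1` pulls each host/path series' percent from the preceding time bucket, so percent_diff is null on a series' first bucket and the bucket-over-bucket change afterwards.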
I have an HF, and it is configured to send data via sourcetype A. After some time it stops sending data to A. I then move the data to a different HF with sourcetype "test" (to check whether it is working), and from the new HF I route the data back to sourcetype A itself. Will it re-ingest the data, or will it resume from the checkpoint where it left off? Will it ignore the data that was sent to sourcetype "test"? I need help and a clear explanation.
I am using the query below to get index sizes, consumed space, and frozenTimePeriodInSecs details.

| rest /services/data/indexes splunk_server="ABC"
| stats min(minTime) as MINUTC max(maxTime) as MAXUTC max(totalEventCount) as MaxEvents max(currentDBSizeMB) as CurrentMB max(maxTotalDataSizeMB) as MaxMB max(frozenTimePeriodInSecs) as frozenTimePeriodInSecs by title
| eval MBDiff=MaxMB-CurrentMB
| eval MINTIME=strptime(MINUTC,"%FT%T%z")
| eval MAXTIME=strptime(MAXUTC,"%FT%T%z")
| eval MINUTC=strftime(MINTIME,"%F %T")
| eval MAXUTC=strftime(MAXTIME,"%F %T")
| eval DAYS_AGO=round((MAXTIME-MINTIME)/86400,2)
| eval YRS_AGO=round(DAYS_AGO/365.2425,2)
| eval frozenTimePeriodInDAYS=round(frozenTimePeriodInSecs/86400,2)
| eval DAYS_LEFT=frozenTimePeriodInDAYS-DAYS_AGO
| rename frozenTimePeriodInDAYS as frznTimeDAYS
| table title MINUTC MAXUTC frznTimeDAYS DAYS_LEFT DAYS_AGO YRS_AGO MaxEvents CurrentMB MaxMB MBDiff

title | MINUTC           | MAXUTC           | frznTimeDAYS | DAYS_LEFT | DAYS_AGO | YRS_AGO | MaxEvents | CurrentMB | MaxMB | MBDiff
XYZ   | 24-06-2018 01:24 | 10-02-2024 21:11 | 62           | -1995.87  | 2057.87  | 5.63    | 13115066  | 6463      | 8192  | 1729

For index 'XYZ', frozenTimePeriod shows 62 days, so per the configured retention it should only hold the last ~2 months of data, but MINUTC still shows a very old date, '24-06-2018 01:24'. When I check the event counts in Splunk for data older than 62 days, very few events show up compared to the last 62 days (current event counts are very high). So why are these older events still in Splunk, and why only a few of them? I want to understand this scenario before increasing the frozen time period.
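A likely explanation for the scenario above: retention is enforced per bucket, not per event. A bucket rolls to frozen only when the newest event it contains is older than frozenTimePeriodInSecs, so a bucket that spans both 2018 events and recent events stays warm/cold and keeps the handful of old events alive, which also keeps minTime old. A sketch to check the time span of each bucket (assuming you can run dbinspect against the index):

```
| dbinspect index=XYZ
| eval span_days=round((endEpoch-startEpoch)/86400,1)
| eval newest_age_days=round((now()-endEpoch)/86400,1)
| table bucketId state startEpoch endEpoch span_days newest_age_days
| sort startEpoch
```

If you see a bucket whose startEpoch is in 2018 but whose newest_age_days is under 62, that bucket is what's holding the old events, and increasing frozenTimePeriodInSecs won't change that behavior.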