All Posts


@renjith_nair Thank you so much! It works!
Hi @raja.beeravolu, I don't have an answer for you, but I wanted to share this link to our AppD Docs site about Flow Maps. https://docs.appdynamics.com/appd/onprem/latest/en/application-monitoring/business-applications/flow-maps
We are closer but not there yet. 
The Bloodhound Enterprise TA runs on the HF and generates an updated KV store collection every 4 hours.  I wrote a script that turns the KV store entries into alerts.  Due to some weirdness in the data, the question has come up: can the KV store on the HF be copied to a SH?  I haven't found a suggestion that I think will work.  We are running Splunk Enterprise 9.1.1 on-prem on servers running RHEL 8.8. TIA, Joe
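Not a full answer, but one hedged sketch that might help frame replies: if a lookup definition on the HF points at the Bloodhound collection (or you create one), the collection can usually be dumped to a CSV with inputlookup/outputlookup, the CSV file copied to the SH, and re-imported the same way. The lookup and file names below are made up for illustration; they are not the TA's real object names.

On the HF:
| inputlookup bloodhound_kv_lookup
| outputlookup bloodhound_export.csv

On the SH (after copying the CSV into an app's lookups directory):
| inputlookup bloodhound_export.csv
| outputlookup bloodhound_kv_lookup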
Hello, I changed the search to timechart span=1d count by country and it does give me the separate count per day by country, but instead of listing all the countries it lists the top ten and lumps all the other hits into an "OTHER" column. Is there a way to change that so the timechart will list everything?   Thanks
Hi @kiran_panchavat, thank you for all the information. I was already able to list HF info in the MC/Forwarders menu. What I need is to have the HF also listed in MC/Resource Usage, where right now I only have the Cluster Manager and Indexer nodes.   Kind regards, Andrea
Maybe you can share sample search results (full text, anonymize as needed) and let us know which XML nodes represent "entities"?  Is there anything wrong with the search you posted?  If the search produces what you need, what kind of change do you want? Based on the posted search, I speculate that there is no inherent structure in the search results that tells the user which key-value pairs make up an "entity".  In other words, the so-called "entity" is a construct invented by whoever posted this search.  Without knowing the actual data structure, it is fruitless for volunteers to try simplification.
How can we check if there is any throttling in Splunk when ingesting events via the AWS Kinesis add-on? What metrics are available for this add-on?
Your search should have given you the results.  Does anything unexpected happen when you run it?  The most I can think of is to restrict it to scheduled and enabled searches:

| rest /services/saved/searches
| search eai:acl.app=myapp eai:acl.sharing=app is_scheduled=1 disabled=0
| fields eai:acl.owner eai:acl.app eai:acl.sharing search title cron_schedule description
Splunk Studio: show a message or icon for a pie chart which returns no data. I am looking to display an icon or message in place of the grey pie image on a dashboard when a pie chart returns no results.

index=test (Eventcode=4010 OR Eventcode=4011) | stats latest(Eventcode) as latest_event_code by Site | eval Site=upper(Site) | where latest_event_code=4010

I have been trying appends like the following: | stats count | eval NoResult="0" | where count=0 | appendpipe [stats count | eval NoResult="0" | eval test="test Message"]
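For reference, the appendpipe pattern being attempted is usually written along these lines: the subpipeline only appends a placeholder row when the main search came back empty. The placeholder text and field choice below are illustrative, and the dashboard still needs something to render that row (for example a single-value or table panel) instead of the empty pie.

index=test (Eventcode=4010 OR Eventcode=4011)
| stats latest(Eventcode) as latest_event_code by Site
| eval Site=upper(Site)
| where latest_event_code=4010
| appendpipe
    [ stats count
    | where count=0
    | eval Site="No matching events found" ]
| fields - count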
Any solution for the issue?
Hi Splunkers, I have a question about license consumption. I'm not here to ask how to calculate daily ingestion and/or license consumption in a Splunk environment; the community is full of topics about this, and I have a search I use when no Monitoring Console is configured. The point is the following: on one LM, I have 3 different environments, each one with its own set of SHs, indexers and so on. The only "point of contact" is the LM itself, so, in a schematic way:
Env A (SHs, IDX cluster, other hosts) ---> LM "X"
Env B (SHs, IDX cluster, other hosts) ---> LM "X"
Env C (SHs, IDX cluster, other hosts) ---> LM "X"
The question is: what if I have to search daily license consumption for only one of the above ENVs? For example, I want to calculate license consumption only for Env A. First thing I thought: OK, I have two options:
Use the MC
Use my search on _internal logs, based on license consumption data, and specify, as index parameter, only the subset of indexes for the desired ENV.
PROBLEM: the ENVs do not have entirely distinct indexes. For example, the index "linux_audit" exists in all 3 envs. So, if I try to differentiate the clusters based on their own indexes, I'm not able to do this.
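Since each environment has its own indexer cluster, one hedged way around the shared index names is to split license usage by the reporting indexer rather than by index: the type=Usage lines in license_usage.log carry an indexer GUID field (i), which can be mapped to an environment with a case() or a lookup. A minimal sketch, assuming the search runs against the LM's _internal index and the GUID values are placeholders you would replace with your own peers:

index=_internal source=*license_usage.log* type=Usage earliest=-30d@d
| eval env=case(i="GUID_IDX_A1" OR i="GUID_IDX_A2", "Env A",
                i="GUID_IDX_B1" OR i="GUID_IDX_B2", "Env B",
                true(), "Env C")
| bin _time span=1d
| stats sum(b) as bytes by _time, env
| eval GB=round(bytes/1024/1024/1024, 2)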
The best way is to send your Microsoft Entra ID (formerly Azure AD) data to an event hub.  Then, use the Splunk Add-on for Microsoft Cloud Services to ingest the data (hint: use the azure:monitor:aad sourcetype).  Here's a Lantern article for setting up the add-on => https://lantern.splunk.com/Data_Descriptors/Microsoft/Getting_started_with_Microsoft_Azure_Event_Hub_data   Alternatively, you can use Splunk Add-on for Microsoft Azure.  Use the "Azure Active Directory Interactive Sign-ins" input to get the data.  Depending on your environment size, you may hit some throttling limitations with the REST API this add-on uses => https://github.com/splunk/splunk-add-on-microsoft-azure/wiki/Configure-Azure-Active-Directory-inputs-for-the-Splunk-Add-on-for-Microsoft-Azure#throttling-guidance
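Once events arrive, a quick sanity-check search along these lines can confirm interactive sign-ins are flowing; the index is whatever you configured, and the category and properties.* field names assume the standard Entra ID SignInLogs diagnostic schema, so adjust to what you actually see:

index=* sourcetype=azure:monitor:aad category=SignInLogs
| stats count by properties.userPrincipalName, properties.status.errorCode
| sort - count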
Install the Splunk Add-on for Microsoft Cloud Services and configure the Azure Resource input.  Choose "Snapshot Data" as the resource type (see screenshot).  
Install the Splunk Add-on for Microsoft Cloud Services and configure the Azure Resource input.  Choose "Disk Data" as the resource type (see screenshot). Then, you can use this search to find unattached (orphaned) disks: index=main sourcetype="mscs:resource:disk" properties.diskState="unattached"    
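If it helps, the same search can be extended into a small report. The properties.* columns below assume the Azure disk resource JSON is ingested as-is, so rename them to match your events:

index=main sourcetype="mscs:resource:disk" properties.diskState="unattached"
| table name, location, properties.diskSizeGB, properties.timeCreated
| sort - properties.diskSizeGB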
@snowee  We recommend that you raise a support ticket and refer to the resources below for more information. Solved: splunkd using too much RAM - Splunk Community  Troubleshooting high resource usage in Splunk Enterprise - Splunk Lantern Limit search process memory usage - Splunk Documentation
First thing would be to change the simple | stats count by country to | timechart span=1d count by country This will give you a separate count for each day and each country. Now you can either use | timewrap 1day to get a rather unwieldy set of fields which is not very nice to work with, or - which is what I'd probably do - use | transpose 0 to get a list of columns called "row 1", "row 2" (and possibly more if you had more days in your search) from which you can calculate your delta.
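A rough end-to-end sketch of that transpose route, reusing the base search from the question; this assumes exactly two whole days in the time window, so "row 1" is the earlier day and "row 2" the later one (limit=0 keeps timechart from lumping countries into OTHER):

index=data sourcetype=access earliest=-2d@d latest=@d
| iplocation allfields=true ip
| where country!="United States"
| timechart span=1d limit=0 count by country
| transpose 0
| rename column as country, "row 1" as previous_day, "row 2" as latest_day
| where country!="_time"
| eval previous_day=tonumber(previous_day), latest_day=tonumber(latest_day)
| where previous_day>0
| eval pct_change=round((latest_day-previous_day)/previous_day*100, 1)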
Hi team, I am using AppDynamics SaaS version 23.11 and monitoring my on-premise servers and applications. Some volumetric data on the agents used:

Agent type         Prod    DTA
Machine Agent      4612    3211
App Agent           884     414
DB Agent             13      10
Analytics Agent      12      14

I would like to know the amount of traffic sent from my on-premise AppD agents (machine, app, DB and analytics) to the controller. If there is a way to get those numbers (not expecting exact figures; an approximation is fine), please let me know. Please note, we are not using any proxy between the agents and the controller.
Can I retrieve a list of alerts shared at the app level? Is it possible? | rest /services/saved/searches | search eai:acl.app=my_app eai:acl.sharing=app | fields eai:acl.owner eai:acl.app eai:acl.sharing search title cron_schedule description
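If the goal is specifically alerts rather than every scheduled report shared in the app, one common heuristic is to also require at least one alert action to be configured. A sketch, not the only possible definition of "alert":

| rest /services/saved/searches
| search eai:acl.app=my_app eai:acl.sharing=app is_scheduled=1 disabled=0
| where isnotnull(actions) AND actions!=""
| fields title eai:acl.owner eai:acl.app eai:acl.sharing cron_schedule actions description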
Sorry, I should have been a bit more clear.  Here is the search that gives me the total number of hits to my website on any given day from a specific country. For example this search might return:  Canada 10 Mexico 30

index=data sourcetype=access | search ip="*" | iplocation allfields=true ip | where country!="United States" | stats count by country

I would like to set up a search that shows me if traffic from any given country drops by 10% or more, and then lists the countries that have the drop in traffic.   Thanks