All Topics

Hello, Splunk community! I have created a correlation search with the following search string:

index="kali2_over_syslog" ((PWD=/etc AND cmd=*shadow) OR (PWD=* cmd=*/etc/shadow)) OR ((PWD=/etc AND cmd=*passwd) OR (PWD=* cmd=*/etc/passwd))
| eval time=strftime(_time, "%D %H:%M")
| stats values(host) as "host" values(time) as "access time" values(executer) as "user" count by cmd
| where 'count'>0

When I use it in the Search & Reporting app and execute "sudo cat /etc/shadow" on the monitored Linux machine, it catches the event. The rest of the settings of this correlation search are the same as in another correlation search of mine, which I used as a template. That other correlation search works well: notable events are generated and the email notification is sent. The only difference is that I am not using any data model in my search, because I have a small test lab with only one machine on which I want to monitor this activity. Could it be that I must use CIM-validated data models in my search for the correlation search to work properly and generate notable events? I am new to Splunk, so I am sorry if my question is a bit unclear or weird; let me know if you need additional information.
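For reference, one way to check whether a correlation search produced notables at all, independently of the email action, is to search the notable index directly (a minimal sketch; the source value is a placeholder for your rule's name, and the exact field layout can vary by Enterprise Security version):

index=notable source="My Correlation Search - Rule"
| table _time, source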
Hi all, 5 days ago we had an issue with delayed searches. This has been fixed, and we have not had skipped or delayed searches since. However, the warning is not disappearing. Is there a way to manually trigger a recheck, mark this as acknowledged, or otherwise make this warning go away?
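While waiting for the health report to clear, you can confirm the scheduler really is clean by checking for recent skips in the internal logs (a minimal sketch over the last 24 hours):

index=_internal sourcetype=scheduler status=skipped earliest=-24h
| stats count by savedsearch_name, reason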
Hi All, I have two fields called File1 and File2, which I combined with coalesce. The combined value is not appearing in the table, but if I use File1 directly the value shows. What is the issue, and how do I check whether it is a null problem or something else?

|eval FileList=coalesce(File1,File2)
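One way to see what coalesce is actually receiving is to show both inputs and their lengths next to the result; an empty-but-present value (length 0) is not null, so coalesce will happily return it. A minimal sketch:

| eval File1_check=if(isnull(File1), "NULL", "len=".len(File1))
| eval File2_check=if(isnull(File2), "NULL", "len=".len(File2))
| eval FileList=coalesce(File1, File2)
| table File1, File1_check, File2, File2_check, FileList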
I'm currently building my own home instance and I'm having some trouble with my UF.

So far I've:
- installed the latest/correct version for my Ubuntu Linux system
- run sudo chown -RP splunk:splunk /opt/splunkforwarder/
- searched through SplunkForwarder.service to see if the correct user is applied (it is)
- tried re-installing and running ./splunk enable boot-start as the splunk user, and as root

When using the splunk user I have to authenticate as root anyway, but I get the same results for both.

./splunk start results in "Done" after authentication.

./splunk status results in:

Warning: Attempting to revert the SPLUNK_HOME ownership
Warning: Executing "chown -R splunk:splunkfwd /opt/splunkforwarder"
Couldn't change ownership for /opt/splunkforwarder/etc : Operation not permitted
splunkd is not running.

./splunk enable boot-start results in:

"A systemd unit file already exists at path="/etc/systemd/system/SplunkForwarder.service". To add a Splunk generated systemd unit file, run 'splunk disable boot-start' before running this command. If there are custom settings that have been added to the unit file, create a backup copy first."

It seems that no matter which account I use or which user has permissions, I'm unable to access any of the files under /opt/splunkforwarder, nor am I able to start the UF itself or configure boot-start.
Hello everyone, I am looking for a Splunk search query to get the total duration of three sequential response-code-200 events. It is not about the average time or the duration of one message, but whether three successive success responses took more than 10 seconds in total. Thanks in advance.
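A minimal sketch of one way to do this with streamstats, assuming each event carries a status field and treating any three consecutive 200s whose first-to-last spread exceeds 10 seconds as a match (index and field names are placeholders):

index=my_index status=200
| sort 0 _time
| streamstats window=3 range(_time) as span_of_three count as in_window
| where in_window=3 AND span_of_three > 10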
Hi Team, I am trying to set up an alert that fires only if the count of errors is in the range of 10 to 19 (at least 10 and at most 19). For example: index=abc sourcetype=xyz "errors" should only trigger the alert if count >= 10 AND count <= 19. Please help, thank you.
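A minimal sketch: do the counting and the range check in the search itself, then set the alert's trigger condition to "number of results greater than 0" so it only fires when the where clause passes:

index=abc sourcetype=xyz "errors"
| stats count
| where count >= 10 AND count <= 19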
I really need Splunk to update my name. I have raised a ticket twice, and both times I was told "it's not their job, please visit the Splunk support page", which ends up in an infinite loop. For context, I recently changed my official name, i.e. the name on my passport. Without updating it in my Splunk profiles I won't be able to take exams, as the IDs don't match.
Hi, can someone help me find a way to create a dropdown input on a field that is extracted using a rex command?

Example: for the search below, I want to add a new dropdown input with three values:
a) Incoming
b) Outgoing
c) Both

If the user selects Incoming, only records with the direction Incoming will be displayed. If the user selects Outgoing, only records with the direction Outgoing will be displayed. If the user selects Both, all records (direction Incoming or Outgoing) will be displayed.

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console (TERM(VV537UP) OR TERM(VVF119P)) ("- ENDED" OR "- STARTED" OR "PURGED --")
| rex field=TEXT "((VV537UP -)|(VVF119P -))(?<Func>[^\-]+)"
| fillnull Func value=" PURGED"
| eval Function=trim(Func)
| eval DAT = strftime(relative_time(_time, "+0h"), "%d/%m/%Y")
| rename DAT as Date_of_reception
| eval {Function}_TIME=_time
| stats values(Date_of_reception) as Date_of_reception values(*_TIME) as *_TIME by JOBNAME
| eval Description= case('JOBNAME' == "$VVF119P", "Reception of the CFI file from EB and trigger planning PVVZJH.", 'JOBNAME' == "$VV537UP", "Unload of VVA537 for Infocentre.", 1=1, "NA")
| eval DIRECTION= case('JOBNAME' == "$VVF119P", "INCOMING", 'JOBNAME' == "$VV537UP", "OUTGOING", 1=1, "NA")
| eval Diff=ENDED_TIME-STARTED_TIME
| eval TimeDiff=now() - STARTED_TIME
| eval Status = if(isnotnull(ENDED_TIME) AND (Diff<=120),"OK", if(isnotnull(ENDED_TIME) AND (Diff>120),"BREACHED", if(isnull(ENDED_TIME) AND isnull(STARTED_TIME),"PLANNED", if(isnull(ENDED_TIME) AND isnotnull(STARTED_TIME) AND (TimeDiff>1000),"FAILED", if(isnull(ENDED_TIME) AND isnotnull(STARTED_TIME) AND (TimeDiff>1000),"RUNNING","WARNING")))))
| fieldformat STARTED_TIME=strftime((STARTED_TIME),"%H:%M:%S")
| fieldformat ENDED_TIME=strftime((ENDED_TIME),"%H:%M:%S")
| fieldformat PURGED_TIME=strftime(PURGED_TIME,"%H:%M:%S")
| eval diff_time = tostring(Diff, "duration")
| eval diff_time_1=substr(diff_time,1,8)
| rename diff_time_1 as EXECUTION_TIME
| table JOBNAME, Description, DIRECTION, Date_of_reception, STARTED_TIME, ENDED_TIME, PURGED_TIME, EXECUTION_TIME, Status
| sort -STARTED_TIME
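A minimal sketch of the SPL side, assuming a dropdown token named direction_tok (a hypothetical name) whose static choices map Incoming to INCOMING, Outgoing to OUTGOING, and Both to * as the token value; append one filter after the DIRECTION eval:

| search DIRECTION="$direction_tok$"

With Both mapped to *, the filter matches every row, so no separate "Both" branch is needed in the query.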
Hi Team, can someone help me create a dashboard panel that highlights an alert when Incoming > 0 and Outgoing = 0 in the last 30 minutes? The requirement is to highlight an alert in the dashboard when processing has not been done in the last 30 minutes.

The query to find the incoming (IN_per_24h) and outgoing (OUT_per_24h) counts is the following:

| tstats count(PREFIX(nidf=)) as t where index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA)) by _time PREFIX(nidf=) span=5m
| rename nidf= as NIDF
| eval NIDF=UPPER(NIDF)
| eval DIR = if(NIDF="RPWARDA","IN","OUT")
| timechart span=5m sum(t) by DIR
| eval DAT_rel = relative_time(_time, "+3h")
| eval day_of_week=lower(strftime(DAT_rel, "%a"))
| eval DAT_rel = if(day_of_week = "sun", relative_time(DAT_rel, "+1d"), DAT_rel)
| eval DAT_rel = if(day_of_week = "sat", relative_time(DAT_rel, "+2d"), DAT_rel)
| eval DAT = strftime(DAT_rel, "%Y/%m/%d")
| streamstats sum(*) as *_per_24h by DAT reset_on_change=true
| eval backlog = (IN_per_24h - OUT_per_24h)
| table _time IN_per_24h OUT_per_24h backlog
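A minimal sketch of a separate panel search that looks only at the last 30 minutes and emits a flag you can color via table formatting or rangemap; it reuses the NIDF terms from the query above:

| tstats count(PREFIX(nidf=)) as t where index=events_prod_cdp_penalty_esa source="SYSLOG" (TERM(NIDF=RPWARDA) OR TERM(NIDF=SPWARAA) OR TERM(NIDF=SPWARRA)) earliest=-30m by PREFIX(nidf=)
| rename nidf= as NIDF
| eval DIR = if(upper(NIDF)="RPWARDA","IN","OUT")
| eval IN_cnt = if(DIR="IN", t, 0), OUT_cnt = if(DIR="OUT", t, 0)
| stats sum(IN_cnt) as IN sum(OUT_cnt) as OUT
| eval alert = if(IN > 0 AND OUT = 0, "ALERT", "OK")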
Are developer licenses not being issued anymore? It's been well over a week since I applied (reapplied). I've also emailed the dev account to inquire.   Thanks
Hi, I have a background in T-SQL, and reading the forums I'm starting to realize that "join" is not so good to use with Splunk. I have found similar forum posts addressing my questions, but still don't seem to get it; perhaps it's just a learning thing. But I'll share my case and see if anyone can point me in the right direction, preferably explaining it like you're talking to a three-year-old.

So: I want to output data about an "Order" in a table in a dashboard. I have my initial search that grabs an order by Properties.OrderReference. An order contains transactions, and a transaction has a Properties.TransactionReference. Transactions in an order receive status updates as the order is processed in our system. Properties.OrderStatus contains an enum, like "InProgress", "Error", "Complete", and so on. My goal is to show, in a table, the transactions in an order and the latest OrderStatus. I am not interested in the previous statuses for a transaction, just the latest one based on _time. I have played around a bit and this is giving me what I want (sorry for any n00b stuff in here):

index="my_index"
| spath input=Properties
| where RenderedMessage="Created a new transaction" AND 'Properties.OrderReference'="289e272f-2677-409b-9576-f28b2763c658" AND 'Properties.EnvironmentName'="Development"
| join Properties.TransactionRef AND Properties.OrderReference [search index="my_index" | where MessageTemplate="Publishing transaction status"]
| eval Time=strftime(_time, "%Y-%m-%d %H:%M:%S")
| rename Properties.TransactionReference as Reference, Properties.Amount as Amount, Properties.Currency as Currency, Properties.TransactionType as Type, Properties.TransactionStatus as Status
| table Time, Reference, Type, Amount, Currency, Status

However, this is pretty slow, and it uses join, which I am starting to realize is not a good option. For the second "enriching" search I have also played around with something like:

| sort - _time
| head 1

in order to just grab the latest occurrence, but no luck switching to "stats" or similar. Any help would be appreciated; please let me know if more background info is needed.

Edit: here are events from the two different searches.
First one, showing transactions in the order:   {"Level":"Information","MessageTemplate":"Created a new transaction","RenderedMessage":"Created a new transaction","Properties":{"SourceContext":"ApiGateway.Controllers.OrdersController","TransactionReference":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","TransactionType":"Transfer","Amount":901,"Currency":"SEK","ExecutionDate":"2023-11-15T14:32:00.0000000+02:00","OrderReference":"289e272f-2677-409b-9576-f28b2763c658","ActionId":"9a240462-d4c7-485e-a974-8229f2520c6c","ActionName":"ApiGateway.Controllers.OrdersController.PostOrder (ApiGateway)","RequestId":"0HN34CGT9KPCS:00000004","RequestPath":"/orders","ConnectionId":"0HN34CGT9KPCS","EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Created a new transaction","RenderedMessage":"Created a new transaction","Properties":{"SourceContext":"ApiGateway.Controllers.OrdersController","TransactionReference":"7ced831c-f8fd-41a2-88b1-6b564259539b","TransactionType":"Transfer","Amount":567,"Currency":"SEK","ExecutionDate":"2023-11-15T14:32:00.0000000+02:00","OrderReference":"289e272f-2677-409b-9576-f28b2763c658","ActionId":"9a240462-d4c7-485e-a974-8229f2520c6c","ActionName":"ApiGateway.Controllers.OrdersController.PostOrder (ApiGateway)","RequestId":"0HN34CGT9KPCS:00000004","RequestPath":"/orders","ConnectionId":"0HN34CGT9KPCS","EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Created a new transaction","RenderedMessage":"Created a new transaction","Properties":{"SourceContext":"ApiGateway.Controllers.OrdersController","TransactionReference":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","TransactionType":"Transfer","Amount":234,"Currency":"SEK","ExecutionDate":"2023-11-15T14:32:00.0000000+02:00","OrderReference":"289e272f-2677-409b-9576-f28b2763c658","ActionId":"9a240462-d4c7-485e-a974-8229f2520c6c","ActionName":"ApiGateway.Controllers.OrdersController.PostOrder (ApiGateway)","RequestId":"0HN34CGT9KPCS:00000004","RequestPath":"/orders","ConnectionId":"0HN34CGT9KPCS","EnvironmentName":"Development"}}   Second one, showing status updates for transactions in the order:   {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK234.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"Complete","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction 
status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK901.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"Complete","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"7ced831c-f8fd-41a2-88b1-6b564259539b","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK567.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"Complete","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"7ced831c-f8fd-41a2-88b1-6b564259539b","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK234.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"InProgress","Messages":[],"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction 
status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK901.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"InProgress","Messages":[],"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","Debtor":"CommonTypeLibrary.DomainModel.AccountHolder","Creditor":"CommonTypeLibrary.DomainModel.AccountHolder","Prefunding":null,"Type":"Transfer","PaymentProcessType":"Internal","TransactionReference":"7ced831c-f8fd-41a2-88b1-6b564259539b","Suti":"CommonTypeLibrary.DomainModel.Suti","ExecutionDate":"CommonTypeLibrary.DomainModel.ExecutionDate","Amount":"SEK567.00","ResponsibleLedger":"CommonTypeLibrary.DomainModel.Ledger","RemittanceInformation":"None","OriginalTransactionReference":"None","SuppressedStatuses":[],"TransactionStatus":"InProgress","Messages":[],"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","TransactionIdentifier":"7ced831c-f8fd-41a2-88b1-6b564259539b","JobType":"TransactionStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","TransactionReference":"e4dfbba0-90cf-4e1d-9ca3-e661ace5fe1d","TransactionStatus":"Registered","OrderStatus":"Registered","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","JobType":"OrderStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","TransactionReference":"7ced831c-f8fd-41a2-88b1-6b564259539b","TransactionStatus":"Registered","OrderStatus":"Registered","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","JobType":"OrderStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}} {"Level":"Information","MessageTemplate":"Publishing transaction status","RenderedMessage":"Publishing transaction status","Properties":{"SourceContext":"ApiGateway.Services.StatusUpdateService","TransactionReference":"9f7742e7-0350-420a-9f6b-79d7bd024bc5","TransactionStatus":"Registered","OrderStatus":"Registered","Messages":null,"OrderReference":"289e272f-2677-409b-9576-f28b2763c658","JobType":"OrderStatusUpdateTask","JobRetries":0,"ProcessInstanceId":2251799813733043,"EnvironmentName":"Development"}}   KR Daniel
When I run the query below, I am not able to get sla_violation_count:

index=* execution-time=* uri="v1/validatetoken"
| stats count as total_calls, count(eval(execution-time > SLA)) as sla_violation_count

total_calls displays as 1, but sla_violation_count does not appear. Pasting a sample result below for reference:

{ datacenter: aus
  env: qa
  execution-time: 2145
  thread: http-nio-8080-exec-2
  uri: v1/validatetoken
  uriTemplate: v1/validatetoken }

Thanks in advance
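A likely culprit: inside eval (and therefore inside count(eval(...))), an unquoted execution-time is parsed as the arithmetic expression "execution minus time", not as a field name, so the comparison never evaluates as intended. Field names with hyphens need single quotes in eval, and the same applies to SLA if it is a field. A minimal sketch with a placeholder threshold of 2000 ms:

index=* execution-time=* uri="v1/validatetoken"
| stats count as total_calls, count(eval('execution-time' > 2000)) as sla_violation_count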
I have a fairly common Splunk deployment: 1 SH, 1 DS, and two indexers. I want to upgrade from one Linux distro to another. Any experiences? I only have this: https://docs.splunk.com/Documentation/Splunk/9.1.4/Installation/MigrateaSplunkinstance, documentation which is certainly lacking!
I'm currently experiencing difficulties integrating my Node.js application with AppDynamics. Despite following the setup instructions, I'm encountering issues with connecting my application to the AppDynamics Controller.
Has anyone implemented the OCSF model in their Splunk security practice? I have a rough idea of it and am about to start adopting OCSF on our platform. As of now I have only identified the fields per the OCSF model, and a few fields are still missing. How can I check for new fields if they have yet to be introduced in the OCSF model? What are the pros and cons of implementing this? Any tips based on real-world implementations would be helpful. Thanks in advance.
Hello, can anyone please provide me with a search query to display the CPU usage of Splunk instances such as the indexer, search head, deployment server, etc.?
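A minimal sketch using the resource-usage data Splunk records about itself in the _introspection index (available on full Splunk Enterprise instances; exact field names can vary by version):

index=_introspection sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| eval cpu_pct='data.pct_cpu'
| timechart avg(cpu_pct) by host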
Hello, I'm facing an issue with my AppDynamics Controller where I'm not able to retrieve certain data metrics due to an internal server error. When I try to access performance metrics for a specific application or tier, I encounter the following error message:

Internal Server Error: Unable to Retrieve Data Metrics

This error is impairing our ability to effectively monitor and troubleshoot performance issues within our application. After reviewing the Controller logs and configuration settings, I haven't been able to pinpoint the exact cause of this issue. For some context, we're running AppDynamics Controller version 4.10.3 in a Linux-based server environment. We have multiple applications and tiers instrumented, and the error appears consistently across all of them. Could someone with experience with the AppDynamics Controller help resolve this issue? Any suggestions you can provide would be greatly appreciated. Thank you for your time and support!

stevediaz
I tried setting up a cluster map; I have 93 country codes to display. However, the map shows only 10 colors, and after that it shows only a repeated violet color. Further, I have used "| geostats latfield=Lat longfield=Long count by Country", and it shows only 10 countries; all the other countries are grouped as "Others".
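The "Others" bucket comes from geostats' default series limit of 10; raising globallimit lifts it (0 means unlimited), though the built-in map palette may still recycle colors when there are very many series:

| geostats latfield=Lat longfield=Long globallimit=0 count by Country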
I am able to push CloudWatch metrics by selecting streaming and selecting JSON as the output data type. Additionally, I used the built-in Lambda transformation for CloudWatch metrics, and I selected the source type aws:firehose:json. The data appears in Splunk as JSON that is not searchable. How do I get the data into a searchable format?
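As a quick check, the JSON can usually be exploded into fields at search time with spath (a minimal sketch; my_firehose_index is a placeholder). If that works, the longer-term fix is typically search-time JSON extraction on the sourcetype, for example KV_MODE=json in props.conf, rather than changing every query:

index=my_firehose_index sourcetype=aws:firehose:json
| spath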
We want to migrate clustered indexer data from the default location (/opt/splunk/var/lib/splunk) to custom warm/hot and cold locations, for example /opt/warm_hot and /opt/cold. How can we achieve this goal? Thank you.
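For reference, a minimal sketch of the per-index path settings involved in indexes.conf, using the paths from the question and a hypothetical index name; homePath holds hot and warm buckets, and coldPath holds cold ones. On an indexer cluster this would be pushed from the cluster manager to all peers, and the existing buckets have to be moved to the new paths while splunkd is stopped:

[my_index]
homePath = /opt/warm_hot/my_index/db
coldPath = /opt/cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb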