All Posts


Hi All, what licenses and subscriptions are required for Lambda monitoring in AppDynamics? Our requirement is to monitor microservices in Lambda. The technology used is Node.js. As per the community answer below, this doesn't require an APM license and only requires AppDynamics Serverless APM for AWS Lambda: https://community.appdynamics.com/t5/Licensing-including-Trial/How-does-licensing-work-when-instrumenting-AppD-and-lambda/m-p/38605#M545 But I also found the following statement in the documentation (https://docs.appdynamics.com/appd/23.x/latest/en/application-monitoring/install-app-server-agents/serverless-apm-for-aws-lambda/subscribe-to-serverless-apm-for-aws-lambda): "An AppDynamics Premium or Enterprise license, using either the Agent-based Licensing model or the Infrastructure-based Licensing model." Please clarify whether an APM license is required or not. Thanks, Fadil
Actually, I already eval all fields and fillnull all of them with the string "Unknown". However, some queries show the same number of events, but some fields are filled with "Unknown" even though they actually have values. Or is rebuilding the datamodel needed?
Have you checked the searches behind your dashboard panels? Have you verified that all fields used in these searches still exist?
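For reference, one quick way to check which fields are still present, as a sketch with a placeholder index name and time range:

index=your_index earliest=-24h
| fieldsummary
| table field, count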
Hi @thellmann, we have our hosted apps on Splunk Enterprise, and vetting is completed and passed successfully. How can I unit test that app on Splunk Cloud before release, without a license or by using a Dev license? Is there any workaround for this?
| stats max(avg_io_wait_time) as avg_io_wait_time by host
| sort avg_io_wait_time
| streamstats c as severity
| eval host = printf("%*s", len(host) + severity, host)
| stats max(avg_io_wait_time) as avg_io_wait_time by host
This worked smoothly:

| stats max(avg_io_wait_time) as avg_io_wait_time by host
| sort avg_io_wait_time
| streamstats c as severity
| eval host = printf("%*s", len(host) + severity, host)
| stats max(avg_io_wait_time) as avg_io_wait_time by host
I am unable to see the Uploaded tab on a free-trial Splunk Cloud instance.
Do you face this issue for all SAML users, or only for a specific user?
My problem was solved by creating a private app with a customized props.conf file, which defines a different TZ for different hosts, as shown below:

[host::hostA]
TZ = xxx

[host::hostB]
TZ = xxx
Hi @elend, since you are working on datamodels, the only approach is to create a calculated field that, when the DM is populated, takes a value when a field is empty, e.g.:

| eval destination=if(isnull(destination),"unknown",destination)

but you have to define this as a calculated field used in the population search, not in the same search. Then you have to do this for all your fields. Ciao. Giuseppe
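For reference, a minimal sketch of how that could be defined as a calculated field in props.conf; the sourcetype name here is a placeholder, not from the original post:

[my_sourcetype]
# Hypothetical calculated field: fills an empty destination before the datamodel is populated
EVAL-destination = if(isnull(destination), "unknown", destination)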
Thanks for the answer. Does the scheduled task also start again on Splunk restart?
Hi @bhaskar5428, it should already be extracted because Splunk recognizes the field=value pairs; anyway, you could try this regex:

orderId\=(?<orderId>[^\]]+)

which you can test at https://regex101.com/r/OtSfRS/1 Ciao. Giuseppe
orderId=(?<orderid>[^\]]+)\]
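For reference, a minimal sketch of applying such a pattern at search time with the rex command; the index name is a placeholder, not from the original post:

index=your_index "Process transaction locally"
| rex field=_raw "orderId=(?<orderId>[^\]]+)"
| table orderId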
Process transaction locally [idempotencyId=27cb55d0-3844-4e8f-8c4b-867ed64610a220240821034250387S39258201QE, deliveringApplication=MTNA0002, orderId=8e1d1fc0-5fe2-4643-bc1f-12debe6a7a06]

I would like to extract the order ID from the above sample data, which is 8e1d1fc0-5fe2-4643-bc1f-12debe6a7a06. Please suggest.
Is it possible to fill the null value with some value so it is still counted? I searched for this and found some solutions:
- make a change in props.conf to eval the null value
- use tstats ... fillnull_value="null"
Is there any other option or best approach for this? See the sketch below.
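As a rough illustration of the tstats option, with placeholder datamodel and field names (both are assumptions, not from the original post):

| tstats fillnull_value="null" count from datamodel=My_Datamodel by My_Dataset.dest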
Mmm, I think the problem is that the min/max applies to the entire dataset rather than per series: if you don't use trellis, there is only one min/max for the entire chart, not per series.
If you want to filter out all other events, please try:

props.conf

[sourcetype]
TRANSFORMS-filter = setnull,stanza

transforms.conf

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[stanza]
REGEX = Snapshot created successfully
DEST_KEY = queue
FORMAT = indexQueue

Note that the quotes must not be part of REGEX (they would be matched literally), and the order matters: setnull first routes every event to the nullQueue, then stanza re-routes the matching events to the indexQueue.
Please specify the parameters for both stanzas as shown below, and let me know how you applied the inputs.conf. Via the deployment server or locally? Please share the whole path of the settings.

[WinEventLog://Directory Service]
checkpointInterval = 5
current_only = 0
disabled = 0
index = <your_index>
start_from = oldest

[WinEventLog://DNS Server]
checkpointInterval = 5
current_only = 0
disabled = 0
index = <your_index>
start_from = oldest
Hello everyone, I want to filter data for the specific keyword "Snapshot created successfully" from a log file, but I am getting other events as well along with the searched keyword. My entries in props.conf and transforms.conf are as below:

props.conf

[sourcetype]
TRANSFORMS-filter = stanza

transforms.conf

[stanza]
REGEX = "Snapshot created successfully"
DEST_KEY = queue
FORMAT = indexqueue

Is there any issue here?
It is somewhat confusing what that mvexpand is supposed to do and why string merge is necessary. As I last commented in your other post, there is nothing wrong with Splunk's left join. Even though I want to avoid join in general, join is better than doing all that extra work. Here is my emulation:

| makeresults format=csv data="ip_address, host
10.1.1.1, host1
10.1.1.2, host2
10.1.1.3, host3
10.1.1.4, host4
10.1.1.5, host5
10.1.1.6, host6
10.1.1.7, host7"
| rename ip_address as ip
| join max=0 type=left ip
    [| makeresults format=csv data="ip, risk, score, contact
10.1.1.1, riskA, 6, ,
10.1.1.1, riskB, 7 ,
10.1.1.1, ,, person1,
10.1.1.1, riskC, 6,,
10.1.1.2, ,, person2,
10.1.1.3, riskA, 6, person3,
10.1.1.3, riskE, 7, person3,
10.1.1.4, riskF, 8, person4,
10.1.1.8, riskA, 6, person8,
10.1.1.9, riskB, 7, person9"]
| table ip, host, risk, score, contact

The output is:

ip        host   risk   score  contact
10.1.1.1  host1  riskA  6
10.1.1.1  host1  riskB  7
10.1.1.1  host1                person1
10.1.1.1  host1  riskC  6
10.1.1.2  host2                person2
10.1.1.3  host3  riskA  6      person3
10.1.1.3  host3  riskE  7      person3
10.1.1.4  host4  riskF  8      person4
10.1.1.5  host5
10.1.1.6  host6
10.1.1.7  host7

Hope this helps. (And thanks for posting the data emulation. That makes things easier.)
It is somewhat confusing what that mvexpand is supposed to do and why string merge is necessary.  As I last commented in your other post, there is nothing wrong with Splunk's left join.  Even though I want to avoid join in general, join is better than doing all that extra work.  Here is my emulation:   | makeresults format=csv data="ip_address, host 10.1.1.1, host1 10.1.1.2, host2 10.1.1.3, host3 10.1.1.4, host4 10.1.1.5, host5 10.1.1.6, host6 10.1.1.7, host7" | rename ip_address as ip | join max=0 type=left ip [makeresults format=csv data="ip, risk, score, contact 10.1.1.1, riskA, 6, , 10.1.1.1, riskB, 7 , 10.1.1.1, ,, person1, 10.1.1.1, riskC, 6,, 10.1.1.2, ,, person2, 10.1.1.3, riskA, 6, person3, 10.1.1.3, riskE, 7, person3, 10.1.1.4, riskF, 8, person4, 10.1.1.8, riskA, 6, person8, 10.1.1.9, riskB, 7, person9"] | table ip, host, risk, score, contact   The output is ip host risk score contact 10.1.1.1 host1 riskA 6   10.1.1.1 host1 riskB 7   10.1.1.1 host1     person1 10.1.1.1 host1 riskC 6   10.1.1.2 host2     person2 10.1.1.3 host3 riskA 6 person3 10.1.1.3 host3 riskE 7 person3 10.1.1.4 host4 riskF 8 person4 10.1.1.5 host5       10.1.1.6 host6       10.1.1.7 host7       Hope this helps. (And thanks for posting data emulation.  That makes things easier.)