All Posts

Hi @Ravi.Rajangam, I updated the link above. If this ever happens again, just look at the last bit of the URL; it shows you the title of the doc, and you can search for that title in the docs. https://docs.appdynamics.com/appd/24.x/24.4/en/application-monitoring/administer-app-server-agents/request-agent-log-files
I tried the query below. Where there is no data for a Msgs value, it displays zero only for the first 3 rows; the remaining rows display null.

index=app-index source=application.logs
| rex field=_raw "application :\s(?<Application>\w+)"
| rex field=_raw "(?<Msgs>Initial message received with below details|Letter published correctley to ATM subject|Letter published correctley to DMM subject|Letter rejected due to: DOUBLE_KEY|Letter rejected due to: UNVALID_LOG|Letter rejected due to: UNVALID_DATA_APP)"
| chart count over Application by Msgs
| rename "Initial message received with below details" as Income, "Letter published correctley to ATM subject" as ATM, "Letter published correctley to DMM subject" as DMM, "Letter rejected due to: DOUBLE_KEY" as Reject, "Letter rejected due to: UNVALID_LOG" as Rej_log, "Letter rejected due to: UNVALID_DATA_APP" as Rej_app
| table Income Rej_app ATM DMM Reject Rej_log Rej_app
| appendcols [| makeresults format=csv data="Income, Rej_app, ATM, DMM, Reject, Rej_log, Rej_app
,,,,,
,,,,,
,,,,," | fillnull]

Output:

Application    ATM  DMM  Income  Rej_app  Rej_log  Reject
Login          10   0    0       2        0        0
Success        12   0    0       1        0        0
Error          23   0    0       11       0        0
Debug          2                 3
logout         1                 50
error-state    61                20
normal-state   1                 10
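If the goal is a 0 in every empty cell, one minimal sketch (reusing the extractions and renames above, and dropping the appendcols/makeresults scaffold) is to let fillnull fill the blanks after the chart:

<base search with the two rex extractions above>
| chart count over Application by Msgs
| rename <the same renames as above>
| fillnull value=0

With no field list, fillnull replaces null in every column of the result, so the Debug/logout/error-state/normal-state rows would show 0 instead of blank.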
The above link takes you to the root documentation home page of AppDynamics?
It looks like you have the data, so why not go through some training on creating some generic searches and dashboards? This free online guide is a good starting point: you learn some basic concepts, you can apply the principles to the AD data, and then further develop your skills by looking at the formal training courses. https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchTutorial/WelcometotheSearchTutorial
This appears to be a duplicate of this question: "splunk dashboard studio result variance" - Splunk Community
It is not clear what your events look like, but you could try something like this: | stats count by field1, field2
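For illustration, a runnable dummy search in the same spirit (field names are placeholders):

| makeresults format=csv data="field1,field2
A,x
A,x
A,y
B,x"
| stats count by field1, field2

This returns one row per field1/field2 combination with its count, e.g. A/x = 2.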
Hi, I have two panels with two different search results. Say Panel A and Panel B; both panels just return/show a single value. I want to get the difference of these panels in another panel, but it should check whether Panel A and Panel B have finalized their results before computing the difference. Please could you suggest? Thanks, Selvam.
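One way to sketch this, with both panel searches left as placeholders (and stats count standing in for whatever single-value calculation each panel does), is to compute the difference inside a single search, so the result only renders once both values exist:

<panel A search> | stats count as valueA
| appendcols [search <panel B search> | stats count as valueB]
| eval difference = valueA - valueB
| table difference

Alternatively, base/chain searches in Dashboard Studio are worth a look, since a chained search only runs after its base search has finalized.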
It worked. Thank you @phanTom
Hello all, Can someone please help me with my query? "base | stats count by field1" I am using this query, but I would like to add field2 to it as well, in the form of a table. Please provide your valuable suggestions.
The Original_host field extraction should be aligned if a syslog server has a different date/time format. The current field extraction is defined based on your syslog server, and I am positive that this app works only for a couple of Splunk customers.
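For illustration only, a hedged sketch of what realigning that extraction could look like; the stanza names, sourcetype, and regex below are hypothetical, not the app's shipped configuration, and the regex assumes an RFC 3164-style "Apr 25 07:39:53 myhost ..." prefix:

transforms.conf
[original_host_from_syslog]
REGEX = ^[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s+(?<Original_host>\S+)

props.conf
[my_syslog_sourcetype]
REPORT-original_host = original_host_from_syslog

If your syslog server emits a different date/time format, the timestamp portion of that regex is what you would adapt.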
@harishlnu just leave the command field empty and put the full SPL in the query field and it will work. It may complain about the command field not being populated but IMO that was a silly addition to the app action. -- Hope this helps! If it does please mark as a solution for the future. Happy SOARing! --
Hi Team, Could you please help me with running a query in Splunk? The query starts with | ldapsearch, but "run query" only has the commands search, tstats, eval, savedsearch, and stats. Could you please guide me on this? Thanks in advance. Regards, Harisha
The OS requirement is somewhat flexible to allow for OS upgrades, patches, etc.  In my mind, it means Linux vs Windows more than Ubuntu vs CentOS.  That said, every effort should be made to have the CM and indexers on the same release. You should have no problems adding the Ubuntu indexers to the cluster.
@richgalloway wrote: "I think you have right idea on all counts. Migrating the CM is similar to migrating a SH. Do migrate the CM before the indexers."

Working on this project. I have the new CM stood up on Ubuntu 22 and it has replaced the CentOS 7 CM, which is now offline. The indexers are still on CentOS 7. I see in the docs that the CM and indexers need to be the same OS. Is this true? The cluster seems to be working fine so far, and I'm working on the new Ubuntu indexers that will be added to the cluster. Still safe to proceed, or will I run into issues adding the Ubuntu indexers to the cluster?

Found under "Operating system requirements": "All indexer cluster nodes (manager node, peer nodes, and search heads) must run on the same operating system and version." System requirements and other deployment considerations for indexer clusters - Splunk Documentation
What I would say is we would normally send all the data to both indexers. This is data load balancing: portions of the data get spread across the two indexers and you get better performance that way (this also has nothing to do with data clustering in the real sense). Best practice, but it's your choice at the end of the day.

Heavy Forwarders are typically used for add-ons (they are full Splunk instances); from there they send data to the indexers. Best practice. You should also send the HF internal logs to the indexers. Best practice. The SH connects to the indexers and is configured to send its logs to the indexers. Best practice.

Below is an outputs.conf example that sends to both indexers and sends the local Splunk internal logs. You can tune it to send to just one indexer if you want: remove the second indexer from the group list and uncomment the specific indexer setting. Add this to $SPLUNK_HOME/etc/system/local/outputs.conf on the SH, make the changes to reflect your environment names, and test. If you're using a custom separate app for outputs.conf, then add it there. Restart Splunk on the SH. You can do the same on a HF.

NOTE: The new internal indexes (_ds*) need to be created on the indexers if you are using the latest versions of Splunk.
NOTE: Ensure firewalls, ports, and NTP have been configured, and I'm assuming you're not using TLS - that's another subject.

outputs.conf example:

[indexAndForward]
index = false

[tcpout]
defaultGroup = my_group_name_indexers
forwardedindex.filter.disable = true
indexAndForward = false
forwardedindex.2.whitelist = (_audit|_internal|_introspection|_telemetry|_metrics|_metrics_rollup|_configtracker|_dsclient|_dsphonehome|_dsappevent)

[tcpout:my_group_name_indexers]
# Remove the second indexer if you only want to send to one indexer
server = mysplunk_indexer1:9997, mysplunk_indexer2:9997

# This is only for one indexer receiver
#[tcpout-server://mysplunk_indexer1:9997]
Looking to create a dashboard to allow users to look up usernames, information, and groups within the Active Directory data. How do I create a search for this?
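A minimal sketch, assuming the AD data was onboarded with sourcetype=ActiveDirectory and standard attribute names; the index, the sourcetype, and the $user_tok$ text-input token are all assumptions to adapt:

index=msad sourcetype=ActiveDirectory sAMAccountName="*$user_tok$*"
| stats latest(displayName) as Name, latest(mail) as Email, values(memberOf) as Groups by sAMAccountName

Wire $user_tok$ to a text input in the dashboard, so users can type (part of) a username and see the matching accounts, their details, and their group memberships.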
<main index> NOT [search <other source> NoHandlerFoundException | stats count by xCorrelationId | fields xCorrelationId | format]

However, depending on how many exceptions you have, you may run into limitations, as the sub-search with the format command will essentially return a long string which might be too large to be parsed in the main search. Another way to do it is to search both sources, correlate by xCorrelationId, and exclude those xCorrelationIds which have the exception; but this still means you are retrieving both full sets of events and correlating them before you can filter any out.
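A sketch of that second, correlation-based approach, with the index and source terms left as placeholders: retrieve the main events plus only the exception events from the other source, flag every xCorrelationId that has the exception, and drop those groups.

(<main index>) OR (<other source> NoHandlerFoundException)
| eventstats count(eval(if(match(_raw,"NoHandlerFoundException"),1,null()))) as exceptionCount by xCorrelationId
| where exceptionCount=0

The where clause also discards the exception events themselves, so what remains are the main-index events whose xCorrelationId never saw the exception.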
Hi, I have a question on authenticating to an IDX cluster peer via REST. We have the following environment:

3 IDX in a cluster
3 SH in a cluster
1 CM (License Manager, IDX Cluster Manager, Deployer & Deployment Server)

Our normal authentication for the web is currently LDAP. With my LDAP user I can directly perform a GET request to an indexer, but with a local user created over the WebUI (tried a local user in the SHC and on the CM) I can't perform any request to an indexer. The WebUI is disabled on the indexers and they don't have the LDAP configuration that the search heads do. How come the indexers know my LDAP user but not the locally created one? And how can I let the indexers get to know a user created locally on a SH or the CM?
Hi @yuanliu, Here are my responses below. Please let me know if I missed anything.

1) What is your actual data structure? - It is in JSON format.

userActions: [
  {
    apdexCategory: FRUSTRATED
    application: xxx
    cdnBusyTime: null
    cdnResources: 0
    cumulativeLayoutShift: 0.0006
    customErrorCount: 0
    dateProperties: [...]
    documentInteractiveTime: 13175
    domCompleteTime: 15421
    domContentLoadedTime: 14261
    domain: xxx
    doubleProperties: [...]
    duration: 15430
    endTime: 1714043044710
    firstInputDelay: 1
    firstPartyBusyTime: null
    firstPartyResources: 0
    frontendTime: 14578
    internalApplicationId: APPLICATION-B9BADE8D75A35E32
    internalKeyUserActionId: APPLICATION_METHOD-E3EF5284923A6BA3
    javascriptErrorCount: 0
    keyUserAction: true
    largestContentfulPaint: 15423
    loadEventEnd: 15430
    loadEventStart: 15430
    longProperties: [...]
    matchingConversionGoals: [...]
    name: loading of page /home
    navigationStart: 1714043029280
    networkTime: 804
    requestErrorCount: 0
    requestStart: 630
    responseEnd: 852
    responseStart: 678
    serverTime: 48
    speedIndex: 13866
    startTime: 1714043029280
    stringProperties: [...]
    targetUrl: xxx
    thirdPartyBusyTime: null
    thirdPartyResources: 0
    totalBlockingTime: null
    type: Load
    userActionPropertyCount: 0
    visuallyCompleteTime: 15416
  }
]

2) Is it close to what I speculated? Further, you have never illustrated what is the expected output. So, the two screenshots with only timestamps mean nothing to volunteers. How can we help further? - Yes, correct; it is relevant to the data that you predicted, and here is the expected output:

_time                Application  Action            Target_URL     Duration  User_Action_Type
2024-04-25 07:39:53  xxx          loading /home     www.abc.com    0.26      Load
2024-04-25 06:25:50  abc          loading /wcc/ui/  www.xyz.com    3.00      Load
2024-04-24 19:00:57  xyz          keypress policy   www.bdc.com    3.00      Load
2024-04-24 17:05:11  abc          loading /home     www.xyz.com    0.53      Xhr
2024-04-24 10:14:47  bcd          loading /prod     www.rst.com    0.02      Load

3) Specifically, WHY should the output NOT have multiple timestamps after deduping application, name and target url? - I could see multiple timestamps after deduping the application, name and target url (see the yellow highlights in the screenshot).

4) In your original SPL, the only dedup is on x, which is Application+Action+Target_URL. How is this different? - My bad; here is the correct SPL query that I tried:

index="dynatrace" sourcetype="dynatrace:usersession" source=saas_prod events{}.application="participantmanagement.thehartford.com" userExperienceScore=FRUSTRATED
| rename userActions{}.application as Application, userActions{}.name as Action, userActions{}.targetUrl as Target_URL, userActions{}.duration as Duration, userActions{}.type as User_Action_Type, userActions{}.apdexCategory as useractions_experience_score
| eval x=mvzip(mvzip(Application,Action),Target_URL), y=mvzip(mvzip(Duration,User_Action_Type),useractions_experience_score)
| mvexpand x
| mvexpand y
| eval x=split(x,","), y=split(y,",")
| eval Application=mvindex(x,0), Action=mvindex(x,1), Target_URL=mvindex(x,2), Duration=mvindex(y,0), User_Action_Type=mvindex(y,1), useractions_experience_score=mvindex(y,2)
| eval Duration_in_Mins=Duration/60000
| eval Duration_in_Mins=round(Duration_in_Mins,2)
| table _time, Application, Action, Target_URL, Duration_in_Mins, User_Action_Type, useractions_experience_score
| search useractions_experience_score=FRUSTRATED
| sort - _time
| dedup _time
| search Application="*"
| search Action="*"
| fields - useractions_experience_score

5) Anything after mvexpand in my search is based on my reading of your intent based only on that complex SPL sample. Instead of making volunteers read your mind, how about expressing the actual dataset, the result you are trying to get from the data, and the logic to derive the desired result in plain language (without SPL)? - Here is the actual dataset below.

4/25/24 7:39:53.000 AM
{
  applicationType: WEB_APPLICATION
  bounce: true
  browserFamily: Microsoft Edge
  browserMajorVersion: Microsoft Edge 122
  browserType: Desktop Browser
  clientType: Desktop Browser
  connectionType: UNKNOWN
  dateProperties: [...]
  displayResolution: HD
  doubleProperties: [...]
  duration: 15430
  endReason: TIMEOUT
  endTime: 1714043044710
  errors: [...]
  events: [
    {
      application: xxx
      internalApplicationId: APPLICATION-B9BADE8D75A35E32
      name: Page change
      page: /index.html
      pageGroup: /index.html
      startTime: 1714043029280
      type: PageChange
    }
    {
      application: xxx
      internalApplicationId: APPLICATION-B9BADE8D75A35E32
      name: Page change
      page: /home
      pageGroup: /home
      pageReferrer: /index.html
      pageReferrerGroup: /index.html
      startTime: 1714043043405
      type: PageChange
    }
    {
      application: xxx
      internalApplicationId: APPLICATION-B9BADE8D75A35E32
      name: Page change
      page: /employee/details
      pageGroup: /employee/details
      pageReferrer: /employee/coverage-list
      pageReferrerGroup: /employee/coverage-list
      startTime: 1714043088821
      type: PageChange
    }
    {
      application: xxx
      internalApplicationId: APPLICATION-B9BADE8D75A35E32
      name: Page change
      page: /employee/coverage-list
      pageGroup: /employee/coverage-list
      pageReferrer: /employee/details
      pageReferrerGroup: /employee/details
      startTime: 1714043403199
      type: PageChange
    }
  ]
  hasCrash: false
  hasError: false
  hasSessionReplay: false
  internalUserId: 17140430425327CQRENIQATV2OT5DV5BTJ2UB3MQF2ALH
  ip: 10.215.67.0
  longProperties: [...]
  matchingConversionGoals: [...]
  matchingConversionGoalsCount: 0
  newUser: true
  numberOfRageClicks: 0
  numberOfRageTaps: 0
  osFamily: Windows
  osVersion: Windows 10
  partNumber: 0
  screenHeight: 720
  screenOrientation: LANDSCAPE
  screenWidth: 1280
  startTime: 1714043029280
  stringProperties: [...]
  syntheticEvents: [...]
  tenantId: vhz76055
  totalErrorCount: 0
  totalLicenseCreditCount: 0
  userActionCount: 1
  userActions: [
    {
      (same userAction fields as shown under 1 above)
    }
  ]
  userExperienceScore: FRUSTRATED
  userSessionId: UEFUBRDAPRDHURTCPUKFKKPJVORTPPJA-0
  userType: REAL_USER
}
There are multiple methods to achieve this. However, let's first try it in a simpler way:

index=mday source="service_status.ps1" sourcetype=service_status os_service="App_Service" host=*papp01
| stats latest(status) AS status by host
| eventstats values(status) as _status
| eval OverallStatus=if(mvcount(_status) < 2 OR isnull(mvfind(_status,"Running")),"Down","Good")

Steps:
- Count the status values.
- If the count is less than 2 (meaning only one of the Running/Stopped statuses is present), OR the Running status is not available, we set the overall status to Down.

In this way, we can handle multiple situations: one of the servers is down, both are reporting down, or even both are reporting Running (active & passive).

Demonstrated with a dummy search:

| makeresults | eval host="HostA", status="Running"
| append [| makeresults | eval host="HostB", status="Stopped"]
| stats latest(status) as status by host
| eventstats values(status) as _status
| eval OverallStatus=if(mvcount(_status) < 2 OR isnull(mvfind(_status,"Running")),"Down","Good")

Try changing the status of HostA or HostB and see the results.